A transparent, technical explanation of how Qwip detects AI-generated images while protecting your privacy
When you visit a webpage with images, Qwip analyzes each image through a multi-step process that never uploads your images to our servers. Here's exactly what happens:
The extension uses a MutationObserver to detect when images appear on the page (including lazy-loaded images). Small images (<128×128px) and huge images (>4096px) are automatically skipped to save resources.
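In sketch form, the detection and size-filtering step might look like the following (constant names and observer wiring are illustrative, not the extension's actual source):

```javascript
// Assumed thresholds from the description above.
const MIN_DIM = 128;   // skip icons and tiny thumbnails
const MAX_DIM = 4096;  // skip huge images to save resources

function shouldAnalyze(width, height) {
  if (width < MIN_DIM || height < MIN_DIM) return false; // too small
  if (width > MAX_DIM || height > MAX_DIM) return false; // too large
  return true;
}

// Only wire up the observer in a browser context.
if (typeof document !== 'undefined') {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        // Simplified: only direct <img> nodes are handled here.
        if (node instanceof HTMLImageElement) {
          // Lazy-loaded images fire `load` once their src resolves.
          node.addEventListener('load', () => {
            if (shouldAnalyze(node.naturalWidth, node.naturalHeight)) {
              // hand off to the hashing/analysis pipeline
            }
          });
        }
      }
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
}
```

Watching `addedNodes` with `subtree: true` is what makes lazy-loaded and dynamically inserted images visible to the pipeline.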
Your browser computes six hashes of the image using WebAssembly: five perceptual hashes (pHash variants) and one BLAKE3 cryptographic content hash. This happens entirely in your browser.
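To illustrate what a perceptual hash captures, here is a simplified difference hash (a close cousin of the pHash variants the extension actually uses; the real hashes and BLAKE3 run in WebAssembly):

```javascript
// Simplified perceptual hash for illustration: each bit records whether
// brightness increases left-to-right between adjacent pixels.
// `pixels` is a (w+1) x h grayscale array, row-major, values 0-255.
// In practice the image is first downscaled (e.g. to 9x8 for a 64-bit hash).
function dHash(pixels, w, h) {
  let bits = '';
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      bits += pixels[y * (w + 1) + x] < pixels[y * (w + 1) + x + 1] ? '1' : '0';
    }
  }
  return bits;
}
```

Because the bits encode relative brightness rather than exact pixel values, the hash survives re-encoding, resizing, and light edits — which is exactly why it is useful for near-duplicate lookups.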
If enabled, the BLAKE3 hash (just the hash, not the image) is sent to api.qwip.io to check if this image has been analyzed before by the community. If found, you get instant results from the community database.
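A hedged sketch of that lookup (the endpoint path and response shape are assumptions; only the BLAKE3 hex digest ever leaves the browser):

```javascript
// Hypothetical community-database lookup. `fetchFn` is injectable for testing.
async function lookupCommunityResult(blake3Hex, fetchFn = fetch) {
  const res = await fetchFn(`https://api.qwip.io/v1/lookup/${blake3Hex}`);
  if (res.status === 404) return null;            // image not seen before
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
  return res.json();                              // e.g. { confidence, model, votes }
}
```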
Before running the ML model, a color-entropy analysis checks whether the image looks photorealistic. Illustrations, cartoons, icons, and heavily-edited artwork are automatically skipped — reducing false positives and avoiding wasted compute on non-photographic content.
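The intuition behind the pre-filter: flat-colored artwork concentrates into a few histogram bins and yields low entropy, while photographs spread across many bins. A minimal sketch (the binning and threshold here are assumptions, not the extension's tuned values):

```javascript
// Shannon entropy of a histogram over pixel values (0-255).
function colorEntropy(values, bins = 32) {
  const hist = new Array(bins).fill(0);
  for (const v of values) hist[Math.min(bins - 1, Math.floor((v / 256) * bins))]++;
  let entropy = 0;
  for (const count of hist) {
    if (count === 0) continue;
    const p = count / values.length;
    entropy -= p * Math.log2(p);
  }
  return entropy; // 0 (single color) up to log2(bins) = 5 (uniform spread)
}

// Hypothetical threshold: below it, skip the ML model entirely.
function looksPhotorealistic(values, threshold = 3.0) {
  return colorEntropy(values) >= threshold;
}
```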
If not in the database and not filtered by the pre-filter, the image is analyzed using ONNX Runtime Web. The default model is MobileCLIP (256×256 input), with MobileNetV2 and Swin Transformer also available. Inference runs entirely in your browser via WASM or WebGPU — no image data is transmitted.
AI-generated images get a red border, real images get a subtle green checkmark. The extension also increments a counter showing how many AI images you've encountered.
If enabled, your detection result (hash + confidence + model) is anonymously contributed to the community database to help future users. Multiple detections are aggregated using weighted averaging.
Three models are available, selectable in the extension popup. All run entirely in your browser.
The default model, MobileCLIP, uses a CLIP-based architecture fine-tuned to detect AI-generated images. Its higher input resolution (256×256) and semantic understanding make it the most robust option against modern AI generators.
MobileNetV2 is a lightweight, fast option best suited to lower-powered devices or when you want lower latency. It is slightly less accurate than MobileCLIP on newer generators.
Swin Transformer is the most accurate model in the lineup, recommended when precision matters more than speed. WebGPU acceleration is strongly recommended — WASM is noticeably slower.
Instead of uploading your images to our servers (which would be a privacy nightmare), we compute mathematical "fingerprints" called hashes. These hashes can be used to identify similar images without ever seeing the actual image content.
These hashes allow us to detect if you've seen the same (or very similar) image before without storing or transmitting the actual image. The BLAKE3 hash is used for exact matches, while the perceptual hashes can detect near-duplicates and edited versions.
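Near-duplicate matching boils down to comparing perceptual hashes by Hamming distance; the bit width and threshold below are illustrative:

```javascript
// Count differing bits between two equal-length binary hash strings.
function hammingDistance(hashA, hashB) {
  let d = 0;
  for (let i = 0; i < hashA.length; i++) {
    if (hashA[i] !== hashB[i]) d++;
  }
  return d;
}

// A small distance on a perceptual hash suggests the same image, possibly
// re-encoded or lightly edited; an identical BLAKE3 digest means an exact match.
function isNearDuplicate(hashA, hashB, threshold = 8) {
  return hammingDistance(hashA, hashB) <= threshold;
}
```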
The community database at api.qwip.io stores detection results contributed by users:
When multiple users analyze the same image, their confidence scores are aggregated:
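One plausible sketch of that aggregation (the source only states "weighted averaging" — the per-detection weight field and its meaning are assumptions):

```javascript
// Weighted average of per-user confidence scores for one image hash.
// detections: [{ confidence: 0..1, weight: number }]
function aggregate(detections) {
  let total = 0;
  let weightSum = 0;
  for (const d of detections) {
    total += d.confidence * d.weight;
    weightSum += d.weight;
  }
  return weightSum === 0 ? null : total / weightSum;
}
```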
This simple weighted average ensures that as more people analyze an image, the confidence score becomes more reliable.
The extension works completely offline! Here's what happens when you're not connected to the internet:
When you reconnect, pending contributions are NOT automatically sent (we never queue data without your knowledge).
Check out our full documentation and open-source code.