itsreal.media combines classical image forensics, Vision Transformer models, C2PA provenance verification, metadata analysis, and semantic reasoning into a single fused decision pipeline.
No single signal determines the verdict. A trained meta-model learns how to weight each layer based on real-world performance, and updates as generative models evolve.
Explainability: Each signal is individually scored and available via API for downstream reasoning.
Across the full test suite under controlled conditions
Real-world conditions: compressed, resized, re-uploaded images
SPRIND Challenge evaluation, November 2024
Winner of the Deepfake Detection Challenge (Bundesagentur für Sprunginnovation).
Performance depends on image source, platform transforms, and decision thresholds.
Every image is passed through all detection layers simultaneously. Outputs are fused by a meta-model that learns inter-signal correlations.
Several specialized Vision Transformer models fused into one meta-model for AI-generated and AI-edited image detection.
Dozens of low-level forensic signals: spectral, spatial, statistical, and compression analysis.
Object coherence, lighting consistency, anatomical plausibility, and visual reasoning.
EXIF integrity, encoding artifacts, software fingerprints, and C2PA Content Credentials verification.
Ultra-high accuracy multi-model ensemble. We fuse several specialized detection models and dozens of forensic signals into one meta-model, delivering state-of-the-art detection validated by independent benchmarks.
Detects fully AI-generated and AI-edited images with ultra-high accuracy. We fuse several specialized detection models and dozens of forensic signals into one meta-model, a multi-layer ensemble validated as the best-performing system in the SPRIND Deepfake Detection Challenge.
Bundesagentur für Sprunginnovation, independent evaluation under real-world conditions.
Detects AI-modified regions within otherwise authentic images: inpainting, outpainting, object removal, face swaps, and generative fill. Outputs a pixel-level heatmap showing the probability of manipulation per region.
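A pixel-level heatmap like this can be post-processed into a flagged region. The sketch below is purely illustrative: the toy probability map, the threshold, and the helper function are our own assumptions, not the product's actual output format.

```python
# Toy 6x6 per-pixel manipulation-probability map (values invented for
# illustration); a real heatmap is image-sized and model-produced.
HEATMAP = [
    [0.02, 0.03, 0.05, 0.04, 0.02, 0.01],
    [0.03, 0.10, 0.55, 0.60, 0.08, 0.02],
    [0.04, 0.12, 0.91, 0.95, 0.11, 0.03],
    [0.03, 0.09, 0.88, 0.93, 0.10, 0.02],
    [0.02, 0.05, 0.11, 0.12, 0.04, 0.01],
    [0.01, 0.02, 0.03, 0.02, 0.02, 0.01],
]

def flagged_bbox(heatmap, threshold=0.5):
    """Bounding box (row0, col0, row1, col1) of pixels above threshold, or None."""
    hits = [(r, c) for r, row in enumerate(heatmap)
            for c, v in enumerate(row) if v >= threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))

box = flagged_bbox(HEATMAP)  # central region exceeds the threshold
```

Here the high-probability pixels cluster in the center of the map, matching the inpainting example above.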
Central region shows high manipulation probability, consistent with generative inpainting.
We verify Content Credentials embedded in images using the C2PA (Coalition for Content Provenance and Authenticity) standard. Our system validates the full credential chain, from camera or software origin to the final published asset.
C2PA credentials are cryptographically signed manifests that travel with the image, recording every edit and export step. When present, they provide the strongest possible provenance signal.
Validates the complete chain of trust from origin to current state
Verifies the signing certificate and issuing organization
Reads declared editing actions recorded in the manifest
Detects whether credentials survived platform re-encoding
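Conceptually, chain-of-trust validation means every step in the manifest must carry a verifiable signature. The sketch below is a toy model of that rule only; real C2PA manifests are CBOR/JUMBF structures verified with X.509 certificate chains (e.g. via the official C2PA SDKs), and the class and field names here are our own invention.

```python
from dataclasses import dataclass

@dataclass
class ManifestEntry:
    actor: str            # signing tool or camera, e.g. a camera model name
    action: str           # declared edit step, e.g. "c2pa.resized"
    signature_valid: bool # result of cryptographic verification (assumed done elsewhere)

def chain_of_trust_ok(chain: list) -> bool:
    """A chain passes only if it is non-empty and every step's signature verifies."""
    return bool(chain) and all(entry.signature_valid for entry in chain)

chain = [
    ManifestEntry("Camera", "c2pa.created", True),
    ManifestEntry("Editing software", "c2pa.color_adjustments", True),
]
```

A single failed signature anywhere in the chain, or a missing chain after platform re-encoding, fails the check, which is why credential survival (the last bullet above) is tracked as its own signal.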
Examines the technical metadata embedded in every image file for signs of synthetic origin or post-production tampering.
Uses visual reasoning to detect logical inconsistencies that betray AI-generated or manipulated content.
Searches the web and internal databases to trace an image back to its original source and identify coordinated reuse.
No single detection layer is reliable enough on its own. Each signal has blind spots: frequency analysis misses certain diffusion models, metadata can be stripped, semantic checks depend on scene complexity.
Our meta-model is a trained classifier that takes the raw outputs of every detection layer as input features. It learns the inter-signal correlations, compensates for individual weaknesses, and produces a single calibrated confidence score.
The fusion weights are retrained continuously as new generative models appear. When a new generator changes the signal landscape, the meta-model adapts without requiring changes to any individual detection layer.
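The fusion idea can be sketched as a simple logistic combination of per-layer scores. Everything below is an assumption for illustration: the layer names, weights, and intercept are made up, and the production meta-model learns its parameters from benchmark data rather than using fixed values.

```python
import math

# Hypothetical per-layer scores in [0, 1] and hand-picked weights;
# the real meta-model learns these from labeled evaluation data.
LAYER_WEIGHTS = {
    "vit_ensemble": 2.1,
    "forensics": 1.4,
    "semantic": 0.9,
    "metadata": 0.6,
}
BIAS = -2.5  # illustrative intercept

def fuse(scores: dict) -> float:
    """Fuse per-layer scores into one calibrated probability (logistic fusion sketch)."""
    z = BIAS + sum(LAYER_WEIGHTS[k] * scores[k] for k in LAYER_WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

confidence = fuse({"vit_ensemble": 0.97, "forensics": 0.88,
                   "semantic": 0.72, "metadata": 0.40})
```

Retraining only the weights (not the individual detectors) is what lets the ensemble adapt when a new generator shifts which signals are informative.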
API-first. Modular.
Simple HTTPS endpoints. Upload an image, receive a structured JSON response with verdict, confidence, and per-layer signal breakdown. Integrates in minutes.
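A typical integration boils down to one POST and some JSON parsing. The sketch below is hedged: the endpoint URL and every field name in the response are assumptions for illustration, not the documented itsreal.media schema.

```python
import json

# Upload sketch (endpoint URL is an assumption, shown for shape only):
# resp = requests.post("https://api.example.com/v1/analyze",
#                      files={"image": open("photo.jpg", "rb")})
# raw = resp.text

# Hypothetical response body -- field names are illustrative.
SAMPLE_RESPONSE = json.dumps({
    "verdict": "ai_generated",
    "confidence": 0.96,
    "signals": {
        "vit_ensemble": 0.97,
        "forensics": 0.88,
        "semantic": 0.72,
        "metadata": 0.40,
    },
})

def summarize(raw: str) -> str:
    """Turn the JSON response into a one-line human-readable summary."""
    data = json.loads(raw)
    top = max(data["signals"], key=data["signals"].get)
    return f'{data["verdict"]} ({data["confidence"]:.0%}), strongest signal: {top}'

print(summarize(SAMPLE_RESPONSE))
```

The per-layer breakdown is what enables the explainability described earlier: downstream systems can reason over individual signals rather than a single opaque score.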
Deploy our detection engine as a Docker container inside your own infrastructure. Full control over data residency, latency, and scaling. Air-gapped networks supported.
Full on-premise deployment with dedicated support, custom model training, and SLA guarantees. Designed for organizations with strict compliance requirements.
Request API access or a live demo. We will walk you through the pipeline with your own images.