Guide · 11 min read
What is a deepfake and how to verify it step by step
Deepfake types, real-world risks in 2026, detection tools, a 5-step editorial protocol, and answers to the questions journalists actually ask.
Quick answer
A deepfake is synthetic media impersonating a real person. To verify: check EXIF and container metadata, run a forensic tool (ScanTrace), do a reverse search, contact the source, and cross-reference official channels. Never trust a single signal.
Analyze your image now
Create a free account to access the full 3-layer forensic analysis and downloadable PDF certificate.
Get started free — 15 scans/month
The four types of deepfake you need to know
1. Face swap. Replacing one person's face with another's in a video. The oldest technique, popularised by DeepFaceLab and FaceSwap.
2. Face reenactment (puppetry). Keeping the target's face but driving their expressions and mouth movements from a source actor. Used in most political deepfakes.
3. Lip-sync. Modifying only the mouth region so a real video appears to say words the subject never uttered. Extremely hard to spot, because everything outside the mouth is genuine footage.
4. Full-body synthesis and voice clones. Text-to-video (Sora, Runway Gen-4) and voice cloning (ElevenLabs, OpenVoice) now generate entire scenes from a prompt — the hardest to detect and the fastest-growing category in 2026.
Why deepfakes matter: real cases from 2023–2026
In February 2024 a Hong Kong finance employee transferred $25 million after a video call with a deepfake CFO. In March 2024 a robocall cloning Joe Biden's voice asked New Hampshire voters not to vote in the primary. In 2025 deepfake-assisted fraud losses reached an estimated $40B worldwide according to Deloitte. The 2024 EBU report found 77% of European broadcasters mistakenly aired AI-generated content.
Forensic signals specific to deepfakes
Blink patterns. Early deepfakes blinked too little; modern ones sometimes blink too mechanically.
Temporal flicker. Face swaps leak at the edges: hairline, ears, neck.
Lighting mismatch. The pasted face rarely matches the scene's light direction and colour temperature.
Pupil reflections. Real pupils reflect the same environment; deepfake pupils often show different reflections in each eye.
Audio-visual sync. Lip-sync deepfakes drift by 30–80 ms on long sentences.
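The audio-visual sync signal can be checked mechanically. A minimal sketch, using toy data: cross-correlate an audio-energy envelope against a mouth-openness track (both are assumed to be pre-extracted, one sample per video frame) and report the lag that lines them up best. Real pipelines extract these signals with speech and face-landmark models; the lag search itself is this simple.

```python
def best_lag(audio, mouth, max_lag):
    """Return the lag (in frames) that maximises correlation between
    an audio-energy envelope and a mouth-openness signal."""
    def corr_at(lag):
        # Sum products over the overlapping region at this shift.
        return sum(audio[i] * mouth[i + lag]
                   for i in range(len(audio))
                   if 0 <= i + lag < len(mouth))
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Toy signals at 25 fps: the mouth track lags the audio by 2 frames.
audio = [0, 0, 1, 3, 1, 0, 0, 2, 4, 2, 0, 0]
mouth = [0, 0, 0, 0, 1, 3, 1, 0, 0, 2, 4, 2]
lag = best_lag(audio, mouth, max_lag=4)
print(lag * 40, "ms")  # 2 frames × 40 ms/frame → 80 ms
```

At 25 fps one frame is 40 ms, so a consistent 1–2 frame offset lands exactly in the 30–80 ms drift range described above.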
The 5-step editorial verification protocol
Step 1 — EXIF and container analysis. Check encoding software, timestamps, container structure.
Step 2 — Forensic tool. Run it through ScanTrace or equivalent. Record the verdict and confidence.
Step 3 — Reverse search. Google Lens, TinEye, Yandex. Find the oldest occurrence online.
Step 4 — Source contact. Verify with the person or organisation allegedly in the media. A 30-second phone call beats any detector.
Step 5 — Cross-reference. Check official social accounts, press releases, agenda of the person. If the statement contradicts known positions, treat as suspicious until proven otherwise.
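Part of step 1 can be scripted with the standard library alone. The sketch below walks a JPEG's marker segments and checks whether an APP1/Exif segment is present at all; a missing EXIF block is a weak signal (many platforms strip it on upload), never proof. Real-world use would feed it actual file bytes; here a synthetic JPEG keeps the example self-contained.

```python
import struct

def jpeg_segments(data: bytes):
    """Walk JPEG marker segments and return (marker, payload-prefix) pairs."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    segments, i = [], 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: entropy-coded data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segments.append((marker, data[i + 4:i + 2 + length][:6]))
        i += 2 + length
    return segments

def has_exif(data: bytes) -> bool:
    # EXIF lives in an APP1 (0xE1) segment whose payload starts "Exif\0\0".
    return any(m == 0xE1 and p.startswith(b"Exif\x00\x00")
               for m, p in jpeg_segments(data))

# Minimal synthetic JPEG: SOI + APP1/Exif segment + SOS marker.
app1_payload = b"Exif\x00\x00"
app1 = b"\xff\xe1" + struct.pack(">H", len(app1_payload) + 2) + app1_payload
print(has_exif(b"\xff\xd8" + app1 + b"\xff\xda"))  # True
```

For production work a dedicated tool such as exiftool reads far more (encoder tags, timestamps, container atoms), but the segment walk shows what "check the container" means at the byte level.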
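Step 3 can also be semi-automated by generating the reverse-search URLs for an image that is already hosted online. The URL patterns below are informal conventions observed on each service, not documented APIs, and may change without notice.

```python
from urllib.parse import quote

def reverse_search_links(image_url: str) -> dict:
    """Build reverse-image-search URLs for a publicly hosted image.
    These URL patterns are informal conventions and may break at any time."""
    q = quote(image_url, safe="")  # percent-encode the whole URL
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={q}",
        "tineye": f"https://tineye.com/search?url={q}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={q}",
    }

links = reverse_search_links("https://example.com/suspect.jpg")
for name, url in links.items():
    print(name, url)
```

The goal of step 3 is the oldest occurrence, so sort results by date where the engine allows it and archive what you find before it disappears.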
Recommended tools in 2026
ScanTrace — image + video, Spanish/English UI, PDF certificate, free tier.
Reality Defender — enterprise-grade, real-time.
Intel FakeCatcher — real-time detection for live streams.
Sensity AI — deep catalogue of known deepfake campaigns.
DeepFake-o-meter (UB) — open-source research tool.
Conclusion: protocol first, tools second
The single biggest mistake in deepfake verification is trusting any tool's verdict in isolation. Tools fail; protocols don't — if they're followed. The five-step protocol above, combined with ScanTrace for the forensic layer, brings false-publish rates below 1%. Try the free deepfake checker to run step 2.
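The "never trust a single signal" rule can itself be encoded. This is a hypothetical decision rule, not any tool's actual output format: the `Signal` type and thresholds below are illustrative assumptions showing how a desk might aggregate the five protocol steps.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One protocol step's outcome (hypothetical structure for illustration)."""
    name: str
    suspicious: bool

def editorial_decision(signals):
    """Toy aggregation rule: two or more suspicious signals block publication;
    exactly one escalates to a human; zero clears the item for publishing."""
    flagged = [s.name for s in signals if s.suspicious]
    if len(flagged) >= 2:
        return "do-not-publish", flagged
    if len(flagged) == 1:
        return "escalate-to-human", flagged
    return "publish", flagged

checks = [
    Signal("exif-anomaly", True),
    Signal("forensic-verdict", True),
    Signal("reverse-search", False),
    Signal("source-contact", False),
]
print(editorial_decision(checks))  # ('do-not-publish', ['exif-anomaly', 'forensic-verdict'])
```

The exact thresholds are an editorial choice; the point is that no single step, including the forensic tool, can flip the decision on its own.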
Frequently asked questions
What exactly is a deepfake?
A deepfake is synthetic media — image, video or audio — where a real person's face, voice or body has been replaced or generated by a deep learning model. The term combines 'deep learning' and 'fake'.
Is every AI-generated image a deepfake?
No. A deepfake strictly refers to content that impersonates a real, identifiable person. A landscape generated with Midjourney is an AI image but not a deepfake.
How accurate are deepfake detectors in 2026?
Top forensic detectors reach 92–97% accuracy on current models. That remaining 3–8% is exactly why any editorial decision still needs a human in the loop plus multiple independent signals.
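The base-rate math behind that answer is worth making explicit. A quick Bayes' rule sketch, assuming (for illustration) a detector with 95% sensitivity and specificity and a feed where only 1% of incoming items are actually deepfakes:

```python
def posterior(prevalence, sensitivity, specificity):
    """P(media is a deepfake | detector flags it), via Bayes' rule."""
    true_pos = sensitivity * prevalence            # real deepfakes flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # genuine media flagged
    return true_pos / (true_pos + false_pos)

# 95%-accurate detector, 1% prevalence of deepfakes in the feed:
p = posterior(prevalence=0.01, sensitivity=0.95, specificity=0.95)
print(round(p, 2))  # 0.16
```

Under those assumptions only about one in six flagged items is actually a deepfake, which is why a flag should trigger the full protocol rather than a headline.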
Can deepfakes be detected in live video calls?
Partially. Tools like Reality Defender and ScanTrace's video endpoint flag temporal inconsistencies — flickering, mismatched blinking, unnatural head turns. Real-time detection is still imperfect against top-tier face-swap models.
What is the legal status of deepfakes?
The EU AI Act (2024) mandates clear labelling of AI-generated content. In the US, several states criminalise non-consensual sexual deepfakes. In 2026 most democracies treat malicious deepfakes as fraud, defamation or election interference.