'Seeing Is Believing' Is Dead: AI Deepfakes Have Broken Visual Evidence

Source: Forbes

A claims adjuster reviews photos of hail damage on a roof. A detective examines cell phone video from a domestic violence case. A family law attorney presents screenshots of threatening text messages in a custody hearing. Each is making a consequential decision based on visual evidence, and none can confirm with certainty that what they are seeing is real.

That is not hypothetical. It is the daily reality of working with digital evidence in 2026. The best AI generated images and video have reached a level of quality where even trained professionals frequently cannot distinguish them from authentic media. Controlled studies in recent years find that many people struggle to spot deepfakes, often performing only a little better than chance, and sometimes worse when the fakes are high quality. Modern generative AI does not simply alter existing images; it creates entirely new ones from scratch, pixel by pixel, leaving none of the telltale editing artifacts that traditional manipulation tools often introduce.

In courtrooms, insurance offices and law enforcement agencies, we have built decision making processes around the assumption that photographs and video depict something that actually happened. That assumption is now dangerously outdated.

Insurance is feeling it first. UK loss adjuster McLarens reported a 300 percent rise in suspected fake documents in its claims in the first quarter of 2023, and Allianz has warned of a similar threefold jump in manipulated images, video and documents across one of its reporting periods.

Swiss Re's 2025 SONAR report flags deepfakes and other synthetic media as an emerging risk for insurers, with fabricated evidence and AI assisted fraud highlighted as a growing concern. Claimants have submitted AI generated damage photos that passed initial review, manipulated CCTV with altered timestamps, and, in at least one documented case, a completely fabricated telehealth video to support a disability claim.

The courtroom is next. The Judicial Conference's Advisory Committee on Evidence Rules has been considering a draft amendment to Rule 901, sometimes described as a new subdivision (c), aimed at evidence that may have been fabricated in whole or in part by artificial intelligence, and exploring how courts should handle suspected deepfakes at the admissibility stage. Louisiana is among the first states to move at the legislative level, passing a 2025 law that directs lawyers to take reasonable steps to verify the authenticity of digital evidence and related disclosures in court. The old standard of "does it look right?" no longer holds up when "looking right" is the easiest part to fake.

Cases are already testing these boundaries. In the Rittenhouse trial in Wisconsin, the defense challenged prosecution video evidence on the grounds that Apple's pinch to zoom function uses processing that could alter pixels, which led to extended argument over how the footage had been handled before the court allowed the zoomed in version to be shown. In a UK custody dispute, a mother submitted what investigators later determined was heavily doctored audio designed to portray the father as violent. These are not edge cases. They are the leading edge of a wave.

One idea is to build databases of known AI generated content, cataloging fakes using digital fingerprints the way child exploitation images are tracked. That system works for a redistribution problem, where the same image may be shared thousands of times. Fingerprint it once, catch it everywhere. AI generated images are the opposite kind of problem. Every prompt can produce a unique image with a unique fingerprint. You are not cataloging copies. You are trying to inventory an effectively infinite number of originals. The math does not work, and the content mills are only speeding up.
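That asymmetry fits in a few lines of code. A minimal sketch (the image bytes here are placeholders, and real fingerprinting systems often use perceptual rather than cryptographic hashes): a known-fake database catches the same file reshared a thousand times, but a freshly generated fake hashes to a value no database has ever seen.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint: every bit-identical copy hashes the same."""
    return hashlib.sha256(image_bytes).hexdigest()

# A known-fake database solves a redistribution problem.
known_fakes = {fingerprint(b"fake-image-v1")}

# A reshared copy of a cataloged fake is caught immediately.
assert fingerprint(b"fake-image-v1") in known_fakes

# A brand-new generation, even from the same prompt, is invisible to it.
assert fingerprint(b"fake-image-v2") not in known_fakes
```

The lookup scales beautifully against copies and not at all against originals, which is the whole problem.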

Then there is the C2PA standard and its "Content Credentials," essentially a cryptographic notary stamp embedded in media at the moment of capture. It can record which device took a photo, when it was created and which edits followed. The National Security Agency endorsed content credential approaches in a January 2025 cybersecurity information sheet, and NIST's AI 100-4 report identifies provenance systems like C2PA as among the most promising methods available for tracking the origins of media. The concept is exactly right: verify the origin, not the output.
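The core idea can be illustrated with a toy signing scheme. This is a conceptual sketch only: real C2PA manifests use X.509 certificate chains and a structured binary container, not the simple HMAC shown here, and the device key and metadata fields are invented for illustration. What carries over is the principle: the signature binds the provenance claim to the exact pixels, so any alteration breaks verification.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in a real device this lives in secure hardware.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind a provenance record to the pixels at the moment of capture."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    )
    sig = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Any change to the pixels or the claimed metadata breaks the check."""
    expected = hmac.new(
        DEVICE_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    claimed_hash = json.loads(record["payload"])["sha256"]
    return hmac.compare_digest(expected, record["signature"]) and (
        claimed_hash == hashlib.sha256(image_bytes).hexdigest()
    )

photo = b"raw sensor data"
record = sign_capture(photo, {"device": "ExampleCam", "taken": "2026-01-15T09:30Z"})
assert verify_capture(photo, record)            # untouched file verifies
assert not verify_capture(b"tampered", record)  # altered pixels fail
```

Note what this buys and what it does not: a valid signature proves the file is unchanged since capture, but an unsigned file proves nothing either way, which is exactly the adoption gap described below.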

C2PA also has limits. It needs very broad adoption across cameras, phones, editing software and platforms to work at scale. Major camera makers, including Leica, Nikon, Sony and Canon, have joined the initiative or related content authenticity efforts. That is real progress. Yet the vast majority of photos submitted as evidence today carry no provenance data at all. The infrastructure is being built. It will not help the adjuster reviewing a suspicious claim this afternoon, or the prosecutor walking into a hearing next week.

What about detection tools? I wish I had better news. AI detection software can perform well in controlled lab settings on benchmark datasets, but accuracy drops sharply when the tools confront real world content created with techniques outside their training data or compressed and reposted on social platforms. NIST's AI 100-4 report and several independent evaluations have concluded that no single detection approach currently offers reliable, standalone performance across content types and attack methods. The tools are improving, but in many real world scenarios they struggle to keep up with the pace of new generative techniques.

That brings me to what I have seen hold up most consistently in more than sixteen years as a digital forensic examiner: examination of the source device. Not the file. The device.

When I examine a phone or computer in a case, I am not just looking at a single photo. I am looking at that photo in context, sitting in a camera roll alongside thousands of others, with consistent metadata, file system artifacts, application attribution and creation timestamps.

The operating system tracks when files are created and which application generated them. Network logs, GPS coordinates and sequential file naming conventions build a web of corroborating details that either supports authenticity or exposes fabrication. If one "threatening message" screenshot appears out of nowhere with no matching entry in the messaging database, notifications or backups, that is a powerful sign the image was manufactured.
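The cross-check behind that last example is simple to express. A minimal sketch using a hypothetical, simplified schema (real messaging databases, such as the sms.db on iOS devices, are laid out differently, but the corroboration logic is the same): does the message shown in the screenshot actually exist in the device's own records?

```python
import sqlite3

# Hypothetical simplified messaging database, built in memory for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (sender TEXT, body TEXT, sent_at INTEGER)")
db.execute(
    "INSERT INTO messages VALUES ('+15550001', 'See you at 6', 1770000000)"
)

def corroborated(sender: str, body: str) -> bool:
    """Does the screenshot's message exist in the device's own database?"""
    row = db.execute(
        "SELECT 1 FROM messages WHERE sender = ? AND body = ?", (sender, body)
    ).fetchone()
    return row is not None

assert corroborated("+15550001", "See you at 6")         # matches a real record
assert not corroborated("+15550001", "I will hurt you")  # orphaned screenshot
```

In practice an examiner checks far more than one table: notification logs, backups, sync records and deleted-row artifacts all have to agree. A fabricated screenshot has to fool every one of them at once.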

A photo with that kind of provenance tells a story you can evaluate under oath. A photo scraped from social media, or submitted by email with no chain of custody, is just pixels. You can argue about it, but without provenance and context it is very difficult, and often impossible, to authenticate with any real rigor.

This matters enormously for the people I work with every day: insurance professionals trying to determine whether damage photos are legitimate; attorneys whose clients' futures may hinge on whether a video is authentic; judges weighing evidence where one party claims the other fabricated threatening messages. People's money, freedom and families hang on whether someone can prove a piece of digital evidence is what it claims to be.

No method is perfect, and I will not pretend otherwise. Someone can photograph an AI generated image displayed on a monitor, and the source device will dutifully record it as a new, genuine looking photo. An experienced examiner can often spot traces of this, but there will always be edge cases.

There is also a boundary that courts and investigators have to understand: the difference between enhancement and generation. Lawful enhancement means controlled operations that clarify existing pixels, such as adjusting contrast, reducing noise or using interpolation that does not invent new content.

Generative edits are different. When software removes a mask, alters clothing, claims to "unblur" a face by hallucinating details, or fills in missing areas using a model's best guess, the result is no longer a record of what any camera saw. It is a synthetic reconstruction. Those outputs may be useful leads, but they have no business being treated as evidence of identity or action.
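The boundary is easy to state in code. A lawful enhancement like a contrast stretch is a deterministic remap of the values the sensor actually recorded: every output pixel is computed from the corresponding input pixel and nothing else. A generative fill, by contrast, draws pixel values from a model's training data rather than from the scene. A minimal sketch of the lawful side, on a toy list of grayscale values:

```python
def contrast_stretch(pixels: list[int]) -> list[int]:
    """Lawful enhancement: a monotonic remap of recorded values.

    Every output value is a function of the corresponding input value
    alone -- no information is invented, only redistributed across the
    available range.
    """
    lo, hi = min(pixels), max(pixels)
    if lo == hi:  # a flat frame has no contrast to stretch
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

frame = [100, 110, 120, 130]    # low-contrast capture
print(contrast_stretch(frame))  # -> [0, 85, 170, 255]
```

The ordering of pixels is preserved and the operation is documented and repeatable, which is what lets an opposing expert verify it. A generative "unblur" satisfies neither property.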

What matters is that, in my experience and in the current state of practice, device level forensics offers a higher and more testable level of reliability than other available methods for establishing whether an image or video is authentic. The shift that needs to happen in courtrooms, claims departments and any organization that makes decisions based on visual evidence is deceptively simple: we need to stop asking "Is this image real?" and start asking "Can you prove it?"

That means treating digital photos and video with the same chain of custody discipline we apply to physical evidence, including who collected it, how it was stored and who handled it on the way to court. It means requiring device level examination when images and video factor into consequential decisions rather than accepting orphaned files at face value.

The era of taking digital evidence at face value ended the moment AI could generate a damage photo that an experienced adjuster would have trouble distinguishing from the real thing. C2PA and similar provenance systems will likely close part of this gap over time, but "eventually" does not help the attorney picking a jury next month. In my experience, when authenticity is seriously disputed, a forensic examination of the source device is the only method that consistently provides a reliable basis for saying an image or video is genuine. Today, relying on anything less is building your evidentiary foundation on sand.