We Have Six Months Until AI Breaks Reality. This Tool May Buy Us Time


Deepfakes first entered the chat in 2017. And in the eight years since, reality's been slowly melting at the edges.

We now live in a world where anything around you might be fake; Plato wouldn't need the whole cave allegory to make us suspicious of our own senses.

That video of your CEO praising a new supplier and asking to push that invoice through ASAP? That "voicemail" your partner left asking for help after a car crash? Or what about that Zoom call you swear you had with the banker before wiring the down payment?

Good luck with that in 2025.

We've already seen banks move millions after deepfaked video calls with synthetic executives. North Korean operatives have quietly embedded fake remote workers into unsuspecting companies. And every 12-year-old with a phone can now clone your voice from less than an Instagram post's worth of audio and hijack your meetings in real time.

And it's not only the malicious attacks that are making us rethink our expectations about reality. Just this week, Vogue was caught with its haute-couture pants down, publishing an AI-generated Guess ad that blew up for all the wrong reasons and showed how even "white hat" use cases are live wires for companies.

Adobe's Content Authenticity Initiative, OpenAI's watermarking schemes, and Meta's attempts at labeling AI content are valiant efforts to protect reality, sure. But if we're honest, all of them are Band-Aids on a water main that burst long ago.

The fundamental problem isn't that we can't tell what's real. We can find ways to deal with that, just as we've come to terms with living with fraud of the more traditional kind. It's that we've built an AI ecosystem that constantly eats its own tail, a positive feedback loop that never lets us sigh in relief at having finally fixed the broken dam. Each generation of fake content becomes training data for the next, and reality itself is now recursive.

And it's catching once all-powerful institutions flat-footed and ill-equipped as it marches along. In fact, seeing how far deepfakes have come can be a genuinely shocking experience.

"We had recently taken on the habit of starting our sales meetings by showing up as whomever we are pitching to, to showcase just how dangerous it is," says Ben Colman, CEO of Reality Defender, a pioneering deepfake detection platform. "The reaction is an instant jaw drop and a very visceral one at that. Shortly after we started we realized we actually have to start asking for permission before we shock people like that."

At a federal presentation earlier this year, the company's demo was cut short when security realized the implications. "We were pulled from the stage mid-sentence in case the impersonation leaked and moved markets," he adds.

In the span of a few quarters, impersonation has graduated from trickery to full-spectrum mimicry. We once thought our avatars would empower us but it turns out that they've just made it that much easier to steal our identities wholesale.

Cybersecurity has always been an arms race, red team pitted against blue team. And it's an incredibly asymmetrical arms race at that: offense gets to be fast, cheap, and unregulated, while defense gets bureaucracy, audits, and paperwork. And when defenders do their job, everything is simply as it should be, no celebration needed.

"The blue team has been underwater for years," Colman says. "Every time we get a new verification method, voice, face, behavior it gets cracked next month. There's no such thing as stable identity anymore."

That isn't to say the defense stack hasn't grown more capable itself. Estonia's Veriff built its way to unicorn status on identity verification, layering passive liveness detection and anti-spoofing protocols. Pindrop rose to fame securing voice authentication systems with signal-based anomaly detection, and many others have followed suit. But no matter how many bricks we stack on top of one another, the wall isn't holding.

Not least because we now have the Big Bad AI huffing and puffing at it, intent on eating us all up.

"We're seeing non-human identities and AI agents accessing sensitive data without proper guardrails," says Hed Kovetz, CEO of Silverfort. "AI agents are a gift to enterprises, but they're just as powerful in the hands of attackers. For this part of the arms race nobody has the definite playbook yet, and we need to move fast."

Kovetz's company recently rolled out an agentic identity security product, an acknowledgment that enterprises now need to manage their AI assistants and API entry points the way they manage humans.
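
What does that look like in practice? Silverfort hasn't published its internals here, so take the sketch below as purely illustrative: a hypothetical AgentIdentity with deny-by-default, least-privilege scopes, where every name and field is an assumption rather than the actual product.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent treated as a first-class identity, like a human employee."""
    name: str
    allowed_scopes: set[str] = field(default_factory=set)

    def authorize(self, scope: str) -> bool:
        # Deny by default: the agent can only touch what it was explicitly granted.
        return scope in self.allowed_scopes

# Grant the invoicing bot read access to invoices, and nothing else.
bot = AgentIdentity("invoicing-bot", {"invoices:read"})
assert bot.authorize("invoices:read")
assert not bot.authorize("payments:write")  # a deepfaked "pay this now" request stops here
```

The point of the design is the same one we apply to people: nothing is reachable unless a scope was explicitly granted, no matter how convincing the request sounds.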

Reality Defender is equally cognizant that this latest stage of the arms race is decidedly tipped against the enterprise.

"AI has made it dead simple to offer attacks at scale," Colman explains. "Malware-as-a-service was the start and impersonation-as-a-service is next. Send us a LinkedIn profile and three YouTube clips, and we'll give you a working clone for your next scam."

Give it a month or two, and will we have a Cameo-style marketplace for deepfakes?

Or perhaps, somewhere deep in the bowels of the dark web, it has already set up shop right next to malware-as-a-service kits like RedLine and LummaC2, which are flooding Telegram groups. The leap to commercialized deepfake impersonation is a small one compared to the jumps the red team has already made.

So how do you fight a threat that scales like spam but devastates like fraud?

Colman's team offers one solution, and it hinges on an insight others will surely follow: for us to have a fighting chance against deepfakes, detection needs to happen instantly, and at scale.

In pursuit of this goal, Reality Defender has launched its public API, which lets any app developer build in deepfake defense in minutes. "We've gone modality by modality, and we've aimed for immediacy and access through and through," says Colman. "First audio, then images, and from there video. We want to move faster than the red team does, and that means detecting deception as it happens, not after it's gone viral."
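
The article doesn't detail the API's shape, so here is only a rough sketch of what "deepfake defense in minutes" could look like; the endpoint URL, field names, and response schema below are hypothetical stand-ins, not Reality Defender's actual interface.

```python
import requests

# Hypothetical endpoint and field names for illustration only; the real
# detection API will have its own URLs, auth, and response schema.
API_URL = "https://api.example-detector.com/v1/detect"

def looks_synthetic(clip_path: str, api_key: str) -> bool:
    """Upload a media clip and return True if it is flagged as likely AI-generated."""
    with open(clip_path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": f},  # multipart upload of the audio, image, or video
        )
    resp.raise_for_status()
    return resp.json().get("verdict") == "synthetic"

# Gate a voicemail before it ever reaches the user's inbox.
if looks_synthetic("voicemail.wav", api_key="sk-example"):
    print("Warning: this message may be AI-generated.")
```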

What Reality Defender's launch forces us to acknowledge head-on is that the old model of "verify once and you're done" is over. Identity is now a moving target that has to be tracked, interrogated, and confirmed in motion, everywhere.

The mask can come off at any time. And you need to be there when it does.

"Trust infrastructure can't be bolted on later," Colman explains. "It has to be in the bloodstream. We're not solving this with quarterly audits or occasional ID checks. We need persistent, real-time defense."

Alex Lisle, Reality Defender's CTO, puts it like this: "We want deepfake detection to be as routine as spam filtering. If you're still thinking about whether to implement it, you're already behind."

And he's not alone in that thinking. "I've been a supporter of the Reality Defender team since the beginning," says Zoe Weinberg, founder of ex/ante ventures, which participated in the company's recent raise. "Making this platform accessible to every developer is how we win."

Accessible, yes, but even that is just the beginning.

Because soon, every vendor will need to evolve from point-in-time checks to identity streams. Instead of one-offs, we'll need a steady pulse of verification, detection, and authentication constantly running in the background, like the heartbeat of your infrastructure.
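
As a back-of-the-envelope sketch of that idea, with the cadence, the verify_session check, and all names as illustrative assumptions rather than any vendor's product, an identity stream might be a loop that re-verifies a live session on a schedule instead of once at login:

```python
import time

CHECK_INTERVAL_SECONDS = 30  # re-verify every 30 seconds, not just once at login

def verify_session(session_id: str) -> bool:
    """Placeholder for real signals: liveness, voice match, behavioral telemetry."""
    # In practice this would call a detection vendor's API with fresh media.
    return True

def identity_stream(session_id: str) -> None:
    """A steady pulse of verification: the 'heartbeat' of the infrastructure."""
    while True:
        if not verify_session(session_id):
            print(f"Session {session_id}: verification failed, locking and escalating.")
            break
        time.sleep(CHECK_INTERVAL_SECONDS)
```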

You see, reality in 2025 is no longer a stable, objective construct. It's a contested space where those who want to uphold it risk being outgunned by those who want to undermine it. Reality Defender may have handed the good guys a better weapon, but it's up to the rest of the industry to scale it into an arsenal.