Not only is AI getting harder to spot, but now we don't even know that we're wrong. Australian scientists found that people are becoming overconfident about their ability to tell real and digital faces apart, which can make us susceptible to misinformation and fraud.
"People have been confident of their ability to spot a fake face," said study author Dr. James Dunn of the University of New South Wales' School of Psychology. "But the faces created by the most advanced face-generation systems aren't so easily detectable anymore."
To test our AI detection abilities, the Aussie researchers surveyed 125 people -- 89 people with average face-identifying prowess and 36 people with exceptional powers of recognition, termed super recognizers, per the study published in the British Journal of Psychology.
Participants were shown images of faces -- which were vetted beforehand for obvious flaws -- and asked to determine whether they were real or AI-generated.
Researchers found that people with "average face-recognition ability" performed only a tad better than chance, per Dunn.
For instance, Post guinea pigs scored an unimpressive 3 out of 6 on this "human test," meaning we would've fared the same had we flipped a coin.
Meanwhile, super recognizers performed better than the control group in the face-off, but it was only by a "slim margin," according to Dr. Dunn.
One constant? A misplaced belief in their powers of detection. "What was consistent was people's confidence in their ability to spot an AI-generated face -- even when that confidence wasn't matched by their actual performance," Dunn quipped.
Part of the problem is that AI facial technology has become so sophisticated we can't spot the fake using familiar cues. While AI faces previously sported "distorted teeth, glasses that merged into faces" and other "head" giveaways, advanced generators have made these imperfections much less common.
However, because we still look for those familiar red flags, we come away with the aforementioned misplaced bravado.
Nowadays, the AI-mpersonators are paradoxically identified not by their flaws, but by their lack thereof.
"Ironically, the most advanced AI faces aren't given away by what's wrong with them, but by what's too right," said fellow author Dr. Amy Dawel, a psychologist with Australian National University (ANU). "Rather than obvious glitches, they tend to be unusually average -- highly symmetrical, well-proportioned and statistically typical."
"It's almost as if they're too good to be true as faces," she lamented.
And, given how frequently super recognizers were fooled, it's clear that AI detection is not a skill people can easily learn.
Our lacking powers of detection -- as well as our misplaced confidence in them -- are concerning given the rise of increasingly naturalistic catfishing schemes and other digital trickery. Last winter, TikTok users exposed hyperrealistic AI-generated deepfake doctors who were hornswoggling social media users with unfounded medical advice.
As such, we need to have a "healthy level of skepticism," per Dr. Dunn. "For a long time, we've been able to look at a photograph and assume we're seeing a real person," he said. "That assumption is now being challenged."
Scientists believe that the solution could perhaps lie with a new type of facial recognition wizard that they inadvertently stumbled upon during the experiment.
"Our research has revealed that some people are already sleuths at spotting AI-faces, suggesting there may be 'super-AI-face-detectors' out there," he said. "We want to learn more about how these people are able to spot these fake faces, what clues they are using, and see if these strategies can be taught to the rest of us."