Technology evolves faster than we can adapt, but the human brain remains the constant target. At the HIP 2025 Conference, Professor Mary Aiken—a pioneering cyberpsychologist and one of the world's foremost experts on human behavior in digital contexts—delivered a keynote that was part wake-up call, part manifesto.
Her message was clear: hybrid identity isn't just an IT architecture—it's a psychological ecosystem. And if we continue to secure systems without understanding the minds using them, we'll keep losing ground.
When I spoke with Aiken ahead of her keynote, she explained that hybrid identity systems will "succeed or fail not just on the technical infrastructure part, but on how people trust, perceive and interact with them." It's a deceptively simple point that's easy to overlook.
For decades, the security industry has treated users as liabilities—"the weakest link." Aiken flips that idea on its head. Humans, she argues, are not the weakest link; they're the most targeted one. And the psychology behind that targeting is where the battle is really being fought.
Attackers increasingly exploit what she calls cognitive shortcuts—automatic decision-making habits hardwired into the brain. Aiken gave an example: "You have a thing called the authority heuristic. You perceive that your boss has told you to do something, and immediately the shortcut is that you do what you're told." In the wrong hands, that reflex becomes a precision tool for social engineering.
Overconfidence bias is another. Many of us—especially those in cybersecurity—believe we're immune to phishing because we "know better." That misplaced confidence can be fatal. "AI is changing the psychology of attacks," Aiken told me. "Imagine you have a dark AI with the power of GPT focused on you in real time. It adapts as fast as you do."
I've written before about this dynamic—the way AI allows attackers to weaponize the details we casually share online. Aiken's framework gives it a scientific backbone: these aren't just smarter scams; they're adaptive cognitive assaults.
One of the most fascinating threads in our conversation was Aiken's critique of conventional user training. The endlessly repeated mantra—Don't click the link—has become digital wallpaper. "People don't even hear it anymore," she said.
Instead, she teaches people to "think like a profiler." That means analyzing the structure of a phishing message the way a behavioral analyst would: noticing the capital letters, the urgency cues, the timing. "Look at every single word," she said. "See where you're being gamed."
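For readers who want to see those cues as logic rather than prose, here is a minimal sketch in Python, my own illustration rather than anything Aiken uses, that scores a message against a few of the structural signals she names: capitalization, urgency language and authority pressure. The cue lists, weights and the suspicion scale are assumptions made purely for illustration.

```python
import re

# Illustrative cue lists; a real behavioral profile would be far richer.
URGENCY_CUES = ["urgent", "immediately", "asap", "final notice", "act now"]
AUTHORITY_CUES = ["ceo", "your boss", "it department", "compliance", "legal"]

def profile_message(text: str) -> dict:
    """Score a message against a few structural phishing cues."""
    lower = text.lower()
    words = re.findall(r"[A-Za-z']+", text)

    # Cue 1: shouting. What fraction of words are fully capitalized?
    caps = [w for w in words if len(w) > 2 and w.isupper()]
    caps_ratio = len(caps) / max(len(words), 1)

    # Cue 2: manufactured urgency (deadline pressure short-circuits reflection).
    urgency_hits = [c for c in URGENCY_CUES if c in lower]

    # Cue 3: the authority heuristic Aiken describes ("your boss told you to").
    authority_hits = [c for c in AUTHORITY_CUES if c in lower]

    score = caps_ratio * 2 + 0.3 * len(urgency_hits) + 0.4 * len(authority_hits)
    return {
        "caps_ratio": round(caps_ratio, 2),
        "urgency_cues": urgency_hits,
        "authority_cues": authority_hits,
        "suspicion_score": round(score, 2),  # arbitrary illustrative scale
    }

print(profile_message("URGENT: your CEO needs gift cards immediately. Act now."))
```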
It's a mindset shift—from compliance to curiosity. Rather than treating users as obstacles to be managed, Aiken's approach empowers them to understand why they're being manipulated. As she put it, "We have to stop telling people what not to do and start teaching them how attackers think."
Our discussion turned philosophical when Aiken described what she calls the trust paradox. Humans are evolutionarily wired to trust; it's how we survived as social creatures. But in cyberspace, that instinct can be catastrophic.
"In cyber contexts, you have to have zero trust -- always verify," she said. "Zero trust in the trust paradox actually leads to trust." In other words, only when systems enforce skepticism can genuine confidence emerge. Hybrid identity, therefore, is not just a technical balancing act -- it's a psychological one, where fairness, transparency and cognitive ease determine whether users comply or rebel.
I couldn't help connecting Aiken's ideas to the AI-driven identity discussions dominating cybersecurity today. Her point about fatigue hit especially close to home. "Trust and fatigue are the new battleground," she said. "Multi-factor authentication fatigue attacks succeed not because MFA is weak, but because humans are overloaded."
That insight reframes the usability debate. Too often, we talk about security friction as a UX issue. Aiken treats it as a cognitive one. Humans can only process so many demands for attention before vigilance collapses. Her prescription: adaptive authentication that feels fair and low-friction—biometric, behavioral and transparent enough to preserve trust without crossing into surveillance.
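Here is a hedged sketch of what that prescription could look like in code, with assumed signals, weights and thresholds rather than any real product's policy: the system scores contextual risk passively and escalates to an explicit challenge only when the score warrants it, sparing users the constant prompts that feed MFA fatigue.

```python
def risk_score(known_device: bool, usual_location: bool,
               typing_rhythm_match: bool) -> float:
    """Combine a few contextual/behavioral signals into a 0-1 risk score.
    Signals and weights are illustrative assumptions."""
    score = 0.0
    if not known_device:
        score += 0.5
    if not usual_location:
        score += 0.3
    if not typing_rhythm_match:  # passive behavioral biometric
        score += 0.2
    return score

def challenge_policy(score: float) -> str:
    """Escalate friction only when risk warrants it."""
    if score < 0.3:
        return "silent allow"            # no prompt: fights MFA fatigue
    if score < 0.6:
        return "low-friction biometric"
    return "step-up MFA + notify user"

print(challenge_policy(risk_score(True, True, False)))   # "silent allow"
print(challenge_policy(risk_score(False, False, True)))  # "step-up MFA + notify user"
```

The point of the policy function is cognitive, not cryptographic: friction is spent only where attention is actually needed.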
She's even exploring new biometric frontiers, such as using the optic nerve head—the "blind spot" at the back of the eye—as a living identifier. It's a fascinating example of technology informed by psychology: invisible, dynamic and resistant to deepfakes.
Toward the end of our conversation, Aiken offered a vision for the next frontier: intelligence augmentation (IA). "I believe the future lies in IA," she told me, "where you have smart agents trained and conceptualized in a behavioral context who work with you to provide that additional layer in a behavioral sense."
That vision—machines built to understand not just data but human decision-making—marks a shift from automation to empathy. Imagine AI that shadows analysts and employees alike, catching cognitive slips before attackers can exploit them. It's the natural evolution of cyber defense once we accept that the mind itself is the perimeter.
What Aiken ultimately calls for is a merger of two disciplines: cybersecurity and cyber-behavioral science. As she put it, "It's critical that our data and systems and networks are robust, resilient and secure. But equally, it's critical that the humans who operate those systems are psychologically robust, resilient, safe and secure."
That idea—360-degree resilience—is where her work feels most urgent. The AI era isn't just changing what we defend; it's changing how we think while defending it.
Her closing comment still echoes: "AI doesn't just hack machines—it hacks minds." And that, more than any new exploit or zero-day, may be the defining threat of our time.