I Tried to Prove I'm Not AI. My Aunt Wasn't Convinced – Are You Next?

In an era of advanced deepfakes, proving you're real is harder than ever. Discover why even a prime minister failed, why my own aunt wasn't convinced when I tried to prove I'm not AI, and how you can protect yourself.

Admin


Mar 26, 2026

The Unnerving Reality: What Happens When You Can’t Prove You’re Real?

The line between what’s real and what’s artificially generated is blurring at an alarming rate. Deepfakes are no longer just a sci-fi trope; they're a daily reality, so convincing that even a sitting prime minister struggled to prove his existence. And if you think you’re immune, think again. The unsettling truth? You might be next in the crosshairs of digital doubt. This begs the critical question: how do you prove you’re genuinely human when even your own family begins to second-guess?

My Personal Deepfake Experiment: When Family Isn't Sure

To truly grasp the gravity of this digital dilemma, I embarked on a personal experiment. I called my aunt, Eleanor, someone who has known me my entire life, with a simple yet profound request: could she distinguish between the real me and an AI deepfake? I explained I’d call her twice – once as myself, once as an AI-generated voice clone – challenging her to identify the imposter.

Her initial reaction was reassuring. “Well, it sounds like you,” she observed, noting, “I think a real person uses a lot more inflection than I would expect an AI-generated voice to use.” I gently pushed back, reminding her just how sophisticated AI has become. A long, telling pause followed. “I was like 90% sure,” she finally admitted, her voice laced with hesitation. “But that sounded more artificial.” The realization was chilling: I tried to prove I'm not AI, and my aunt wasn't convinced.

The Prime Minister's Predicament: A Global Proof-of-Life Failure

While the typical concern around deepfakes revolves around individuals being scammed or misled, a more sinister scenario is emerging: what if *you* are accused of being a deepfake? How do you then assert your authenticity to the world?

This very question recently plagued Israeli prime minister Benjamin Netanyahu. A video he posted sparked internet rumors of his demise after a trick of light made it appear he had a glitchy sixth finger – once a telltale sign of early AI deepfakes. The digital uproar was immediate, forcing him to post a follow-up video from a coffee shop, holding up his hands to prove his ordinary number of digits. This was, experts confirmed, the first time a major world leader openly attempted to prove they weren't AI. Tragically, it failed miserably.

Why Even Experts Struggle to Verify Reality

Despite Netanyahu's efforts, a significant number of people remain convinced he's an AI fabrication. His proof-of-life videos, while seemingly straightforward, made fundamental errors in a world increasingly skeptical of digital content. But could anyone do better?

According to Jeremy Carrasco, co-founder of Riddance, a publication dedicated to AI-generated media, Netanyahu's videos were unequivocally real. The supposed 'sixth finger' was merely light reflecting off his palm. “Six fingers is not an AI thing anymore,” Carrasco explains, noting that advanced AI models moved past such crude errors years ago. Furthermore, continuity errors, like Netanyahu bumping a microphone and interrupting his own audio – a notoriously difficult feat for AI to replicate – strongly pointed to authenticity.

Hany Farid, a digital forensics professor at UC Berkeley and co-founder of GetReal Security, corroborated this. His team’s extensive analysis, including voice analysis and frame-by-frame inspections, found “no evidence that this is AI-generated.” Yet, even a third video from Netanyahu couldn't sway the determined skeptics. If a prime minister can’t prove his reality, what hope is there for the rest of us?

My Unsettling Verdict: 'No, It's Over'

Intrigued by Netanyahu's struggle, I posed a direct question to Professor Farid during our interview: could I, right then and there, prove to him that I wasn't an AI? His answer was stark and unequivocal: “No.”

While he noted subtle cues – my typing sounds, a consistent shadow, reflections in my glasses, the natural way I looked down to take notes – he stressed the inherent limitations. “You’re in New York. I’m in Berkeley, California,” he pointed out. “We’re on a video call. The reality is that you could be faking this.” Without pre-arranged verification steps, there was nothing I could do to make him 100% certain of my identity. “No,” he reiterated. “It’s over.”

Samuel Woolley, chair of disinformation studies at the University of Pittsburgh, echoed this sentiment. “For the average person, and even for people who are savvy to technological manipulation, it is very difficult to verify that someone is real.” In his eyes, I could just be another robot.

The Liar's Dividend: A Double-Edged Sword of Distrust

This pervasive digital skepticism has a name: the “liar’s dividend.” It describes a world where proving something is real is costly and complex, but casting doubt is cheap and effortless. This phenomenon allows individuals, even those in power, to dismiss genuine content as fake, using the specter of AI as a convenient shield.

However, it’s a double-edged sword. The very politicians who, in Woolley’s words, “pushed for this lack of moderation” are now suffering the consequences of a global atmosphere of distrust that boomerangs back to bite them. Netanyahu’s struggle is a prime example; his team’s use of sophisticated camera work with a narrow depth of field (sharp foreground, blurry background) ironically made his videos resemble many AI-generated productions, fueling suspicion.

The Ancient Solution: Codewords in a Digital Age

In this digital maelstrom, what’s the most effective defense? The world’s leading experts have arrived at a solution so simple, your grandparents might have conceived it: codewords. It's a low-tech answer to a high-tech problem. Families, business partners, and anyone discussing critical matters should agree on a secret phrase, unknown to anyone else, to verify identities in an emergency. Think of it as a low-tech, human-centric form of multi-factor authentication.

Professor Farid and his wife, for instance, have a codeword for unusual calls. “We haven’t needed to use it yet,” he shares, “but sometimes I ask just to test her to make sure we don’t forget it.”
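Framed in software terms, a family codeword is simply a shared secret, and the experts' advice mirrors a classic challenge-response check. The sketch below is a hypothetical illustration (not anything described in the article): rather than saying the codeword aloud, where a scammer on the line could record and reuse it, each side proves knowledge of the secret without revealing it, using Python's standard `hmac` and `secrets` modules.

```python
import hmac
import hashlib
import secrets

def make_challenge() -> str:
    """The verifying party generates a fresh random challenge per call."""
    return secrets.token_hex(16)

def respond(codeword: str, challenge: str) -> str:
    """The person being verified mixes the shared codeword with the challenge."""
    return hmac.new(codeword.encode(), challenge.encode(), hashlib.sha256).hexdigest()

def verify(codeword: str, challenge: str, response: str) -> bool:
    """The verifier recomputes the response with their own copy of the codeword."""
    expected = respond(codeword, challenge)
    return hmac.compare_digest(expected, response)

# Both parties agreed on the codeword in advance (a hypothetical example).
challenge = make_challenge()
answer = respond("gold-sweater", challenge)
assert verify("gold-sweater", challenge, answer)       # the real relative passes
assert not verify("wrong-guess", challenge, answer)    # an impostor fails
```

Humans on a phone call obviously can't compute HMACs, which is why the experts' spoken-codeword version trades this replay resistance for usability; the sketch just shows what the "secret phrase as authentication factor" idea looks like when made precise.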

The Escalating Threat of Deepfake Scams

This isn't mere hypothetical concern. Deepfake scams, where AI voice or video cloning tricks victims into believing they’re speaking to someone else, are a rapidly growing criminal tactic. The American Association of Retired Persons (AARP) reports an astonishing 20-fold rise in AI-enabled scams between 2023 and 2025. These attacks target everyone from ordinary individuals to major corporations. The British engineering firm Arup reportedly lost a staggering $25 million (£18.7 million) when attackers used a deepfaked version of the company’s CFO to manipulate an employee.

The problem is intensifying globally. From “super clumsy” deepfakes during the early days of the Ukraine conflict to the “bizarro land” of fake content seen in Venezuela and Iran, the sophistication and sheer volume of AI-generated media are skyrocketing, making genuine content increasingly difficult to discern.

My Aunt’s Lingering Doubt: A Human Connection Tested

Back on the phone with my aunt Eleanor, the line between reality and artifice still felt thin. She had heard about the codeword advice and even had one with her husband and kids – but I wasn’t privy to it. “I’ve read a lot of stories like that, where they talk about voices being cloned from YouTube videos,” she confessed. “That concerns me. It’s terrifying.”

She tried to test me, reading jokes from Facebook to gauge my authentic reaction. My laughter helped, but her certainty remained elusive. When I changed my mind about the color of a sweater she was knitting me, opting for black instead of the gold we’d discussed, she flagged it as a suspicious robotic trait. “I expected you to say that you wanted another gold sweater.” Even after I confessed I hadn't used AI, the experience clearly distressed her. “I can’t be sure,” she admitted as we ended the call, before adding, “But I love you, kid.”

Her parting words underscored a stark new reality: in the age of AI, trust is a luxury, and proving your own humanity has become an unprecedented challenge. The digital floodgates are open, and the only certainty is that navigating this new landscape demands vigilance, skepticism, and perhaps, a secret phrase or two.
