Your CFO is on the video call asking you to transfer $25 million. He gives you all the bank info. Pretty routine. You got it.
But, What the — ? It wasn’t the CFO? How can that be? You saw him with your own eyes and heard that unmistakable voice you always half-listen for. Even the other colleagues on the screen weren’t really them. And yes, you already made the transaction.
Ring a bell? That’s because it actually happened last year to an employee at the global engineering firm Arup, which lost $25 million to criminals. In other incidents, individuals were deceived when “Elon Musk” and “Goldman Sachs executives” took to social media touting amazing investment opportunities. And an agency leader at WPP, then the largest advertising company in the world, was nearly fooled into giving money during a Teams meeting with a deepfake they believed was the company’s CEO, Mark Read.
Experts have been warning for years that deepfake AI technology was evolving to a perilous stage, and now it has arrived. Used maliciously, these digital replicas are permeating the culture from Hollywood to the White House. Although most businesses stay quiet about deepfake attacks to avoid alarming clients, insiders say they’re happening with increasing frequency. Deloitte forecasts that fraud losses from such incidents will reach $40 billion in the United States by 2027.
Obviously, we have a problem — and entrepreneurs love nothing more than finding something to solve. But this is no ordinary problem. You can’t sit and study it, because it moves as fast as you can, or even faster, always showing up in a new configuration in unexpected places.
The U.S. government has started to pass regulations on deepfakes, and the AI community is developing its own guardrails, including digital signatures and watermarks to identify their content. But scammers are not exactly known to stop at such roadblocks.
That’s why many people have pinned their hopes on “deepfake detection” — an emerging field that holds great promise. Ideally, these tools can suss out if something in the digital world (a voice, video, image, or piece of text) was generated by AI, and give everyone the power to protect themselves. But there is a hitch: In some ways, the tools just accelerate the problem. That’s because every time a new detector comes out, bad actors can potentially learn from it — using the detector to train their own nefarious tools, and making deepfakes even harder to spot.
So now the question becomes: Who is up for this challenge? This endless cat-and-mouse game, with impossibly high stakes? If anyone can lead the way, startups may have an advantage — because compared to big firms, they can focus exclusively on the problem and iterate faster, says Ankita Mittal, senior consultant of research at The Insight Partners, which has released a report on this new market and predicts explosive growth.
Here’s how a few of these founders are trying to stay ahead — and building an industry from the ground up to keep us all safe.
If deepfakes had an origin story, it might sound like this: Until the 1830s, information was physical. You could either tell someone something in person, or write it down on paper and send it, but that was it. Then the commercial telegraph arrived — and for the first time in human history, information could be zapped over long distances instantly. This revolutionized the world. But wire transfer fraud and other scams soon followed, often sent by fake versions of real people.
Western Union was one of the first telegraph companies — so it is perhaps appropriate, or at least ironic, that on the 18th floor of the old Western Union Building in lower Manhattan, you can find one of the earliest startups combatting deepfakes. It’s called Reality Defender, and the guys who founded it, including a former Goldman Sachs cybersecurity nut named Ben Colman, launched in early 2021, even before ChatGPT entered the scene. (The company originally set out to detect AI avatars, which he admits is “not as sexy.”)
Colman, who is CEO, feels confident that this battle can be won. He claims that his platform is 99% accurate in detecting real-time voice and video deepfakes. Most clients are banks and government agencies, though he won’t name any (cybersecurity types are tight-lipped like that). He initially targeted those industries because, he says, deepfakes pose a particularly acute risk to them — so they’re “willing to do things before they’re fully proven.” Reality Defender also works with firms like Accenture, IBM Ventures, and Booz Allen Ventures — “all partners, customers, or investors, and we power some of their own forensics tools.”
So that’s one kind of entrepreneur involved in this race. On Zoom, a few days after visiting Colman, I meet another: He is Hany Farid, a professor at the University of California, Berkeley, and cofounder of a detection startup called GetReal Security. Its client list, according to the CEO, includes John Deere and Visa. Farid is considered an OG of digital image forensics (he was part of a team that developed PhotoDNA to help fight online child sexual abuse material, for example). And to give me the full-on sense of the risk involved, he pulls an eerie sleight-of-tech: As he talks to me on Zoom, he is replaced by a new person — an Asian punk who looks 40 years younger, but who continues to speak with Farid’s voice. It’s a deepfake in real time.
Truth be told, Farid wasn’t originally sure if deepfake detection was a good business. “I was a little nervous that we wouldn’t be able to build something that actually worked,” he says. The thing is, deepfakes aren’t just one thing. They are produced in myriad ways, and their creators are always evolving and learning. One method, for example, involves using what’s called a “generative adversarial network” — in short, someone builds a deepfake generator, as well as a deepfake detector, and the two systems compete against each other so that the generator becomes smarter. A newer method makes better deepfakes by training a model to start with something called “noise” (imagine the visual version of static) and then sculpt the pixels into an image according to a text prompt.
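To make the adversarial idea concrete, here’s a minimal sketch of a generative adversarial setup: a toy generator and detector trained against each other, so every round of detection feedback makes the generator harder to catch. It assumes PyTorch and uses stand-in data; it illustrates the concept, not any company’s actual system.

```python
# A toy generator (G) and detector (D) trained against each other; each round of
# detection feedback nudges the generator toward fakes the detector can't catch.
# Assumes PyTorch; 16-number vectors stand in for real images.
import torch
import torch.nn as nn

def real_batch(n):
    # Stand-in for a batch of genuine samples (e.g., patches of real photos).
    return torch.randn(n, 16) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))  # generator
D = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the detector to separate real samples from generated ones.
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real_batch(64)), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the detector: the arms race in miniature.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```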
Because deepfakes are so sophisticated, neither Reality Defender nor GetReal can ever definitively say that something is “real” or “fake.” Instead, they come up with probabilities and descriptions like strong, medium, weak, high, low, and most likely. Critics say those labels can be confusing, but supporters argue they put clients on alert to ask more security questions.
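In practice, a raw probability gets translated into the kind of banded label a client sees. A minimal sketch of that step, with made-up thresholds rather than either company’s real cutoffs:

```python
# Map a detector's fake-probability to a descriptive band; the thresholds here are
# illustrative, not either company's actual cutoffs.
def label_score(p_fake: float) -> str:
    if p_fake >= 0.90:
        return "most likely fake (strong evidence)"
    if p_fake >= 0.60:
        return "possibly fake (medium evidence)"
    if p_fake >= 0.40:
        return "inconclusive (weak evidence)"
    return "no manipulation detected (low evidence of fakery)"

print(label_score(0.97))  # -> most likely fake (strong evidence)
```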
To keep up with the scammers, both companies run at an insanely fast pace — putting out updates every few weeks. Colman spends a lot of energy recruiting engineers and researchers, who make up 80% of his team. Lately, he’s been pulling hires straight out of Ph.D. programs. He also has them do ongoing research to keep the company one step ahead.
Both Reality Defender and GetReal maintain pipelines coursing with tech that’s deployed, in development, and ready to sunset. To do that, they’re organized around different teams that go back and forth to continually test their models. Farid, for example, has a “red team” that attacks and a “blue team” that defends. Describing working with his head of research on a new product, he says, “We have this very rapid cycle where she breaks, I fix, she breaks — and then you see the fragility of the system. You do that not once, but you do it 20 times. And now you’re onto something.”
Additionally, they layer in non-AI sleuthing techniques to make their tools more accurate and harder to dodge. GetReal, for example, uses AI to search images and videos for what are known as “artifacts” (telltale flaws that reveal something was made by generative AI), and combines that with other digital forensic methods: analyzing inconsistent lighting and image compression, checking whether speech is properly synched to someone’s moving lips, and looking for the kinds of details that are hard to fake (like, say, whether video of a CEO contains the acoustic reverberations specific to his office).
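One classic non-AI check of the kind described above is compression analysis: re-save a JPEG at a known quality and see how unevenly different regions change, since edited or synthesized areas often recompress differently. A minimal sketch using Pillow, offered as an illustration of the general technique rather than GetReal’s method:

```python
# Re-save a JPEG at a known quality and measure how each region changes; areas that
# were edited or synthesized often recompress differently from the rest of the image.
# An illustration of compression-artifact analysis in general, not GetReal's pipeline.
import io
from PIL import Image, ImageChops

def compression_error_map(path, quality=90):
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)   # recompress at a known quality
    resaved = Image.open(buffer).convert("RGB")
    # Bright regions in this difference image recompressed unusually and merit a closer look.
    return ImageChops.difference(original, resaved)

# compression_error_map("press_photo.jpg").show()  # hypothetical file name
```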
“The endgame of my world is not elimination of threats; it’s mitigation of threats,” Farid says. “I can defeat almost all of our systems. But it’s not easy. The average knucklehead on the internet, they’re going to have trouble removing an artifact even if I tell ’em it’s there. A sophisticated actor, sure. They’ll figure it out. But to remove all 20 of the artifacts? At least I’m gonna slow you down.”
All of these strategies will fail if they don’t have one thing: the right data. AI, as they say, is only as good as the data it’s trained on. And that’s a huge hurdle for detection startups. Not only do you have to find fakes made by all the different models and customized by various AI companies (detecting one won’t necessarily work on another), but you also have to compare them against images, videos, and audio of real people, places, and things. Sure, reality is all around us, but so is AI, including in our phone cameras. “Historically, detectors don’t work very well once you go to real world data,” says Phil Swatton at The Alan Turing Institute, the United Kingdom’s national institute for AI and data science. And high-quality, labeled datasets for deepfake detection remain scarce, notes Mittal, the senior consultant from The Insight Partners.
Colman has tackled this problem, in part, by using older datasets to capture the “real” side — say from 2018, before generative AI. For the fake data, he mostly generates it in house. He has also focused on developing partnerships with the companies whose tools are used to make deepfakes — because, of course, not all of them are meant to be harmful. So far, his partners include ElevenLabs (which, for example, translates popular podcaster and neuroscientist Andrew Huberman’s voice into Hindi and Spanish, so that he can reach wider audiences) along with PlayAI and Respeecher. These companies have mountains of real-world data — and they like sharing it, because they look good by showing that they’re building guardrails and allowing Reality Defender to detect their tools. In addition, this grants Reality Defender early access to the partners’ new models, which gives it a jump start in updating its platform.
Colman’s team has also gotten creative. At one point, to gather fresh voice data, they partnered with a rideshare company — offering their drivers extra income by recording 60 seconds of audio when they weren’t busy. “It didn’t work,” Colman admits. “A ridesharing car is not a good place to record crystal-clear audio. But it gave us an understanding of artificial sounds that don’t indicate fraud. It also helped us develop some novel approaches to remove background noise, because one trick that a fraudster will do is use an AI-generated voice, but then try to create all kinds of noise, so that maybe it won’t be as detectable.”
Startups like this must also grapple with another real-world problem: How do they keep their software from getting out into the public, where deepfakers can learn from it? To start, Reality Defender’s clients set a high bar for who within their organizations can access the software. But the company has also started to create some novel hardware.
To show me, Colman holds up a laptop. “We’re now able to run all of our magic locally, without any connection to the cloud on this,” he says. The loaded laptop, only available to high-touch clients, “helps protect our IP, so people don’t use it to try to prove they can bypass it.”
Some founders are taking a completely different path: Instead of trying to detect fake people, they’re working to authenticate real ones.
That’s Joshua McKenty’s plan. He’s a serial entrepreneur who cofounded OpenStack and worked at NASA as Chief Cloud Architect, and this March launched a company called Polyguard. “We said, ‘Look, we’re not going to focus on detection, because it’s only accelerating the arms race. We’re going to focus on authenticity,'” he explains. “I can’t say if something is fake, but I can tell you if it’s real.”
To execute that, McKenty built a platform to conduct a literal reality check on the person you’re talking to by phone or video. Here’s how it works: A company can use Polyguard’s mobile app, or integrate it into their own app and call center. When they want to create a secure call or meeting, they use that system. To join, participants must prove their identities via the app on their mobile phone (where they’re verified using documents like Real ID, e-passports, and face scanning). Polyguard says this is ideal for remote interviews, board meetings, or any other sensitive communication where identity is critical.
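The gate itself is simple to picture. Here’s a minimal sketch of the authentication-first idea, with hypothetical names rather than Polyguard’s actual API: a participant is admitted to a sensitive call only after their identity documents and a live face scan check out.

```python
# All names here are hypothetical, not Polyguard's API: a participant joins a sensitive
# call only after their identity document check and live face scan have both passed.
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    document_verified: bool  # e.g., Real ID or e-passport checked in the mobile app
    face_match: bool         # live face scan matched the verified document

def admit_to_secure_call(participants):
    admitted = [p for p in participants if p.document_verified and p.face_match]
    rejected = [p for p in participants if not (p.document_verified and p.face_match)]
    return admitted, rejected

people = [Participant("CFO", True, True), Participant("Unknown caller", True, False)]
print(admit_to_secure_call(people))  # only the fully verified participant is admitted
```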
In some cases, McKenty’s solution can be used with tools like Reality Defender. “Companies might say ‘We’re so big, we need both,'” he explains. His team is only five or six people at this point (whereas Reality Defender and GetReal both have about 50 employees), but he says his clients already include recruiters, who are interviewing candidates remotely only to discover that they’re deepfakes, law firms wanting to protect attorney-client privilege, and wealth managers. He’s also making the platform available to the public for people to establish secure lines with their attorney, accountant, or kid’s teacher.
This line of thinking is appealing, and it’s gaining approval from people who watch the industry. “I like the authentication approach; it’s much more straightforward,” says The Alan Turing Institute’s Swatton. “It’s focused not on detecting something going wrong, but certifying that it’s going right.” After all, even when detection probabilities sound good, any margin of error can be scary: A detector that catches 95% of fakes will still let a scam through 1 out of 20 times.
That error rate is what alarmed Christian Perry, another entrepreneur who’s entered the deepfake race. He saw it in the early detectors for text, where students and workers were being accused of using AI when they weren’t. Authorship deceit doesn’t pose the level of threat that deepfakes do, but text detectors are considered part of the scam-fighting family.
Perry and his cofounder Devan Leos launched a startup called Undetectable in 2023, which now has over 19 million users and a team of 76. It began by building a sophisticated text detector, but then pivoted into image detection, and is now close to launching audio and video detectors as well. “You can use a lot of the same kind of methodology and skill sets that you pick up in text detection,” says Perry. “But deepfake detection is a much more complicated problem.”
Finally, instead of trying to prevent deepfakes, some entrepreneurs are seeing the opportunity in cleaning up their mess.
Luke and Rebekah Arrigoni stumbled upon this niche accidentally, by trying to solve a different terrible problem — revenge porn. It started one night a few years ago, when the married couple were watching HBO’s Euphoria. In the show, a character’s nonconsensual intimate image was shared online. “I guess out of hubris,” Luke says, “our immediate response was like, We could fix this.”
At the time, the Arrigonis were both working on facial recognition technologies. So as a side project in 2022, they put together a system specifically designed to scour the web for revenge porn — then found some victims to test it with. They’d locate the images or videos, then send takedown notices to the websites’ hosts. It worked. But valuable as this was, they could see it wasn’t a viable business. Clients were just too hard to find.
Then, in 2023, another path appeared. As the actors’ and writers’ strikes broke out, with AI being a central issue, Luke checked in with former colleagues at major talent agencies. He’d previously worked at Creative Artists Agency as a data scientist, and he was now wondering if his revenge-porn tool might be useful for their clients — though in a different way. It could also be used to identify celebrity deepfakes — to find, for example, when an actor or singer is being cloned to promote someone else’s product. Along with feeling out other talent reps like William Morris Endeavor, he went to law and entertainment management firms. They were interested. So in 2023, Luke quit consulting to work with Rebekah and a third cofounder, Hirak Chhatbar, on building out their side hustle, Loti.
“We saw the desire for a product that fit this little spot, and then we listened to key industry partners early on to build all of the features that people really wanted, like impersonation,” Luke says. “Now it’s one of our most preferred features. Even if they deliberately typo the celebrity’s name or put a fake blue checkbox on the profile photo, we can detect all of those things.”
Using Loti is simple. A new client submits three real images and eight seconds of their voice; musicians also provide 15 seconds of singing a cappella. The Loti team puts that data into their system, and then scans the internet for that same face and voice. Some celebs, like Scarlett Johansson, Taylor Swift, and Brad Pitt, have been publicly targeted by deepfakes, and Loti is ready to handle that. But Luke says most of the need right now involves low-tech stuff like impersonation and false endorsements. A recently passed law called the Take It Down Act, which criminalizes the publication of nonconsensual intimate images (including deepfakes) and requires online platforms to remove them when reported, helps this process along: Now it’s much easier to get the unauthorized content off the web.
Loti doesn’t have to deal with probabilities. It doesn’t have to constantly iterate or get huge datasets. It doesn’t have to say “real” or “fake” (although it can). It just has to ask, “Is this you?”
“The thesis was that the deepfake problem would be solved with deepfake detectors. And our thesis is that it will be solved with face recognition,” says Luke, who now has a team of around 50 and a consumer product coming out. “It’s this idea of, How do I show up on the internet? What things are said of me, or how am I being portrayed? I think that’s its own business, and I’m really excited to be at it.”
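Under the hood, that “Is this you?” question typically comes down to comparing the embedding of a face found online against the references a client enrolled. A minimal sketch of the matching step, assuming an off-the-shelf embedding model and an illustrative threshold (not Loti’s actual pipeline):

```python
# Compare the embedding of a face found online against a client's enrolled references;
# the embedding model is assumed (any face-embedding network would do) and the
# threshold is illustrative, not Loti's actual pipeline.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_this_you(found_embedding, enrolled_embeddings, threshold=0.8):
    # "Is this you?": flag the content if any enrolled reference is close enough.
    return any(cosine_similarity(found_embedding, ref) >= threshold
               for ref in enrolled_embeddings)
```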
Will it all pay off?
All tech aside, do these anti-deepfake solutions make for strong businesses? Many of the startups in this space are early-stage and venture-backed, so it’s not yet clear how sustainable or profitable they can be. They’re also “heavily investing in research and development to stay ahead of rapidly evolving generative AI threats,” says The Insight Partners’ Mittal. That makes you wonder about the economics of running a business that will likely always have to do that.
Then again, the market for these startups’ services is just beginning. Deepfakes will impact more than just banks, government intelligence, and celebrities — and as more industries awaken to that, they may want solutions fast. The question will be: Do these startups have first-mover advantage, or will they have just laid the expensive groundwork for newer competitors to run with?
Mittal, for her part, is optimistic. She sees significant untapped opportunities for growth beyond preventing scams, such as helping professors flag AI-generated student essays, impersonated class attendance, or manipulated academic records. Many of the current anti-deepfake companies, she predicts, will get acquired by big tech and cybersecurity firms.
Whether or not that’s Reality Defender’s future, Colman believes that platforms like his will become integral to a larger guardrail ecosystem. He compares it to antivirus software: Decades ago, you had to buy an antivirus program and manually scan your files. Now, these scans are just built into your email platforms, running automatically. “We’re following the exact same growth story,” he says. “The only problem is the problem is moving even quicker.”
No doubt, the need will become glaring at some point soon. Farid at GetReal imagines a nightmare like someone creating a fake earnings call for a Fortune 500 company that goes viral.
If GetReal’s CEO, Matthew Moynahan, is right, then 2026 will be the year that gets the flywheel spinning for all these deepfake-fighting businesses. “There’s two things that drive sales in a really aggressive way: a clear and present danger, and compliance and regulation,” he says. “The market doesn’t have either right now. Everybody’s interested, but not everybody’s troubled.” That will likely change with increased regulations that push adoption, and with deepfakes popping up in places they shouldn’t be.
“Executives will connect the dots,” Moynahan predicts. “And they’ll start saying, ‘This isn’t funny anymore.'”