Some Fortune 500 companies have begun testing software that can spot a deepfake of a real person in a live video call, following a spate of scams involving fraudulent job seekers who take a signing bonus and run.
The detection technology comes courtesy of GetReal Labs, a new company founded by Hany Farid, a UC Berkeley professor and renowned authority on deepfakes and image and video manipulation.
GetReal Labs has developed a suite of tools for spotting images, audio, and video that are generated or manipulated either with artificial intelligence or manual methods. The company’s software can analyze the face in a video call and spot clues that may indicate it has been artificially generated and swapped onto the body of a real person.
“These aren’t hypothetical attacks, we’ve been hearing about it more and more,” Farid says. “In some cases, it seems they're trying to get intellectual property, infiltrating the company. In other cases, it seems purely financial, they just take the signing bonus.”
The FBI issued a warning in 2022 about deepfake job hunters who assume a real person’s identity during video calls. UK-based design and engineering firm Arup lost $25 million to a deepfake scammer posing as the company’s CFO. Romance scammers have also adopted the technology, swindling unsuspecting victims out of their savings.
Impersonating a real person on a live video feed is just one example of the kind of reality-melting trickery now possible thanks to AI. Large language models can convincingly mimic a real person in online chat, while short videos can be generated by tools like OpenAI’s Sora. Rapid AI advances have made deepfakery both more convincing and more accessible: free software makes it easy to hone face-swapping skills, and widely available tools can turn text prompts into realistic-looking photographs and videos.
But impersonating a person in a live video is a relatively new frontier. Creating this type of deepfake typically involves using a mix of machine learning and face-tracking algorithms to seamlessly stitch a fake face onto a real one, allowing an interloper to control what an illicit likeness appears to say and do on screen.
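The stitching step at the heart of that pipeline is, at its core, simple geometry. As a rough illustration (this is not GetReal Labs’ or any particular tool’s code), the Python sketch below solves for the affine transform that maps a few landmark points on a synthetic face onto the matching points tracked in a live frame; the landmark coordinates are invented, and real systems track dozens of points per frame with learned models.

import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src landmarks onto dst."""
    A = np.hstack([src, np.ones((len(src), 1))])   # one [x, y, 1] row per landmark
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                                # 2x3 matrix [M | t]

# Hypothetical pixel coordinates: eyes, nose, mouth on each face.
fake_face = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0], [50.0, 80.0]])
live_face = np.array([[132.0, 95.0], [171.0, 98.0], [152.0, 117.0], [150.0, 137.0]])

M = fit_affine(fake_face, live_face)
ones = np.ones((len(fake_face), 1))
warped = (M @ np.hstack([fake_face, ones]).T).T
print(np.round(warped, 1))   # synthetic landmarks re-projected onto the live frame

Repeating that fit on every frame, with the warp applied to the whole rendered face, is what lets the fake track an impostor’s head movements in real time.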
Farid gave WIRED a demo of GetReal Labs’ technology. When shown a photograph of a corporate boardroom, the software analyzes the metadata associated with the image for signs that it has been modified. Several major AI companies, including OpenAI, Google, and Meta, now add digital signatures to their AI-generated images, offering a reliable way to confirm that an image is synthetic. But not all tools add such stamps, open source image generators can be configured not to, and metadata can easily be stripped or altered.
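To make the idea concrete, here is a minimal sketch, in Python with the Pillow library, of the kind of first-pass metadata triage described above. It is illustrative only: the filename and the generator watchlist are hypothetical, and real provenance systems verify cryptographically signed Content Credentials (C2PA) rather than trusting editable EXIF strings.

from PIL import Image, ExifTags

# Hypothetical watchlist of strings tied to known generators.
SUSPECT_SOFTWARE = {"DALL-E", "Midjourney", "Stable Diffusion"}

def flag_metadata(path):
    findings = []
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if name == "Software" and any(s in str(value) for s in SUSPECT_SOFTWARE):
            findings.append(f"Software tag names a known generator: {value!r}")
    if not exif:
        findings.append("No EXIF metadata at all, consistent with stripping or re-encoding")
    return findings

print(flag_metadata("boardroom.jpg"))   # hypothetical image file

Note that both outcomes are merely clues: a suspicious tag can be forged, and missing metadata proves nothing on its own, which is why such checks form only the first layer of analysis.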
GetReal Labs also uses several AI models, trained to distinguish between real and fake images and video, to flag likely forgeries. Other tools, a mix of AI and traditional forensics, help a user scrutinize an image for visual and physical discrepancies, for example by highlighting shadows that point in different directions despite sharing a single light source, or that do not match the objects that cast them.
Lines drawn along different objects shown in perspective will also reveal whether they converge on a common vanishing point, as they would in a genuine image.
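Both checks reduce to the same test: a family of lines that should share a point of convergence (a vanishing point for parallel edges, the projected light source for object-to-shadow lines) either does or does not. A minimal numpy sketch, with invented pixel coordinates:

import numpy as np
from itertools import combinations

def intersect(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, via homogeneous cross products."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*p3, 1.0], [*p4, 1.0])
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Each pair: a point on an object and the matching point on its shadow (pixels).
lines = [(np.array([10.0, 80.0]), np.array([14.0, 60.0])),
         (np.array([40.0, 85.0]), np.array([36.0, 62.0])),
         (np.array([70.0, 82.0]), np.array([58.0, 59.0]))]

points = [intersect(a, b, c, d) for (a, b), (c, d) in combinations(lines, 2)]
print(np.round(points, 1), "spread:", np.round(np.std(points, axis=0), 1))
# A tight cluster suggests consistent lighting; a large spread flags a composite.

The same convergence test, run on lines traced along parallel edges of furniture or architecture, exposes pasted-in objects whose perspective does not match the rest of the scene.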
Other startups that promise to flag deepfakes rely heavily on AI, but Farid says manual forensic analysis will also be crucial to flagging media manipulation. “Anybody who tells you that the solution to this problem is to just train an AI model is either a fool or a liar,” he says.
The need for a reality check extends beyond Fortune 500 firms. Deepfakes and manipulated media are already a major problem in politics, an area where Farid hopes his company’s technology can do real good. The WIRED Elections Project is tracking deepfakes used to boost or trash political candidates in elections in India, Indonesia, South Africa, and elsewhere. In the United States, a fake Joe Biden robocall was deployed last January in an effort to dissuade people from turning out to vote in the New Hampshire presidential primary. Election-related “cheapfake” videos, edited in misleading ways, have gone viral of late, while a Russian disinformation unit has promoted an AI-manipulated clip disparaging Joe Biden.
Vincent Conitzer, a computer scientist at Carnegie Mellon University in Pittsburgh and coauthor of the book Moral AI, expects AI fakery to become more pervasive and more pernicious. That means, he says, there will be growing demand for tools designed to counter it.
“It is an arms race,” Conitzer says. “Even if you have something that right now is very effective at catching deepfakes, there's no guarantee that it will be effective at catching the next generation. A successful detector might even be used to train the next generation of deepfakes to evade that detector.”
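A toy sketch of that feedback loop, with entirely invented numbers: a one-dimensional “detector” picks a threshold between real and fake forensic scores, the forger nudges whatever gets flagged back across the boundary, and each retrained threshold catches less.

import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 200)   # hypothetical forensic score for real media
fake = rng.normal(2.0, 1.0, 200)   # fakes start out easy to separate

for generation in range(3):
    threshold = (real.mean() + fake.mean()) / 2    # retrain a midpoint classifier
    caught = (fake > threshold).mean()
    print(f"gen {generation}: threshold={threshold:.2f}, fakes caught={caught:.0%}")
    # The forger queries the detector and shifts flagged fakes toward "real".
    fake = fake - 0.8 * (fake > threshold)

In a real arms race the moves are gradient updates to generative models rather than scalar shifts, but the dynamic is the one Conitzer describes: every public detector doubles as a training signal for the next generation of fakes.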
GetReal Labs agrees it will be a constant battle to keep up with deepfakery. Ted Schlein, a cofounder of GetReal Labs and a veteran of the computer security industry, says it may not be long before everyone is confronted with some form of deepfake deception, as cybercrooks become more conversant with the technology and dream up ingenious new scams. He adds that manipulated media is a top concern for many chief security officers. “Disinformation is the new malware,” Schlein says.
And unlike malware, Farid notes, media manipulation has the potential to poison political discourse, which makes it an even more challenging problem. “I can reset my computer or buy a new one,” he says. “But the poisoning of the human mind is an existential threat to our democracy.”