The Biden Deepfake Robocall Is Only the Beginning

A deepfake robocall impersonating President Biden is rocking the New Hampshire primary, and no one knows who’s behind it.

Over the weekend, voters in New Hampshire and New England received a call from a voice that sounded eerily like Biden’s, invoking the president’s signature phrase “malarkey” and discouraging them from voting in the Tuesday primary.

“We know the value of voting Democratic,” the robocall says. “It’s important that you save your vote for the November election. We’ll need your help electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.” Biden is not listed on the ballot because the Democratic Party moved its first official primary this year from New Hampshire to South Carolina. There is, however, a state-level campaign encouraging voters to write in his name.

The call, first reported by NBC News, was spoofed to appear as if it came from Kathy Sullivan, a former New Hampshire Democratic Party chair who leads a super PAC urging voters to write in Biden’s name on their ballots. Everyone from Sullivan and the Biden campaign to the president’s rivals, including Trump and Dean Phillips, has denied any involvement in placing the calls.

In the year since ChatGPT launched, generative AI has quickly become a focus of concern for lawmakers worried about its potential to displace jobs and spread disinformation. Now the deepfake robocall imitating Biden has sharpened those fears and highlighted the regulatory gaps around AI accountability and transparency.

“The political deepfake moment is here,” said Robert Weissman, president of the progressive advocacy group Public Citizen, in a statement Monday. “Policymakers must rush to put in place protections, or we’re facing electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion and perpetuate fraud.”

In May 2023, images appearing to show an attack on the Pentagon went viral on the social media site X. The fake images had real impact: a brief dip in the stock market. As news of the conflict between Israel and Hamas flooded social platforms after October 7, fake images and videos of dead children and destroyed homes caused real outrage as experts and journalists scrambled to verify their authenticity.

In July 2023, several tech companies, including Microsoft, OpenAI, and Google, made voluntary commitments to watermark AI-manipulated video and photos, so that ordinary consumers could distinguish them from organic content. In October, the Biden administration issued an executive order with further guidance for companies developing AI technologies. Over the past few months, Senate majority leader Chuck Schumer has led a series of AI Insight Forums, inviting tech leaders like Meta CEO Mark Zuckerberg and Elon Musk to meet with lawmakers and discuss regulation.

But only a few bills have come out of these discussions. Senator Amy Klobuchar and Representative Yvette Clarke have been behind a handful of bills that would outright ban the use of AI to create deceptive content like the Biden robocall. They’ve also sponsored a separate bill requiring political ads to disclose when they use AI.

“In American politics, disinformation has unfortunately become commonplace. But now, misinformation and disinformation coupled with new generative AI tools are creating an unprecedented threat that we are ill-prepared for,” Clarke said in a statement to WIRED on Monday. “This is a problem both Democrats and Republicans should be able to address together. Congress needs to get a handle on this before things get out of hand.”

Advocacy groups like Public Citizen have petitioned the Federal Election Commission to issue new rules requiring political ad disclosures similar to what Clarke and Klobuchar have proposed, but the agency has yet to make a formal decision. Earlier this month, FEC chair Sean Cooksey, a Republican, told The Washington Post that the commission plans to decide by early summer. By then, the GOP will likely have chosen Trump as its nominee, and the general election will be well underway.

“Whether you are a Democrat or a Republican, no one wants to see fake ads or robocalls where you cannot even tell if it’s your candidate or not,” Klobuchar told WIRED on Monday. “We need federal action to ensure this powerful technology is not used to deceive voters and spread disinformation.”

Audio fakes are especially pernicious because, unlike doctored photos or videos, they offer none of the visual cues that might help someone identify that they’ve been altered, says Hany Farid, a professor at the UC Berkeley School of Information. “With robocalls, the audio quality on a phone is not great, and so it is easier to trick people with fake audio.”

Farid also worries that phone calls, unlike fake posts on social media, are more likely to reach an older demographic that’s already susceptible to scams.

“One might argue that many people figured out that this audio was fake, but the issue in a state primary is that even a few thousand votes could have an impact on the results,” he says. “Of course, this type of election interference could be carried out without deepfakes, but the concern is that AI-powered deepfakes make these campaigns more effective and easier to carry out.”

Concrete regulation has largely lagged behind, even as deepfakes like the one used by the robocall become cheaper and easier to produce, says Sam Gregory, program director at Witness, a nonprofit that helps people use technology to promote human rights. “It doesn’t sound like a robot anymore,” he says.

“Folks in this area have really wrestled with how you mark audio to show that its provenance is synthetic,” he says. “For example, you can oblige people to put a disclaimer at the start of a piece of audio that says it was made with AI. If you're a bad actor or someone who is doing a deceptive robocall, you obviously don't do that.”

Even if a piece of audio content is watermarked, the mark may be evident to a machine but not necessarily to a regular person, says Claire Leibowicz, head of media integrity at the Partnership on AI. And watermarking still relies on the goodwill of the platforms used to generate the deepfake audio. “We haven’t figured out what it means to have these tools be open source for those who want to break the law,” she adds.
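
To make that distinction concrete, here is a minimal sketch of the kind of machine-readable-but-inaudible mark Leibowicz describes, assuming a toy spread-spectrum scheme rather than any real product’s watermark: the generator mixes a faint, key-dependent pattern into the waveform, and a detector that knows the key recovers the payload by correlation, while a listener hears ordinary speech. Every name and parameter below is illustrative.

```python
# Toy spread-spectrum audio watermark: machine-detectable, hard to hear.
# Illustrative only -- real provenance schemes are far more robust.
import numpy as np

SAMPLE_RATE = 16_000   # samples per second (assumed)
CHIP_LEN = 4_000       # samples that carry one watermark bit
AMPLITUDE = 0.005      # keep the mark well below the speech level

def carrier(key: int, n_bits: int) -> np.ndarray:
    """Pseudorandom +/-1 sequence shared by embedder and detector."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=n_bits * CHIP_LEN)

def embed(audio: np.ndarray, bits: list[int], key: int) -> np.ndarray:
    """Add a faint, key-dependent pattern encoding `bits` to the audio."""
    signs = np.repeat([1.0 if b else -1.0 for b in bits], CHIP_LEN)
    marked = audio.copy()
    marked[: len(signs)] += AMPLITUDE * signs * carrier(key, len(bits))
    return marked

def detect(audio: np.ndarray, n_bits: int, key: int) -> list[int]:
    """Correlate each window with the carrier; the sign gives the bit."""
    c = carrier(key, n_bits)
    return [
        1 if np.dot(audio[i * CHIP_LEN : (i + 1) * CHIP_LEN],
                    c[i * CHIP_LEN : (i + 1) * CHIP_LEN]) > 0 else 0
        for i in range(n_bits)
    ]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech = rng.normal(0.0, 0.1, SAMPLE_RATE * 3)  # stand-in for real speech
    payload = [1, 0, 1, 1, 0, 0, 1, 0]              # e.g., a "synthetic" flag
    marked = embed(speech, payload, key=42)
    print(detect(marked, n_bits=8, key=42))  # should recover the payload
    print(detect(marked, n_bits=8, key=7))   # wrong key: bits are coin flips
```

The catch the sources describe is visible even in this toy: only a detector holding the key sees anything at all, and a generator that simply skips the embedding step leaves nothing to find.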

For public figures like Joe Biden, it’s easy to gather the audio clips necessary to create realistic-sounding fake audio.

“The terrifying fact is that Biden’s voice is out there, everywhere, all the time, and cloning it with an off-the-shelf voice cloning tool is easy enough to do in seconds and for free, by even the least technically minded person,” says Ben Colman, the CEO of Reality Defender, a deepfake detection company. “This will undoubtedly be the first of many US election-related deepfakes, as telecoms, platforms, and other content distribution areas are not even legally required to check or block this kind of content.”

Though Colman says his company’s detection models were able to quickly identify the audio as fake, most regular people don’t have access to these kinds of tools—and might not think to use them anyway.

“If you saw a video of Biden saying something like [the robocall], you might, if you’re a literate video consumer, go and try to find and see if there’s another source,” says Gregory. “That’s completely different from receiving something in a phone call or on a messaging app.”

Even without regulation regarding the involvement of generative AI, this kind of voting interference would likely still be illegal. Last year, Douglass Mackey, who went by Ricky Vaughn online, was sentenced to seven months in prison for attempting to use his platform to mislead voters during the 2016 election. Prosecutors said that Mackey, along with other right-wing influencers, attempted to suppress the votes of non-white people, using the Clinton campaign hashtag #ImWithHer to promote ads encouraging people to vote by text (you can’t).

The New Hampshire Attorney General’s Office has announced that it is investigating the robocalls, saying in a statement that the “messages appear to be an unlawful attempt to disrupt” the state’s primary process. Biden spokesperson Julie Chavez Rodriguez told WIRED that the campaign was “actively discussing additional action to take place immediately.”

“Spreading disinformation to suppress voting and deliberately undermine free and fair elections will not stand, and fighting back against any attempt to undermine our democracy will continue to be a top priority for this campaign,” Rodriguez said.

It’s still unclear who is behind the calls. Civox, a company that creates AI voicebots for political campaigns, told WIRED that it had nothing to do with the call. The company helped create a bot for Pennsylvania Democrat Shamaine Daniels’ campaign to contact voters and answer questions over the phone. Civox’s CEO, Ilya Mouzykantskii, said that the company bars its clients from using unauthorized voices or creating deceitful calls like the one circulating in New Hampshire.

“Campaigning using artificial intelligence is going to be the defining story of the 2024 election cycle,” Mouzykantskii told WIRED on Monday. “Things are going to get a lot weirder as we get closer to November. This is the first story, and there will be many, many more.”

Kate Knibbs contributed reporting.
