Worried About Deepfakes? Don’t Forget “Cheapfakes”

Over the summer, a political action committee (PAC) supporting Florida governor and presidential hopeful Ron DeSantis uploaded a video of former president Donald Trump on YouTube in which he appeared to attack Iowa governor Kim Reynolds. It wasn’t exactly real—though the text was taken from one of Trump’s tweets, the voice used in the ad was AI-generated. The video was subsequently removed, but it has spurred questions about the role generative AI will play in the 2024 elections in the US and around the world.

While platforms and politicians are focusing on deepfakes—AI-generated content that might depict a real person saying something they didn’t say, or an entirely fake person—experts told WIRED there's a lot more at stake. Long before generative AI became widely available, people were making “cheapfakes” or “shallowfakes.” These can be as simple as mislabeling images, videos, or audio clips to imply they’re from a different time or place, or editing a piece of media to make it look like something happened that didn’t. Such content can still have a profound impact if it’s allowed to circulate on social platforms. As more than 50 countries prepare for national elections in 2024, mis- and disinformation remain powerful tools for shaping public perception and political narratives.

Meta and YouTube have both responded in recent months with new policies around the use of generative AI in political ads like the one supporting DeSantis. Last month, Meta announced that it would require political advertisers to disclose whether an ad uses generative AI, joining Google, which owns YouTube, in responding to concerns that newly available tools could be used to mislead voters. In a blog post on how it is approaching the 2024 elections, Meta says it will require advertisers “to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases.”

“The scope is only political ads, which is a tiny part of the political ecology where people are increasingly using AI-generated media,” says Sam Gregory, program director at the nonprofit Witness, which helps people use technology to promote human rights. “It's not clear that it covers the broad range of shallowfakes or cheapfake approaches that are already being deployed both in political ads, but also, of course, in much broader ways that people use in political context.”

And not all misleading political content ends up in advertisements. In 2020, for instance, a video went viral that made it appear as if Representative Nancy Pelosi was slurring her speech. The video itself wasn’t fake, but it had been slowed down to make Pelosi appear drunk. Though Twitter, TikTok, and YouTube removed the video, it remained live on Facebook with a label noting it was “partly false.” The clip wasn’t an ad, but it clearly targeted a political figure.

Earlier this year, Meta’s Oversight Board took on a case reviewing a doctored video of President Joe Biden, which had not been generated or edited using AI and which the company left up on the platform. The board will use the case to further review the company’s “manipulated media policy,” which stipulates that videos will be removed if “a subject … said words that they did not say” and the video is “the product of artificial intelligence or machine learning.” Content like the Pelosi and Biden videos doesn’t clearly violate this policy, and manipulated audio, which is particularly problematic in many non-English contexts, is almost entirely left out.


“Political ads are deliberately designed to shape your emotions and influence you. So, the culture of political ads is often to do things that stretch the dimensions of how someone said something, cut a quote that's placed out of context,” says Gregory. “That is essentially, in some ways, like a cheap fake or shallow fake.”

Meta did not respond to a request for comment about how it will be policing manipulated content that falls outside the scope of political advertisements, or how it plans to proactively detect AI usage in political ads.

But companies are only now beginning to address how to handle AI-generated content from regular users. YouTube recently introduced a more robust policy requiring labels on user-generated videos that use generative AI. Google spokesperson Michael Aciman told WIRED that in addition to adding “a label to the description panel of a video indicating that some of the content was altered or synthetic,” the company will include a “more prominent label” for “content about sensitive topics, such as elections.” Aciman also noted that “cheapfakes” and other manipulated media may still be removed if they violate the platform’s other policies around, say, misinformation or hate speech.

“We use a combination of automated systems and human reviewers to enforce our policies at scale,” Aciman told WIRED. “This includes a dedicated team of a thousand people working around the clock and across the globe that monitor our advertising network and help enforce our policies.”

But social platforms have already failed to moderate content effectively in many of the countries that will host national elections next year, points out Hany Farid, a professor at the UC Berkeley School of Information. “I would like for them to explain how they're going to find this content,” he says. “It's one thing to say we have a policy against this, but how are you going to enforce it? Because there is no evidence for the past 20 years that these massive platforms have the ability to do this, let alone in the US, but outside the US.”

Both Meta and YouTube require political advertisers to register with the company, including additional information such as who is purchasing the ad and where they’re based. But these disclosures are largely self-reported, meaning some ads can slip through the cracks. In September, WIRED reported that the group PragerU Kids, an extension of the right-wing group PragerU, had been running ads that clearly fell within Meta’s definition of “political or social issues”—the exact kinds of ads for which the company requires additional transparency. But PragerU Kids had not registered as a political advertiser. (Meta removed the ads following WIRED’s reporting.)

Meta did not respond to a request for comment about what systems it has in place to ensure advertisers properly categorize their ads.

But Farid worries that the overemphasis on AI might distract from the larger issues around disinformation, misinformation, and the erosion of public trust in the information ecosystem, particularly as platforms scale back their teams focused on election integrity.

“If you think deceptive political ads are bad, well, then why do you care how they’re made?” asks Farid. “It’s not that it’s an AI-generated deceptive political ad, it’s that it’s a deceptive political ad period, full stop.”

