Foreign Influence Campaigns Don’t Know How to Use AI Yet Either

Today, OpenAI released its first threat report, detailing how actors from Russia, Iran, China, and Israel have attempted to use its technology for foreign influence operations across the globe. The report named five different networks that OpenAI identified and shut down between 2023 and 2024. In the report, OpenAI reveals that established networks like Russia's Doppelganger and China's Spamouflage are experimenting with how to use generative AI to automate their operations. They're also not very good at it.

And while it's a modest relief that these actors haven't mastered generative AI well enough to become unstoppable forces for disinformation, it's clear that they're experimenting, and that alone should be worrying.

The OpenAI report reveals that influence campaigns are running up against the limits of generative AI, which doesn't reliably produce good copy or code. It struggles with idioms, which make language sound more reliably human and personal, and sometimes with basic grammar (so much so that OpenAI named one network "Bad Grammar"). The Bad Grammar network was so sloppy that it once revealed its true identity, posting: "As an AI language model, I am here to assist and provide the desired comment."

One network used ChatGPT to debug code that would allow it to automate posts on Telegram, a chat app that has long been a favorite of extremists and influence networks. This worked well sometimes, but other times it led to the same account posting as two separate characters, giving away the game.

In other cases, ChatGPT was used to create code and content for websites and social media. Spamouflage, for instance, used ChatGPT to debug code for a WordPress website that published stories attacking members of the Chinese diaspora who were critical of the country's government.

According to the report, the AI-generated content didn’t manage to break out from the influence networks themselves into the mainstream, even when shared on widely used platforms like X, Facebook, and Instagram. This was the case for campaigns run by an Israeli company seemingly working on a for-hire basis and posting content that ranged from anti-Qatar to anti-BJP, the Hindu-nationalist party currently in control of the Indian government.

Taken together, the report paints a picture of several relatively ineffective campaigns pushing crude propaganda, seemingly allaying fears that many experts have had about the potential for this new technology to spread mis- and disinformation, particularly during a crucial election year.

But influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the platforms' own employees do. While these initial campaigns may be small and ineffective, they appear to still be in the experimental stage, says Jessica Walton, a researcher with the CyberPeace Institute who has studied Doppelganger's use of generative AI.

In her research, she found that the network used real-seeming Facebook profiles to post articles, often around divisive political topics. "The actual articles are written by generative AI," she says. "And mostly what they're trying to do is see what will fly, what Meta's algorithms will and won't be able to catch."

In other words, expect them only to get better from here.
