Stories about AI-generated political content are like stories about people drunkenly setting off fireworks: There’s a good chance they’ll end in disaster. WIRED is tracking AI usage in political campaigns across the world, and so far examples include pornographic deepfakes and misinformation-spewing chatbots. It’s gotten to the point where the US Federal Communications Commission has proposed mandatory disclosures for AI use in television and radio ads.
Despite concerns, some US political campaigns are embracing generative AI tools. There’s a growing category of AI-generated political content flying under the radar this election cycle, created with tools from startups such as Denver-based BattlegroundAI, which uses generative AI to produce digital advertising copy at a rapid clip. “Hundreds of ads in minutes,” its website proclaims.
BattlegroundAI positions itself as a tool specifically for progressive campaigns—no MAGA types allowed. And it is moving fast: It launched a private beta only six weeks ago and a public beta just last week. Cofounder and CEO Maya Hutchinson is currently at the Democratic National Convention trying to attract more clients. So far, the company has around 60, she says. (The service has a freemium model, with an upgraded option for $19 a month.)
“It’s kind of like having an extra intern on your team,” Hutchinson, a marketer who got her start on the digital team for President Obama’s reelection campaign, tells WIRED. We’re sitting at a picnic table inside the McCormick Place Convention Center in Chicago, and she’s raising her voice to be heard over music blasting from a nearby speaker. “If you’re running ads on Facebook or Google, or developing YouTube scripts, we help you do that in a very structured fashion.”
BattlegroundAI’s interface asks users to select from five popular large language models—including OpenAI’s ChatGPT and Anthropic’s Claude—to generate answers; it then asks users to further customize their results by selecting a tone and “creativity level,” as well as how many variations on a single prompt they want. It also offers guidance on whom to target and helps craft messages geared toward specialized audiences for a variety of preselected issues, including infrastructure, women’s health, and public safety.
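To give a sense of what such a product does under the hood, here is a minimal sketch of how a tool like this might wrap a standard LLM API: the “creativity level” and variation count map naturally onto the temperature and n parameters of, for example, OpenAI’s chat completions endpoint. BattlegroundAI hasn’t published its implementation; the model name, prompt, and parameter mapping below are illustrative assumptions, not the company’s actual code.

```python
# Illustrative sketch only: BattlegroundAI has not published its implementation.
# Uses OpenAI's official Python client; the model name and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_ad_copy(issue: str, audience: str, tone: str,
                     creativity: float, variations: int) -> list[str]:
    """Draft several ad-copy variants; humans review them before anything ships."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",      # hypothetical model choice
        temperature=creativity,   # a "creativity level" slider maps to temperature
        n=variations,             # number of drafts returned for one prompt
        messages=[
            {"role": "system",
             "content": "You write short digital ad copy for political campaigns."},
            {"role": "user",
             "content": f"Write a {tone} Facebook ad about {issue} aimed at {audience}."},
        ],
    )
    return [choice.message.content for choice in response.choices]


# Example: five upbeat drafts on infrastructure for suburban voters
drafts = generate_ad_copy("infrastructure", "suburban voters", "optimistic",
                          creativity=0.9, variations=5)
```

A single request with n=5 returns five distinct drafts, which is how a tool could plausibly deliver “hundreds of ads in minutes” across a batch of issues and audiences.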
BattlegroundAI declined to provide any examples of actual political ads created using its services. However, WIRED tested the product by creating a campaign aimed at extremely left-leaning adults aged 88 to 99 on the issue of media freedom. “Don't let fake news pull the wool over your bifocals!” one of the suggested ads began.
BattlegroundAI offers only text generation—no AI images or audio. The company says it adheres to regulations governing the use of AI in political ads.
“What makes Battleground so well suited for politics is it’s very much built with those rules in mind,” says Andy Barr, managing director for Uplift, a Democratic digital ad agency. Barr says Uplift has been testing the BattlegroundAI beta for a few weeks. “It’s helpful with idea generation,” he says. The agency hasn’t released any ads using Battleground copy yet, but it has already used it to develop concepts, Barr adds.
I confess to Hutchinson that if I were a politician, I would be scared to use BattlegroundAI. Generative AI tools are known to “hallucinate,” a polite way of saying that they sometimes make things up out of whole cloth. (They bullshit, to use academic parlance.) I ask how she’s ensuring that the political content BattlegroundAI generates is accurate.
“Nothing is automated,” she replies. Hutchinson notes that BattlegroundAI’s copy is a starting point, and that humans from campaigns are meant to review and approve it before it goes out. “You might not have a lot of time, or a huge team, but you’re definitely reviewing it.”
Of course, there’s a rising movement opposing how AI companies train their products on art, writing, and other creative work without asking for permission. I ask Hutchinson what she’d say to people who might oppose how tools like ChatGPT are trained. “Those are incredibly valid concerns,” she says. “We need to talk to Congress. We need to talk to our elected officials.”
I ask whether BattlegroundAI is looking at offering language models trained only on public-domain or licensed data. “Always open to that,” she says. “We also need to give folks, especially those who are under time constraints, in resource-constrained environments, the best tools that are available to them, too. We want to have consistent results for users and high-quality information—so the more models that are available, I think the better for everybody.”
And how would Hutchinson respond to people in the progressive movement—who generally align themselves with the labor movement—objecting to automating ad copywriting? “Obviously valid concerns,” she says. “Fears that come with the advent of any new technology—we’re afraid of the computer, of the light bulb.”
Hutchinson lays out her stance: She doesn’t see this as a replacement for human labor so much as a way to reduce grunt work. “I worked in advertising for a very long time, and there's so many elements of it that are repetitive, that are honestly draining of creativity,” she says. “AI takes away the boring elements.” She sees BattlegroundAI as a helpmeet for overstretched and underfunded teams.
Taylor Coots, a Kentucky-based political strategist who recently began using the service, describes it as “very sophisticated,” and says it helps identify groups of target voters and ways to tailor messaging to reach them in a way that would otherwise be difficult for small campaigns. In battleground races in gerrymandered districts, where progressive candidates are major underdogs, budgets are tight. “We don’t have millions of dollars,” he says. “Any opportunities we have for efficiencies, we’re looking for those.”
Will voters care if the writing in digital political ads they see is generated with the help of AI? “I'm not sure there is anything more unethical about having AI generate content than there is having unnamed staff or interns generate content,” says Peter Loge, an associate professor and program director at George Washington University who founded a project on ethics in political communication.
“If one could mandate that all political writing done with the help of AI be disclosed, then logically you would have to mandate that all political writing”—such as emails, ads, and op-eds—“not done by the candidate be disclosed,” he adds.
Still, Loge has concerns about what AI does to public trust on a macro level, and how it might impact the way people respond to political messaging going forward. “One risk of AI is less what the technology does, and more how people feel about what it does,” he says. “People have been faking images and making stuff up for as long as we've had politics. The recent attention on generative AI has increased people's already incredibly high levels of cynicism and distrust. If everything can be fake, then maybe nothing is true.”
Hutchinson, meanwhile, is focused on her company’s shorter-term impact. “We really want to help people now,” she says. “We’re trying to move as fast as we can.”