How a Small Iowa Newspaper’s Website Became an AI-Generated Clickbait Factory

In his spare time, Tony Eastin likes to dabble in the stock market. One day last year, he Googled a pharmaceutical company that seemed like a promising investment. One of the first search results Google served up on its news tab was listed as coming from the Clayton County Register, a newspaper in northeastern Iowa. He clicked, and read. The story was garbled and devoid of useful information—and so were all the other finance-themed posts filling the site, which had absolutely nothing to do with northeastern Iowa. “I knew right away there was something off,” he says. There’s plenty of junk on the internet, but this struck Eastin as strange: Why would a small Midwestern paper churn out crappy blog posts about retail investing?

Eastin was primed to find online mysteries irresistible. After years in the US Air Force working on psychological warfare campaigns, he had joined Meta, where he investigated nastiness ranging from child abuse to political influence operations. Now he was between jobs and welcomed a new mission. So Eastin reached out to Sandeep Abraham, a friend and former Meta colleague who had previously worked in Army intelligence and for the National Security Agency, and suggested they start digging.

What the pair uncovered provides a snapshot of how generative AI is enabling deceptive new online business models. Networks of websites crammed with AI-generated clickbait are being built by preying on the reputations of established media outlets and brands. These outlets prosper by confusing and misleading audiences and advertisers alike, “domain squatting” on URLs that once belonged to more reputable organizations. The scuzzy site Eastin was referred to no longer belonged to the newspaper whose name it still traded on.

Although Eastin and Abraham suspect that the network the Register’s old site is now part of was created with straightforward moneymaking goals, they fear that more malicious actors could use the same tactics to push misinformation and propaganda into search results. “This is massively threatening,” Abraham says. “We want to raise some alarm bells.” To that end, the pair have released a report on their findings and plan to release more as they dig deeper into the world of AI clickbait, hoping their spare-time efforts can help draw attention to the issue from the public and from lawmakers.

Faked News

The Clayton County Register was founded in 1926 and covered the small town of Elkader, Iowa, and wider Clayton County, which nestles against the Mississippi River in the state’s northeast corner. “It was a popular paper,” says former coeditor Bryce Durbin, who describes himself as “disgusted” by what’s now published at its former web address, claytoncountyregister.com. (The real Clayton County Register merged in 2020 with The North Iowa Times to become the Times-Register, which publishes at a different website. It’s not clear how the paper lost control of its web domain; the Times-Register did not return requests for comment.)

As Eastin discovered when trying to research his pharma stock, the site still brands itself as the Clayton County Register but no longer offers local news and is instead a financial news content mill. It publishes what appear to be AI-generated articles about the stock prices of public utility companies and Web3 startups, illustrated by images that are also apparently AI-generated.

“Not only are the articles we looked at generated by AI, but the images included in each article were all created using diffusion models,” says Ben Colman, CEO of deepfake detection startup Reality Defender, which ran an analysis on several articles at WIRED’s request. In addition to that confirmation, Abraham and Eastin noticed that some of the articles included text admitting their artificial origins. “It’s important to note that this information was auto-generated by Automated Insights,” some of the articles stated, name-dropping a company that offers language-generation technology.


When Eastin and Abraham examined the bylines on the Register’s former site, they found evidence that the writers were not actual journalists—and probably not even real people. The duo’s report notes that many writers listed on the site shared names with well-known people from other fields and had unrealistically high output.

One Emmanuel Ellerbee, credited on recent posts about bitcoin and banking stocks, shares a name with a former professional football player. When Eastin and Abraham started their investigation in November 2023, the journalist database Muck Rack showed that he had bylined an eye-popping 14,882 separate news articles in his “career,” including 50 published the day they checked. By last week, Ellerbee’s Muck Rack profile showed that the output had continued apace: he was credited with 30,845 articles. Muck Rack’s CEO, Gregory Galant, says the company “is developing more ways to help our users discern between human-written and AI-generated content.” He points out that Ellerbee’s profile is not included in Muck Rack’s human-curated database of verified profiles.

The Register’s domain appears to have changed hands in August 2023, data from the analytics service Similarweb shows, around the time it began to host its current financial news churn. Eastin and Abraham used the same tool to confirm that the site was attracting most of its readership through SEO, targeting search keywords about stock purchasing to lure clicks. Its most notable social media referrals came from crypto news forums on Reddit where people swap investment tips.

The whole scheme appears aimed at winning ad revenue from the page views of people who unwittingly land on the site’s garbled content. The algorithmic posts are garnished with ads served by Google’s ad platform. Sometimes those ads appear to be themed on financial trading, in line with the content, but others are unrelated—WIRED saw an ad for the AARP. Using Google’s ad network on AI-generated posts with fake bylines could fall foul of the company’s publisher policies, which forbid content that “misrepresents, misstates, or conceals” information about the creator of content. Occasionally, other sites, including a financial brokerage service and an online ad network, received direct traffic referrals from the CCR domain, suggesting its operators may have struck up other types of advertising deals.

Unknown Operator

Eastin and Abraham’s attempts to discover who now owns the Clayton County Register’s former domain were inconclusive—as were WIRED’s—but they have their suspicions. The pair found that records of its old security certificates linked the domain to a Linux server in Germany. Using the internet device search engine Shodan.io, they found that a Polish website that formerly advertised IT services appeared to be associated with the Clayton County Register and several other domains. All were hosted on the same German server and published strikingly similar, apparently AI-generated content. An email address previously listed on the Polish site was no longer functional, and WIRED’s LinkedIn messages to a man claiming to be its CEO went unanswered.

One of the other sites within this wider network was Aboutxinjiang.com. When Eastin and Abraham began their investigation at the end of 2023, it was filled with generic, seemingly AI-generated financial news posts, including several about the use of AI in investing. The Internet Archive showed that it had previously served a very different purpose. Originally, the site had been operated by a Chinese outfit called “the Propaganda Department of the Party Committee of the Xinjiang Uyghur Autonomous Region” and hosted information about universities in the country’s northwest. In 2014, though, it shut down and sat dormant until 2022, when its archives were replaced with Polish-language content, which was later replaced with apparently automated clickbait in English. Since Eastin and Abraham first identified the site, it has gone through another transformation: early this month it began redirecting to a page with information about Polish real estate.


Altogether, Eastin and Abraham pinpointed nine websites linked to the Polish IT company that appeared to comprise an AI clickbait network. All of them appeared to have been chosen because their preestablished reputations with Google could help them rank prominently in search results and draw clicks.

Google claims to have systems in place to address attempts to game search rankings by buying expired domains, and says that it considers using AI to create articles with the express purpose of ranking well to be spam. “The tactics described as used with these sites are largely in violation of Search’s spam policies,” says spokesperson Jennifer Kutz. Sites determined to have breached those policies can have their search ranking penalized, or be delisted by Google altogether.

Still, this type of network has become more prominent since the advent of generative AI tools. McKenzie Sadeghi, a researcher at the online misinformation tracking company NewsGuard, says her team has seen a more than 1,000 percent increase in AI-generated content farms within the past year.

WIRED recently reported on a separate network of AI-generated clickbait farms, run by Serbian DJ Nebojša Vujinović Vujo. While he was forthcoming about his motivations, Vujo did not provide granular details about how his network—which also includes former US-based local news outlets—operates. Eastin and Abraham’s work fills in some of the blanks about what this type of operation looks like, and how difficult it can be to identify who runs these moneymaking gambits. “For the most part, these are anonymously run,” Sadeghi says. “They use special services when they register domains to hide their identity.”

That’s something Abraham and Eastin want to change. They hope their work might help regular people think critically about how the news they see is sourced, and that it may be instructive for lawmakers thinking about what kinds of guardrails might improve our information ecosystem. In addition to looking into the origins of the Clayton County Register’s strange transformation, the pair have been investigating additional instances of AI-generated content mills and are already working on their next report. “I think it’s very important that we have a reality we all agree on, that we know who is behind what we’re reading,” Abraham says. “And we want to bring attention to the amount of work we’ve done just to get this much information.”

Other researchers agree. “This sort of work is of great interest to me, because it’s demystifying actual use cases of generative AI,” says Emerson Brooking, a resident fellow at the Atlantic Council’s Digital Forensic Research Lab. While there’s valid concern about how AI might be used as a tool to spread political misinformation, this network demonstrates how content mills are likely to focus on uncontroversial topics when their primary aim is generating traffic-based income. “This report feels like it is an accurate snapshot of how AI is actually changing our society so far—making everything a little bit more annoying.”
