AI Detection Startups Say Amazon Could Flag AI Books. It Doesn't

Amazon has an artificial intelligence problem. Namely, that its “everything store” is filled with books authored by bots. Several AI detection startups say they have a straightforward solution to help customers: Tell them when a book is AI-generated. Thus far, though, Amazon isn’t embracing detection tech.

It’s unclear exactly how many AI-generated books are currently for sale on the platform, which is both the world’s largest bookseller and, through Kindle Direct Publishing, home to the most popular self-publishing system. Certain genres, like travel guides and foraging handbooks, have been hit especially hard by a glut of low-quality titles. So have the bestseller lists for teen and young adult romance novels, which author Caitlyn Lynch called out as “AI nonsense” on X (formerly Twitter) earlier this summer.

Some of these AI-generated books even use the names and likenesses of human authors; this summer, for example, writer and publishing industry expert Jane Friedman found five separate books falsely listed as her work for sale on Amazon. One of the fraudulent titles: “How to Write and Publish an eBook Quickly.”

Of course, AI bots aren’t generating these books on their own. (Yet.) There’s a whole strain of hustle culture YouTube videos encouraging people to get rich quick by churning out AI-generated works for Kindle, with titles like “How to Make UNDETECTABLE AI Content for Amazon KDP (Guaranteed Method).” Unfortunately for regular people trying to buy books online, this new moneymaking scheme makes it harder to find high-quality books written by humans.

In the past few weeks, Amazon has taken steps to address the issue, capping the number of books an author can self-publish and introducing a policy requiring self-published authors to disclose if a work is AI-generated. But three AI-detection startups have separately told WIRED that Amazon and other online booksellers could use their services to proactively flag AI-generated materials.

Reality Defender, one of the most prominent new AI-detection startups, started out focusing on deepfaked images but has since expanded its services to offer text detection too. Its roster of clients already includes some big names, including Visa, Microsoft, the US Department of Defense, and the US Department of Homeland Security. CEO and cofounder Ben Colman says the company currently has the capability to offer those services on the scale that Amazon would require. “In English language for now,” he says.

Other AI-detection startups share this high level of confidence. “We can absolutely provide the technology to ecommerce platforms, and we're currently in the midst of conversations to do so with various vendors,” GPTZero’s chief of staff, Jonathan White, says.

Winston AI’s founder John Renaud also says his company could “absolutely” equip ecommerce platforms like Amazon to detect AI-generated ebooks. The company already counts several publishers among its clients.

A growing chorus of voices in the publishing world believes that Amazon and other booksellers have a responsibility to at least try to tell customers whether the books they’re looking at were written by humans or machines.


“Amazon is ethically obligated to disclose this information. The authors and publishers should be disclosing it already, but if they don't, then Amazon needs to mandate it—along with every retailer and distributor,” Jane Friedman says. “By not doing so, as an industry we're breeding distrust and confusion. The author and the book will begin to lose the considerable authority they've enjoyed until now.”

“We've been advocating for legislation that requires AI-generated material to be flagged as such by the platforms or the publishers, across the board,” Authors Guild CEO Mary Rasenberger says.

There’s an obvious incentive for Amazon to do this. “They want happy customers,” Rasenberger says. “And when somebody buys a book they think is a human-written work, and they get something that is AI-generated and not very good, they’re not happy.”

So why doesn’t the company use AI-detection tools? Why rely on authors to disclose whether they used AI? When asked directly by WIRED if proactive AI flagging was under consideration, the company declined to answer. Instead, spokesperson Ashley Vanicek provided a written statement about the company’s updated guidelines and volume limits for self-published authors. “Amazon is constantly evaluating emerging technologies and is committed to providing the best possible shopping, reading, and publishing experience for authors and customers,” Vanicek added.

This doesn’t mean Amazon has ruled out this kind of technology, of course; for now, it’s simply staying silent on any deliberations that might be happening behind the scenes. There are a number of reasons why the company might approach AI detection cautiously. For starters, there is skepticism about how accurate the results from these tools currently are.

Last March, researchers at the University of Maryland published a paper faulting AI detectors for inaccuracy. “These detectors are not reliable in practical scenarios,” they wrote. This July, researchers at Stanford published a paper highlighting how detectors show bias against writers who aren’t native English speakers.

Some detection tools have been pulled by their own makers after falling short. OpenAI retired its AI text classifier after it was criticized for abysmal accuracy.

Problems with false positives have led some universities to stop using these tools on student papers. “We do not believe that AI detection software is an effective tool that should be used,” Vanderbilt University’s Michael Coley wrote in August, after a failed experiment with Turnitin’s AI detection program. Michigan State, Northwestern, and the University of Texas at Austin have also abandoned the use of Turnitin’s detection software for now.

While the Authors Guild encourages AI flagging, Rasenberger says she’s anticipating that false positives will be an issue for its members. “That’s something we’ll end up hearing a lot about, I assure you,” she says.
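To see why false positives loom so large at a retailer’s scale, consider a rough back-of-the-envelope calculation. The sketch below is purely illustrative: the catalog size, the share of AI-generated titles, and the detector’s sensitivity and specificity are all assumed numbers for the sake of the example, not figures from Amazon or any detection vendor.

```python
# Illustrative base-rate math only. All figures are assumptions,
# not data from Amazon, Reality Defender, GPTZero, or Winston AI.

def expected_flags(total_books, ai_share, sensitivity, specificity):
    """Return (true positives, false positives) for a hypothetical detector."""
    ai_books = total_books * ai_share
    human_books = total_books - ai_books
    true_positives = ai_books * sensitivity            # AI books correctly flagged
    false_positives = human_books * (1 - specificity)  # human books wrongly flagged
    return true_positives, false_positives

# Assume a catalog slice of 1 million self-published titles, 5 percent of them
# AI-generated, and a detector that is 95% sensitive and 99% specific.
tp, fp = expected_flags(1_000_000, 0.05, 0.95, 0.99)
print(f"Correctly flagged AI books:  {tp:,.0f}")   # 47,500
print(f"Human books wrongly flagged: {fp:,.0f}")   # 9,500
```

Under those assumed numbers, roughly one in six flagged titles would belong to a human author, which is exactly the kind of complaint Rasenberger expects to field.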

Concerns about accuracy in the current crop of detection programs are entirely sensible—and even the most dialed-in detectors will never be flawless—but they don’t negate how welcome AI flagging would be for online book buyers, especially for people searching for nonfiction titles who expect human expertise. “I don't think it's controversial or unreasonable to say that readers care about who is responsible for producing the book they might purchase,” Friedman says.
