Amazon Is Investigating Perplexity Over Claims of Scraping Abuse

Amazon’s cloud division has launched an investigation into Perplexity AI. At issue is whether the AI search startup is violating Amazon Web Services rules by scraping websites that attempted to prevent it from doing so, WIRED has learned.

AWS spokesperson Patrick Neighorn confirmed the company’s investigation of Perplexity following a WIRED inquiry about the startup’s apparent scraping practices. WIRED had previously found that Perplexity, which has backing from the Jeff Bezos family fund and Nvidia and was recently valued at $3 billion, appears to rely on content scraped from websites that had forbidden such access through the Robots Exclusion Protocol, a common web standard. While the Robots Exclusion Protocol is not legally binding, terms of service generally are.

The Robots Exclusion Protocol is a decades-old web standard that involves placing a plaintext file (like wired.com/robots.txt) on a domain to indicate which pages should not be accessed by automated bots and crawlers. While companies that use scrapers can choose to ignore this protocol, most have traditionally respected it. Neighorn told WIRED that AWS customers must adhere to the robots.txt standard while crawling websites.
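For illustration, a crawler that honors the standard checks a site’s robots.txt before requesting a page. The short Python sketch below uses the standard library’s urllib.robotparser; the “PerplexityBot” user-agent string and the wired.com URLs are stand-ins for the example, not a description of any company’s actual code.

    # Illustrative only: how a crawler that honors the Robots Exclusion
    # Protocol checks permission before fetching a page.
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    parser.set_url("https://www.wired.com/robots.txt")
    parser.read()  # download and parse the plaintext rules

    url = "https://www.wired.com/story/example-article/"
    if parser.can_fetch("PerplexityBot", url):
        print("robots.txt permits this fetch")
    else:
        print("robots.txt disallows this fetch; a compliant crawler stops here")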

“AWS’s terms of service prohibit abusive and illegal activities and our customers are responsible for complying with those terms,” Neighorn said in a statement. “We routinely receive reports of alleged abuse from a variety of sources and engage our customers to understand those reports.”

Scrutiny of Perplexity’s practices follows a June 11 report from Forbes that accused the startup of stealing at least one of its articles. WIRED investigations confirmed the practice and found further evidence of scraping abuse and plagiarism by systems linked to Perplexity’s AI-powered search chatbot. Engineers for Condé Nast, WIRED’s parent company, block Perplexity’s crawler across all of its websites using a robots.txt file. But WIRED found that the company had access to a server using an unpublished IP address, 44.221.181.252, that visited Condé Nast properties at least hundreds of times in the past three months, apparently to scrape its websites.

The machine associated with Perplexity appears to be engaged in widespread crawling of news websites that forbid bots from accessing their content. Spokespeople for The Guardian, Forbes, and The New York Times also say they detected the IP address repeatedly visiting their servers.

WIRED traced the IP address to a virtual machine, known as an Elastic Compute Cloud (EC2) instance, hosted on AWS. Amazon launched its investigation after we asked whether using AWS infrastructure to scrape websites that forbade it violated the company’s terms of service.
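Publishers generally spot traffic like this in ordinary web server access logs. The sketch below shows the basic approach only: it assumes the common Apache/Nginx combined log format, in which the client IP is the first field on each line, and a hypothetical log path; it is not the tooling any of these companies actually used.

    # Illustrative sketch: count requests from one IP address in an access log.
    # The log path is hypothetical; the IP is the one cited in WIRED's reporting.
    from collections import Counter

    SUSPECT_IP = "44.221.181.252"

    hits = Counter()
    with open("/var/log/nginx/access.log") as log:
        for line in log:
            client_ip = line.split(" ", 1)[0]  # first field in combined format
            hits[client_ip] += 1

    print(f"{SUSPECT_IP} appears {hits[SUSPECT_IP]} times in this log file")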

Last week, Perplexity CEO Aravind Srinivas responded to WIRED’s investigation first by saying the questions we posed to the company “reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work.” Srinivas then told Fast Company that the secret IP address WIRED observed scraping Condé Nast websites and a test site we created was operated by a third-party company that performs web crawling and indexing services. He refused to name the company, citing a nondisclosure agreement. When asked if he would tell the third party to stop crawling WIRED, Srinivas replied, “It’s complicated.”


Sara Platnick, a Perplexity spokesperson, tells WIRED that the company responded to Amazon’s inquiries on Wednesday and characterized the investigation as standard procedure. Platnick says Perplexity made no changes to its operation in response to Amazon’s concerns.

“Our PerplexityBot—which runs on AWS—respects robots.txt, and we confirmed that Perplexity-controlled services are not crawling in any way that violates AWS Terms of Service,” Platnick says. She adds, however, that PerplexityBot will ignore robots.txt when a user enters a specific URL in their prompt, a use case Platnick describes as “very infrequent.”

“When a user prompts with a specific URL, that doesn’t trigger crawling behavior,” Platnick says. “The agent acts on the user’s behalf to retrieve the URL. It works the same way as if the user went to a page themselves, copied the text of the article, and then pasted it into the system.”

This description of Perplexity’s functionality confirms WIRED’s findings that its chatbot is ignoring robots.txt in certain instances.
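In code, the distinction Platnick draws might look something like the following sketch, which is purely illustrative and not Perplexity’s implementation: an automated crawl consults robots.txt first, while a fetch triggered by a user’s explicit URL skips that check and retrieves the page directly. The “ExampleBot” name is a placeholder.

    # Minimal sketch of the behavior described above, not Perplexity's code:
    # automated crawling consults robots.txt, while a user-directed fetch is
    # treated like the user loading the page themselves.
    from urllib.parse import urlparse
    from urllib.request import urlopen
    from urllib.robotparser import RobotFileParser

    def fetch(url, user_requested=False):
        if not user_requested:
            # Crawling path: honor the site's robots.txt rules.
            robots = RobotFileParser()
            robots.set_url(f"https://{urlparse(url).netloc}/robots.txt")
            robots.read()
            if not robots.can_fetch("ExampleBot", url):
                return None  # disallowed; a compliant crawler backs off
        # User-directed path, or an allowed crawl: retrieve the page directly.
        with urlopen(url) as response:
            return response.read()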

Digital Content Next is a trade association for the digital content industry whose members include The New York Times, The Washington Post, and Condé Nast. Last year, the organization shared draft principles for governing generative AI to prevent potential copyright violations. CEO Jason Kint tells WIRED that if the allegations against Perplexity are true, the company is violating many of those principles.

“By default, AI companies should assume they have no right to take and reuse publishers’ content without permission,” Kint says. If Perplexity is skirting terms of service or robots.txt, he adds, “the red alarms should be going off that something improper is going on.”

Update 6/28/24 4:39pm ET: This story includes an updated statement from AWS spokesperson Patrick Neighorn.
