A Pornhub Chatbot Stopped Millions From Searching for Child Abuse Videos

For the past two years, millions of people searching for child abuse videos on Pornhub’s UK website have been interrupted. Each of the 4.4 million times someone has typed in words or phrases linked to abuse, a warning message has blocked the page, saying that kind of content is illegal. And in half the cases, a chatbot has also pointed people to where they can seek help.

The warning message and chatbot were deployed by Pornhub as part of a trial program, conducted with two UK-based child protection organizations, to find out whether people could be nudged away from looking for illegal material with small interventions. A new report analyzing the test, shared exclusively with WIRED, says the pop-ups led to a decrease in the number of searches for child sexual abuse material (CSAM) and saw scores of people seek support for their behavior.

“The actual raw numbers of searches, it’s actually quite scary high,” says Joel Scanlan, a senior lecturer at the University of Tasmania, who led the evaluation of the reThink Chatbot. During the multiyear trial, there were 4,400,960 warnings in response to CSAM-linked searches on Pornhub’s UK website—99 percent of all searches during the trial did not trigger a warning. “There’s a significant reduction over the length of the intervention in numbers of searches,” Scanlan says. “So the deterrence messages do work.”

Millions of images and videos of CSAM are found and removed from the web every year. They are shared on social media, traded in private chats, sold on the dark web, or in some cases uploaded to legal pornography websites. Tech companies and porn companies don’t allow illegal content on their platforms, although they remove it with different levels of effectiveness. Pornhub removed around 10 million videos in 2020 in an attempt to eradicate child abuse material and other problematic content from its website following a damning New York Times report.

Pornhub, which is owned by parent company Aylo (formerly MindGeek), uses a list of 34,000 banned terms, across multiple languages and with millions of combinations, to block searches for child abuse material, a spokesperson for the company says. It is one way Pornhub tries to combat illegal material, the spokesperson says, and is part of the company’s efforts aimed at user safety, after years of allegations it has hosted child exploitation and nonconsensual videos. When people in the UK have searched for any of the terms on Pornhub’s list, the warning message and chatbot have appeared.

The chatbot was designed and created by the Internet Watch Foundation (IWF), a nonprofit that removes CSAM from the web, and the Lucy Faithfull Foundation, a charity that works to prevent child sexual abuse. It appeared alongside the warning messages a total of 2.8 million times. The trial counted sessions on Pornhub, meaning individual people could be counted multiple times, and it did not seek to identify individuals. The report says there was a “meaningful decrease” in searches for CSAM on Pornhub and that, at least “in part,” the chatbot and warning messages appear to have played a role.

The chatbot was relatively simple: It asked people a series of questions, allowing them to click buttons to answer or type out a response. Ultimately, it explained that the material people were searching for may be illegal and pointed them toward the Lucy Faithfull Foundation’s help services. There were 1,656 requests for more information made through the chatbot, while 490 people clicked through to the charity’s Stop It Now website. Around 68 people called or chatted with Lucy Faithfull’s confidential helpline, the report says.

Donald Findlater, the director of the Stop It Now helpline, says that while the numbers are “relatively modest” compared to the overall number of warnings displayed, they are still seen as a “big success” as it’s a sign that people may want to get help. “If people have been doing something dodgy on a site, clicking through is quite a bold step to make,” Findlater says.

The vast majority of people who received the warning message and chatbot did so only once, the report says. Around 1.7 million people saw a warning before leaving Pornhub or moving on to searches for legal material. “They didn’t just disappear. They typically remained on the site and looked for other stuff,” Findlater says. “The influence for the millions of people that actually did a dubious search and then stopped doing that dubious search is a big win.” Not everyone was deterred, however: in the most persistent cases, around 400 people made 10 searches that triggered the message.

Cynthia Najdowski, an associate professor of psychology at the State University of New York at Albany, who was not involved in the research, says the chatbot appears to show promise for interrupting some people’s efforts to access CSAM. Warning messages and small behavioral nudges have been used in multiple ways to change people’s behavior online, from piracy and copyright infringement to gambling. Google has used some deterrence messages around child abuse searches since 2013, and other studies have found decreases in searches, with such warnings being viewed millions of times.

Najdowski says there are three things known about deterring people from engaging in criminal behavior: People must know what they’re doing is illegal; they need to apply that “legal knowledge” to their own behavior; and they need to believe the cost of the behavior may outweigh any benefits they expect. “A chatbot that delivers notice of the potential illegality of certain searches can certainly accomplish the first step in the deterrence process, and that alone is a significant contribution,” Najdowski says. It may struggle, though, in more complex scenarios or with people who are more persistent in their behavior.

Scanlan, who conducted the analysis of the chatbot trial, says there were some complexities with the work. The data provided by Pornhub, the IWF, and the Lucy Faithfull Foundation wasn’t always complete, and there were no figures from before the warnings were introduced to compare the results against. However, Scanlan says the results show the method could be one part of broader education and deterrence efforts against people finding CSAM online. “If someone’s doing that sort of search out of curiosity, you want to nudge them away from it before they get involved in it, because we can’t arrest our way out of the problem,” Scanlan says.

Scanlan’s findings say that over time, the web traffic being referred to the Stop It Now website appeared to decrease, perhaps as people who continued to search became used to the messaging. However, helpline calls, emails, and online chats showed an increase over the duration of the trial. The report says that in the future, a variety of messages could be used—potentially including existing deterrence videos—and the chatbot could directly connect people to a live chat session with Lucy Faithfull’s helpline.

The chatbot itself could also be improved. Since it was initially designed and created, says Dan Sexton, the chief technology officer at the IWF, generative AI has changed people’s perceptions of what chatbots are and how they interact with people. The reThink Chatbot could respond to only a limited number of queries. Sexton says there may be ways to make the chatbot more approachable and better handle questions it was not programmed to deal with.

While the trial period has ended, the chatbot and warnings are still in place on Pornhub’s UK website. “There’s certainly no plans to turn it off. It is in production. It was a pilot project, but it is having an effect right now,” Sexton says. Those involved in the study say that other porn companies could look to introduce similar nudges across their services and deter people from looking for child abuse content. “They all should be doing this; it should become the norm,” Scanlan says.

“This report and technology are significant steps forward in identifying, removing, and reporting harmful and illegal content,” a spokesperson for Pornhub says. “We feel all other major tech and social media platforms should explore the implementation of similar deterrence technology to create a safer internet for all.”

Findlater, from the Stop It Now helpline, says he hopes other companies, such as social media websites and file-hosting platforms, can look at the results of the trial and introduce similar nudges where people are seeking CSAM. “The more places you can put it, the greater chance you’re going to catch those people that might be at a stage where they can still be helped, or those people that are looking for help but don’t know about it,” Sexton says.
