Google’s Nonconsensual Explicit Images Problem Is Getting Worse

In early 2022, two Google policy staffers met with a trio of women victimized by a scam that resulted in explicit videos of them circulating online—including via Google search results. The women were among the hundreds of young adults who responded to ads seeking swimsuit models only to be coerced into performing in sex videos distributed by the website GirlsDoPorn. The site shut down in 2020, and a producer, a bookkeeper, and a cameraman subsequently pleaded guilty to sex trafficking, but the videos kept popping up on Google search faster than the women could request removals.

The women, joined by an attorney and a security expert, presented a bounty of ideas for how Google could keep the criminal and demeaning clips better hidden, according to five people who attended or were briefed on the virtual meeting. They wanted Google search to ban websites devoted to GirlsDoPorn and videos with its watermark. They suggested Google could borrow the 25-terabyte hard drive on which the women’s cybersecurity consultant, Charles DeBarber, had saved every GirlsDoPorn episode, take a mathematical fingerprint, or “hash,” of each clip, and block them from ever reappearing in search results.
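
The reporting doesn’t specify which fingerprinting scheme Google or DeBarber would use; as a rough illustration of the blocklist idea, here is a minimal sketch using exact SHA-256 file hashes, with the archive directory and the .mp4 filter as hypothetical stand-ins:

```python
import hashlib
from pathlib import Path

def file_hash(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_blocklist(archive_dir: Path) -> set[str]:
    """Hash every clip in an archive (for example, a drive of known videos)."""
    return {file_hash(p) for p in archive_dir.rglob("*.mp4")}

def is_blocked(candidate: Path, blocklist: set[str]) -> bool:
    """A newly crawled video is suppressed if its digest matches a known clip."""
    return file_hash(candidate) in blocklist
```

An exact digest like this only catches byte-for-byte copies; re-encoded or trimmed uploads would require fuzzier matching.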

The two Google staffers in the meeting hoped to use what they learned to win more resources from higher-ups. But the victims’ attorney, Brian Holm, left feeling dubious. The policy team was in “a tough spot” and “didn’t have authority to effect change within Google,” he says.

His gut reaction was right. Two years later, none of the ideas raised in the meeting has been enacted, and the videos still come up in search.

WIRED has spoken with five former Google employees and 10 victims’ advocates who have been in communication with the company. They all say they appreciate that, thanks to recent changes Google has made, survivors of image-based sexual abuse such as the GirlsDoPorn scam can more easily and successfully remove unwanted search results. But they are frustrated that management at the search giant hasn’t approved proposals, such as the hard drive idea, which they believe would more fully restore and preserve the privacy of millions of victims around the world, most of them women.

The sources describe previously unreported internal deliberations, including Google’s rationale for not using an industry tool called StopNCII that shares information about nonconsensual intimate imagery (NCII) and the company’s failure to demand that porn websites verify consent to qualify for search traffic. Google’s own research team has published steps that tech companies can take against NCII, including using StopNCII.

The sources believe such efforts would better contain a problem that’s growing, in part through widening access to AI tools that create explicit deepfakes, including ones of GirlsDoPorn survivors. Overall reports to the UK’s Revenge Porn Helpline more than doubled last year, to roughly 19,000, as did the number of cases involving synthetic content. Half of the more than 2,000 Britons polled in a recent survey worried about being victimized by deepfakes. The White House in May urged swifter action by lawmakers and industry to curb NCII overall. In June, Google joined seven other companies and nine organizations in announcing a working group to coordinate responses.

Right now, victims can seek prosecution of abusers or pursue legal claims against websites hosting content, but neither of those routes is guaranteed, and both can be costly due to legal fees. Getting Google to remove results can be the most practical tactic, and it serves the ultimate goal of keeping violative content out of the eyes of friends, hiring managers, potential landlords, or dates, nearly all of whom are likely to turn to Google to look people up.

A Google spokesperson, who requested anonymity to avoid harassment from perpetrators, declined to comment on the call with GirlsDoPorn victims. She says combating what the company refers to as nonconsensual explicit imagery (NCEI) remains a priority and that Google’s actions go well beyond what is legally required. “Over the years, we’ve invested deeply in industry-leading policies and protections to help protect people affected by this harmful content,” she says. “Teams across Google continue to work diligently to bolster our safeguards and thoughtfully address emerging challenges to better protect people.”

In an interview with WIRED, a Google search product manager overseeing anti-harm work says blocking videos using hashes is challenging to adopt because some websites don’t publish videos in a way that search engines can compare against. Speaking on condition of anonymity, she says Google has encouraged explicit websites to address that. She adds that there’s generally more for Google to do but disputes the allegation that executives held up the work.

Advocates of bolder action by Google point to the company’s much tighter restrictions on searching for child sexual abuse material (CSAM) as evidence it could do much more. Typing “deepfake nudes kids” into Google prompts a warning that such content is illegal and ultimately directs users to news articles and support groups. Google also finds and blocks almost 1 million new webpages containing CSAM from its results annually.

A recent Google search for “deepfake nudes jennifer aniston” yielded seven results purporting to offer just that. The search engine offered no warning or resources in response to the query, despite nearly every US state and many countries having criminalized unpermitted distribution of intimate content of adults. Google declined to comment on the lack of a warning.

The product manager says comparisons to CSAM are invalid. Virtually any image of a naked child is illegal and can be automatically removed, she says. Separating NCEI from consensual porn requires some indication that the content was shot or distributed without permission, and that context often isn’t clear until a victim files a report and a human analyzes it. But the manager wouldn’t directly answer whether Google has tried to overcome the challenge.

Adam Dodge, founder of advocacy and education group Ending Tech-Enabled Abuse, says that until Google proactively removes more NCII, victims have to be hypervigilant about finding and reporting it themselves. That’s “not something we should put on victims,” he says. “We’re asking them to go to the location where they were assaulted online to move past the trauma.”

Google started accepting removal requests for search results leading to nudity or sex in 2015 if the content was intended to be private and was never authorized to be published, according to its policy. That went largely unchanged until 2020, when the company added that being in an “intimate state” qualified.

A New York Times column that year prompted Google executives to dedicate resources to the issue and organize projects, including one codenamed Sparrow, to help victims keep content off search for good, three former employees say. The product manager confirmed that executives at times have pushed teams to improve Google’s handling of NCEI.

Google made its takedown form easier to find, understand, and use, the sources say. The search giant axed legalese and the outdated term “revenge porn,” since porn is generally viewed as consensual. The company added instructions on submitting screenshots and greater detail on the review process.

The form became accessible by clicking the menu that appears next to every search result. Requests rose about 19-fold in one early test, one source says. A second source says that it has become among Google’s most-used forms for reporting abuse and that, after the edits, a far greater percentage of requests resulted in removal of results. Google disputes these figures, but it declined to share comprehensive data on NCEI.

Government-mandated transparency reports show Google has removed most of the nearly 170,000 search and YouTube links reported for unwanted sexual content in South Korea since December 2020, the earliest data available, and nixed nearly 300 pieces of content in response to 380 complaints from users in India since May 2021. The limited data suggest Google is finding more reports credible than its smaller search rival Microsoft, which took action in 52 percent of the nearly 8,400 cases it received globally for Bing and other services from 2015 through June 2023.

Launched in late 2021, the StopNCII system has amassed a database of over 572,000 hashed photos and videos and blocked that media from being shared more than 12,000 times across 10 services, including Instagram and TikTok. Google hasn’t adopted the tool to block content from search due to concerns about what’s actually in the database, according to three sources.

To protect victims’ privacy, StopNCII doesn’t review content they report, and hashes reveal nothing about the underlying content. Google is worried that it could end up blocking something innocent, the sources say. “We don’t know if it’s just an image of a cupcake,” one of them says. The sources add that Google also has opted against bankrolling a system it considers better, despite internal suggestions to do so.
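
That opacity is inherent to how hash matching works: a platform holding only digests can test whether an upload matches a report, but nothing in the digest reveals what was reported. Here is a toy sketch of such a lookup; StopNCII’s actual matching scheme may differ, and the function and names are hypothetical:

```python
import hashlib

def matches_report(upload: bytes, reported_digests: set[str]) -> bool:
    """Return True if this upload's digest appears in the reported set.
    The digest is one-way: the platform cannot tell from it whether the
    reported media was abusive or, as one source put it, an image of a cupcake.
    """
    return hashlib.sha256(upload).hexdigest() in reported_digests
```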

The Google spokesperson declined to comment on StopNCII, but in April the company told UK lawmakers who questioned Google about its decision not to use the tool that it had “specific policy and practical concerns about the interoperability of the database,” without elaborating.

Internally, Google workers have come up with some bold ideas to improve takedowns. Employees have discussed booting explicit websites, including porn companies, from search results unless they are willing to assure that their content is consensual, according to four sources. The idea hasn’t been adopted. Google’s search unit has shied away from setting rules on a thorny and taboo subject like sexual imagery, three sources say. “They don’t want to be seen as regulators of the internet,” one former staffer says.

Because Google sends significant traffic to explicit websites, it could force them to take stricter measures. About 15 percent of image searches and up to half of video searches among the billions Google receives daily are related to porn, says one former staffer, figures the company declined to comment on. “Google holds the keys to the kingdom,” the source says. Meanwhile, few others are stepping in. US lawmakers haven’t passed proposed legislation to impose consent checks on online uploads. And some popular services for sharing explicit content, such as Reddit and X, don’t require users to submit proof of subjects’ consent.

Porn producers, who collect identity information from performers as required by US law, support the sharing of a consent signal with search engines, says Mike Stabile, spokesperson for the industry trade body Free Speech Coalition. “Major adult sites already monitor and block NCII much more aggressively than mainstream platforms,” he says.

The Google spokesperson declined to comment on the consent idea but points to an existing penalty: Google last December began demoting—but not blocking—search results for websites that come up in “a high volume” of successful takedown requests.

The Google product manager and the spokesperson contend that the search team already has taken big steps over the past three years to ease the burden on survivors of image-based sexual abuse. But WIRED’s investigation shows that some improvements have come with caveats.

A system Google introduced that tries to automatically remove search links when previously reported content resurfaces on new websites doesn’t work on videos or altered images, and two sources say Google hadn’t dedicated staff to improving it. “It absolutely could be better, and there isn’t enough attention on how it could really solve victims’ problems,” one says. The spokesperson says staff are assigned to enhance the tool.
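
Altered images are hard for such a system because an exact hash changes completely after even a minor edit; catching resurfaced copies typically calls for perceptual hashing, which tolerates resizing and recompression. A minimal sketch using the open source imagehash library follows, offered as an illustration of the general technique rather than a description of Google’s system:

```python
from PIL import Image
import imagehash  # pip install imagehash

def near_duplicate(reported_path: str, candidate_path: str, max_distance: int = 8) -> bool:
    """Perceptual hashes shift only slightly under resizing, recompression,
    or light edits, so a small Hamming distance flags a likely re-upload."""
    reported = imagehash.phash(Image.open(reported_path))
    candidate = imagehash.phash(Image.open(candidate_path))
    return (reported - candidate) <= max_distance
```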

Another system called known victim protection tries to filter out results with explicit images from search queries similar to those from past takedown requests, the two sources say. It is designed to not disrupt results to legitimate porn and generally reduces the need for victims to stay vigilant for new uploads. But Google has acknowledged to South Korean regulators that the system isn’t perfect. “Given the dynamic and ever-changing nature of the web, automated systems are not able, 100 percent of the time, to catch every explicit result,” the company writes in its transparency reports.

In one of its biggest shifts, Google last August abandoned its policy of declining to remove links to content that included signs it had been captured with consent. For years, if Google determined from the imagery and any audio that the subject knew they were being recorded and showed no signs of coercion or distress, it would reject the takedown request unless the requester provided ample evidence that the content had been published without consent. It was a “super-mushy concept,” one of the former employees says.

That same source says staff persuaded executives to update the policy in part by describing the importance of letting people who had become adult performers on OnlyFans out of financial necessity later revoke their consent and shed any ties to sex work. The Google spokesperson didn’t dispute this.

The Washington, DC-based National Center on Sexual Exploitation, an anti-porn group that’s become an authority on image-based sexual abuse, argues that even after the revision, Google is falling short. It wants Google to automatically honor all takedown requests and put the burden on websites to prove there was consent to record and publish the disputed content. The Google spokesperson says that potential policy updates are constantly considered.

In the eyes of advocates, Google is being nowhere near as resourceful or attentive as it could or should be. Brad Gilde of Gilde Law Firm in Houston says he came away disappointed when his client won a headline-grabbing $1.2 billion judgment against an ex-boyfriend last August but then couldn’t get Google to remove a highly ranked search link to a sexually explicit audio recording of her on YouTube. The upload, which included the victim’s name and drew over 100 views, came down last month only after WIRED inquired.

Developing a reliable AI system to proactively identify nonconsensual media may prove impossible. But keeping a better ear out for big cases shouldn’t be too complicated, says Dan Purcell, a victim who founded removal company Ceartas DMCA. Google employees had a proposal on this issue: The company could establish a priority flagger program—as it has for other types of problematic content, including CSAM—and formally solicit tips from outside organizations such as Purcell’s that monitor for NCII. But staffing to administer the idea never came through. “Google is the No. 1 discoverability platform,” Purcell says. “They have to take more responsibility.” The Google spokesperson declined to comment.

DeBarber, the removal consultant who spoke with Google alongside his clients victimized by GirlsDoPorn, did a search for one of them this month while on the phone with WIRED. No links surfaced to videos of her, because DeBarber has spent over 100 hours getting those pages removed. But one porn service was misusing her name to lure viewers to other content—a new result DeBarber would have to ask Google to remove. And through a different Google search, he could access a problematic website on which people can look up videos of his client.

Harassers regularly text that client links to her NCII, a frustrating reminder of how her past has yet to be erased. “They want to be out of sight and out of mind,” DeBarber says of his clients. “We’re heading in the right direction.” But he and survivors are counting on Google to help knock out the offenders for good. “A lot more could have been done by Google and still could be.”
