How Big Tech is approaching explicit, nonconsensual deepfakes

In pursuit of technological innovation, generative AI’s advocates have thrust the tools for creating highly realistic, nonconsensual synthetic forgeries, more commonly known as deepfake porn, into the hands of the Average Joe.

Ads for “nudify” undressing apps may appear on the sidebars of popular websites and in between Facebook posts, while manipulated sexual images of public figures spread as trending fodder for the masses. The problem has trickled down through the online sphere into the real lives of users, including young people. Implicated in it all are AI’s creators and distributors.

Government leaders are attacking the problem through piecemeal legislative efforts. The tech and social sectors are balancing their responsibility to users with the need for innovation. But deepfakes are a hard concept to fight with the weapon of corporate policy.


An alarming issue with no single solution

Solving the deepfake problem is made harder by just how difficult deepfakes are to define, not to mention widespread disagreement over who bears responsibility for nonconsensual synthetic forgeries.



Advocacy and research organization the Cyber Civil Rights Initiative, which fights against the nonconsensual distribution of intimate images (NDII), defines sexually explicit digital forgeries as manipulated photos or videos that falsely (and almost indistinguishably) depict an actual person nude or engaged in sexual conduct. NDII doesn’t inherently involve AI (think Photoshop), but generative AI tools are now closely associated with their ability to create deepfakes, a catchall term coined in 2017 that has come to mean any manipulated visual or auditory likeness.

Broadly, “deepfake” images could refer to anything from minor edits to a completely unreal rendering of a person’s likeness. Some may be sexually explicit, but even more are not. They can be made consensually, or used as a form of image-based sexual abuse (IBSA). They can be regulated or policed at (or before) the moment of creation, through the policies and built-in limitations of the AI tools themselves, or after creation, as they spread online. They could even be outlawed completely, or curbed by criminal or civil liability for their makers or distributors, depending on intent.

Companies, each defining the threat of nonconsensual deepfakes independently, have chosen to view sexual synthetic forgeries in several ways: as a crime addressed through direct policing, as a violation of existing terms of service (like those regulating “revenge porn” or misinformation), or, simply, as not their responsibility.

Here’s a list of just some of those companies, how they fit into the picture, and their own stated policies touching on deepfakes.

Anthropic

AI developers like Anthropic and its competitors have to be answerable for products and systems that can be used to generate synthetic content. To many, that means they also hold more liability for their tools’ outputs and for how users wield them.

Advertising itself as a safety-first AI company, Anthropic has maintained a strict anti-NSFW policy, using fairly ironclad terms of service and abuse filters to try to curb bad user behavior from the start. It’s also worth noting that Anthropic’s Claude chatbot doesn’t generate images of any kind.

Apple

In contrast to companies like Anthropic, tech conglomerates play the role of host or distributor for synthetic content. Social platforms, for example, provide opportunity for users to swap images and videos. Online marketplaces, like app stores, become avenues for bad actors to sell or access generative AI tools and their building blocks. As companies dive deeper into AI, though, these roles are becoming more blurred.


Recent scrutiny has fallen on Apple’s App Store and other marketplaces for allowing explicit deepfake apps. While its App Store policies aren’t as direct as those of competitors, notably Google Play, the company has reinforced anti-pornography policies in both its advertising and store rules. But controversy persists across Apple’s wide array of products. In recent months, the company has been accused of underreporting the role of its devices and services in the spread of both real and AI-generated child sexual abuse material.

And Apple’s recent launch of Apple Intelligence will pose new policing questions.

GitHub

GitHub, as a platform for developers to create, store, and share projects, treats the building and advertising of any nonconsensual explicit imagery as a violation of its Acceptable Use Policy, much as it treats misinformation. It offers its own generative AI assistant for coding, but that assistant doesn’t produce any visual or audio outputs.

Alphabet, Inc.

Google

Google plays a multifaceted role in the creation of synthetic images, acting as both host and developer. It has announced several policy changes to curb both access to and dissemination of nonconsensual synthetic content in Search, as well as advertising for “nudify” apps on Google Play. This came after the tech giant was called out for its role in surfacing nonconsensual digital forgeries on Google.com.

YouTube

As a host for content, YouTube has prioritized moderating user uploads and providing reporting mechanisms for subjects of forgeries.

Microsoft

Microsoft offers its own generative AI tools, including image generators hosted on Bing and Copilot, which also harness external AI models like OpenAI’s DALL-E 3. The company applies its broader content policies to users engaging with these tools, and has instituted prompt safeguards and watermarking, but it likely bears responsibility for anything that falls through the cracks.

OpenAI

OpenAI is one of the biggest names in AI development, and its models and products are incorporated into — or are the foundations of — many of the generative AI tools offered by companies worldwide. OpenAI maintains strict terms of use to try to protect itself from the ripple effects of such widespread use of its AI models.

In May, OpenAI announced it was exploring the possibility of allowing NSFW outputs in age-appropriate contexts on its own ChatGPT and associated API. Up until that point, the company had remained firm in banning any such content. OpenAI told Mashable at the time that, despite the potential chatbot uses, the company still prohibited AI-generated pornography and deepfakes.



Meta

Facebook

While parent company Meta continues to explore generative AI integration on its platforms, it’s come under intense scrutiny for failing to curb explicit synthetic forgeries and IBSA. Following widespread controversy, Facebook has taken a stricter stance on “nudify” apps advertising on the site.

Meta, meanwhile, has turned toward stronger AI labeling efforts and moderation, as its Oversight Board reviews the company’s power to address sexually explicit and suggestive AI-generated content.

Instagram

Instagram similarly moderates visual media posted to its site, bolstered by its community guidelines.

Snapchat

Snapchat’s generative AI tools do include limited image generation, so its potential liability is twofold: the platform’s reputation as a hub for swapping sexual content, and its new role as a possible creator of synthetic explicit images.

TikTok

TikTok, which has its own creative AI suite known as TikTok Symphony, recently waded into murkier generative AI waters by launching AI-generated digital avatars. It appears the company’s legal and ethical standing will rest on establishing proof of consent for AI-generated likenesses. TikTok’s community guidelines include general rules against nudity, the exposure of young people’s bodies, and sexual activity or services.

X/Twitter

Elon Musk’s artificial intelligence venture, xAI, recently added image generation to its platform chatbot, Grok, and the image generator is capable of some eyebrow-raising facsimiles of celebrities. Grok’s interface is built right into the X platform, which is in turn a major forum for users to share their own content, moderated haphazardly through the site’s community and advertising guidelines.

X recently announced new policies that allow consensual adult content on the platform, but did not specifically address the posting of sexual digital forgeries, consensual or otherwise.

This story will be periodically updated as policies evolve.
