Google searches will now detect origin of AI-manipulated images

Google is expanding efforts to properly label AI-generated content, updating its in-house “About This Image” tool with a global standard for detecting the origins of an AI-edited image.

The new label was formulated as part of Google’s work with the global Coalition for Content Provenance and Authenticity (C2PA). Members of the C2PA have committed to developing and adopting a standardized AI certification and detection process, enabled by a verification technology known as “Content Credentials.” Not all C2PA members, which include Amazon, Meta, and OpenAI, have implemented the authentication standards, however.

Google is taking the first step among key players, integrating the C2PA’s new 2.1 standard into products like Google Search and, eventually, Google Ads (the “About This Image” prompt is found by clicking the three vertical dots above a photo surfaced in a search). The standard includes an official “Trust List” of devices and technologies that can help vet the origin of a photo or video through its metadata. “For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate,” Laurie Richardson, Google’s vice president of trust and safety, told The Verge. “Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.”
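The Trust List idea can be illustrated with a minimal sketch: a verifier checks that whoever signed a provenance claim is on the trust list, and that the signer is actually authorized to vouch for the device the claim names. This is purely illustrative of the concept, not Google’s or the C2PA’s implementation; the claim fields and trust-list entries below are hypothetical.

```python
# Illustrative sketch of the Trust List concept only. The signer names,
# device models, and claim fields here are hypothetical, not real C2PA data.

TRUST_LIST = {
    # Hypothetical entries: claim signers the verifier trusts, mapped to
    # the capture devices each signer is authorized to vouch for.
    "ExampleCam Signing CA": {"ExampleCam X100", "ExampleCam X200"},
}

def claim_is_trusted(claim: dict) -> bool:
    """Return True if the claim's signer is on the trust list and covers
    the camera model the claim asserts the image came from."""
    signer = claim.get("signer")
    model = claim.get("camera_model")
    return model in TRUST_LIST.get(signer, set())

# A claim signed by a listed issuer for a device it covers validates;
# an unknown signer (or a device the signer doesn't cover) does not.
print(claim_is_trusted({"signer": "ExampleCam Signing CA",
                        "camera_model": "ExampleCam X100"}))  # True
print(claim_is_trusted({"signer": "Unknown CA",
                        "camera_model": "ExampleCam X100"}))  # False
```

In the real standard, this check is done cryptographically against signing certificates rather than string names, but the trust decision has the same shape: metadata is only as credible as the party vouching for it.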



After joining the C2PA in May, TikTok became the first video platform to implement the C2PA’s Content Credentials, including an automatic labeling system that reads a video’s metadata and flags it as AI-generated. With the launch of Content Credentials on Google platforms, YouTube is set to follow in its footsteps.



Google has been vocal about widespread AI labeling and regulation, especially in its efforts to curb the spread of misinformation. In 2023, Google launched SynthID, its own digital watermarking tool designed to help detect and track AI-generated content made using Google DeepMind’s text-to-image generator, Imagen. It introduced (limited) AI labeling mandates for YouTube videos earlier this year, and has committed to addressing AI-generated deepfake content in Google Search.



The company joined the C2PA steering committee in February, a group that includes other major industry players and even news organizations, like the BBC.

About Chase DiBenedetto
