OpenAI supports California's AI-watermarking bill

OpenAI has expressed its support for a California bill requiring AI-generated content to be labeled as such.

According to Reuters, OpenAI sent a letter of support to California State Assembly member Buffy Wicks, who authored the bill, titled the California Provenance, Authenticity and Watermarking Standards Act (AB 3211). The legislation, which has passed the state Assembly, would require AI companies to put an invisible watermark on all content made or “significantly modified” by their AI models. The bill now heads to a vote in the state Senate and, if passed, would go to California Governor Gavin Newsom for review.

The bill would also require AI companies to provide “watermark decoders” so that users can easily identify whether content is AI-generated or not.



Image-generating models on the market vary in their levels of photorealism, protective guardrails, and copyright protections. DALL-E 3 is OpenAI’s latest text-to-image model, and as of February, images it generates in ChatGPT contain C2PA metadata, which records the image’s provenance. Similarly, Google has its own SynthID tool for watermarking images created by its model, Gemini. Grok-2, released by Elon Musk’s xAI, appears to have the fewest restrictions, as it can generate images of public figures and copyrighted works. Midjourney, one of the most advanced image models, is currently embroiled in a legal battle over copyright infringement.



In the letter viewed by Reuters, OpenAI underscored the importance of transparency around the provenance of images and other AI-generated content. “New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content,” wrote OpenAI Chief Strategy Officer Jason Kwon in the letter.

The issue of AI-generated deepfakes and misinformation is especially pressing ahead of the upcoming U.S. presidential election. AI-generated images of Kamala Harris speaking at a communist rally and of Taylor Swift endorsing Donald Trump have already circulated on social media.

About Cecily Mauran

