The US Is Forming a Global AI Safety Network With Key Allies

The US is widely seen as the global leader in artificial intelligence, thanks to companies like OpenAI, Google, and Meta. But the US government says it needs help from other nations to manage the risks posed by AI technology.

At an international summit on AI safety in Seoul on Tuesday, the US delivered a message from Secretary of Commerce Gina Raimondo announcing that a global network of AI safety institutes spanning the US, UK, Japan, Canada, and other allies will collaborate to contain the technology’s risks. She also urged other countries to join up.

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers,” Secretary Raimondo said in a statement released ahead of the announcement. “It is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

The US government has previously said advances in AI create national security risks, including the potential to automate or accelerate the development of bioweapons or to enable more damaging cyberattacks on critical infrastructure.

One challenge for the US, alluded to in Raimondo’s statement, is that some national governments may not be eager to fall in line with its approach to AI. She said the US, the UK, Japan, Canada, Singapore, and the European AI Office would work together as the founding members of a “global network of AI safety institutes.”

The Commerce Department declined to comment on whether China had been invited to join the new AI safety network. Fears that China will use advanced AI to empower its military or threaten the US have led first the Trump administration and then the Biden administration to roll out a series of restrictions on Chinese access to key technology.

The US and China have at least opened a line of communication. A meeting between President Biden and Chinese President Xi Jinping in November saw the two superpowers agree to hold talks on AI risks and safety. Representatives from the two nations met in Switzerland last week for the first round of discussions.

The Commerce Department said that representatives of the new global AI safety network’s members will meet in San Francisco later this year. A blueprint issued by the agency says the network’s members will work together to develop and agree upon methodologies and tools for evaluating AI models, as well as ways to mitigate the risks of AI. “We hope to help develop the science and practices that underpin future arrangements for international AI governance,” the document says. A Commerce Department spokesperson said that the network would help nations tap into talent, experiment more quickly, and agree on AI standards.

The Seoul summit on AI safety this week is cohosted by the UK government, which convened the first major international meeting on the topic last November. That summit culminated in 28 countries, including the US and China, as well as the EU, signing a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause “serious, even catastrophic, harm.”


Amid rapid deployment of generative AI systems like ChatGPT last year, some prominent researchers and tech leaders began to speak more loudly about the potential for AI algorithms to become difficult to control and perhaps even a threat to humanity. Talk of the most far-off threats has since faded, but policymakers around the world are concerned about more immediate problems, such as the potential for generative AI tools like ChatGPT to spread disinformation and interfere with elections. In January, some voters in New Hampshire received robocalls using an AI-generated fake of Joe Biden’s voice.

Last October, President Biden issued a wide-ranging executive order to address the potential and pitfalls of the fast-moving leaps in AI made most evident by the startling abilities of ChatGPT. The order directed the Commerce Department to work on a number of initiatives to develop AI safety standards and to draw up a plan for global “engagement on promoting and developing AI standards.”

Biden’s executive order also required the US National Institute of Standards and Technology, which is part of the Commerce Department, to establish a US AI Safety Institute charged with systematically testing AI models to understand how they might behave and how they could be misused.
