SEOUL, South Korea (AP) — Leading artificial intelligence companies made a fresh pledge at a mini-summit Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.
Google, Meta and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can't rein in the most extreme risks.
The two-day meeting is a follow-up to November's AI Safety Summit at Bletchley Park in the United Kingdom, and comes amid a flurry of efforts by governments and global bodies to design guardrails for the technology amid fears about the potential risk it poses both to everyday life and to humanity.
Leaders from 10 countries and the European Union will "forge a common understanding of AI safety and align their work on AI research," the British government, which co-hosted the event, said in a statement. The network of safety institutes will include those already set up by the U.K., U.S., Japan and Singapore since the Bletchley meeting, it said.
U.N. Secretary-General António Guterres told the opening session that seven months after the Bletchley meeting, "We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons."
The U.N. chief said in a video address that there needs to be universal guardrails and regular dialogue on AI. "We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding," he said.
The 16 AI companies that signed up for the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France's Mistral AI, China's Zhipu.ai, and G42 of the United Arab Emirates. They vowed to ensure the safety of their most advanced AI models with promises of accountable governance and public transparency.
It's not the first time that AI companies have made lofty-sounding but non-binding safety commitments. Amazon, Google, Meta and Microsoft were among a group that signed on last year to voluntary safeguards brokered by the White House to ensure their products are safe before releasing them.
The Seoul meeting comes as some of those companies roll out the latest versions of their AI models.
The safety pledge includes publishing frameworks setting out how the companies will measure the risks of their models. In extreme cases where risks are severe and "intolerable," AI companies will have to hit the kill switch and stop developing or deploying their models and systems if they can't mitigate the risks.
Since the U.K. meeting last year, the AI industry has "increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and keeping humans in the loop," said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the pact. "It is essential that we continue to consider all possible risks, while prioritizing our efforts on those most likely to create problems if not properly addressed."
Governments around the world have been scrambling to formulate regulations for AI even as the technology makes rapid advances and is poised to transform many aspects of daily life, from education and the workplace to copyrights and privacy. There are concerns that advances in AI could eliminate jobs, or be used to create new bioweapons.
This week's meeting is just one of a slew of efforts on AI governance. The U.N. General Assembly has approved its first resolution on the safe use of AI systems, while the U.S. and China recently held their first high-level talks on AI and the European Union's world-first AI Act is set to take effect later this year.
___
Kelvin Chan contributed to this report from London. Associated Press writer Edith M. Lederer contributed from the United Nations.