SEOUL, South Korea (AP) – Major artificial intelligence companies renewed their commitment to safely develop AI at a mini-summit on Tuesday, while world leaders agreed to establish a network of publicly backed safety institutes to advance research and testing of the technology.
Google, Meta, and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on cutting-edge systems if the most extreme risks cannot be contained.
The two-day conference is a follow-up to the AI Safety Summit held in November at Bletchley Park in the UK, and comes amid a flurry of efforts by governments and global bodies to design guardrails for the technology amid fears over the risks it poses to everyday life and to humanity.
The UK government, which co-hosted the event, said in a statement that researchers from 10 countries and the European Union will “build a common understanding of AI safety and align AI research efforts.” The network of safety institutes includes those already set up by the UK, US, Japan and Singapore since the Bletchley summit.
“AI presents huge opportunities to transform our economy and solve our biggest challenges,” UK Technology Secretary Michelle Donelan said in a statement. “However, I have always been clear that we will not be able to realize the full potential of AI unless we understand the risks posed by this rapidly evolving, complex technology.”
The 16 AI companies that have signed on to the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France's Mistral AI, China's Zhipu.ai, and the United Arab Emirates' G42. They committed to responsible governance and public transparency, and vowed to ensure the safety of their cutting-edge AI models.
This isn't the first time AI companies have undertaken lofty-sounding voluntary safety commitments: Amazon, Google, Meta and Microsoft were among a group that signed onto voluntary safeguards brokered by the White House last year to ensure their products were safe before they were released.
The conference in Seoul comes at a time when some of these companies are announcing the latest versions of their AI models.
The safety pledge includes publishing frameworks that set out how the companies will measure the risks of their models. In extreme cases where risks are severe and “intolerable,” AI companies will have to hit a kill switch and stop developing or deploying their models and systems if those risks cannot be mitigated.
Since last year's UK conference, the AI industry has “increasingly focused on the most pressing concerns, including misinformation and disinformation, data security, bias, and keeping humans in the loop,” said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the agreement. “It is essential that we continue to consider all possible risks, prioritizing those that are most likely to cause problems if not properly addressed.”