Twenty-seven nations, including the UK, the US, and members of the EU, have signed a pact at the AI Seoul Summit to establish shared risk thresholds for the development and deployment of artificial intelligence, aiming to mitigate severe risks such as misuse and the loss of human oversight. The agreement sets the stage for stronger global AI safety measures and international collaboration on setting risk thresholds.
Nations Forge Agreement on AI Risk Thresholds in Seoul Summit
On May 22, 2024, a significant agreement was signed at the conclusion of the AI Seoul Summit in South Korea. Twenty-seven nations, including the UK, US, and members of the EU, endorsed a new pact aimed at establishing shared risk thresholds for the development and deployment of artificial intelligence (AI).
Known as the Seoul Ministerial Statement, the agreement intends to create internationally recognized criteria for determining when AI model capabilities pose severe risks. These risks include the potential misuse of AI to aid malicious actors in acquiring or using chemical and biological weapons, and the possibility of AI avoiding human oversight through deceptive practices.
The summit, co-hosted by the UK and South Korea, drew participants including the United States, France, and the UAE. China took part in the discussions but did not sign the statement.
UK Technology Secretary Michelle Donelan emphasized that the agreement marks the beginning of a new phase in the global AI safety agenda. She noted the importance of setting risk thresholds beyond which AI models should not be released and enhancing international cooperation to mitigate severe risks.
The signatories aim to collaborate with AI companies, civil society, and academia to develop detailed risk proposals. These will be further discussed at the AI Action Summit, scheduled to be hosted by France in 2025.
In related agreements, 16 leading AI companies committed to publishing safety frameworks, and 10 nations, along with the EU, agreed to form an international network of AI safety institutes. This network will facilitate the sharing of research and data to strengthen AI safety measures globally.