EU, UK, US, and China Sign Declaration on 'Catastrophic' AI Risk to Humanity
The European Union, United Kingdom, United States, and China have collectively acknowledged the catastrophic risk that artificial intelligence could pose to humanity. A total of 28 governments signed the Bletchley Declaration on the first day of the AI Safety Summit hosted by the British government at Bletchley Park.
While the declaration stops short of creating an international testing centre in the UK, as the British government had hoped, it does provide a template for future international cooperation.
Further safety summits are planned in South Korea in six months and in France in a year. The declaration warns: "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models."
UK Prime Minister Rishi Sunak welcomed the statement, calling it "a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI."
UK Technology Secretary Michelle Donelan told reporters: "For the first time we now have countries agreeing that we need to look not just independently but collectively at the risks around frontier AI."
The UK had hoped that other countries would agree to back its government AI taskforce, enabling it to test new models from around the world before they are released.
Meanwhile, US Commerce Secretary Gina Raimondo used the summit to announce the creation of a separate American AI Safety Institute within the US National Institute of Standards and Technology (NIST), which she described as "a neutral third party to develop best-in-class standards," adding that the institute would develop its own safety, security, and testing guidelines.
For its part, the European Union is in the process of passing its AI Act, legislation that sets out regulatory principles and specific rules for technologies such as real-time facial recognition.
Earlier this week, the Biden administration issued the United States' first executive order on AI, which, among other things, requires AI companies such as OpenAI and Google to share their safety test results with the government before releasing new models.