Bletchley Declaration sees major nations agree that artificial intelligence poses a potentially catastrophic risk to humanity
The AI Safety Summit in the UK has delivered what is being called the ‘Bletchley Declaration’, which addresses the potential risks posed by artificial intelligence.
The Bletchley Declaration is notable as the first international declaration on AI, after countries and blocs including the UK, the US, the EU, Australia, and China all agreed that artificial intelligence poses a potentially catastrophic risk to humanity.
The summit brings together many nations, high-profile figures and organisations to discuss the future of AI and work towards a shared understanding of its risks.
Twenty-eight governments signed the Bletchley Declaration on the first day of the AI Safety Summit, hosted by the British government.
“Artificial Intelligence (AI) presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity,” the declaration states. “To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.”
“Alongside these opportunities, AI also poses significant risks, including in those domains of daily life,” it reads. “Substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent.”
“These issues are in part because those capabilities are not fully understood and are therefore hard to predict,” the declaration states. “We are especially concerned by such risks in domains such as cybersecurity and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.”
“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” it states. “We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.”
The signatories agreed that addressing frontier AI risk will focus on:
- identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
- building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.
“This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI – helping ensure the long-term future of our children and grandchildren,” said Prime Minister Rishi Sunak.
“Under the UK’s leadership, more than 25 countries at the AI Safety Summit have stated a shared responsibility to address AI risks and take forward vital international collaboration on frontier AI safety and research,” said Sunak.
South Korea has agreed to host another such summit in six months’ time, while France will host one in a year.
American AI Safety Institute
The declaration marks a diplomatic achievement for the UK, and for the Prime Minister in particular, who decided to host the summit this summer after growing concerned that AI models were advancing rapidly without oversight.
It also comes as US Commerce Secretary Gina Raimondo, who appeared on stage at the summit with China’s vice-minister of science and technology, Wu Zhaohui, in a rare show of global unity, announced a separate American AI Safety Institute within the country’s National Institute of Standards and Technology.
Raimondo called this “a neutral third party to develop best-in-class standards”, adding that the institute would develop its own rules for safety, security and testing.
It comes after the Biden administration earlier this week issued an executive order requiring US AI companies such as OpenAI and Google to share their safety test results with the government before releasing AI models.
The G7 group of countries has also published voluntary guidelines for advanced AI development.