Big Names Join President Biden’s ‘AI Safety Institute Consortium’


AI giants, as well as many other businesses and organisations, have joined President Biden’s AI Safety Institute Consortium

More than 200 organisations have signed up to the Biden Administration’s AI Safety Institute, including prominent players in the artificial intelligence (AI) sector.

The US Department of Commerce announced that the ‘AI Safety Institute Consortium’ is the first-ever consortium dedicated to the safe development and deployment of generative AI.

It comes after President Biden signed an executive order in late October 2023 requiring US AI companies such as OpenAI and Google to share their safety test results with the government before releasing AI models.

Image credit: US government

AI Safety Institute Consortium

Just days after that, the world’s first AI Safety Summit was held at Bletchley Park in the UK, attended by leading AI organisations, governments, and high-profile figures.

That AI Safety Summit delivered what was called the ‘Bletchley Declaration’, which addressed the potential risks posed by artificial intelligence.

Now US Secretary of Commerce Gina Raimondo has created the AI Safety Institute Consortium (AISIC), which includes more than 200 leading AI stakeholders to help advance the development and deployment of safe, trustworthy AI.

The idea is to unite AI creators and users, academics, government and industry researchers, and civil society organisations in support of the development and deployment of safe and trustworthy AI.

The consortium will be housed under the US AI Safety Institute (USAISI) and will contribute to priority actions outlined in President Biden’s executive order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

Back in July 2023 a number of big-name players in the artificial intelligence market, including OpenAI and Google, had agreed voluntary AI safeguards that included the use of watermarks.

Significant role

“The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said Secretary Raimondo.

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the US AI Safety Institute Consortium is set up to help us do,” said Secretary Raimondo.

US Secretary of Commerce Gina M. Raimondo. Image credit: US Government

“Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

“To keep pace with AI, we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction,” added Bruce Reed, White House Deputy Chief of Staff. “Thanks to President Biden’s landmark Executive Order, the AI Safety Consortium provides a critical forum for all of us to work together to seize the promise and manage the risks posed by AI.”

Members of the consortium include OpenAI, Alphabet’s Google, Anthropic and Microsoft, as well as Facebook-parent Meta Platforms, Apple, Amazon, Nvidia, Palantir, Intel, Cisco Systems, IBM, Hewlett Packard Enterprise, Mastercard, Qualcomm, and Visa.

The full list of consortium participants is available here.