OpenAI, Anthropic To Share AI Models With US Government

Both OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models.

The US AI Safety Institute announced “agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI.”

Essentially, the agreements will let the US government access major new AI models before their general release, in order to help improve their safety. This is a core goal of both the British and American AI Safety Institutes.

AI safety

In April 2024 the United Kingdom and United States signed a landmark agreement to work together on testing advanced artificial intelligence (AI).

That agreement saw the UK and US AI Safety Institutes pledge to work seamlessly with each other, partnering on research, safety evaluations, and guidance for AI safety.

It followed last year’s AI Safety Summit in the UK, where big-name companies including Amazon, Google, Facebook parent Meta Platforms, Microsoft and ChatGPT developer OpenAI all agreed to voluntary safety testing for AI systems, alongside the so-called ‘Bletchley Declaration.’

That agreement was backed by the EU and 10 countries including China, Germany, France, Japan, the UK and the US.

OpenAI, Anthropic agreement

Now, according to the US AI Safety Institute, each company’s Memorandum of Understanding establishes the framework for the institute “to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.”

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the US AI Safety Institute. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety.”

“These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI,” said Kelly.

Additionally, the US AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the UK AI Safety Institute.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
