South Korea and the United Kingdom are hosting the second global AI summit in Seoul, which began on Tuesday and continues on Wednesday.

Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday to develop the technology safely, at a time when regulators are scrambling to keep pace with its rapid development.

The AI Safety Summit in Seoul, South Korea is the second such global summit, and follows the first AI Safety Summit, hosted by the United Kingdom at Bletchley Park last November.

That 2023 summit resulted in the first international declaration on AI – the so-called ‘Bletchley Declaration’ – in which attendees all agreed that artificial intelligence poses a potentially catastrophic risk to humanity, and proposed a series of steps to mitigate this risk.


AI Seoul Summit

Six months later, the AI Seoul Summit has gathered political leaders, who have agreed to launch the first international network of AI Safety Institutes to boost understanding of AI.

A new agreement was reached between ten countries plus the European Union, committing those nations to work together to launch an international network to accelerate the advancement of the science of AI safety.

And the UK and the Republic of Korea have secured commitments from 16 global AI tech companies to a set of safety outcomes, building on the Bletchley agreements with an expanded list of signatories.

Tech companies at the forefront of AI, including firms from China and the UAE, have committed not to develop or deploy AI models if the risks cannot be sufficiently mitigated. The agreement also commits companies to ensuring accountable governance structures and public transparency on their approaches to frontier AI safety.

Among the companies that have signed up to the fresh ‘Frontier AI Safety Commitments’ are:

  • Amazon
  • Anthropic
  • Cohere
  • Google / Google DeepMind
  • G42
  • IBM
  • Inflection AI
  • Meta
  • Microsoft
  • Mistral AI
  • Naver
  • OpenAI
  • Samsung Electronics
  • Technology Innovation Institute
  • xAI
  • Zhipu.ai

Where they have not done so already, the AI tech companies will each publish safety frameworks setting out how they will measure the risks of their frontier AI models, such as examining the risk of misuse of the technology by bad actors.

The frameworks will also outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.

In the most extreme circumstances, the companies have also committed to “not develop or deploy a model or system at all” if mitigations cannot keep risks below the thresholds.

In defining these thresholds, companies will take input from trusted actors, including their home governments as appropriate, before the thresholds are released ahead of the AI Action Summit in France in early 2025.

World first

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” said UK Prime Minister Rishi Sunak. “These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI.”

“It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology,” said the UK Prime Minister. “The UK’s Bletchley summit was a great success and together with the Republic of Korea we are continuing that success by delivering concrete progress at the AI Seoul Summit.”

“Ensuring AI safety is crucial for sustaining recent remarkable advancements in AI technology, including generative AI, and for maximizing AI opportunities and benefits, but this cannot be achieved by the efforts of a single country or company alone,” added Republic of Korea Minister Lee.

“In this regard, we warmly welcome the ‘Frontier AI Safety Commitments’ established by global AI companies in collaboration with the governments of the Republic of Korea and the UK during the ‘AI Seoul Summit’, and we expect companies to implement effective safety measures throughout the entire AI lifecycle of design, development, deployment and use,” Lee stated.

“We are confident that the ‘Frontier AI Safety Commitments’ will establish itself as a best practice in the global AI industry ecosystem, and we hope that companies will continue dialogues with governments, academia, and civil society, and build cooperative networks with the ‘AI Safety Institute’ in the future,” said Lee.

Summit attendees

Attending the AI Seoul Summit are the governments of Australia; Canada; China; France; Germany; India; Italy; Japan; Kingdom of Saudi Arabia; Republic of Korea; Republic of Singapore; Republic of the Philippines; Rwanda; Spain; Switzerland; Turkey; United Arab Emirates, UK, USA and Ukraine.

The event is also being attended by the European Commission, the Organisation for Economic Co-operation and Development (OECD); and the United Nations.

Big name tech attendees include Amazon, Anthropic, Google/Google DeepMind, IBM, Meta, Microsoft, Mistral, OpenAI, Samsung Electronics, Tencent and xAI.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
