G7 Nations Agree To Voluntary AI Guidelines

The Group of Seven (G7) nations reportedly plans to agree later on Monday to a voluntary code of conduct for advanced artificial intelligence (AI) systems, as governments seek ways of mitigating the potentially harmful effects of the rapidly spreading technology.

The voluntary code of conduct is the result of a diplomatic process called the “Hiroshima AI process” that the G7 economies of Canada, France, Germany, Italy, Japan, Britain and the United States, as well as the European Union, began in May.

It seeks to set broad guidelines for the way countries govern the technology and deal with privacy and security risks, according to a G7 document detailed in multiple media reports.

An 11-point code laid out in the document aims to promote “safe, secure, and trustworthy AI worldwide” and to provide voluntary guidance for the development of the most advanced AI systems, “including the most advanced foundation models and generative AI systems”, Reuters reported.

[Image: European Commission Vice President Vera Jourova. Image credit: European Commission]

‘Benefits and risks’

The code is intended to “help seize the benefits and address the risks and challenges brought by these technologies”, the document states.

Companies are advised to take measures to identify, evaluate and mitigate risks, and to address incidents and patterns of AI misuse.

The code also advises firms to publish public reports on their products’ capabilities, limitations, use and misuse, and to invest in security controls.

The EU was a key driving force behind the code and has taken a lead on regulating AI with its AI Act.

International summit

EU digital chief Vera Jourova told an internet governance forum in Kyoto, Japan, earlier this month that a code of conduct would be a strong basis for ensuring safety and would act as a bridge until regulation was in place more broadly.

The UK is this week hosting an international summit on AI in another effort to address the technology’s potentially harmful side-effects.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
