Labour would make it ‘statutory’ for tech companies to carry out AI safety tests and share results with government, replacing voluntary deal
Labour has said it would make it mandatory for tech companies to share artificial intelligence (AI) safety test results with the government, replacing the voluntary agreement reached at last November's AI Safety Summit. The party argues the change is needed because regulators and lawmakers previously failed to rein in social media companies.
Shadow technology secretary Peter Kyle said legislators and regulators had been “behind the curve” on social media, and that Labour would ensure the same mistake was not repeated with AI.
His remarks come after the mother of murdered teenager Brianna Ghey called for greater controls on social media for children under 16.
Kyle said tech companies developing powerful AI systems would be required to coordinate research with the government.
“We will move from a voluntary code to a statutory code, so that those companies engaging in that kind of research and development have to release all of the test data and tell us what they are testing for, so we can see exactly what is happening and where this technology is taking us,” Kyle said, speaking on BBC One’s Sunday with Laura Kuenssberg.
At last year’s AI Safety Summit, companies including Amazon, Google, Facebook parent Meta Platforms, Microsoft and ChatGPT developer OpenAI agreed to voluntary safety testing of their AI systems.
The agreement was backed by the EU and 10 countries including China, Germany, France, Japan, the UK and the US.
Labour’s proposals would see such companies required to inform the government when they were planning to develop AI systems over a certain level of capability and to conduct safety tests with “independent oversight”.
AI Safety Institute
Kyle said the process would help the new UK AI Safety Institute “reassure the public that independently, we are scrutinising what is happening in some of the real cutting-edge parts of … artificial intelligence”.
“Some of this technology is going to have a profound impact on our workplace, on our society, on our culture. And we need to make sure that that development is done safely,” he said.
Kyle is currently on a week-long visit to the US for meetings on AI with government figures and representatives from tech companies including Apple, Amazon, Google, Meta, Microsoft and Oracle.
He is also due to meet AI-focused companies including Anthropic and OpenAI to discuss how the technology can be used to improve public services and healthcare, according to Labour.
“A Labour government wants to unleash innovation and give companies the certainty needed to invest in our country, boosting wages and getting the economy growing again,” he said after arriving in Washington DC on Saturday.
Conservative minister for science Andrew Griffith said that when it comes to balancing AI safety and business growth, Labour “do not have a plan”.
In a recent report, a House of Lords committee warned the UK could miss out on an AI “goldrush” because of the government’s disproportionate focus on safety.
An International Monetary Fund (IMF) study last month found that AI is likely to affect about 40 percent of jobs worldwide, rising to 60 percent in advanced economies, with roughly half of those affected potentially facing reduced labour demand and lower wages.