UK government pledges initial £100m for AI taskforce to develop use of ChatGPT-like technology across economy
The UK government has announced a taskforce with an initial £100 million in funding to develop artificial intelligence (AI) foundation models, the technology that underpins OpenAI's ChatGPT and similar tools, for use across the economy.
Meanwhile a European consumer protection group warned of the risks such tools may pose to consumers and children.
The government said foundation models, including large language models (LLMs) such as the one that powers ChatGPT, could be used in healthcare, education and other sectors.
The technology is predicted to raise global GDP by 7 percent over a decade, making its adoption “a vital opportunity” to expand the UK economy, the government said.
“To support businesses and public trust in these systems and drive their adoption, the taskforce will work with the sector towards developing the safety and reliability of foundation models, both at a scientific and commercial level,” the government stated.
Prime Minister Rishi Sunak said AI provides “enormous opportunities” to expand the economy, create better-paid jobs and improve healthcare and security.
“By investing in emerging technologies through our new expert taskforce, we can continue to lead the way in developing safe and trustworthy AI as part of shaping a more innovative UK economy,” he said.
Meanwhile, consumer concerns around AI continue to grow after Italy banned ChatGPT on data protection grounds earlier this month.
The European Consumer Organisation (BEUC) on Monday called on EU consumer protection agencies to investigate the technology and the potential harm it poses to individuals.
The organisation set out its concerns in separate letters earlier this month to consumer safety and consumer protection agencies.
It said content produced by chatbots may appear true and reliable but is often factually incorrect, potentially misleading consumers and resulting in deceptive advertising.
Younger consumers and children are more vulnerable to such risks, it said.
“BEUC thus asks you to investigate the risks that these AI systems pose to consumers as a matter of urgency, to identify their presence in consumer markets and to explore what remedial action must be taken to avoid consumer harm,” said BEUC deputy director general Ursula Pachl in the letter to consumer protection agencies and the European Commission.