UK Government Pledges £100m For AI Taskforce

The UK government has announced a taskforce with an initial £100 million in funding to develop artificial intelligence (AI) foundation models, the technology underpinning OpenAI’s ChatGPT and similar tools, for use across the economy.

Meanwhile a European consumer protection group warned of the risks such tools may pose to consumers and children.

The government said foundation models, including large language models (LLMs) such as the one that powers ChatGPT, could be used in healthcare, education and other sectors.

The technology is predicted to raise global GDP by 7 percent over a decade, making its adoption “a vital opportunity” to expand the UK economy, the government said.

Image credit: Matheus Bertelli/Pexels

Commercial opportunities

“To support businesses and public trust in these systems and drive their adoption, the taskforce will work with the sector towards developing the safety and reliability of foundation models, both at a scientific and commercial level,” the government stated.

Prime Minister Rishi Sunak said AI provides “enormous opportunities” to expand the economy, create better-paid jobs and improve healthcare and security.

“By investing in emerging technologies through our new expert taskforce, we can continue to lead the way in developing safe and trustworthy AI as part of shaping a more innovative UK economy,” he said.

Meanwhile, consumer concerns around AI continue to grow after Italy temporarily banned ChatGPT on data protection grounds earlier this month.

Consumer concerns

The European Consumer Organisation (BEUC) on Monday called on EU consumer protection agencies to investigate the technology and potential harm to the individual.

The organisation set out its concerns in separate letters earlier this month to consumer safety and consumer protection agencies.

It said content produced by chatbots may appear true and reliable but is often factually incorrect, potentially misleading consumers and resulting in deceptive advertising.

Younger consumers and children are more vulnerable to such risks, it said.

“BEUC thus asks you to investigate the risks that these AI systems pose to consumers as a matter of urgency, to identify their presence in consumer markets and to explore what remedial action must be taken to avoid consumer harm,” said BEUC deputy director general Ursula Pachl in the letter to consumer protection agencies and the European Commission.

Matthew Broersma

Matt Broersma is a long-standing tech freelance journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
