UK Government Pledges £100m For AI Taskforce

The UK government has announced a taskforce with an initial £100 million in funding to develop artificial intelligence (AI) foundation models, the technology underlying OpenAI’s ChatGPT and similar tools, for use across the economy.

Meanwhile a European consumer protection group warned of the risks such tools may pose to consumers and children.

The government said foundation models, including large language models (LLMs) such as the one that powers ChatGPT, could be used in healthcare, education and other sectors.

The technology is predicted to raise global GDP by 7 percent over a decade, making its adoption “a vital opportunity” to expand the UK economy, the government said.

Image credit: Matheus Bertelli/Pexels

Commercial opportunities

“To support businesses and public trust in these systems and drive their adoption, the taskforce will work with the sector towards developing the safety and reliability of foundation models, both at a scientific and commercial level,” the government stated.

Prime Minister Rishi Sunak said AI provides “enormous opportunities” to expand the economy, create better-paid jobs and improve healthcare and security.

“By investing in emerging technologies through our new expert taskforce, we can continue to lead the way in developing safe and trustworthy AI as part of shaping a more innovative UK economy,” he said.

Meanwhile, consumer concerns around AI continue to grow after Italy banned ChatGPT on data protection grounds earlier this month.

Consumer concerns

The European Consumer Organisation (BEUC) on Monday called on EU consumer protection agencies to investigate the technology and the potential harm it poses to individuals.

The organisation set out its concerns in separate letters earlier this month to consumer safety and consumer protection agencies.

It said content produced by chatbots may appear true and reliable but is often factually incorrect, potentially misleading consumers and resulting in deceptive advertising.

Younger consumers and children are more vulnerable to such risks, it said.

“BEUC thus asks you to investigate the risks that these AI systems pose to consumers as a matter of urgency, to identify their presence in consumer markets and to explore what remedial action must be taken to avoid consumer harm,” said BEUC deputy director general Ursula Pachl in the letter to consumer protection agencies and the European Commission.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
