International Tensions Surface At Paris AI Summit

A representative of China at this week’s AI Action Summit in Paris stressed the importance of collaboration on artificial intelligence, while engaging in a testy exchange with Yoshua Bengio, a Canadian academic considered one of the “Godfathers” of AI.

Fu Ying, a former Chinese government official and now an academic at Tsinghua University in Beijing, said the name of China’s official AI Development and Safety Network was intended to emphasise the importance of collaboration to manage the risks around AI.

She also said tensions between the US and China were impeding efforts to develop AI safely.

Pioneering AI researcher Prof Yoshua Bengio. Image credit: Yoshua Bengio

‘Unity and collaboration’

“At a time when the science is going in an upward trajectory, the relationship is falling in the wrong direction and it is affecting unity and collaboration to manage risks,” she said during a panel discussion, according to a BBC report.

“It’s very unfortunate.”

At the same time Fu Ying, a former Chinese vice minister of foreign affairs and former ambassador to the UK, took veiled jabs at Prof Bengio, who was also a member of the panel.

She thanked him for a major international AI safety report that Bengio co-authored, saying it was “very, very long” at more than 400 pages in the Chinese translation and that she had not finished reading it.

She also implied that the name of China’s AI body was a dig at the AI Safety Institute, a state-backed international body of which Prof Bengio is a member.

Fu Ying spoke of the “risks” involved with AI development and said open-source models, such as those developed by Chinese AI start-up DeepSeek, can be easier to regulate.

Prof Bengio disagreed, saying open-source models could be easier to exploit, but acknowledged it was easier to spot issues with an open-source model such as DeepSeek’s than with OpenAI’s closed-source ChatGPT.

Bengio also spoke about AI risks at the World Economic Forum in Davos last month, singling out AI agents as a particularly dangerous field of development because they give AI tools the ability to take action in the outside world.

‘Most dangerous path’

“I want to raise a red flag. This is the most dangerous path,” he said at a Davos panel.

He said non-agentic AI research tools would be safer and could even be used to control agents.

Google DeepMind chief executive Demis Hassabis, also on the panel, agreed that measures should be taken to mitigate agentic risks, but he said companies are under pressure to give AI tools agentic capabilities.

“People want for their systems to be agentic,” Hassabis said.

OpenAI has released two AI agents in test versions, with capabilities including booking restaurants, ordering groceries and compiling reports based on online research.

The company has been criticised for de-emphasising safety efforts as it shifts from a non-profit to a for-profit model.

Matthew Broersma

Matt Broersma is a longstanding technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
