Journalism Group Calls On Apple To Remove AI Feature

Journalism group Reporters Without Borders has said it is “very concerned” by distortions to news headlines introduced by Apple’s artificial intelligence notification summaries, and has called on the tech giant to remove the feature.

The group, also known as RSF, said an incident reported by the BBC in which an “Apple Intelligence” feature generated a false headline shows the technology is “too immature to produce reliable information” and “should not be allowed on the market” for such uses.

Apple began releasing the AI features in the UK on 11 December, and the BBC reported the false headline shortly afterward.

Apple Intelligence “took less than forty-eight hours to demonstrate that its new generative AI tool is incapable of producing reliable information in a consistent, trustworthy manner”, RSF said.

‘Act responsibly’

Apple has not commented on the incident.

RSF said AI works in a probabilistic way that is incompatible with systems intended to deliver accurate information.

“AIs are probability machines, and facts can’t be decided by a roll of the dice,” said Vincent Berthier, head of RSF’s technology desk.

“RSF calls on Apple to act responsibly by removing this feature.”

Apple Intelligence includes a feature that groups multiple notifications and summarises their content.

In one case a grouped notification of BBC headlines displayed a summary falsely stating that Luigi Mangione, the suspect arrested following the high-profile murder of healthcare insurance chief executive Brian Thompson, had shot himself.

The other two headlines in the group were summarised accurately.

The BBC said it had contacted Apple “to raise this concern and fix the problem”.

“It is essential to us that our audiences can trust any information or journalism published in our name and that includes notifications,” the corporation said in a statement.

Apple’s take on AI, which has been released for newer iPhones, iPads and Macs, has already been criticised for generating summaries of emails or other notifications that can be inscrutable, bizarre or incorrect.

False headlines

The problem is more concerning when applied to notifications from news apps.

In a similar issue, on 21 November, three headlines from The New York Times were grouped together, with the AI-generated summary indicating falsely that one of them was about the “arrest” of Israeli prime minister Benjamin Netanyahu.

In reality, the headline stated that the International Criminal Court had issued an arrest warrant for Netanyahu.

That mistake was highlighted on social media platform Bluesky by a journalist with investigative journalism site ProPublica, the BBC said.

In May, after Google launched AI summaries of search results, users found the summaries were often bizarre or incorrect.

Google temporarily scaled back the feature’s roll-out soon afterward, before resuming its deployment later in the year.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has written for Ziff-Davis, ZDNet and other leading publications.
