Apple Acknowledges ‘Confusion’ Over ‘iPhone Scanning’ System

Apple has acknowledged that its announcement of tools to scan for illegal images on iPhones and iPads was “jumbled pretty badly”.

Following criticism from privacy campaigners, the company has now given more details on the system, saying device-level scanning would allow independent experts to verify how Apple was using the system and what was being scanned for.

On 5 August Apple announced it would scan images uploaded from iPhones and iPads to its iCloud storage, looking for matches against a database of known child sex abuse material (CSAM) maintained by the US National Center for Missing and Exploited Children (NCMEC).

Companies that operate cloud-based services, including Facebook, Google and Microsoft, commonly scan for CSAM, but do so remotely.

Upload scanning

Apple said it plans to add hashes from the CSAM database directly to iPhones and iPads in an operating system update later this year, with devices scanning images before they reach iCloud.

An image is scanned only when a user uploads it to iCloud, and the system detects only exact matches against the database.
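As a rough illustration of what exact hash matching against a known database looks like, here is a minimal sketch. It uses SHA-256 as a stand-in hash; Apple's actual system uses its own perceptual "NeuralHash" values supplied by NCMEC, and the function and variable names here are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the on-device database of known-image
# hashes (the real list is perceptual NeuralHash values, not SHA-256).
KNOWN_HASHES = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def matches_known_database(image_bytes: bytes) -> bool:
    """Return True only if the image's hash exactly matches a database entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Because only exact matches count, an ordinary photo that is not in the database produces no match at all; the system does not analyse what the photo depicts.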

The system would not flag images of a person’s children in the bath, or search for pornography, Apple’s head of software, Craig Federighi, told The Wall Street Journal.

He said the announcement was “misunderstood” and that people had become concerned that Apple was scanning iPhones for images.

“That is not what is happening,” Federighi said.

“We feel very positively and strongly about what we’re doing and we can see that it’s been widely misunderstood.”

Account review

If the user tries to upload several CSAM images, the account will be flagged for review by Apple staff.

Federighi said this would only happen if the user tried to upload in the region of 30 matching images.
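The review step described above can be sketched as simple threshold logic. This is an illustrative assumption about how a match counter might gate human review, not Apple's actual implementation; the threshold value reflects Federighi's "in the region of 30" figure.

```python
# Hypothetical threshold gating human review of an account.
MATCH_THRESHOLD = 30  # Federighi cited roughly 30 matching images

def review_needed(upload_match_flags: list) -> bool:
    """Flag the account for review only once enough uploads have matched.

    upload_match_flags: one boolean per uploaded image, True if it
    matched the known-hash database.
    """
    return sum(upload_match_flags) >= MATCH_THRESHOLD
```

Under this scheme a handful of matches, or false positives, would never surface an account to Apple staff on their own.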

Apple said it plans to add the same database to all versions of iOS and iPadOS, but that it would only be used for scanning in the US initially, with rollouts in other countries to be considered on a case-by-case basis.

Apple said putting the database on the device would add accountability and that an independent auditor would be able to verify what was being scanned for.


The company is also rolling out a separate parental control feature that involves image scanning, and Federighi said there had been “confusion” between the two.

If activated by a parent, the second feature scans messages sent or received by a child using the iMessage app. If nudity is detected the tool obscures the photo and warns the child.

Parents can also choose to receive an alert if the child chooses to view the photo.

Privacy groups said the tool could be expanded and used by authoritarian governments to spy on citizens.

Will Cathcart, head of WhatsApp, said Apple’s tools were “very concerning”, and whistleblower Edward Snowden called the iPhone a “spyPhone”.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications
