Controversial facial recognition company Clearview has carried out nearly 1 million searches for US police, founder tells BBC
Controversial facial recognition company Clearview AI has carried out nearly a million searches for US police, founder and chief executive Hoan Ton-That has reportedly said.
Ton-That also told the BBC Our World programme that Clearview now has 30 billion images scraped from online platforms including Facebook, up from 20 billion a year ago.
The company has been fined millions of dollars in Europe and Australia for breaches of privacy, as the images are used without people’s permission.
Last May the UK Information Commissioner’s Office fined Clearview £7.5 million and told it to delete the data it holds on UK residents.
Clearview is banned from providing services to most US companies following a civil liberties lawsuit by the ACLU.
While there is an exemption for police, forces rarely disclose whether they are using facial recognition, and the technology is usually portrayed as reserved for only the most serious crimes.
It is banned for police use in US cities including Portland, San Francisco and Seattle.
But Ton-That said the service is used by “hundreds” of police forces across the US.
Miami assistant chief of police Armando Aguilar told the programme the system is used about 450 times a year, for every type of crime from murder to shoplifting.
He said it has aided in solving several murders.
Facial recognition is highly controversial, with human rights campaigners calling it invasive and prone to errors and biases.
The European Union is working on an AI Act that is likely to limit its use in public spaces for law enforcement purposes.
But law enforcement authorities say such tools can be used to solve crimes or even prevent them.
Earlier this month the Washington Post reported that the FBI had signed a $120,000 (£98,000) contract with Clearview this year, with FBI officials stating in the contract that the tool would be “used in ways that ultimately reduce crime”.
France is planning to use artificial intelligence-driven “intelligent” surveillance to protect next year’s Paris Olympic Games, using algorithms said to be able to spot dangerous situations such as abandoned packages or crowd surges.
Amnesty International called the move an “all-out assault on the rights to privacy, protest, and freedom of assembly and expression”.