Amazon has said it will implement a one-year moratorium on police use of its facial recognition software.
The decision is a blow for US law enforcement, as Amazon had until now stoutly defended the use of its facial recognition system (called Amazon Rekognition) by law enforcement in the United States.
The Amazon announcement comes a day after IBM called for police reform in the United States, and cancelled all its facial recognition programs in light of ongoing concern at the use of the technology.
Amazon announced its decision in a brief blog post; the move comes as the United States has been rocked by nationwide protests over the death of George Floyd.
“We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology,” blogged Amazon. “We will continue to allow organisations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
But Amazon was clear that it was halting police use of Rekognition for a year.
“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” Amazon blogged. “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
Amazon’s decision comes amid concern that facial recognition technologies could be used unfairly against protesters.
One law enforcement user of Rekognition was quoted by Reuters as saying that Amazon was “throwing us under the bus.” Agencies generally have said they use facial recognition for post-crime investigations, not real-time monitoring.
“After over and over again saying that they stand by us and how we use the tech, they are making it seem like all of a sudden they don’t think we use it right,” the person said, speaking on condition of anonymity.
On this side of the Atlantic, police have also defended their use of facial recognition technologies.
In February the UK’s most senior police officer, Metropolitan Police Commissioner Cressida Dick, said criticism of the tech was “highly inaccurate or highly ill informed.” She also said facial recognition was less concerning to many than a knife in the chest.
An academic study last year, for example, found that 81 percent of ‘suspects’ flagged by the UK Met Police’s facial recognition technology were innocent, meaning the overwhelming majority of people identified were not on police wanted lists.
And facial recognition systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, and were more likely to misidentify black people.
In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.
It ran a picture of every California state legislator through a facial-recognition program that matches facial pictures to a database of 25,000 criminal mugshots.
That test saw the facial recognition program falsely flag 26 legislators as criminals.
Microsoft has previously refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias. The Met use a facial recognition system from Japanese firm NEC.
And Redmond deleted a large facial recognition database that was said to have contained 10 million images used to train facial recognition systems.
San Francisco (and now the whole of California) has banned the use of facial recognition technology, meaning that local agencies, including the police force and other city departments such as transportation, are not able to utilise the technology in any of their systems.