Amazon has extended its one-year moratorium on police use of its facial recognition software, in a blow to law enforcement officials in the United States.
In June last year, in the wake of the death of George Floyd and continued protests in the United States, Amazon announced a one-year moratorium on police use of its facial recognition software.
Until June 2020, Amazon had stoutly defended the use of its facial recognition system (called Rekognition, offered as an AWS service) by law enforcement in the United States.
But following the killing of George Floyd, tech giants began signalling they would no longer allow their technology to be used for facial recognition programs.
IBM, for example, called for police reform in the United States and cancelled all of its facial recognition programs.
Amazon’s decision came amid concerns that facial recognition technologies could be used unfairly against protesters.
According to Reuters, Amazon has now decided to make the ban indefinite. The company had hoped that the US Congress would use the year provided by the moratorium to implement rules surrounding the ethical use of facial recognition technology, but Congress has not addressed the issue since June 2020.
Meanwhile on this side of the Atlantic, the police have defended their use of facial recognition technologies.
In February 2020 the UK’s most senior police officer, Metropolitan Police Commissioner Cressida Dick, said criticism of the tech was “highly inaccurate or highly ill informed.”
She also said facial recognition was less concerning to many than a knife in the chest.
But an academic study last year, for example, found that 81 percent of ‘suspects’ flagged by the UK Met Police’s facial recognition technology were innocent, and that the overwhelming majority of people identified were not on police wanted lists.
And facial recognition systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.
In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.
It ran a picture of every California state legislator through a facial-recognition program that matches facial pictures to a database of 25,000 criminal mugshots.
That test saw the facial recognition program falsely flag 26 legislators as criminals.
Amazon offered a rebuttal to the ACLU test, alleging that the result could be skewed when an inappropriate facial database is used, and that the ACLU’s default confidence threshold was too low.
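The effect Amazon described can be illustrated with a small sketch (this is not the Rekognition API, and the similarity scores below are invented for demonstration). A face search returns candidate matches with confidence scores; lowering the acceptance threshold admits more weak matches, which is how false positives like the ACLU’s arise:

```python
# Illustrative sketch, not AWS Rekognition: how a match-confidence
# threshold changes which candidates a face search reports.

def filter_matches(candidates, threshold):
    """Keep only candidate matches whose similarity score meets the threshold."""
    return [name for name, score in candidates if score >= threshold]

# Hypothetical similarity scores (percent) against a mugshot database;
# in this made-up example only 'person_c' is a genuine match.
candidates = [
    ("person_a", 81.0),  # weak match, admitted at a low threshold
    ("person_b", 85.5),  # weak match, admitted at a low threshold
    ("person_c", 99.4),  # the one genuine match
]

# A low default threshold (80 percent) flags all three candidates...
print(filter_matches(candidates, 80.0))  # ['person_a', 'person_b', 'person_c']

# ...while a stricter 99 percent threshold keeps only the
# high-confidence match.
print(filter_matches(candidates, 99.0))  # ['person_c']
```

Amazon’s argument was essentially that a test run at a permissive default threshold will surface many such weak matches that a stricter setting would have discarded.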
Amazon and IBM are not the only tech giants to ban the use of their respective technology for facial recognition.
Google doesn’t commercially offer its facial recognition technology, and Microsoft has previously refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.
The Met in London use a facial recognition system from Japanese firm NEC.
And Redmond deleted a large facial recognition database that was said to contain 10 million images used to train facial recognition systems.
There is currently no federal legislation addressing police use of facial recognition.
San Francisco has banned the use of facial recognition technology, meaning that city agencies, including the local police force and the transportation authority, cannot utilise the technology in any of their systems.
In September 2019, the US state of California passed a three-year moratorium on the use of the technology.
The cities of Portland, Oregon and Portland, Maine also passed legislation around the tech in late 2020.
The state of Massachusetts failed to pass a proposed ban in December 2020, but recently passed a modified bill that restricts police use of facial recognition.