Microsoft Bans Facial Recognition Sales To Police


Software giant follows IBM and Amazon with police facial recognition ban until there is national regulation of the technology

Microsoft has joined IBM and Amazon in confirming that it will not sell facial recognition technology to US police departments until the country approves national regulation of the technology.

Microsoft’s president and chief counsel, Brad Smith, announced the decision and called on Congress to regulate the technology during a Washington Post video event on Thursday.

In reality, however, Microsoft has effectively had a police facial recognition ban in place for some time.

Microsoft ban

Last year, for example, Microsoft refused to install facial recognition technology for a US police force, citing concerns about artificial intelligence (AI) bias.

Then in June 2019 Redmond deleted a large facial recognition database, which contained 10 million images used to train facial recognition systems.

But Brad Smith used the Washington Post video event to solidify Microsoft’s position on law enforcement use of facial recognition tech, in light of the ongoing protests following the death of George Floyd.

It should be noted that IBM, Amazon, and Microsoft are well known for their development of artificial intelligence, including face recognition software, but none of these three companies is actually a major player in selling such technology to police. Major players include NEC, Gemalto and Idemia.

Indeed, Smith said Thursday that Microsoft currently doesn’t sell its face recognition software to any US police departments.

That said, it is not clear whether Redmond does sell the tech to federal agencies or police forces outside the US.

“If all of the responsible companies in the country cede this market to those that are not prepared to take a stand, we won’t necessarily serve the national interest or the lives of the black and African American people of this nation well,” Smith reportedly said. “We need Congress to act, not just tech companies alone.”

Privacy concerns

On this side of the Atlantic, British police have defended their use of facial recognition technologies.

In February the UK’s most senior police officer, Metropolitan Police Commissioner Cressida Dick, said criticism of the tech was “highly inaccurate or highly ill informed.”

She also said facial recognition was less concerning to many than a knife in the chest.

But concerns still persist.

An academic study last year, for example, found that 81 percent of ‘suspects’ flagged by the UK Met Police’s facial recognition technology were innocent, with the overwhelming majority of people identified not on police wanted lists.
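The 81 percent figure is easier to understand with a base-rate illustration: when genuine watchlist matches are rare in a scanned crowd, even a system that wrongly flags only a tiny fraction of innocent faces will produce alerts that are mostly false. The short Python sketch below uses purely hypothetical numbers (the crowd size, watchlist prevalence and error rates are assumptions, not Met Police figures) to show the effect.

```python
# Illustrative only: hypothetical numbers showing how a system with a
# seemingly low per-face error rate can still produce mostly-false alerts
# when genuine watchlist matches are rare in the scanned crowd.

crowd_size = 100_000           # assumed number of faces scanned
watchlist_prevalence = 0.0001  # assumed fraction of the crowd actually on a watchlist
true_positive_rate = 0.90      # assumed chance a wanted face is correctly flagged
false_positive_rate = 0.001    # assumed chance an innocent face is wrongly flagged

wanted = crowd_size * watchlist_prevalence
innocent = crowd_size - wanted

true_alerts = wanted * true_positive_rate
false_alerts = innocent * false_positive_rate
total_alerts = true_alerts + false_alerts

print(f"Alerts raised: {total_alerts:.0f}")
print(f"Share of alerts that are innocent people: {false_alerts / total_alerts:.0%}")
# With these assumed figures roughly 92% of alerts are false positives,
# in the same ballpark as the 81% figure reported for the Met trials.
```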

And facial recognition systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.

In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.

It ran a picture of every California state legislator through a facial recognition program that matched facial images against a database of 25,000 criminal mugshots.

That test saw the facial recognition program falsely flag 26 legislators as criminals.
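For readers curious how such a one-to-many comparison works in practice, the sketch below shows the general shape of the experiment using the open-source face_recognition Python library. This is not the system the ACLU actually evaluated, and the folder names, tolerance value and flagging logic are illustrative assumptions.

```python
# A minimal sketch of a one-to-many face comparison, assuming a folder of
# mugshot images and a folder of photos to test against them. Built on the
# open-source face_recognition library, not the commercial system the ACLU used.

import os
import face_recognition

MUGSHOT_DIR = "mugshots/"        # hypothetical folder of mugshot images
LEGISLATOR_DIR = "legislators/"  # hypothetical folder of photos to check
TOLERANCE = 0.6                  # library default; lower means stricter matching

def encode_folder(folder):
    """Return a list of (filename, 128-d face encoding) for each image with a detectable face."""
    encodings = []
    for name in os.listdir(folder):
        image = face_recognition.load_image_file(os.path.join(folder, name))
        faces = face_recognition.face_encodings(image)
        if faces:  # skip images where no face was detected
            encodings.append((name, faces[0]))
    return encodings

mugshots = encode_folder(MUGSHOT_DIR)
mugshot_encodings = [enc for _, enc in mugshots]

# Compare every photo against the whole mugshot set and report any photo
# that matches at least one mugshot within the tolerance.
for name, encoding in encode_folder(LEGISLATOR_DIR):
    matches = face_recognition.compare_faces(
        mugshot_encodings, encoding, tolerance=TOLERANCE)
    if any(matches):
        print(f"{name} flagged as a possible match")
```

The tolerance setting is the key design choice in this kind of test: loosening it catches more genuine matches but also increases the sort of false flags the ACLU demonstration highlighted.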

San Francisco (and now the whole of California) has banned the use of facial recognition technology, meaning that local agencies, such as the police force and other city departments such as transportation, are not able to utilise the technology in any of their systems.

Can you protect your privacy online? Take our quiz!