Microsoft Turns Down Police Contract Over AI Bias Concerns


Concerns over the ethical use of AI prompted Redmond to turn down the chance to win a police facial recognition contract

Microsoft has reportedly refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.

President Brad Smith said that the software giant recently rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras, due to human rights concerns.

It comes as governments and firms around the world increasingly grapple with the ethical use of AI.

Microsoft refusals

Microsoft took the decision, according to Reuters, after it concluded the technology would lead to innocent women and minorities being disproportionately held for questioning, because the artificial intelligence behind it had apparently been trained mostly on pictures of white men.
https://uk.reuters.com/article/uk-microsoft-ai/microsoft-turned-down-facial-recognition-sales-on-human-rights-concerns-idUKKCN1RS2FY

And multiple research projects have reportedly found that facial recognition AI produces higher rates of mistaken identity for women and minorities.
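To illustrate the kind of disparity those studies measure, here is a minimal, hypothetical Python sketch of how a per-group false match rate might be computed for a face matcher. The `match` function and the evaluation data are assumptions for illustration only, not any vendor's actual API.

```python
# Minimal sketch: measuring per-group false match rates for a face matcher.
# `match` is a hypothetical verifier returning a similarity score in [0, 1];
# the evaluation pairs carry demographic labels for auditing purposes only.

from collections import defaultdict

def false_match_rate_by_group(pairs, match, threshold=0.5):
    """
    pairs: iterable of (img_a, img_b, same_person, group) tuples, where
           `same_person` is the ground-truth label and `group` is a
           demographic label attached for evaluation.
    match: hypothetical function scoring the similarity of two face images.
    """
    errors = defaultdict(int)   # false matches per group
    totals = defaultdict(int)   # impostor (different-person) pairs per group

    for img_a, img_b, same_person, group in pairs:
        if same_person:
            continue  # false match rate is measured on impostor pairs only
        totals[group] += 1
        if match(img_a, img_b) >= threshold:
            errors[group] += 1  # two different people judged to be the same

    return {g: errors[g] / totals[g] for g in totals}
```

A matcher trained on an unbalanced dataset would tend to show markedly higher false match rates for the under-represented groups in such an audit, which is the pattern the research projects describe.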

“Anytime they pulled anyone over, they wanted to run a face scan” against a database of suspects, Smith reportedly said. However, he declined to name the police force involved.

After thinking through the uneven impact, “we said this technology is not your answer.”

Smith made the comments whilst speaking at a Stanford University conference on “human-centered artificial intelligence.”

Microsoft also refused a deal to install facial recognition on cameras in the capital city of an unnamed country. Redmond took the decision because the non-profit Freedom House had deemed that the country in question was not free.

Smith said it would have suppressed freedom of assembly there.

But Microsoft did reportedly agree to provide the technology to an American prison, after it concluded that the environment would be limited and that it would improve safety inside the unnamed institution.

Corporate ethics

This display of corporate ethics comes amid an intense debate about machine learning and artificial intelligence.

The British government, for example, has just launched an inquiry into the use of AI and potential bias in legal matters.

Smith also used the event to call for greater regulation of facial recognition and other uses of artificial intelligence. He warned that without it, companies amassing the most data might win the race to develop the best AI in a “race to the bottom.”

Google for example has tied itself in knots over the issue.

Late last month Google created the Advanced Technology External Advisory Council (ATEAC) to offer guidance on the ethical use of AI.

But only a week later it disbanded the council, after controversy erupted over the appointment of a couple of its female members.

Google also caused deep anger among its employees (some of whom resigned) over its involvement in Project Maven, a Pentagon drone project that had utilised Google’s AI technology.

Following that, Google CEO Sundar Pichai last year created new principles for AI use at Google, and pledged not to use AI for technology that causes injury to people.
