IBM Cancels All Facial Recognition Programs, Urges Police Reform


Big Blue cancels its facial recognition programs and calls for reform and new ways to work with law enforcement, “to advance justice and racial equity”

IBM is following the lead of Microsoft by axing its facial recognition programs, in light of ongoing concerns about the use of the technology.

In cancelling its facial recognition programs, Big Blue called for an urgent public debate on whether the technology should be used in law enforcement.

In February the UK’s most senior police officer, Metropolitan Police Commissioner Cressida Dick, strongly defended the use of facial recognition by law enforcement, saying criticism of the tech was “highly inaccurate or highly ill informed.” She also said facial recognition was less concerning to many than a knife in the chest.

IBM letter

In a letter to the US Congress on Monday, IBM CEO Arvind Krishna said the company would no longer offer, develop, or research facial recognition technology.

Krishna cited the fact that in September 1953, more than a decade before the passage of the Civil Rights Act, IBM took a stand in favour of equal opportunity, when then IBM president Thomas J. Watson, Jr. wrote to all employees that “each of the citizens of this country has an equal right to live and work in America. It is the policy of this organisation to hire people who have the personality, talent and background necessary to fill a given job, regardless of race, colour or creed.”

Krishna said that Watson backed up this statement with action, refusing to enforce Jim Crow laws at IBM facilities.

The new IBM CEO then said that nearly seven decades later, the horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor and too many others “remind us that the fight against racism is as urgent as ever.”

“To that end, IBM would like to work with Congress in pursuit of justice and racial equity, focused initially in three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities,” wrote Krishna.

As part of IBM’s police reform suggestion, Krishna wrote that “IBM no longer offers general purpose IBM facial recognition or analysis software.”

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” he added. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

Racial bias

“Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe,” Krishna wrote. “But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

“We offer these suggestions in the constructive spirit of problem-solving that has always defined our company and its people,” he concluded. “We realize these measures are only a beginning, but IBM wants to help advance this nation’s pursuit of equity and justice and we stand ready to work with you to advance policies that will help unify our country and advance our national purpose.”

An IBM spokesperson meanwhile told CNN Business on Tuesday that the company will now limit its visual technology to “visual object detection,” which could, for example, help manage manufacturing plants or assist farmers with crop care.

Krishna’s letter follows huge anti-racism protests in America and around the world in response to the death of George Floyd, an unarmed black man who was killed while in police custody.

Privacy concerns

The use of facial recognition has always been controversial, on both sides of the Atlantic.

An academic study last year, for example, found that 81 percent of ‘suspects’ flagged by the Met Police’s facial recognition technology were innocent, and that the overwhelming majority of people identified were not on police wanted lists.

And facial recognition systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.

In August 2019, the ACLU, a civil rights campaign group in the US, ran a demonstration to show how inaccurate facial recognition systems can be.

It ran a picture of every California state legislator through a facial recognition program that matched facial pictures against a database of 25,000 criminal mugshots.

That test saw the facial recognition program falsely flag 26 legislators as criminals.

Microsoft meanwhile has refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias. The Met uses a facial recognition system from Japanese firm NEC.

And Redmond reportedly deleted a large facial recognition database that was said to have contained 10 million images used to train facial recognition systems.

San Francisco (and now the whole of California) has banned the use of facial recognition technology, meaning that local agencies, such as the police force and other city agencies including transportation, are unable to use the technology in any of their systems.


Author: Tom Jowitt