Microsoft Deletes Facial Recognition Database Over Privacy Fears – Report

Microsoft continues to react to growing alarm at the use of facial recognition (FR) technology in everyday life, with a report that it has deleted a large FR database.

The database in question was said to have contained 10 million images that were used to train facial recognition systems, the Financial Times reported.

This is not the first time that Microsoft has expressed its unease about the growing use of facial recognition. In April Redmond refused to install facial recognition technology for a US police force due to concerns about artificial intelligence (AI) bias.

Facial databases

At the time Microsoft President Brad Smith explained that the company had rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras due to human rights concerns.

That move came as governments and firms around the world increasingly grapple with the ethical use of AI and facial recognition, amid reports that some researchers are scraping people’s images from social media and CCTV cameras.

The Microsoft database in question was used to train other facial recognition systems around the world, including by military researchers and Chinese firms such as SenseTime and Megvii, the FT reported.

The database, reportedly called MS Celeb, was published in 2016 and was described by the company as the largest publicly available facial recognition data set in the world.

The people whose photos were used were not asked for their consent, and the database is widely believed to have been made up of the faces of public figures such as celebrities.

“The site was intended for academic purposes,” the FT quoted Microsoft as saying in a statement. “It was run by an employee that is no longer with Microsoft and has since been removed.”

The FT also reported that two other databases have been removed: the Duke MTMC surveillance data set, built by Duke University researchers, and a Stanford University data set called Brainwash.

AI, FR ethics

This display of corporate ethics from Microsoft comes amid an intense debate about the use of facial recognition and artificial intelligence.

The British government for example has launched an inquiry into the use of AI and potential bias in legal matters.

Google meanwhile has tied itself in knots over the issue.

In March this year Google created the ‘Advanced Technology External Advisory Council (ATEAC)’, to offer guidance on the ethical use of AI.

But only a week later it disbanded the council, following controversy over the inclusion of a couple of its members.

Google also caused deep anger among its employees (some of whom resigned) over its involvement in a Pentagon drone project, codenamed Project Maven, which utilised Google’s AI technology.

Following that, Google CEO Sundar Pichai last year created new principles for AI use at Google, and pledged not to use AI for technology that causes injury to people.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
