Microsoft continues to react to growing alarm at the use of facial recognition (FR) technology in everyday life, with a report that it has deleted a large FR database.
The database in question was said to have contained 10 million images that were used to train facial recognition systems, the Financial Times reported.
This is not the first time that Microsoft has expressed its unease about the growing use of facial recognition. In April Redmond refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.
At the time, Microsoft President Brad Smith explained that the company had rejected a California law enforcement agency’s request to install facial recognition technology in officers’ cars and body cameras, due to human rights concerns.
That move came as governments and firms around the world increasingly grapple with the ethical use of AI and facial recognition, amid reports that some researchers are scraping people’s images from social media and CCTV cameras.
The Microsoft database in question was used to train other facial recognition systems around the world, including military researchers and Chinese firms such as SenseTime and Megvii, the FT reported.
The database, reportedly called MS Celeb, was published in 2016 and described by the company as the largest publicly available facial recognition data set in the world.
The people whose photos were used were not asked for their consent, and the data set is widely believed to have been made up of the faces of public figures such as celebrities.
“The site was intended for academic purposes,” the FT quoted Microsoft as saying in a statement. “It was run by an employee that is no longer with Microsoft and has since been removed.”
The FT also reported that two other databases have been removed: the Duke MTMC surveillance data set, built by Duke University researchers, and a Stanford University data set called Brainwash.
This display of corporate ethics from Microsoft comes amid an intense debate about the use of facial recognition and artificial intelligence.
The British government for example has launched an inquiry into the use of AI and potential bias in legal matters.
Google meanwhile has tied itself in knots over the issue.
In March this year Google created the ‘Advanced Technology External Advisory Council (ATEAC)’, to offer guidance on the ethical use of AI.
But only a week later it disbanded the council, amid controversy over the appointment of some of its members.
Google also caused deep anger among its employees (some of whom resigned) over its involvement in Project Maven, a Pentagon drone programme that utilised Google’s AI technology.
Following that, Google CEO Sundar Pichai last year set out new principles for AI use at Google, pledging not to apply AI in technologies that cause injury to people.