The issue of racial bias in facial recognition systems has been raised again, and not in a good way for Google.
It was reported in the US that Google had allegedly been targeting people with ‘dark skin’ to improve its facial recognition systems.
The US media report also alleged that Google had used deceptive practices to collect the face scans, with subcontracted workers reportedly told to persuade subjects to agree by mischaracterising the scans as a ‘selfie game’ or ‘survey’.
According to the New York Daily News, which cited anonymous sources, Google allegedly used subcontracted workers to collect face scans from members of the public in exchange for $5 gift cards.
According to the report, the subcontracted workers were employed by human resources company Randstad, but were allegedly directed by Google managers.
The subcontractors were allegedly instructed to target people with “darker skin tones” and those who would be more likely to be enticed by the $5 gift card, including homeless people and college students.
“They said to target homeless people because they’re the least likely to say anything to the media,” a former contractor told the Daily News. “The homeless people didn’t know what was going on at all.”
“I feel like they wanted us to prey on the weak,” another contractor told the Daily News.
Randstad has not responded to a request for comment, but Google has said it is investigating the allegations of wrongdoing.
The contractors also reportedly described using deceptive tactics to persuade subjects to agree to the face scans, including mischaracterising the face scan as a “selfie game” or “survey”.
They also allegedly pressured people to sign a consent form without reading it, and did not tell the subjects that the phone they were handed to “play with” was also taking video of their faces.
“We’re taking these claims seriously and investigating them,” a Google spokesperson was quoted as saying by the Guardian newspaper. “The allegations regarding truthfulness and consent are in violation of our requirements for volunteer research studies and the training that we provided.”
The spokesperson added that the “collection of face samples for machine learning training” was intended to “build fairness” into the “face unlock feature” for the company’s new phone, the Pixel 4.
There have long been concerns about the use of facial recognition systems, mostly over privacy implications but also over racial bias.
Last month, concerns over the ethical use of facial recognition technology prompted California lawmakers to ban its use in body cameras worn by state and local law enforcement officers.
Officials in San Francisco had already banned its use, meaning that local agencies, such as the local police force and other city agencies including transportation, are not able to use the technology in any of their systems.
Those moves came after a US civil rights campaign group, the ACLU, ran a picture of every California state legislator through a facial recognition program that matched facial pictures against a database of 25,000 criminal mugshots.
The test saw the facial recognition program falsely flag 26 legislators as criminals.
And to make matters worse, more than half of the falsely matched lawmakers were people of colour, according to the ACLU.
Facial recognition has previously been criticised in the US after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.
Microsoft, for example, has refused to install facial recognition technology for a US police force, due to concerns about AI bias.
It also reportedly deleted a large facial recognition database, which was said to have contained 10 million images used to train facial recognition systems.