Twitter Investigates Picture Cropping Tool Over Racial Bias Concerns

Twitter confirms its picture-cropping algorithm is being investigated over racial bias concerns, and pledges to open up its source code

Twitter has pledged to open up the source code after an experiment apparently showed its picture-cropping algorithm sometimes prefers white faces to black ones.

The tool in question is an automatic image-cropping feature in Twitter’s mobile app. Its job is to crop pictures that are too big to fit on the screen, selecting which parts of an image are shown in the preview and which are cut off.
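Twitter has previously described the feature as relying on a “saliency” model, a neural network trained to predict which parts of an image people are most likely to look at, with the crop centred on the most salient region. As a rough conceptual sketch only, and not Twitter’s actual code (the saliency function below is a crude stand-in based on brightness contrast), a saliency-driven crop of a tall image might work along these lines:

    # Rough illustration of saliency-driven auto-cropping (not Twitter's code).
    # The "saliency" here is a crude stand-in; the real system uses a trained
    # neural network to predict where viewers are likely to look.
    import numpy as np

    def fake_saliency(image):
        """Stand-in saliency map: each pixel's deviation from the mean brightness."""
        gray = image.mean(axis=2)
        return np.abs(gray - gray.mean())

    def crop_to_height(image, target_h):
        """Keep the horizontal band of target_h rows with the highest total saliency."""
        saliency = fake_saliency(image)
        row_scores = saliency.sum(axis=1)
        # Sliding-window sum over every possible vertical crop position.
        window_sums = np.convolve(row_scores, np.ones(target_h), mode="valid")
        top = int(window_sums.argmax())
        return image[top:top + target_h]

    # Example: crop a tall 800x400 RGB image down to a 300-row preview.
    tall_image = np.random.randint(0, 256, (800, 400, 3), dtype=np.uint8)
    preview = crop_to_height(tall_image, 300)
    print(preview.shape)  # (300, 400, 3)

In the real system it is the learned saliency model, rather than a simple brightness rule, that decides which face ends up in the preview, which is why the experiment described below drew so much attention.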

According to Sky News, programmer Tony Arcieri posted a long image featuring headshots of Senate Republican leader Mitch McConnell at the top and former US president Barack Obama at the bottom, separated by white space.

Racial bias

In a second image, Mr Obama’s headshot was placed at the top, with Mr McConnell’s at the bottom.

Both times, former president Obama was cropped out altogether.

“Twitter is just one example of racism manifesting in machine learning algorithms,” Arcieri tweeted.

Twitter responded quickly and said that it had tested for racial and gender bias during the algorithm’s development.

It also promised to open up the source code so others could check it for bias.

“We tested for bias before shipping the model & didn’t find evidence of racial or gender bias in our testing,” it tweeted. “But it’s clear that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, & will open source it so others can review and replicate.”

Twitter’s chief technology officer, Parag Agrawal, also commented on the issue.

“We did analysis on our model when we shipped it – but [it] needs continuous improvement,” he tweeted. “Love this public, open, and rigorous test – and eager to learn from this.”

Ongoing concerns

There are ongoing concerns about racial bias in facial recognition technology as well.

In June, Amazon became the latest tech giant to act on these concerns, placing a one-year moratorium on police use of its facial recognition software.

IBM also cancelled all of its facial recognition programs in light of ongoing concern about the use of the technology.

But Microsoft acted first, having previously refused to supply facial recognition technology to a US police force due to concerns about artificial intelligence (AI) bias.

Redmond also deleted a large facial recognition database, said to have contained 10 million images used to train such systems.

Inaccurate system?

These decisions came after research by the US Government Accountability Office found that the FBI’s facial recognition algorithms were inaccurate 14 percent of the time, and were more likely to misidentify black people.

In August 2019, the ACLU civil rights campaign group in the US ran a demonstration to show how inaccurate facial recognition systems can be.

It ran a picture of every California state legislator through a facial recognition program that matched the photos against a database of 25,000 criminal mugshots.

That test saw the facial recognition program falsely flag 26 legislators as criminals.