Twitter has demanded that Clearview delete all the images it has pulled from the micro-blogging website, and stop all photo collection.
Clearview is a face recognition company that developed tech to match faces to a database of more than three billion faces indexed from Internet social media websites and other sources, a New York Times investigation has revealed.
The privacy implications of this have been noted by US lawmakers. Senator Edward J Markey for example tweeted that Clearview’s facial recognition technology “poses chilling privacy risks” and he was seeking answers about its partnerships with law enforcement such as the FBI.
Into this has stepped Twitter, which on Tuesday reportedly sent a cease-and-desist letter to the firm, stating that it had violated its policies by gathering user pictures from its platform. It has requested the deletion of any collected data, the BBC reported.
Clearview allegedly violated Twitter’s developer agreement policy, which states: “Information derived from Twitter content may not be used by, or knowingly displayed, distributed, or otherwise made available to any public-sector entity for surveillance purposes.”
Clearview on its website however states that it only “searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts.”
Clearview did not respond to a request for comment, the BBC reported.
Earlier this week, the European Commission said it is considering a temporary ban on the use of facial recognition in public areas for up to five years as new regulations are worked out.
The use of facial recognition is more widespread than many people think.
Germany has plans in place to roll out automated facial recognition in railway stations and airports, while France is developing a legal framework that would permit such systems to be rolled out.
In the UK meanwhile, police have conducted trials of live facial recognition, while the Kings Cross estate was recently embroiled in controversy after its owners were found to be using the technology without alerting the public.
In September last year Gatwick Airport became the first airport in the United Kingdom to deploy facial recognition technology to allow passengers to board aircraft without checks.
China is a strong adopter of facial-recognition technology, rolling it out for people buying mobile phone SIM cards and certain controlled medicines, as a crime-prevention measure.
But there is also ongoing concern about how inaccurate facial recognition systems can be.
In August 2019 a campaign group in the US ran a picture of every California state legislator through a facial-recognition program that matched facial pictures to a database of 25,000 criminal mugshots.
The results were not encouraging: the program falsely flagged 26 legislators as criminals.
And to make matters worse, more than half of the falsely matched lawmakers were people of colour, according to the ACLU.
Facial recognition systems have previously been criticised in the US after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.