Influential group of MPs call on police to suspend the use of automatic facial recognition technologies
The House of Commons Science and Technology Committee has waded into the debate surrounding the ethical use of facial recognition technologies.
It has warned the police and other organisations that there should be no further trials of the tech until relevant regulations are in place, and it also expressed concerns about the technology's accuracy and bias.
The stance of the MPs is in marked contrast to that of the Home Secretary. Last week Sajid Javid gave his backing to police forces using facial recognition systems, despite growing concern about the technology.
But the committee of MPs, in a report, has warned about the problems of police forces using automatic (or 'live') facial recognition (AFR) systems.
A number of police forces in the UK have used the technology, including the Met in London, which has now concluded its trials.
Live facial recognition technology has also been used by South Wales Police, which in 2017 deployed facial recognition software at the Champions League Final in Cardiff to scan the face of every fan attending the game.
South Wales Police is thought to still be using the technology in shopping centres, but that use is currently under judicial review.
This month an academic study found that 81 percent of 'suspects' flagged by the Met Police's facial recognition technology were innocent, and that the overwhelming majority of people identified were not on police wanted lists.
The MPs meanwhile said there was a lack of a clear legislative framework for the technology, and that police forces were failing to edit their databases of facial images to remove those of innocent or unconvicted people.
“The Group concluded that there had been a ‘lack of independent oversight and governance of the use of [Live Facial Recognition]’ in these trials and recommended that, pending the development of a legislative framework, the police trials should comply with the ‘usual standards of experimental trials, including rigorous and ethical scientific design’,” said the report.
This is now the second time that the police have been warned about using the technology.
The Information Commissioner’s Office (ICO) warned recently that any organisation using facial recognition technology, and which then scans large databases of people to check for a match, is processing personal data, and that “the potential threat to privacy should concern us all.”
And these systems have previously been criticised in the US, after research by the Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, as well as being more likely to misidentify black people.
Microsoft, for example, recently refused to install facial recognition technology for a US police force, due to concerns about artificial intelligence (AI) bias.
And Redmond reportedly deleted a large facial recognition database that was said to have contained 10 million images used to train facial recognition systems.
San Francisco and a number of other US cities have banned the use of facial recognition technology, meaning that local agencies, such as the police force and city departments including transportation, will not be able to use the technology in any of their systems.