A draft Commission white paper on artificial intelligence suggests a future regulatory framework could “include a time-limited ban on the use of facial recognition technology in public spaces” while a “sound methodology” for assessing the impacts of the technology is developed.
Exceptions could be made for security projects and for research and development.
The 18-page paper, seen by Silicon UK, suggests five possible regulatory approaches to the technology, with the likelihood that future rules could use a combination of several of these approaches.
These include a voluntary trustworthiness labelling programme, minimum standards for government departments that wish to use automated facial recognition, and mandatory risk-based requirements for high-risk applications, such as in healthcare, transport, policing and the judiciary.
The paper also suggests that “targeted amendments” could be made to cover specific safety and liability issues, and that governance requirements could be set out for developers of artificial intelligence and producers of the products that use it.
It argues an effective system of enforcement is essential, involving public oversight with the participation of national authorities.
The document highlights that the EU’s GDPR data protection rules give citizens “the right not to be subject to a decision based solely on automated processing, including profiling”.
The proposals are laid out in the draft white paper, which officials say they plan to present in February.
The Commission said it would seek feedback on the issue before making a final decision.
The move comes as controversy grows over the privacy implications of the technology, which allows operators to identify individuals’ faces in real time.
Officials and private organisations can use facial recognition to match faces to watch lists, but activists say it is inaccurate, intrusive and infringes on individuals’ right to privacy.
A US government study published in late December found current facial recognition algorithms were significantly less accurate at identifying African-American and Asian faces than Caucasian faces.
Germany has plans in place to roll out automated facial recognition in railway stations and airports, while France is developing a legal framework that would permit such systems to be rolled out.
In the UK, police have conducted trials of live facial recognition, while the King’s Cross estate was recently embroiled in controversy after its owners were found to be using the technology without alerting the public.
China has adopted the technology widely, requiring facial scans from people buying mobile phone SIM cards and certain controlled medicines as a crime-prevention measure.