Leaked document shows European regulators are considering a ban on indiscriminate surveillance and on AI use in credit scoring and other areas
The European Union is reportedly looking to rein in the worst uses of artificial intelligence (AI), including mass surveillance, social behaviour scoring, and other functions.
According to a leaked 81-page document, reported by Politico, the EU draft rules say “indiscriminate surveillance of natural persons should be prohibited when applied in a generalized manner to all persons without differentiation.”
Surveillance can include the monitoring and tracking of people in both the digital and physical worlds.
Politico reported that the European Commission would ban certain uses of ‘high-risk’ artificial intelligence systems altogether, and limit others from entering the bloc if they don’t meet its standards.
Companies that don’t comply could be fined up to €20 million or 4 percent of their turnover.
The Commission will reportedly unveil its final regulation on 21 April.
The rules are reportedly the first of their kind to regulate artificial intelligence.
Essentially, the EU is not going to follow the US approach of allowing tech firms to regulate themselves on the matter, nor does it want to go the way of China in harnessing the technology to implement a surveillance state.
Instead, the bloc reportedly wants a “human-centric” approach that both boosts the technology and keeps it from threatening its strict privacy laws.
What this means in practice is that AI systems can be legitimately used to help manufacturing, model climate change, or make the energy grid more efficient.
However, some AI technologies currently in use, such as algorithms used to scan CVs, assess creditworthiness, allocate social security benefits, decide asylum and visa applications, or help judges make decisions, would be labelled “high risk” and subjected to extra scrutiny.
The EU draft laws also reportedly seek to prohibit AI systems that cause harm to people by manipulating their behaviour, opinions or decisions; that exploit or target people’s vulnerabilities; or that carry out mass surveillance.
However, the rules do reportedly contain an exception for law enforcement.
For example, the use of facial recognition technology in public places could be allowed if its use is limited in time and geography.
The Commission reportedly said it would allow for exceptional cases in which law enforcement officers could use facial recognition technology from CCTV cameras to find terrorists, for example.
It was back in February 2020 that the European Commission unveiled its highly ambitious ‘digital strategy’ for EU member states over the next five years.
Its digital strategy seeks to encompass tech subjects such as artificial intelligence (AI), broadband, and data collection/sharing, as Europe seeks ways to take back control from American firms such as Google, Facebook and Amazon, as well as Asian rivals.
Despite the EU lacking its own homegrown digital giants, Europe is seeking to impose its own lofty rules that will see European businesses and tech firms engage in a ‘single market’ so they can trade data.
The digital strategy also contains guidance for the use of AI systems in the years ahead.