European Union edges closer to passing the world’s first laws governing artificial intelligence (AI), after parliamentary approval
The European Union has taken a major step towards regulating the growing artificial intelligence (AI) industry.
The European Parliament on Wednesday approved changes to draft artificial intelligence rules, which include a ban on the use of AI in biometric surveillance and a requirement for generative AI systems such as ChatGPT to disclose AI-generated content.
It comes after EU tech chief Margrethe Vestager last month said that a draft code of conduct on AI could be drawn up within weeks, allowing industry to commit to a final proposal “very, very soon.”
The European push to develop a code of practice for AI comes amid regulatory and industry concern about the uptake of AI systems.
“On Wednesday, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favour, 28 against and 93 abstentions ahead of talks with EU member states on the final shape of the law,” the EU Parliament stated.
“The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing,” it added.
MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation;
- biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location or past criminal behaviour);
- emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
The amendments could set up a clash with those EU countries that are opposed to a total ban on AI use in biometric surveillance.
European Union MEPs also want any company using generative tools to disclose copyrighted material used to train its systems, and companies working on “high-risk applications” to conduct a fundamental rights impact assessment and evaluate environmental impact.
Essentially this would mean that services such as ChatGPT would have to disclose that the content was AI-generated, help distinguish so-called deep-fake images from real ones, and ensure safeguards against illegal content.
“All eyes are on us today,” said co-rapporteur Brando Benifei. “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.”
“We want AI’s positive potential for creativity and productivity to be harnessed but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with Council,” Benifei added.
Microsoft and IBM reportedly welcomed the latest move by EU lawmakers but looked forward to further refinement of the proposed legislation.
“We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft spokesperson told Reuters.
The lawmakers will now have to agree the details with EU countries before the draft rules become legislation.
The European Commission announced the draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business, as the EU seeks to catch up to AI leaders such as the United States, China and the UK.
It should be remembered that the UK has already set out its own proposals, ahead of the European Union and the United States.
In March the UK government set out its plan to regulate the artificial intelligence (AI) sector and proposed five principles to guide its use via its “adaptable” AI plan.
Then in April the UK government also announced a taskforce (Foundation Model Taskforce) with an initial £100 million in funding to develop artificial intelligence (AI) foundation models.
Earlier this month in Washington DC, prime minister Rishi Sunak reached a deal with US president Joe Biden for the UK to host an international summit on the risks and regulation of AI later this year.
Then on Monday the PM told the London Tech Week conference he wants the UK to be the “geographical home” of coordinated international efforts to regulate artificial intelligence (AI).
This development comes as US politicians, including President Joe Biden, have indicated they are planning rules for AI, but are wary that regulation could stifle domestic innovation and limit the ability of western firms to compete with China.
Indeed the US seems to be leaning toward the use of existing laws to regulate AI.