Understanding the New European Union AI Law

The European Parliament is preparing the world’s first comprehensive law on AI. What does this pioneering regulation look like? What ‘red lines’ does it draw? And what benefits will it bring?

The development that AI has seen over the past few years has been meteoric and transformative for many industries. Recent warnings about the risks AI could pose if its development is left unchecked have led regulators to look closely at how these technologies could be controlled through laws and regulations, which has led directly to the European Commission’s regulation on the legal framework applicable to AI systems.

Not surprisingly, the European Union has long been cautious, taking small steps in this direction for some time. For example, almost five years ago we reported on the publication of the first draft of the ethical principles to be considered in the development of trustworthy AI.

“The European Parliament is aware of the economic and social benefits that the use of AI will bring in all sectors but is also concerned about the risks that these new technologies pose, especially for human rights and fundamental freedoms, in particular with regard to discrimination, data protection and the privacy of citizens”, says Enrique Puertas, professor of AI and Big Data at the European University (EU).

Principles of the new law

The new EU law is structured around six principles. “The law aims to ensure that AI systems used in the European Union are safe, transparent, traceable, non-discriminatory and respectful of the environment. In addition, it advocates human supervision of AI systems to avoid causing harm”, explains Maite López-Sánchez, professor of AI at the University of Barcelona (UB).

The new law addresses each of these principles:

Safe systems: “The precautionary principle must be applied to disruptive technologies such as AI. They can be beneficial, but the associated risks must be addressed, the appropriate security restrictions established, and people’s privacy and cybersecurity guaranteed”, explains Jordi Ferrer, professor at EAE Business School and lawyer specializing in Digital Law.

Guarantee of transparency: “Transparency allows inappropriate practices to be corrected, especially in data collection and system training processes. Understandable and transparent information policies must be applied”, clarifies Ferrer.

System traceability: The EAE Business School professor points out that “it is necessary to ensure that we understand and know how the system evolves, in order to be able to trace and investigate how it works”.

Non-discrimination guarantee: “Systems must avoid unfair bias, which could have multiple negative implications, from the marginalization of vulnerable and racial groups to the exacerbation of prejudice and discrimination,” he warns.

Respect for the environment: Ferrer recalls that “these are systems with a large energy consumption, which must be considered in light of current sustainability requirements”.

Supervised by people: The EAE Business School expert emphasizes that “automation from start to finish is not acceptable, as it can generate detrimental results”, which is why he insists that “human supervision must be applicable at some point in the process”.

The new AI law will be far-reaching and could impact enterprises’ ability to innovate.

Finally, Ferrer specifies that “the regulation aims to establish a uniform and technologically neutral definition of AI that allows for the evolution of the systems to which it applies”.

‘Red lines’ in the face of unacceptable risks

One of the critical aspects of the law is that it establishes various obligations for technology providers based on an assessment of the level of risk of the AI. “This risk analysis is already applied to personal data processing, as a result of the European privacy regulations”, clarifies the EAE Business School professor.

In this way, a categorisation of the risk level of AI models is established. “Four levels of risk are defined: unacceptable, high, limited, and low or minimal. The level of risk is established based on the types of data used and the purpose for which the AI models are employed,” details Puertas.

The highest level of risk is associated with systems that may represent a threat to people. The European Union sets a ‘red line’ here and considers them unacceptable, so they will be banned.

Ferrer gives some examples. One would be systems that involve cognitive manipulation of the behaviour of specific vulnerable people or groups, such as children: “An example would be AI-powered, voice-controlled toys that may pose risks to minors.” He also mentions scoring or social classification systems: “AI would classify people based on behaviours, personal characteristics, and so on. Such systems operate in China, for example.” And the law also covers real-time biometric identification systems, such as facial recognition.

“The second level of risk corresponds to systems that have a negative impact on the security or fundamental rights of people, in areas such as medical devices, aviation, education, employment or the interpretation of the law, among others. These systems must be evaluated throughout their entire life cycle”, explains López-Sánchez. Regarding high-risk systems, she specifies that although they are not prohibited, “they are closely monitored and evaluated”. The UB professor indicates that “systems with a limited level of risk are also identified, for which only transparency mechanisms are required that allow informed decision-making”.

Special attention to generative AI

Although the law began to take shape before the arrival of ChatGPT and generative AI, the European Union has been quick to bring this technology within its new AI law.

The new AI law will include regulations to govern the development of generative AI.

“In the last version that was voted on, in May 2023, the concept of generative AI was introduced at the last minute. Many of the problems that could arise with generative AI, such as fake news, deepfakes, identity theft, etc., are indeed covered in the new law, since it has been designed to focus more on the purpose of use than on the underlying AI technology. Therefore, many of these situations would be contemplated and regulated,” explains the EU professor.

Ferrer believes the solution to the problems that generative AI could bring is “to faithfully comply with the principle of transparency that will be included in the regulations”.

“The system must disclose that content has been generated by AI. In addition, it must be trained and designed in such a way that it does not generate illegal content or content that may violate regulations; for example, it must not violate privacy. Finally, transparency must be provided, and summaries of the copyrighted data used for system training must be published”, points out the EAE Business School expert.

On the other hand, Puertas warns that “other aspects remain unresolved, such as those related to intellectual property, since the May proposal leaves many issues that affect generative AI up in the air”.

Benefits, but also limitations

The new regulation will bring benefits for technology companies. “The most immediate benefit is that it sets the ‘rules of the game’ on the use of AI”, says Puertas. “At the moment, we find ourselves in a situation in which we have a regulation on data protection, the GDPR, which does not cover all aspects related to the development of AI models. This is generating a lot of uncertainty and slowing down the development of projects due to a lack of regulatory certainty.”

The law will also have a positive impact on citizens. “Having a specific regulation should guarantee that fundamental rights are respected when AI algorithms are applied to make decisions that can impact our lives”, he affirms.

However, the limitations established by the European law could also negatively affect the innovation and competitiveness of European technology companies, compared to companies based in other countries.

“AI needs data. It feeds on data and needs it to train algorithms so that they are efficient. If companies are hindered from using data, the development of AI models in Europe will be very limited. We are already seeing symptoms of this problem, which may grow over time”, remarks the EU expert.

“For example, we see how some of the most popular AI technologies, those for generating images from text, all of which have been developed in the United States, only work when prompts are entered in English. They do not work for German, French, Spanish or other European Union languages. Another example is the recent launch of the social network Threads, which reached more than 100 million users in a short period of time, yet very few of them are citizens of the European Union, since Meta, the company that created it, decided not to launch it in Europe initially because it considers the Union’s data protection policies too strict,” he explains.

“If a balance is not reached that guarantees the privacy of citizens and, at the same time, allows data to be used to train algorithms, we will begin to see companies and institutions in the United States and China outpace European ones in competitiveness and innovation. And that gap could pose a very serious problem for the European Union,” he adds.

In fact, he recalls that “the European Union has always been one of the regions offering the strongest guarantees regarding the transparency and privacy of its citizens’ data”, while “other regions have been lax regarding the type of data that can be collected by the companies… or the state.”

For this reason, he considers that the regulations governing the use of AI in other countries could point in a different direction in the future, prioritizing “development and innovation over the privacy of citizens”.

However, López-Sánchez believes that it is a “necessary inconvenience”. “In the same way that safety systems are implemented in machinery or industrial processes, we need to protect ourselves from potential harm from AI,” she notes.

In addition, “it is a strategy similar to the data protection law”. “Although European companies are forced to make an effort, it also places the same restrictions on any company operating in Europe,” she declares.

Ferrer shares this point of view. “Our experience with the GDPR leads me to think that technology companies located outside European territory will make compliance with the regulations their goal, thereby avoiding a negative effect on the business competitiveness of companies located in European territory.”