Five key points to understand the EU Artificial Intelligence Act

The latter part of 2022 was crucial for the democratisation of artificial intelligence. On November 30 of that year, OpenAI – a company specialising in the research and development of computer systems and algorithms capable of mimicking human intelligence – launched ChatGPT. The chatbot's success was so immediate and striking that, just four days later, it already had over a million users, according to the Californian company. More than a year and a half later, on August 1, 2024, the European Artificial Intelligence Act came into force.

Brussels began working on the law well before ChatGPT marked that turning point. Proposed by the European Commission in April 2021 and agreed by the European Parliament and the Council in December 2023, the regulation focuses on mitigating the potential risks of artificial intelligence to citizens' health, safety, and fundamental rights. It also establishes clear requirements and obligations for developers and deployers regarding specific uses of the technology. These are the key points of a regulation that aims to set a global benchmark.

What is Artificial Intelligence?

The European Artificial Intelligence Act defines an artificial intelligence system as a programme that operates with a degree of autonomy, meaning without the need for constant human intervention. The system uses data it receives from people or other machines and infers from that data how to achieve given objectives, employing machine-learning techniques or approaches based on logic and knowledge. As a result, it generates content, predictions, recommendations, or decisions that can influence the environment it interacts with.

Who does the European Artificial Intelligence Act apply to?

The European Artificial Intelligence Act applies to any provider placing artificial intelligence systems on the market or putting them into use within the European Union, regardless of where that provider is established. It also covers providers and users in third countries whose systems produce output used in the EU, users physically present or established in the EU, and the authorised representatives, importers, and distributors of these systems.

What approach does it take?

The European Artificial Intelligence Act takes a risk-based approach, meaning that higher risks correspond to stricter rules.

  • Minimal risk. Most artificial intelligence systems can be used without further obligations, as they already comply with existing legislation. Examples include video games and email spam filters. Although it is not mandatory, providers of these systems may choose to adhere to the principles of trustworthy artificial intelligence and follow voluntary codes of conduct.
  • Specific transparency risk. To build trust, it is crucial that the use of artificial intelligence be transparent. The European Artificial Intelligence Act therefore imposes specific transparency requirements on certain applications, especially where there is a clear risk of manipulation, such as chatbots or deepfakes. Users must always be informed that they are interacting with a machine.
  • High risk. High-risk artificial intelligence systems are those that could negatively affect people's safety or fundamental rights. These include, for example, systems that decide whether someone receives medical treatment, gets a job, or obtains a loan to buy a flat. The category also covers systems used by the police to profile individuals or assess the risk of their committing a crime, as well as those operating robots, drones, or medical devices. Such systems are subject to a series of requirements and obligations before they can access the EU market.
  • Unacceptable risk. Some uses of artificial intelligence are considered so harmful that they are prohibited because they run counter to EU values and infringe on fundamental rights. These include manipulating people by exploiting their vulnerabilities or using subliminal techniques; social scoring for public or private purposes; predictive policing based solely on profiling; the mass collection of facial images from the internet or security cameras to build databases; and emotion recognition in workplaces or schools, except for medical or safety reasons. Biometric categorisation to infer sensitive data such as race or sexual orientation is also banned, as is real-time remote biometric identification in public spaces by the police, except in narrowly defined cases.
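The risk-based approach above is essentially a classification rule: each use case falls into one of four tiers, and the tier determines the obligations. The sketch below lays out the article's own examples as a lookup table; the labels and one-line summaries are illustrative shorthand, not terminology from the regulation itself:

```python
# Illustrative mapping of the article's example use cases to the Act's four
# risk tiers. The tier names and summaries are informal labels for this
# sketch, not legal terms from the regulation.
RISK_TIERS = {
    "spam filter": "minimal",
    "video game": "minimal",
    "chatbot": "specific transparency",
    "deepfake generator": "specific transparency",
    "credit scoring": "high",
    "medical device control": "high",
    "social scoring": "unacceptable",
    "real-time remote biometric identification": "unacceptable",
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of what each tier implies under the Act."""
    summaries = {
        "minimal": "no new obligations; voluntary codes of conduct",
        "specific transparency": "users must be told they face an AI system",
        "high": "strict requirements before EU market access",
        "unacceptable": "prohibited in the EU",
    }
    return summaries[RISK_TIERS[use_case]]

print(obligations("chatbot"))         # transparency tier
print(obligations("social scoring"))  # banned outright
```

The point of the table structure is that obligations attach to the tier, not to the individual application: adding a new use case means deciding its tier, after which the corresponding requirements follow automatically.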

How does it address racial and gender biases?

The European Artificial Intelligence Act places significant emphasis on preventing artificial intelligence systems from generating or perpetuating bias. These systems must therefore comply with new requirements ensuring their technical robustness and preventing biased outcomes that disproportionately affect marginalised groups.

These systems must be trained on representative data and include mechanisms to detect and correct inequalities. They must also be traceable and auditable, with all relevant documentation retained – including the data used to train the algorithms – which facilitates subsequent investigations and ensures continuous monitoring.

How will the European Artificial Intelligence Act be enforced, and what penalties are foreseen?

The European Artificial Intelligence Act establishes a two-tier governance system: national authorities supervise compliance with the rules in their own countries, while the EU regulates general-purpose AI models. To ensure consistency and cooperation, a European Artificial Intelligence Board will be established, supported by the European AI Office, which will provide strategic guidance.

Significant penalties will be imposed for non-compliance, depending on the level of seriousness:

  • up to €35 million or 7% of global annual turnover for engaging in prohibited artificial intelligence practices;
  • up to €15 million or 3% for non-compliance with other obligations;
  • up to €7.5 million or 1.5% for supplying incorrect, incomplete, or misleading information to authorities.

In each tier, SMEs face the lower of the two amounts, while large companies face the higher.
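The penalty scheme reduces to simple arithmetic: for each tier, compare the fixed ceiling with the percentage of global annual turnover, then take the higher of the two for large companies and the lower for SMEs. A minimal sketch using the article's figures (the `fine_cap` helper is hypothetical, not part of the regulation):

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float,
             sme: bool = False) -> float:
    """Maximum administrative fine for a given tier.

    Large companies: the higher of the fixed ceiling and the turnover
    percentage. SMEs: the lower of the two.
    """
    pct_amount = turnover_eur * pct
    if sme:
        return min(fixed_cap_eur, pct_amount)
    return max(fixed_cap_eur, pct_amount)

# Top tier (€35 million or 7%) for a company with €1 billion global turnover:
# 7% of €1 billion is €70 million, which exceeds the €35 million fixed ceiling.
print(fine_cap(1_000_000_000, 35_000_000, 0.07))            # large company
print(fine_cap(1_000_000_000, 35_000_000, 0.07, sme=True))  # SME
```

For a company this size, the large-company cap is driven by turnover (€70 million), while the SME cap stays at the fixed €35 million ceiling.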