Artificial intelligence
The draft European regulatory framework in this area promotes innovation and respect for ethical standards
Sometimes fiction is not so far from reality. Films repeatedly imagine scenarios in which robots and artificial intelligence end up dominating society or turning reality into something dystopian. That remains fiction, of course, but artificial intelligence (AI) has evolved rapidly, permeating many aspects of our lives, revolutionising industries and transforming the way we interact with technology.
As AI systems become increasingly complex and influential, however, concerns arise about their trustworthiness. Bias, opacity, unclear accountability and potential risks to human values have all raised questions about the responsible development and deployment of AI.
To address these concerns, governments and regulators around the world are taking action, and the European institutions are no exception. Last week the European Parliament adopted its negotiating position on a package of measures known collectively as the EU AI Act. It will now enter negotiations with the member states on the final text, with the aim of reaching an agreement by the end of this year.
Proposed by the European Commission in April 2021, the Act aims to create a harmonised regulatory framework for AI systems while promoting innovation and upholding ethical standards. Its main features are detailed below.
The Act classifies AI systems into four levels of risk: unacceptable, high, limited and minimal. High-risk systems, such as those used in critical infrastructure, healthcare or law enforcement, face the most stringent requirements, including transparency, documentation and human oversight. By adopting a risk-based approach, the Act ensures that regulatory measures are proportionate to the potential risks posed by different AI applications, concentrating effort on high-risk areas while leaving room for innovation in lower-risk domains.
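To make the tiered logic concrete, here is a minimal Python sketch that maps a handful of use cases to risk tiers and simplified obligations. The use cases, tier assignments and obligation lists are illustrative assumptions for this article, not a legal classification under the Act.

```python
from enum import Enum

# The Act's four risk levels; everything below this enum is an
# illustrative assumption, not legal advice.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of use cases to tiers; real classification
# depends on the Act's annexes and the final negotiated text.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_triage": RiskTier.HIGH,
    "critical_infrastructure_control": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Return a deliberately simplified list of obligations per tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not be placed on the EU market"]
    if tier is RiskTier.HIGH:
        return ["risk management", "technical documentation",
                "transparency to users", "human oversight"]
    if tier is RiskTier.LIMITED:
        return ["disclose that the user is interacting with an AI system"]
    return ["no mandatory obligations; voluntary codes of conduct"]

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```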
This measure explicitly prohibits certain AI practices that are considered unacceptable because of their potential to infringe fundamental rights or cause significant harm. These include systems that manipulate human behaviour or use subliminal techniques, as well as those that create deepfakes for malicious purposes. In this way, the Act aims to safeguard the rights of individuals and prevent the misuse of AI.
The Act prioritises transparency and explainability to increase trust in AI systems. Users must be informed when they are interacting with such a system, so that they know they are dealing with a machine rather than a human being. In addition, high-risk AI systems must provide detailed information about their capabilities and limitations. These requirements enable users to make informed decisions and encourage accountability for the use of AI-generated results.
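As a rough illustration of that disclosure duty, the following Python sketch wraps a chatbot so that an AI notice is always shown before the first model reply. The class, message text and backend function are hypothetical stand-ins; the precise obligations will depend on the final text of the Act.

```python
AI_DISCLOSURE = (
    "You are interacting with an automated AI system, not a human. "
    "Its answers may be incomplete or incorrect."
)

class DisclosedChatbot:
    """Wraps a reply backend so the first message a user sees is
    always the AI disclosure; a sketch, not a compliance tool."""

    def __init__(self, generate_reply):
        self._generate_reply = generate_reply  # stand-in model backend
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        answer = self._generate_reply(user_message)
        if not self._disclosed:
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

# Usage with a dummy backend in place of a real model:
bot = DisclosedChatbot(lambda msg: f"(model answer to: {msg!r})")
print(bot.reply("What does the AI Act require?"))
print(bot.reply("And for high-risk systems?"))
```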
Recognising the importance of data quality and bias mitigation, the Act requires that training data be transparent, traceable and of sufficient quality, thereby encouraging the use of high-quality, diverse datasets to avoid biased results and discrimination.
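One very simple check in that spirit, sketched below in Python, reports how each demographic group is represented in a training set. The toy dataset, field name and 20% threshold are invented for illustration; real bias auditing goes well beyond headcounts.

```python
from collections import Counter

def representation_report(records, group_key):
    """Share of each group in the data: a crude first check that no
    single group dominates the training set."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy dataset with a hypothetical field name.
training_data = [
    {"age_band": "18-35"}, {"age_band": "18-35"}, {"age_band": "18-35"},
    {"age_band": "18-35"}, {"age_band": "36-60"}, {"age_band": "60+"},
]
for group, share in representation_report(training_data, "age_band").items():
    flag = "  <- under-represented?" if share < 0.2 else ""
    print(f"{group}: {share:.0%}{flag}")
```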
The Act also underlines the importance of human oversight. High-risk AI systems must include adequate mechanisms for human intervention and control, ensuring that crucial decisions do not rest solely on AI algorithms and that humans retain control over outcomes. Developers and providers of AI systems are held accountable for their products: failure to comply with the Act can result in substantial fines, encouraging developers to prioritise ethical considerations and take responsibility for the social impact of their technologies.
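A common engineering pattern for such oversight, shown in the hypothetical Python sketch below, is to automate only clear-cut cases and route everything else to a human reviewer. The score threshold and review queue are assumptions made for illustration, not requirements taken from the Act.

```python
def decide(application: dict, model_score: float, review_queue: list,
           threshold: float = 0.9) -> str:
    """Automate only clear-cut cases; route the rest to a human."""
    if model_score >= threshold:
        return "auto-approved (decision logged for audit)"
    review_queue.append(application)  # a human makes the final call
    return "escalated to human reviewer"

queue: list = []
print(decide({"id": 1}, 0.97, queue))  # confident score: automated path
print(decide({"id": 2}, 0.55, queue))  # uncertain score: human oversight
print("pending human review:", [a["id"] for a in queue])
```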
Alongside regulation, the European Commission is allocating €317.50 million in grants for innovative AI projects under Pillar II, Cluster 4 of Horizon Europe. This cluster includes calls focused on the ethical development of digital and industrial technologies and on empowering the end-users and workers involved in developing them. The Commission's aim is a trustworthy digital environment, based on a more resilient, sustainable and decentralised internet, that gives end-users more control over their data and digital identity and enables new social and business models that respect European values.
The adoption of the EU AI Act will be an important milestone in regulating AI and ensuring its trustworthiness, because it aims to protect people's rights while fostering innovation and competitiveness in Europe. The road to trustworthy AI, however, will not end when these rules are adopted; further steps will still be needed.
By taking those further steps, we can foster a trusted AI ecosystem that benefits individuals, organisations and society as a whole. The EU AI Act serves as a foundation, but it will take collective effort and ongoing commitment to navigate the complexities and ensure that AI remains a positive force. Together, we can shape the future of AI in a way that prioritises human values, fairness and transparency, instilling confidence in the technology that is reshaping our world.