A step towards trustworthy and secure artificial intelligence

Daniel Errea

Sometimes fiction is not so far from reality. Movies continually depict scenarios in which robots and artificial intelligence end up dominating society or turning reality into something dystopian. We are, of course, far from that, but artificial intelligence (AI) has evolved rapidly, permeating many aspects of our lives, revolutionising industries and transforming the way we interact with technology.

As AI systems become increasingly complex and influential, however, concerns arise about their trustworthiness. Issues such as bias, opacity, weak accountability and potential risks to human values have raised questions about the responsible development and deployment of AI.

To address these concerns, governments and regulators around the world are taking action, and the European institutions are no exception. Last week the European Parliament adopted its negotiating position on the set of measures grouped together in the so-called EU AI Act. It will now enter talks with member states on the final text, with the aim of reaching an agreement by the end of this year.


Proposed by the European Commission in April 2021, the Act aims to create a harmonised regulatory framework for AI systems while promoting innovation and upholding ethical standards. Below, we detail its main features.

Risk-based approach

The Act classifies AI systems into four levels of risk: unacceptable, high, limited and minimal. High-risk systems, such as those used in critical infrastructure, healthcare or law enforcement, will face the most stringent requirements, including transparency, documentation and human oversight. By adopting a risk-based approach, the Act ensures that regulatory measures are proportionate to the potential risks posed by different AI applications, focusing efforts on high-risk areas while allowing innovation in lower-risk domains.
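As an illustration only, the four-tier scheme can be sketched as a simple lookup. The tier names come from the Act; the example applications and the one-line summaries of obligations are assumptions for the sake of the sketch, not an official classification:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictest requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical example mappings, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "subliminal behavioural manipulation": RiskTier.UNACCEPTABLE,
    "critical-infrastructure management": RiskTier.HIGH,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarise, very roughly, what each tier implies."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "transparency, documentation and human oversight required",
        RiskTier.LIMITED: "users must be told they are interacting with AI",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]

print(obligations(EXAMPLE_CLASSIFICATION["medical diagnosis support"]))
```

The point of the sketch is the proportionality: obligations scale with the tier, so most everyday applications fall under minimal or limited risk and only a narrow set of uses attracts the full compliance burden.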

Prohibition of unacceptable AI practices

The Act explicitly prohibits certain AI practices that are considered unacceptable because of their potential to infringe fundamental rights or cause significant harm. These include systems that manipulate human behaviour or use subliminal techniques, as well as those that create deepfakes for malicious purposes. In this way, the Act aims to safeguard the rights of individuals and prevent the misuse of AI.

Transparency and explainability

The Act prioritises transparency and explainability to increase trust in AI systems. It states that users must be informed when they are interacting with an AI system, so that they know they are dealing with an automated system rather than a human being. In addition, high-risk AI systems must provide detailed information about their capabilities and limitations. These requirements enable users to make informed decisions and encourage accountability around the use of the results produced by AI.

Data governance

Recognising the importance of data quality and the mitigation of bias, the Act emphasises that data must be transparent and traceable and meet certain quality requirements. In this way, it encourages the use of high-quality and diverse datasets to avoid biased results and discrimination.

Monitoring and accountability

The Act also underlines the importance of human oversight. High-risk AI systems must have adequate human intervention and control mechanisms. This ensures that crucial decisions do not rely solely on AI algorithms and that humans retain control over outcomes. Developers and providers of AI systems must be held accountable for their products. Failure to comply with the Act can result in substantial fines, encouraging AI developers to prioritise ethical considerations and take responsibility for the social impact of their technologies.

Commitment in the form of funding opportunities

The European Commission has allocated €317.50 million in grants for innovative AI projects under Horizon Europe’s Pillar II, Cluster 4. The cluster’s calls focus on the ethical development of digital and industrial technologies and on empowering end-users and workers in the development of those technologies. The Commission’s aim is a trustworthy digital environment, based on a more resilient, sustainable and decentralised internet, that gives end-users more control over their data and digital identity and enables new social and business models that respect European values.

Next steps

The adoption of the EU AI Act will represent an important milestone in regulating AI and ensuring its trustworthiness, because it aims to protect people’s rights while fostering innovation and competitiveness in Europe. The road to trustworthy AI, however, will not end when these rules are adopted. Several further steps remain, which are detailed below.

  • International cooperation: foster international cooperation and collaboration between governments, organisations and experts to establish common standards and good practices for trustworthy AI.
  • Ethical guidelines: develop and adopt comprehensive ethical guidelines covering principles such as fairness, transparency, accountability, privacy and robustness. Adherence to these principles will foster trust among users and stakeholders.
  • Robust testing and certification: establish rigorous testing and certification processes for AI systems, especially those classified as high risk. Comprehensive assessments of performance, reliability and security can help ensure trustworthiness and prevent potential harm.
  • Continued research and development: advances in these areas will help improve AI systems and address potential biases, errors or unintended consequences.
  • Public awareness and education: empower people to make informed decisions about the use of AI and to engage in debates about its societal impact. Digital literacy and a shared understanding of AI will contribute to its more responsible and informed use.
  • Continuous evaluation and adaptation: regularly assess the effectiveness and impact of the AI Act and adapt it as needed to meet new challenges and technological developments.

By taking these steps, we can foster a trusted AI ecosystem that benefits individuals, organisations and society as a whole. The EU AI Act serves as a foundation, but it requires collective efforts and ongoing commitment to navigate the complexities and ensure that AI remains a positive force. Together, we can shape the future of AI in a way that prioritises human values, fairness and transparency, instilling confidence in the technology that is reshaping our world.

Daniel Errea

Pamplona Office