
From compliance to competence: Our approach to EU AI Act readiness

Salla Westerstrand, AI Designer, Solita

Published 17 Apr 2024

Reading time 7 min

When the European Parliament adopted the EU AI Act on 13 March 2024, many celebrated the first legally binding framework striving towards the sustainable development and deployment of AI systems.

Soon after, questions started to surface:

What does it mean for my organisation? Are we already compliant, or do we need to make changes? When is the sensible time to start preparing? What would be the first step towards AI Act readiness? What kind of help do we need to get started?

We’ve been working on AI Act readiness for a good while now. For us, AI Act readiness is not only about avoiding sanctions – although that’s definitely part of it, too – but about building solid legal foundations for holistic AI governance. We have gathered an interdisciplinary team of ethicists, lawyers, strategists, sociologists, technologists and many more to help our clients take their first steps towards compliance in a way that makes sense for their organisation and purpose.

As we believe sustainable AI is a collective effort, we want to share our experience and perhaps inspire you, too, to start your AI Act readiness journey.

What is the EU AI Act?

First things first: what even is the document that finally found its way out of the EU’s convoluted regulatory process? The AI Act is an EU regulation, which means it is directly applicable in all EU countries. We are still waiting for the final legal-linguistic edits. The Act will enter into force 20 days after its publication in the Official Journal of the European Union.

The Act sets minimum requirements for AI systems in high-risk use cases, with the intention of preventing negative effects on people’s health, safety and fundamental rights. It also aims to safeguard democracy, the rule of law and the environment as the proliferation of AI continues.

It divides AI systems into four risk categories, all of which come with specific requirements:

  1. Unacceptable risk – these systems are prohibited

  2. High risk – systems subject to requirements for both providers and deployers

  3. Limited risk – systems subject to transparency requirements mainly for providers

  4. Minimal risk – recommendations, such as codes of conduct

Most of the requirements and hence the biggest impacts fall on the deployers and providers of systems in the high-risk category.
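
To make this concrete, here is a minimal sketch of how the four tiers could be recorded in an internal AI system catalogue. The Python class and field names are our own illustration, not terminology from the Act itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # obligations for providers and deployers
    LIMITED = "limited"            # transparency duties, mainly for providers
    MINIMAL = "minimal"            # voluntary codes of conduct


@dataclass
class CatalogueEntry:
    """One row in a hypothetical internal AI system catalogue."""
    system_name: str
    intended_purpose: str
    risk_category: RiskCategory
    last_reviewed: str  # re-assess whenever the use case drifts


entry = CatalogueEntry(
    system_name="CV screening assistant",
    intended_purpose="Rank incoming job applications for recruiters",
    risk_category=RiskCategory.HIGH,  # employment use cases are high-risk
    last_reviewed="2024-04-17",
)
```

Keeping a catalogue like this up to date is also what makes the later steps – risk classification and role mapping – repeatable rather than one-off exercises.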

Non-compliance with the requirements comes with substantial costs, as the administrative fines range from €7.5 million to €35 million, or 1–7 % of annual global turnover, depending on the infringement. Not to mention the reputational damage and possible civil liability you might face as a result – clients and users expect your use of AI to be legal.

Who needs to act?

All organisations that develop or deploy AI systems in the EU need to start preparing. Whether you are planning to buy an AI tool to handle your organisation’s documents more efficiently, using an AI add-on in your HR system, rolling out co-pilots for internal use or offering chatbots on your websites, the EU AI Act concerns you. Even if you work in an otherwise excluded sector, such as military or defence, EU AI Act compliance is expected as the standard for responsible use of AI systems.

In sum, no organisation is immune to the Act. Most might not need much to comply, but for others, we are talking about an organisational change. It is high time to figure out which scenario awaits your organisation. Next, we’ll introduce the initial steps that every organisation needs to take to get started.

Your first steps towards EU AI Act readiness

Start by figuring out which processes you need to establish to become compliant with the EU AI Act, and which processes you already have in place. These three steps will help you get started:

1. Define whether your application involves AI and is thus subject to the AI Act.

Systems that fall into the scope of the AI Act are defined as follows:

“‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” (Art. 3(1))

Unfortunately, the definition of an AI system, and therefore the scope of the AI Act itself, is not entirely clear. What is clear, though, is that the AI Act defines an AI system extremely broadly. For us, the requirement of “autonomy” appears to be the main criterion distinguishing AI systems from other IT systems.

Unless it is clear that a system does not act autonomously or otherwise fails to meet the definition, it is more helpful to pay attention to how the system would actually be used in practice than to its specific technical details. Further regulatory guidelines are necessary to better understand the material scope of the AI Act.

Next, you need to ensure there are processes and responsibilities in place to correctly identify these systems and place them under the correct guardrails.
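
As an illustration, a lightweight intake check could paraphrase the elements of the Art. 3(1) definition as screening questions. The questions and function below are a hypothetical triage aid of our own making, not a legal determination.

```python
# Hypothetical intake questions paraphrasing the elements of the
# Art. 3(1) definition. Answering "yes" to every question suggests the
# system may fall within the AI Act's scope and deserves legal review.
DEFINITION_CHECKS = [
    "Is the system machine-based?",
    "Does it operate with some level of autonomy?",
    "Does it infer from its inputs how to generate outputs such as "
    "predictions, content, recommendations or decisions?",
    "Can its outputs influence physical or virtual environments?",
]


def screen_system(answers: dict[str, bool]) -> str:
    """Turn yes/no answers to the checks above into a triage verdict."""
    if all(answers.get(question, False) for question in DEFINITION_CHECKS):
        return "Potentially in scope: route to AI Act risk assessment"
    return "Likely out of scope: document the reasoning and re-check on changes"
```

In practice the answers would come from a project intake form, and borderline cases would go to a human legal review rather than being decided by the script.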

2. Define the risk category of your AI system.

The second step is to figure out the risk category and the specific requirements that apply to your AI system.

Systems with unacceptable risks, such as emotion recognition at work or school, or social scoring systems, need to be recognised early on, as they are prohibited and subject to the biggest sanctions.

High-risk systems, such as those used in the management of critical infrastructure, education, employment and workforce management, and access to essential services, require action to be compliant. You need to be able to identify whether a system belongs to this category so you can fulfil the requirements related to, e.g., technical documentation, fundamental rights impact assessment and human oversight.

Transparency requirements apply to certain systems, especially general-purpose AI systems (GPAIs). Most of these requirements only concern the providers of the systems. That is why knowing your role in relation to the AI system is essential, and that’s what we’ll explore below.

If your AI system does not belong to any of these categories (as will be the case for most systems), only recommendations apply. It’s still advisable to monitor the use cases and purpose of your system to be aware of any drift that could change its risk category in the future.

3. Define the role of your organisation in relation to the AI system.

Lastly, you need to identify your relationship with the AI system. Are you a provider? A deployer? Or perhaps both? Do you have some other role in the AI supply chain, such as importer or distributor? Your role defines which requirements apply to you, and which requirements you need to demand from others in your supply chain when buying AI systems or components. As there are many hands involved in developing and deploying AI systems, your role might not always be clear. It also might change during the AI system’s lifecycle, so you need to be able to monitor this aspect, as well.
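
To illustrate why the role matters, here is a sketch that pairs each role with a shorthand reminder of where the heaviest high-risk duties fall. The duty lists are our own illustrative summary; the authoritative obligations are in the Act itself.

```python
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"        # develops the system or places it on the market
    DEPLOYER = "deployer"        # uses the system under its own authority
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


# Illustrative shorthand for high-risk obligations, not an exhaustive list.
HIGH_RISK_DUTIES = {
    Role.PROVIDER: [
        "risk management system",
        "technical documentation",
        "conformity assessment",
    ],
    Role.DEPLOYER: [
        "human oversight",
        "use in line with the provider's instructions",
        "fundamental rights impact assessment (where required)",
    ],
}


def duties_for(role: Role) -> list[str]:
    """Look up illustrative duties; a role can change over a system's lifecycle."""
    return HIGH_RISK_DUTIES.get(role, ["verify upstream conformity before resale"])
```

Because roles can shift – a deployer that substantially modifies a high-risk system may become a provider – this mapping needs to be revisited over the system’s lifecycle, not fixed once at procurement.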

From compliance to competence

You have now taken the initial steps in building solid legal foundations for your AI system governance. Congratulations! Now you are ready to expand your horizons and turn compliance into competence. This means creating structures for holistic AI governance.

Firstly, to build compliance you need to establish roles and responsibilities in relation to the use of AI and the implementation of AI projects in your organisation. Every organisation also needs to work on AI literacy to ensure their people are equipped with sufficient skills to use AI in a compliant manner (yes, this is required under Article 4 of the EU AI Act).

Secondly, no one feels confident developing or using AI if something feels dodgy, or if they are uncertain about the impacts on others or on their own work. After all, if there are no proper guardrails in place, those are valid concerns. Holistic AI governance goes beyond compliance and ensures your AI systems bring you the expected value in an ethically, economically, socially and environmentally sustainable way. While working on compliance, it is advisable to build processes that support sustainable innovation from the very start, all while remaining compliant with regulations. This includes drafting an AI policy, building your understanding of ethical AI, and harnessing AI to advance your strategic goals and vision – all balanced with holistic risk and impact assessment.

Just remember: there are no two identical paths to compliance. Whether it is about establishing internal roles and responsibilities, defining processes, setting up an AI system catalogue, or coaching in ethical AI, Solita’s experts are ready to help you figure out how to make compliance work best for your organisation. That is one of our contributions towards ethically, societally, economically and environmentally sustainable AI.
