March 20, 2025

EU AI Act: Decoding risk levels for AI systems

Understanding EU AI Act risk levels is essential to ensure your AI system meets compliance and avoids potential legal pitfalls.

AI regulation is here, and it’s not just for tech giants. If you’re placing AI-based products or services on the European market, you need to understand EU AI Act risk levels. From chatbots to advanced biometric systems, every AI system falls into one of four AI system risk categories:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk

Where does yours fit, and what does that mean for your business?

About the European Union’s AI Act (EU AI Act)

The EU AI Act represents a landmark effort to regulate artificial intelligence systems based on their potential risks to society. By creating a flexible yet rigorous framework, the Act aims to foster responsible AI development that respects individual rights, promotes transparency, and mitigates potential societal risks.

The four risk categories in the EU AI Act are based on an AI system’s potential impact on citizens’ rights, safety, and well-being. Each EU AI Act risk level carries its own obligations, ranging from minimal oversight to outright bans. High-risk AI systems face the strictest requirements.

Before exploring the four tiers, remember that your AI system’s risk categorization determines how you comply with the Act. Getting this categorization right through rigorous AI risk assessment is your crucial first step toward compliance, but it can be tricky.
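
To make the tiering concrete before we walk through it, here is a minimal Python sketch that models the four tiers and the obligations this article attributes to each. It is illustrative only: the enum and the `OBLIGATIONS` summaries are our shorthand, not text from the Act.

```python
from enum import IntEnum

class EUAIActRiskTier(IntEnum):
    """The four EU AI Act risk tiers, ordered from least to most restricted."""
    MINIMAL = 1       # spam filters, recommenders: effectively unrestricted
    LIMITED = 2       # chatbots, deepfake tools: transparency obligations
    HIGH = 3          # critical domains: assessments, documentation, oversight
    UNACCEPTABLE = 4  # social scoring and the like: banned outright

# Shorthand summary of each tier's obligations as described in this article.
OBLIGATIONS = {
    EUAIActRiskTier.MINIMAL: "no mandatory requirements; voluntary codes of conduct",
    EUAIActRiskTier.LIMITED: "disclose to users that they are interacting with AI",
    EUAIActRiskTier.HIGH: "risk assessment, documentation, human oversight, QMS",
    EUAIActRiskTier.UNACCEPTABLE: "prohibited from the EU market",
}

print(OBLIGATIONS[EUAIActRiskTier.HIGH])
```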

The four EU AI Act risk levels

Figure: the four EU AI Act risk categories, from unacceptable risk down to minimal risk.

1. Unacceptable risk: Prohibited technologies

The EU AI Act bans AI systems deemed harmful enough to pose unacceptable risks. This risk level, the strictest of all EU AI Act risk categories, includes:

  • Social scoring systems that categorize citizens based on behavior and personal characteristics
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
  • Manipulation of human behavior that could cause psychological harm
  • AI systems that exploit the vulnerabilities of groups such as children or people with disabilities

Prohibited AI practices cross ethical lines that the EU considers fundamentally incompatible with human rights and democratic values. Depending on the context in which they’re deployed, some AI systems like these may actually qualify as high-risk.

2. High risk: Stringent oversight

The second and most complex risk level demands rigorous scrutiny. High-risk AI systems are those deployed in critical domains, such as:

  • Critical infrastructure (e.g., energy and transport)
  • Education and vocational training
  • Employment and worker management
  • Access to essential public and private services
  • Law enforcement, migration, and border control
  • Administration of justice and democratic processes

The EU AI Act mandates comprehensive risk assessments, robust documentation, human oversight, and strict quality management systems for high-risk AI technologies. Developers must demonstrate their AI solutions’ reliability, transparency, and non-discriminatory nature through extensive testing and validation. Those who deploy third-party high-risk AI are also subject to regulatory requirements, such as ensuring proper integration, maintaining documentation, and monitoring AI performance.

3. Limited risk: Mandated transparency

The third tier of the EU AI Act risk levels covers AI systems that aren’t inherently dangerous but carry transparency obligations so users can make informed choices. Typical examples include tools that produce AI-generated content, such as:

  • Generative chatbots
  • Voice cloning tools
  • Generative image and video technologies
  • Face swap filters

Limited-risk AI tools face few regulatory requirements because they are considered generally safe as long as users know they are interacting with artificial intelligence. Without that transparency, deepfake content could facilitate fraud or other cybercrime.
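
In practice, the transparency obligation often boils down to labeling machine-generated output before it reaches the user. Below is a minimal sketch of that idea; `generate_reply` is a hypothetical stand-in for any generative model call, not a real API.

```python
AI_DISCLOSURE = "Note: this reply was generated by an AI assistant."

def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for a call to any generative model.
    return f"(model output for: {prompt})"

def respond_with_disclosure(prompt: str) -> str:
    """Attach the disclosure so users know they are talking to a machine."""
    return f"{generate_reply(prompt)}\n\n{AI_DISCLOSURE}"

print(respond_with_disclosure("What are the EU AI Act risk tiers?"))
```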

4. Minimal risk: Minimal restrictions

Finally, there are minimal-risk AI applications. These systems can operate essentially without restrictions. Examples include:

  • AI-powered email filters
  • Streaming service algorithms
  • Retail cross-sell recommendation systems

Companies developing technologies that pose no significant threat to individual rights or societal well-being can rest easy without worrying about additional compliance burdens. To build trust and demonstrate responsible AI development, they may consider proactively adopting codes of conduct and related governance mechanisms similar to those expected of high-risk systems.

The EU AI Act’s two-tier approach to general-purpose AI models

The EU AI Act also introduces a risk framework for general-purpose AI models (GPAI). The Act distinguishes between general-purpose AI models and those classified as “high-impact” GPAI models based on their computational power and potential societal influence. Its tiered approach to regulating GPAI models supports continued innovation while putting appropriate safeguards in place once algorithms meet certain conditions.

The first tier includes GPAI models whose cumulative training compute is below 10^25 floating-point operations (FLOPs), such as Phi-4 and TinyLlama. While still subject to regulation, models below this threshold face lighter requirements due to their reduced potential for systemic risk.

The second tier includes high-impact GPAI models, such as GPT-4 and Gemini Ultra, whose training compute exceeds the defined threshold. The Commission sees these more powerful models as having the potential for widespread societal impact, so the EU AI Act establishes more stringent obligations for “general-purpose AI models with systemic risk.”

Companies developing GPAI models the Act defines as “high impact” must conduct thorough model evaluations, implement cybersecurity measures, and report their energy consumption and training data usage.
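
Because the systemic-risk presumption hinges on a single numeric threshold (10^25 FLOPs of cumulative training compute), a first-pass tier check is simple arithmetic. The sketch below assumes you already have a compute estimate for your model; the sample figures are rough orders of magnitude for illustration, not official numbers.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training-compute threshold in the Act

def gpai_tier(training_flops: float) -> str:
    """Classify a general-purpose AI model by cumulative training compute."""
    if training_flops > SYSTEMIC_RISK_FLOPS:
        return "high-impact GPAI with systemic risk: stringent obligations"
    return "GPAI below the threshold: lighter requirements"

# Illustrative magnitudes only (rough estimates, not official figures).
print(gpai_tier(2e25))  # frontier-scale model -> systemic-risk tier
print(gpai_tier(9e21))  # small open model -> lighter tier
```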

The challenge the EU AI Act presents for organizations

One of the most challenging aspects of AI governance is accurately determining which of the EU AI Act risk levels your AI systems and GPAI models fall under. An AI use case that seems straightforward on paper can get complicated when you start looking at its real-world context.

Consider AI systems that analyze biometric data. At first glance, deploying such technology to recognize people’s emotional states in workplaces or educational settings seems like a clear-cut “unacceptable risk”: the EU AI Act generally prohibits these applications as surveillance mechanisms that violate fundamental human rights.

This is where things get really interesting: Imagine an employer using the same technology to monitor a truck driver’s alertness for safety. Suddenly, the risk assessment shifts. While still heavily regulated, this application falls into the “high-risk” category, as it could serve a crucial safety function in preventing accidents.

This example illustrates a critical point: context matters enormously in AI risk assessment. The same underlying technology can fall into different EU AI Act risk categories depending on its specific application and purpose. These nuanced situations often make it challenging for organizations to categorize their AI systems confidently.
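
The emotion-recognition example can be captured as a toy decision rule: identical capability, different tier by deployment context. The contexts and outcomes below simply paraphrase this article’s example; they are an illustration, not an exhaustive reading of the Act.

```python
def emotion_recognition_tier(context: str) -> str:
    """Toy, context-dependent tiering for biometric emotion recognition."""
    # Monitoring workers or students is generally prohibited under the Act.
    if context in {"workplace monitoring", "education monitoring"}:
        return "unacceptable risk (prohibited)"
    # Safety-motivated uses, such as driver-fatigue detection, shift to high risk.
    if context == "driver alertness monitoring":
        return "high risk (heavily regulated but permitted)"
    return "unclear: needs case-by-case legal assessment"

print(emotion_recognition_tier("workplace monitoring"))         # prohibited
print(emotion_recognition_tier("driver alertness monitoring"))  # high risk
```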

Intellias can help you navigate AI risk categorization

The EU AI Act makes AI risk categorization essential to any AI governance framework. If you’re not sure how to navigate EU AI Act categories of risk, you’re not alone.

The experts at Intellias have extensive experience working with AI and are deeply familiar with the regulatory landscape surrounding AI development. Our AI governance framework provides a strategic roadmap that goes beyond compliance to create lasting competitive advantages.

As part of that framework, we help you categorize your AI solutions into appropriate risk categories to give you clarity on which require additional compliance measures. We’re here to help you make sense of these regulations and categorize your AI risk levels so you can keep innovating — safely.


Speak to one of our AI compliance experts today.
