
How To Perform AI Risk Assessments under the EU AI Act?

On 13 March 2024, the European Parliament voted to adopt the EU Artificial Intelligence Act (AI Act). The Act aims to ensure a high level of protection against the potentially harmful effects of Artificial Intelligence (AI) systems while still supporting innovation, much as the GDPR did for personal data in 2018. It harmonises rules for AI systems, prohibits certain uses of AI, and sets out specific requirements for AI systems based on their risk classification. At the heart of the Act is a new system of AI risk assessment.

Risk Categories

The EU AI Act distinguishes four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. The infographic below gives more information about each category.

[Infographic: Risk categories under the EU AI Act]

Examples of Risk Classification

To better understand the classification framework, let’s look at a few examples (a simplified, purely illustrative code sketch follows the list):

  • An AI system that uses biometric data to monitor the skin colour of people in a certain area. This is prohibited because it uses biometric data to infer sensitive characteristics of people, which raises serious concerns about the use and impact of such a system. Other, less intrusive biometric AI systems may still be permitted (e.g. as high risk systems).
  • A heart monitor that uses AI to detect (and potentially predict) irregularities. This can be classified as high risk because it is a medical device: medical devices can affect people’s health and safety and fall under the Union harmonisation legislation listed in Annex I of the AI Act.
  • An AI-based algorithm that enables doctors to identify diseases from scans in a hospital. Again, this can be classified as high risk, because the algorithm’s output directly impacts patients’ health and safety.
  • A chatbot that a company has developed for internal use, to find company information efficiently. This can be classified as limited risk: although there is no direct safety impact, interaction with the chatbot could mislead people or provide incorrect information.
  • A tool for anonymising documents. This can be classified as minimal risk, because there is minimal impact on the safety or rights of people and it is not otherwise considered to be higher risk by the AI Act.
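
To make the examples above concrete, here is a minimal, purely illustrative sketch in Python. The categories, attributes, and decision rules below are simplifying assumptions made for this article; a real classification must follow the Act itself (prohibited practices, the high risk classifications, transparency obligations, and so on) and cannot simply be automated away.

```python
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers distinguished by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


def triage(infers_sensitive_traits_from_biometrics: bool,
           safety_component_or_regulated_product: bool,
           interacts_directly_with_people: bool) -> RiskCategory:
    """Greatly simplified triage mirroring the examples above (illustration only)."""
    if infers_sensitive_traits_from_biometrics:
        # e.g. monitoring skin colour via biometric data: a prohibited practice
        return RiskCategory.UNACCEPTABLE
    if safety_component_or_regulated_product:
        # e.g. an AI heart monitor or diagnostic support in a hospital
        return RiskCategory.HIGH
    if interacts_directly_with_people:
        # e.g. an internal company chatbot that could mislead or misinform users
        return RiskCategory.LIMITED
    # e.g. a document anonymisation tool with minimal impact on safety or rights
    return RiskCategory.MINIMAL


# Example: the internal chatbot from the list above
print(triage(False, False, True))  # RiskCategory.LIMITED
```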

Risk & Quality Management System for High Risk AI

If an AI system is classified as high risk, the provider of the system must comply with several legal requirements. The provider is the organisation that develops the system or has it developed, and places it on the market or puts it into service under its own name or brand. Examples of obligations include a conformity assessment and requirements for technical documentation, security, and data governance.

In terms of risk management, the AI Act defines an explicit obligation to have a risk management system and a quality management system. These systems must be designed to assure compliance with the AI Act and contain, among other things, the following elements (an illustrative sketch follows the list):

  1. A clear strategy for compliance and conformity assessment
  2. Defined processes for design and design verification
  3. Quality control and assurance (e.g. independent oversight)
  4. Testing and validation processes before, during and after development
  5. Systems and procedures for data management, documentation, and record-keeping
  6. A post-market monitoring system, and procedures related to incident management and communication with supervisory authorities
  7. Processes for identifying and analysing risks to the health, safety and rights of (groups of) people
  8. Risk evaluation and risk elimination/mitigation processes
  9. Adoption of technical and organisational security measures
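
As a purely illustrative sketch of what elements 7 and 8 above might look like in practice, the Python snippet below models a single entry in a risk register. The field names and values are assumptions made for this article, not something prescribed by the AI Act; the actual content of a risk management system has to follow the Act and any applicable harmonised standards.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class RiskRegisterEntry:
    """Illustrative record of one identified risk and its mitigation."""
    risk_id: str
    description: str                      # risk to health, safety or fundamental rights
    affected_groups: List[str]            # (groups of) people potentially affected
    likelihood: str                       # e.g. "low" / "medium" / "high"
    severity: str                         # e.g. "low" / "medium" / "high"
    mitigation_measures: List[str] = field(default_factory=list)
    residual_risk_accepted: bool = False
    last_reviewed: date = field(default_factory=date.today)


# Hypothetical example for a medical imaging system
entry = RiskRegisterEntry(
    risk_id="R-001",
    description="Model under-performs for under-represented patient groups",
    affected_groups=["patients from under-represented demographic groups"],
    likelihood="medium",
    severity="high",
    mitigation_measures=["more representative training data", "clinical validation study"],
)
print(entry.risk_id, entry.severity, entry.residual_risk_accepted)
```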

When setting up and implementing these systems, organisations can get a head start by drawing on experience from other regulated sectors. For example, there are similarities with quality risk management (QRM) systems in the pharmaceutical industry that can be leveraged, at least to some extent.

Obligations when Using AI

Besides the AI provider (developer), any organisation that imports, distributes, or deploys an AI system in the EU must also comply with the AI Act. For example, such organisations have to verify that the provider has completed the conformity assessment and that a CE marking is present for high risk systems, and they have to cooperate with supervisory authorities where needed. Vendor management and assessment is therefore of critical importance.

Most obligations for deployers depend on the level of control they have over the AI system and its data, compared to the level of control of the provider. They may, for example, have to ensure human oversight by competent and trained persons, make sure that the input data is relevant and sufficiently representative, and retain system logs. When an AI system processes personal data, a Data Protection Impact Assessment (DPIA) may also be required, and adequate technical and organisational measures must be taken.
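
As a rough illustration of the operational discipline this implies for deployers, the sketch below runs through a handful of pre-deployment checks for a high risk system. The checklist items are assumptions distilled from the obligations mentioned above, not an exhaustive or authoritative list; the actual checks should always be derived from the Act and the provider’s instructions for use.

```python
from dataclasses import dataclass


@dataclass
class DeployerChecklist:
    """Illustrative pre-deployment checks for a deployer of a high risk AI system."""
    provider_conformity_assessment_done: bool  # provider completed the conformity assessment
    ce_marking_present: bool                   # CE marking is present on the system
    human_oversight_assigned: bool             # competent, trained person(s) assigned to oversee
    input_data_validated: bool                 # input data is relevant and sufficiently representative
    log_retention_configured: bool             # system logs are retained where required
    dpia_completed: bool                       # DPIA done where personal data is processed


def ready_to_deploy(checks: DeployerChecklist) -> bool:
    """Return True only when every illustrative prerequisite is satisfied."""
    return all(vars(checks).values())


checklist = DeployerChecklist(
    provider_conformity_assessment_done=True,
    ce_marking_present=True,
    human_oversight_assigned=True,
    input_data_validated=True,
    log_retention_configured=True,
    dpia_completed=False,  # personal data is processed but no DPIA has been done yet
)
print(ready_to_deploy(checklist))  # False
```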

All in all, this may seem like a lot. And when developing or using high risk AI systems, there are indeed many requirements. For minimal risk systems, on the other hand, obligations are fairly limited. So keep an eye on your risks, and act accordingly.

Need help or advice regarding AI? You can reach us on +31 (0)88 8483 100 or via gdprteam@vivenics.com.
