
Conformity Assessments

The EU AI Act has been agreed upon by the Parliament, Commission, and Council of the European Union and will bring with it obligations for organizations looking to use artificial intelligence. Notable amongst the requirements is the conformity assessment for high-risk uses of AI. We have seen similar requirements in the GDPR, with Article 35 describing the need for a data protection impact assessment (DPIA). With that in mind, I want to describe what this process will look like and what exactly a conformity assessment will require.

First, we need to define what “high-risk” means with regard to the EU AI Act. The law itself defines four risk categories for AI uses: unacceptable (prohibited), high, limited, and minimal risk. We are focused on high-risk AI activities, but it is my belief that a conformity assessment, or at least a privacy impact assessment, is a good idea when implementing any kind of AI.

High-risk activities are listed in Annex III of the EU AI Act and include items like biometric data processing, systems related to critical infrastructure (think hospitals, power, banking, etc.), hiring and employment, and systems related to educational facilities. Where an AI system falls into one of these areas, or anything else listed in Annex III, a conformity assessment must be done.

A conformity assessment is, at its core, a risk-based impact assessment. The questions will be very similar to those you would find in a PIA or DPIA. I have developed a template for our own use; among other things, it asks the following (a simple way to capture these answers in a reviewable record is sketched after the list):

  • What is the purpose of the AI?
  • Is the AI system developed by your organization or provided by a third party?
  • Have the following items been reviewed?
    • The algorithm or AI model
    • The training data to be used with the system
    • The outputs of the system
  • Is there a way for a human to override outputs?
  • Can human subjects opt-out of their inclusion in the AI system?
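
To make these checks actionable, here is a minimal Python sketch of an assessment record covering the questions above. The class and field names (ConformityAssessment, ready_for_approval, and so on) are my own illustrations, not terms from the EU AI Act or any official template.

```python
from dataclasses import dataclass, field

# Hypothetical assessment record; fields mirror the checklist questions
# above and are illustrative, not an official EU AI Act template.
@dataclass
class ConformityAssessment:
    system_name: str
    purpose: str                     # What is the purpose of the AI?
    third_party_provided: bool       # Built in-house or supplied by a vendor?
    model_reviewed: bool = False             # The algorithm or AI model
    training_data_reviewed: bool = False     # The training data
    outputs_reviewed: bool = False           # The outputs of the system
    human_override_available: bool = False   # Can a human override outputs?
    subject_opt_out_available: bool = False  # Can subjects opt out?
    open_items: list[str] = field(default_factory=list)

    def ready_for_approval(self) -> bool:
        """Every review gate must be satisfied before leadership sign-off."""
        return all([
            self.model_reviewed,
            self.training_data_reviewed,
            self.outputs_reviewed,
            self.human_override_available,
            self.subject_opt_out_available,
        ]) and not self.open_items

# Example: an assessment stays blocked until every gate is cleared.
assessment = ConformityAssessment(
    system_name="resume-screening-tool",
    purpose="Rank applicants for first-round interviews",
    third_party_provided=True,
)
assessment.open_items.append("Vendor has not shared a training data summary")
print(assessment.ready_for_approval())  # False
```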

These questions are not the only ones, but they are a good start. Importantly, we want to avoid a situation where the outputs are not what we expect or are biased. Remember that if we overfit the model to its training data, it may not work with other, real-world data; similarly, if our training data is weighted inappropriately, the outputs will most likely be biased.
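
As a concrete illustration, the sketch below shows two of the simplest checks: looking at the class balance of the training data, and comparing training accuracy against held-out accuracy, where a large gap is a classic overfitting signal. Using scikit-learn on synthetic data is my own choice here, not anything prescribed by the Act.

```python
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Deliberately imbalanced synthetic data: 90% of samples in one class.
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# A skewed class distribution in the training data is an early warning
# sign that the system's outputs may be biased.
print("class balance:", Counter(y))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large gap between training and held-out accuracy suggests overfitting:
# the model memorized the training data rather than generalizing.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```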

The main goal is to ensure we go through an approval process prior to implementing an AI system: obtain leadership approval, and regularly review the system for issues or changes to outputs or training data in order to maintain effective use. Keeping track of the system ensures it works as expected and also provides an evidence trail if we need to investigate something that went wrong.
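
For that evidence trail, even something as simple as an append-only log of reviews goes a long way. Here is a minimal, hypothetical sketch; the file name, record fields, and record_review helper are my own illustrations, not a prescribed format.

```python
import json
from datetime import datetime, timezone

# Hypothetical append-only review log (one JSON record per line).
LOG_PATH = "ai_system_reviews.jsonl"

def record_review(system_name: str, reviewer: str, findings: str,
                  approved: bool) -> None:
    """Append one timestamped review record, building an evidence trail."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "reviewer": reviewer,
        "findings": findings,
        "approved": approved,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: log a periodic review of a deployed system.
record_review("resume-screening-tool", "privacy-team",
              "Quarterly output review: no drift detected", approved=True)
```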


Reach out to Privacy Ref with all your organizational privacy concerns: email us at info@privacyref.com or call us at 1-888-470-1528. If you are looking to master your privacy skills, check out our training schedule and register today to get trained by the top-attended IAPP Official Training Partner.