Fundamental Rights Impact Assessments

A Fundamental Rights Impact Assessment (FRIA) is required under Article 27 of the EU AI Act for certain deployers of high-risk AI systems in the EU. It is designed to identify, assess and mitigate risks that the AI system may pose to individuals’ fundamental rights (such as the right to non-discrimination, the right to respect for private and family life and other rights under the Charter of Fundamental Rights of the European Union). The obligation applies from 2 August 2026 to the following deployers:

  • Deployers that are public bodies
  • Deployers that are private entities providing public services
  • Deployers using AI systems to evaluate the creditworthiness of individuals (except systems used only to detect financial fraud)
  • Deployers using AI systems for risk assessment and pricing in relation to life and health insurance.

The FRIA obligation applies to high-risk AI systems as defined in Article 6(2) and listed in Annex III of the Act (e.g. biometrics, education, employment, migration, administration of justice). However, there is an exception for AI systems intended to be used as safety components in the management and operation of critical infrastructure, road traffic, or the supply of water, gas, heating or electricity.

In summary, Article 27 requires the FRIA to contain at least:

  • a description of the processes in which the system will be used in line with its intended purpose;
  • a description of the period of time within which the system will be used, and the frequency;
  • the categories of individuals and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to have an impact on the categories of individuals or groups;
  • a description of human oversight measures;
  • the measures to be taken in the case of the materialisation of the risks, including the arrangements for internal governance and complaint mechanisms.

The FRIA should be carried out prior to the first deployment of the system. The exercise must be repeated if there is a material change, e.g. a substantial modification to the system’s use, behaviour, risks or oversight.

A Data Protection Impact Assessment (DPIA) is required under the GDPR when the processing of personal data is likely to result in a high risk to the rights and freedoms of individuals. Article 27(4) of the AI Act allows deployers who have already performed a DPIA in connection with a high-risk AI system to leverage that DPIA for the FRIA.

Once the FRIA has been performed, the deployer must notify the Market Surveillance Authority (MSA) of its results by submitting the completed template. However, deployers may be exempt from this notification obligation in the case referred to in Article 46(1). In Malta, the Malta Digital Innovation Authority (MDIA) and the Office of the Information and Data Protection Commissioner (IDPC) have been designated as MSAs for the AI Act.


This summary is guidance only and is not intended to be used as legal advice.
