The EU AI Act is being implemented on a phased basis: the Regulation entered into force on 1 August 2024, with the first obligations applying from 2 February 2025 and further requirements taking effect in stages thereafter. Organisations should understand their compliance obligations and when specific requirements will take effect, depending on the classification of their AI systems.
Non-compliance with the EU AI Act can lead to substantial penalties. For the most serious breaches – such as the use of prohibited AI systems – fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. These penalties exceed the maximum fines under the GDPR (€20 million or 4% of global annual turnover).
Noetic supports organisations in clearly understanding their obligations under the EU AI Act and preparing for compliance in practical and proportionate terms. We provide detailed support for AI risk assessments, including Fundamental Rights Impact Assessments (FRIAs) and Data Protection Impact Assessments (DPIAs), tailored to how AI systems are used in practice.
Under the EU AI Act, from 2 August 2026, providers (i.e. developers) of high-risk AI systems who are established outside the EU are required to appoint an EU-based Authorised Representative before placing those systems on the EU market.
Noetic, based in Malta, can act as your designated EU Authorised Representative. We serve as a formal point of contact with EU Competent Authorities at both Member State and EU level, support regulatory cooperation, provide technical and conformity documentation upon request, and fulfil registration obligations for high-risk AI systems in the EU database.
Failure to appoint an EU Authorised Representative where required may expose organisations to significant administrative fines under the EU AI Act of up to €15 million or 3% of global annual turnover for the preceding financial year, whichever is higher.
Under the EU AI Act, organisations must correctly classify their AI systems into one of the defined risk categories: prohibited (unacceptable risk), high-risk, limited risk, or minimal risk. Noetic supports organisations in mapping and classifying their AI systems, analysing functionality, intended purpose, and deployment context to produce a clear, documented and defensible risk classification. This classification defines your regulatory obligations – whether as a provider, deployer, importer, or distributor – and informs all subsequent compliance actions.
Incorrect classification can expose organisations to significant regulatory risk. If a system that should be treated as high-risk is wrongly classified as minimal risk, the resulting failure to meet high-risk obligations may lead to enforcement action and substantial financial penalties under the Act. Conversely, incorrectly classifying a minimal-risk system as high-risk can result in unnecessary compliance effort, increased costs, and wasted operational resources. Our mapping approach ensures proportionate, evidence-based compliance aligned with regulatory requirements.
High-risk AI systems face strict obligations under the EU AI Act, including conformity assessments, technical documentation, human oversight measures, data governance controls, and ongoing monitoring. These requirements can be complex and resource-intensive if not addressed early.
Noetic provides end-to-end compliance support for high-risk AI systems, helping organisations design compliant systems, prepare conformity documentation, implement risk mitigation measures, and establish post-market surveillance processes. Our practical guidance ensures regulatory readiness without slowing innovation.
The EU AI Act places strong emphasis on transparency, explainability, and accountability to protect users and fundamental rights. Organisations must ensure that AI outputs are traceable, decisions are explainable where required, and appropriate human oversight is embedded into AI systems.
Noetic assists organisations in developing transparency policies, human-in-the-loop frameworks, and internal accountability structures aligned with the AI Act. We help ensure that governance models are not only compliant but also defensible to regulators, partners, and stakeholders.
Stay informed with expert insights on GDPR, the EU AI Act, and tech regulation – delivered monthly to your inbox.