
Feb 20, 2025

EU AI Act: Timeline, Enforcement & Fines and How To Prepare


This is the second part of our AI Act series. If you missed the first deep dive, which covered the fundamentals of the AI Act and what compliance professionals should know, check it out here.


Now, in discussion with Louie Vargas, founder of the Network for Financial Crime Prevention (NFCP), we’re digging deeper into what enforcement looks like, key deadlines, and how teams can prepare. Read on for the main takeaways, or watch the full video here.




The AI Act’s Implementation Timeline: Key Dates & Prohibited Practices

The AI Act is rolling out in phases, introducing restrictions and obligations over the coming years. Understanding these deadlines is crucial for fintechs looking to integrate AI responsibly.
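
For teams that want to track the rollout programmatically, here is a minimal sketch in Python, assuming only the dates covered in this article (everything else, including the reference date, is illustrative):

```python
from datetime import date

# Key AI Act milestones covered in this article.
DEADLINES = [
    (date(2025, 2, 2), "Prohibited AI practices (Article 5) apply"),
    (date(2025, 8, 2), "General-purpose AI (GPAI) governance obligations apply"),
    (date(2026, 8, 2), "High-risk and limited-risk AI requirements apply"),
    (date(2027, 8, 2), "Full compliance for high-risk AI in safety-critical products"),
]

def print_runway(today: date) -> None:
    """Print each milestone with the number of days left (negative once passed)."""
    for deadline, obligation in DEADLINES:
        delta = (deadline - today).days
        status = f"{delta} days left" if delta >= 0 else f"in effect for {-delta} days"
        print(f"{deadline.isoformat()} - {obligation} ({status})")

print_runway(date(2025, 2, 20))  # this article's publication date
```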


February 2, 2025: Prohibited AI Practices Now in Effect

As of February 2, 2025, the AI Act bans certain AI practices deemed to pose an unacceptable risk due to their potential for harm. These include:

  • Social scoring: AI systems that rank individuals based on personal characteristics or social behaviour, along the lines of China’s social credit system.
  • Emotion recognition in workplaces and education: AI systems that infer emotions in these settings are banned, with narrow exceptions for medical and safety purposes.
  • Untargeted scraping of facial images: Creating facial recognition databases from internet images or CCTV footage is banned.
  • Predictive crime assessments: AI systems that infer a person’s likelihood of committing a crime based purely on profiling or personality traits (a real-life Minority Report scenario).
  • Biometric categorization: AI that categorizes people based on biometric data to infer sensitive attributes like race, political opinions, or religious beliefs.
  • Manipulative or deceptive AI: AI that exploits vulnerabilities due to age, disability, or socioeconomic status is strictly prohibited.
  • Real-time biometric identification in public spaces: Limited exceptions exist for law enforcement, but these systems face heavy scrutiny.

Non-compliance with Article 5 (prohibited AI practices) can result in penalties of up to €35 million or 7% of global annual turnover, whichever is higher.
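
Because the cap is whichever figure is higher, the effective exposure scales with turnover. A quick illustrative calculation (the turnover figure is hypothetical):

```python
def article5_fine_cap(global_annual_turnover_eur: float) -> float:
    """Article 5 cap: EUR 35M or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A hypothetical firm with EUR 1B global turnover: 7% = EUR 70M > the EUR 35M floor.
print(article5_fine_cap(1_000_000_000))  # 70000000.0
```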


August 2, 2025: General-Purpose AI (GPAI) Governance

Fintechs using or developing general-purpose AI models (GPAIs) will need to adhere to new governance obligations. The European Commission has published an FAQ detailing the obligations for these AI systems here.


August 2, 2026: High-Risk & Limited-Risk AI Requirements

By this date, AI systems classified as high-risk will need to meet strict oversight and compliance measures (a simple triage sketch follows the list). These include:

  • AI components in products covered by EU safety laws (e.g., medical devices, civil aviation, vehicles).
  • AI used in eight critical areas:
    • Biometrics
    • Critical infrastructure
    • Education and vocational training
    • Employment and worker management
    • Credit scoring and insurance risk assessments
    • Law enforcement (e.g., predictive policing, crime risk assessment)
    • Migration and border control
    • Administration of justice (e.g., AI-assisted legal decisions)
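
As a first pass, many teams inventory their AI systems and tag each one against these areas. The sketch below is purely illustrative: the tag names and set-intersection matching are simplifying assumptions, not the Act’s legal test for high-risk classification.

```python
# The eight critical areas listed above, flattened into illustrative tags.
HIGH_RISK_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education",
    "employment",
    "credit scoring",
    "insurance risk assessment",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def needs_high_risk_review(use_case_tags: set[str]) -> bool:
    """Flag a system for legal review if any of its tags hits a high-risk area."""
    return bool(use_case_tags & HIGH_RISK_AREAS)

print(needs_high_risk_review({"credit scoring", "customer onboarding"}))  # True
```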

By February 2, 2026, the European Commission will publish additional guidance clarifying high-risk AI use cases.


August 2, 2027: Full Compliance for High-Risk AI in Safety-Critical Products

By this final deadline, any AI system integrated into safety-critical products (such as medical devices or transportation systems) must comply with the AI Act’s requirements and EU conformity assessments.


Enforcement & Penalties: What’s at Stake?

The AI Act’s penalties are steep, with enforcement split between the EU’s AI Office (for general-purpose AI) and national regulators (for all other AI systems). Fines depend on the severity of the non-compliance, capped in each case at whichever figure is higher (a lookup sketch follows the list):

  • Violations of prohibited AI practices (e.g., social scoring, manipulative AI): Up to €35 million or 7% of global turnover.
  • Breaches of high-risk AI requirements: Up to €15 million or 3% of global turnover.
  • Providing incorrect or misleading information to authorities: Up to €7.5 million or 1.5% of global turnover.
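
Extending the earlier cap calculation to all three tiers gives a small lookup table; the tier names are our own labels, not terms from the Act:

```python
# Illustrative mapping of the three penalty tiers described above.
PENALTY_TIERS = {
    "prohibited_practice":    (35_000_000, 0.07),   # Article 5 violations
    "high_risk_breach":       (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}

def fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Return the cap: the higher of the fixed amount or the turnover share."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# A hypothetical firm with EUR 200M turnover: 3% = EUR 6M, so the EUR 15M floor applies.
print(fine_cap("high_risk_breach", 200_000_000))  # 15000000.0
```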

This strict enforcement underscores the need for compliance teams to proactively assess their AI systems and ensure alignment with the Act.


This series isn’t over yet! Our final video will explore how fintechs can drive innovation while staying compliant—and what the AI Act means for the future of financial services. See you next week!

Get the compliance support you deserve

Speed up onboarding and automate compliance checks with spektr’s no-code tools, tailored to even your most complex cases. It’s that simple!
