EU AI Act — Regulatory Compliance Tool

EU AI Act Compliance Navigator for Banks

Select a use case category and specific AI application to view EU AI Act risk classification, board obligations, functional responsibilities and governance requirements. Covers 20+ banking AI use cases including credit scoring, AML, trading algorithms and HR screening.

The EU AI Act (Regulation (EU) 2024/1689), in force since August 2024, classifies AI systems into four risk tiers: unacceptable (prohibited), high, limited and minimal risk. For banks, the critical deadline is August 2026, when obligations for high-risk AI systems, including credit scoring, AML/KYC, fraud detection, trading algorithms and HR screening, become fully enforceable. The ban on prohibited practices such as social scoring has applied since February 2025.
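
To make the phased timeline concrete, here is a minimal Python sketch (names and structure are illustrative, not from any official tooling) encoding the four risk tiers and the applicability dates mentioned above:

    from datetime import date
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited (Article 5)"
        HIGH = "high-risk (Annex III)"
        LIMITED = "limited risk (transparency duties)"
        MINIMAL = "minimal risk (no extra obligations)"

    # Phased applicability milestones under Regulation (EU) 2024/1689.
    MILESTONES = {
        date(2025, 2, 2): "prohibited practices banned (e.g. social scoring)",
        date(2025, 8, 2): "GPAI (general-purpose AI) rules apply",
        date(2026, 8, 2): "high-risk obligations fully enforceable",
    }

    def next_milestone(today: date) -> str:
        """Return the next applicability milestone on or after a given date."""
        upcoming = sorted(d for d in MILESTONES if d >= today)
        return MILESTONES[upcoming[0]] if upcoming else "all milestones passed"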

EU AI Act Compliance: Key Steps for Banks

Achieving EU AI Act compliance requires banks to inventory all AI systems, classify each system by risk tier (Annex III determines which count as high-risk), and establish governance structures before the August 2026 deadline. High-risk systems, including credit scoring, AML, fraud detection and HR screening, need conformity assessments, technical documentation per Annex IV, registration in the EU AI database, and ongoing post-market monitoring. The EU AI Act supplements, rather than replaces, existing banking regulations such as EBA guidelines, MiFID II and CRR.
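
As an illustration of the inventory-and-classify step, here is a minimal sketch (field and obligation names are hypothetical; a real inventory would carry far more metadata) of an AI system record with a gap check against the high-risk obligations listed above:

    from dataclasses import dataclass, field

    # High-risk obligations named in the paragraph above.
    HIGH_RISK_OBLIGATIONS = [
        "conformity_assessment",
        "annex_iv_technical_documentation",
        "eu_database_registration",
        "post_market_monitoring",
    ]

    @dataclass
    class AISystemRecord:
        name: str                 # e.g. "Retail credit scoring model"
        risk_tier: str            # "high", "limited" or "minimal"
        completed: list = field(default_factory=list)

        def gaps(self) -> list:
            """Open obligations for a high-risk system; empty otherwise."""
            if self.risk_tier != "high":
                return []
            return [o for o in HIGH_RISK_OBLIGATIONS if o not in self.completed]

    scoring = AISystemRecord("Retail credit scoring", "high",
                             completed=["conformity_assessment"])
    print(scoring.gaps())  # obligations still to close before August 2026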

Frequently Asked Questions — EU AI Act & Banks

What is the EU AI Act and does it apply to banks?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation, in force since August 2024 with phased application until 2027. It applies to providers and deployers of AI systems in the EU, including banks. Core banking AI systems, such as credit scoring, AML monitoring, KYC screening and HR recruitment tools, are classified as high-risk under Annex III, triggering extensive compliance obligations.
Which banking AI use cases are classified as high-risk?
High-risk banking AI use cases under Annex III include: credit scoring (retail and corporate), mortgage approval, AML and KYC screening, sanctions list checking, fraud detection, trading algorithms as critical infrastructure, HR recruitment screening, employee performance monitoring, and regulatory risk models (IRB/PD, VaR). Customer segmentation, churn prediction and internal operational risk monitoring generally fall into lower risk categories.
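
A compliance tool like the one described on this page might encode that classification as a simple lookup. The sketch below mirrors the list in this answer (indicative only, not a legal determination):

    # Indicative risk tiers for banking AI use cases, per the answer above.
    USE_CASE_RISK = {
        "credit_scoring_retail": "high",
        "credit_scoring_corporate": "high",
        "mortgage_approval": "high",
        "aml_kyc_screening": "high",
        "sanctions_list_checking": "high",
        "fraud_detection": "high",
        "trading_algorithms": "high",  # treated as critical infrastructure
        "hr_recruitment_screening": "high",
        "employee_performance_monitoring": "high",
        "regulatory_risk_models_irb_var": "high",
        "customer_segmentation": "lower",
        "churn_prediction": "lower",
        "internal_operational_risk_monitoring": "lower",
    }

    def classify(use_case: str) -> str:
        """Look up the indicative risk tier for a banking AI use case."""
        return USE_CASE_RISK.get(use_case, "unknown: assess against Annex III")
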
When is the EU AI Act deadline for high-risk AI systems in banks?
The key deadline for high-risk AI systems in banking is August 2026. Banks should start now by establishing an AI inventory, conducting gap analyses, setting up governance committees and reviewing vendor contracts. The prohibition on social scoring has applied since February 2025, and GPAI rules apply from August 2025.
What must a bank board do to comply with the EU AI Act?
Bank boards must formally approve the AI governance framework and policies, designate an AI Risk Officer, oversee conformity assessments for high-risk AI systems, and receive regular reporting on model performance and bias metrics. They bear ultimate liability for CE marking of high-risk systems. For AML systems, the board bears personal compliance responsibility.
What is prohibited for banks under the EU AI Act?
Article 5(1)(c) prohibits social scoring: AI that evaluates or classifies individuals based on their social behaviour and treats them detrimentally as a result. This ban has applied since February 2025, with fines of up to €35 million or 7% of global annual turnover, whichever is higher. Banks must audit existing systems to ensure no behavioural profiling crosses into prohibited social scoring.
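
Because the penalty cap is the higher of the two figures, the turnover-based limit dominates for any large bank, as a quick calculation shows:

    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        """Upper bound for prohibited-practice violations: the higher of
        a fixed EUR 35 million and 7% of global annual turnover."""
        return max(35_000_000, 0.07 * global_annual_turnover_eur)

    # A bank with EUR 20 bn turnover faces up to EUR 1.4 bn, not EUR 35 m.
    print(max_fine_eur(20_000_000_000))  # 1400000000.0
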
Does the EU AI Act replace EBA guidelines or MiFID II?
No. The EU AI Act supplements existing banking regulations. Credit scoring must comply with both the AI Act and the EBA Guidelines on loan origination and monitoring. Trading algorithms face both AI Act Annex III and MiFID II requirements. AML systems must meet the AI Act's obligations as well as those of the EU Anti-Money Laundering Directive. IRB models are governed by both the AI Act and the CRR, including ECB TRIM expectations. Banks must map these overlapping requirements carefully.
What governance structures must a bank establish?
Banks must establish: an AI Governance Committee; a designated AI Risk Officer; a comprehensive AI inventory with risk classifications; a formal AI Use Case Approval process; an independent model validation unit; and an AI incident response plan. The three-lines-of-defence framework must be extended to cover AI risks. Vendor management must ensure third-party providers also meet AI Act obligations.
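
A first-pass gap analysis against these requirements can be scripted; here is a minimal sketch (the structure names are hypothetical labels for the items above):

    # Governance structures the answer above requires.
    REQUIRED_STRUCTURES = [
        "ai_governance_committee",
        "designated_ai_risk_officer",
        "ai_inventory_with_risk_classifications",
        "ai_use_case_approval_process",
        "independent_model_validation_unit",
        "ai_incident_response_plan",
    ]

    def governance_gaps(established: set) -> list:
        """Return the required governance structures not yet in place."""
        return [s for s in REQUIRED_STRUCTURES if s not in established]

    print(governance_gaps({"ai_governance_committee",
                           "designated_ai_risk_officer"}))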