The EU AI Act is no longer a distant regulation that may one day become relevant. It’s being enforced now, and it affects far more organizations than most managers realize. If your company uses AI-based recruitment tools, customer service chatbots or other automated decision-making systems in Europe, you’re already covered by one of the world’s most comprehensive AI regulations.

The question is not whether the AI Act applies to you. The question is whether your organization is ready.

The consequences are bigger than many think

Let’s talk numbers. Organizations that fail to comply with the EU AI Act risk fines of up to €35 million or 7% of global annual turnover, whichever is higher. These are not theoretical scenarios: the Act’s penalty provisions took effect on August 2, 2025, and the prohibited AI practices have been enforceable since February 2, 2025.

Other types of violations can trigger fines of up to €15 million or 3% of global turnover, while even providing incorrect information to supervisory authorities can result in fines of up to €7.5 million or 1% of turnover. For organizations of similar size, these ceilings can exceed those under the GDPR.

What keeps many compliance and risk managers awake at night is this: Many organizations still don’t know which AI systems they are actually using, let alone which risk category those systems belong to.

It’s not just about AI developers

One of the biggest misconceptions about the EU AI Act is that it only applies to companies developing AI systems. The reality is much broader.

The regulation distinguishes between:

  • Providers, who develop AI systems or significantly modify them
  • Deployers (users), who use AI systems under their own responsibility

If you use third-party AI tools – perhaps a recruitment system with CV screening, a customer service chatbot or an automated credit scoring system – then you are a deployer with specific legal obligations.

The geographical reach is equally comprehensive. The regulation also applies to organizations outside the EU if the output of an AI system is used within the EU.

A Singapore company using AI to screen job applications for its Berlin office? Covered. An American software company whose European customers use its AI-powered analytics? Also covered.

This means HR managers using recruitment AI, customer service managers using chatbots, and finance departments using automated credit scoring share responsibility for compliance. It’s not just an IT issue.

Understand the risk-based framework

The EU AI Act takes a tiered, risk-based approach: AI systems are categorized according to their potential for harm, and your obligations increase with the system’s risk level.

Prohibited AI (unacceptable risk)

Certain AI practices have been outright prohibited since February 2, 2025. These include, among others:

  • Social scoring systems that rate or classify people based on behavior or personal characteristics
  • Biometric categorization systems that attempt to infer sensitive information such as race, political views or sexual orientation
  • Emotion recognition in work or educational contexts
  • AI that exploits vulnerabilities of specific groups

If your organization uses any of these systems, compliance means that use must cease immediately.

High-risk AI

This is where most organizations’ compliance work comes in. High-risk AI includes systems used in areas such as employment, education, credit scoring and law enforcement.

If you’re using AI to:

  • Screen CVs or rank candidates
  • Make decisions about promotion or dismissal
  • Assess credit applications or insurance risk
  • Assign training opportunities or evaluate student performance

you are working with high-risk AI. This requires quality management systems, human oversight, extensive documentation and regular audits. Full compliance for high-risk systems must be in place by August 2, 2026.

Limited risk

Customer service chatbots, AI-generated content and deepfakes typically fall into this category. The main obligation here is transparency: users should be clearly informed when they are interacting with an AI system rather than a human, and AI-generated content should be marked as such.

Minimal risk

Spam filters, AI-enabled computer games and simple inventory management systems will often fall into the minimal risk category, where the AI Act imposes no specific requirements beyond the general product safety rules.
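
If it helps to make the triage concrete, the examples above can be captured in a simple lookup – a minimal sketch in Python, for illustration only, since actual classification depends on the Act’s annexes and a legal review of your specific deployment:

```python
# Indicative mapping of the example use cases above to their risk tier.
# Illustrative only: real classification requires legal analysis of the
# AI Act's annexes and your specific deployment context.
RISK_TIERS = {
    "social scoring of individuals": "prohibited",
    "emotion recognition in the workplace": "prohibited",
    "CV screening and candidate ranking": "high",
    "credit scoring of applicants": "high",
    "customer service chatbot": "limited",        # transparency duties apply
    "AI-generated content and deepfakes": "limited",
    "spam filtering": "minimal",
}

def indicative_tier(use_case: str) -> str:
    """Return the indicative risk tier for a known example use case."""
    return RISK_TIERS.get(use_case, "unknown - assess individually")

print(indicative_tier("CV screening and candidate ranking"))  # -> high
```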

The AI literacy requirement: Your first compliance deadline

Something many organizations have overlooked: One of the earliest enforceable provisions of the EU AI Act is not about technology or documentation. It’s about people.

Article 4 requires all providers and deployers of AI systems to ensure that their staff and other persons involved in the operation and use of AI systems have a “sufficient level of AI literacy”. This means they must have the knowledge and skills to make informed decisions about the use of AI and to understand both its opportunities and risks.

This obligation came into force on February 2, 2025 and is therefore one of the first provisions already being enforced.

What does “sufficient AI literacy” mean in practice? It’s not enough to ask employees to “read the manual” or send a general email about AI policy. The requirement is both role-based and context-dependent.

An HR manager using recruitment AI needs different knowledge than a customer service manager working with chatbots or a developer building AI systems.

The organization must be able to document that:

  • Employees know which AI systems the organization uses
  • Staff can identify the risks associated with specific systems
  • Employees know their obligations under the AI Act in relation to their roles
  • Training is ongoing as AI systems change or new systems are deployed

This is not a one-off exercise. AI literacy must be maintained and updated as your AI landscape evolves.
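
What might that documentation look like in practice? Here is a minimal sketch in Python, assuming a simple CSV log; the column names are our own illustration, not any official format:

```python
import csv
from datetime import date

# Minimal, illustrative training log; column names are assumptions,
# not an official format. The resulting file can serve as audit evidence.
rows = [
    {
        "employee": "J. Doe",                          # hypothetical employee
        "role": "HR manager",
        "ai_systems": "CV screening module",
        "training": "High-risk recruitment AI: bias and human oversight",
        "completed_on": date(2025, 3, 1).isoformat(),
        "refresh_due": date(2026, 3, 1).isoformat(),   # literacy must be maintained
    },
]

with open("ai_literacy_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
```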

How to build an AI-ready organization

Compliance with the EU AI Act is very much an organizational challenge, not just a legal one. Here’s how forward-thinking companies are approaching it.

1. Create an overview of your AI systems

You can’t control what you don’t know about. Start with a systematic mapping of all AI systems you use, including:

  • Third-party software with AI components
  • Internal AI tools or models
  • Shadow AI (unauthorized AI tools that employees may already be using)
  • AI built into major platforms (CRM systems, HR systems, etc.)

For each system, you should document the purpose, risk category, data sources, role in decision-making processes and who in the organization is responsible for its use.
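
As a rough sketch of what one register entry could look like – the field names and the vendor are hypothetical, not an official template – you could capture each system in a structured record:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One entry in an internal AI system register (illustrative fields)."""
    name: str
    vendor: str                # third-party supplier, or "internal"
    purpose: str               # what the system is used for
    risk_tier: RiskTier        # your own assessment under the AI Act
    data_sources: list[str]
    decision_role: str         # e.g. "decision support with human review"
    owner: str                 # accountable person or team

register = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleHR GmbH",          # hypothetical vendor
        purpose="Rank incoming job applications",
        risk_tier=RiskTier.HIGH,          # employment use cases are high risk
        data_sources=["applicant CVs", "job requirements"],
        decision_role="decision support with human review",
        owner="HR operations",
    ),
]
```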

2. Map existing knowledge gaps about AI

Before you roll out training, it’s important to know your starting point. Map employees’ current knowledge levels across departments. Do they understand which AI systems they are using? Can they identify potential risks? Do they know when human oversight is required?

This mapping helps you design role-based training that addresses actual knowledge gaps instead of general AI courses with no direct relevance.

3. Introduce role-based AI awareness training

Generic courses are not enough.

  • HR professionals need to understand the requirements for high-risk AI in recruitment, including bias mitigation and human oversight.
  • Customer service managers need to know the transparency requirements for chatbots and when to escalate to a human.
  • Procurement needs to know what questions to ask AI vendors.

Consider building learning paths for different target groups:

  • Managers and board of directors: Strategic AI governance, responsibility and oversight
  • HR and recruitment: High-risk systems, bias reduction, human monitoring requirements
  • Customer-facing and support staff: Transparency, clear information for customers, escalation paths
  • IT and data teams: Technical requirements, documentation standards, security obligations
  • All employees: Basic AI literacy, acceptable use, how to report concerns

Organizations that want a structured approach to AI compliance training can use programs like AI Awareness Training, designed to address the AI Act’s requirements for AI literacy across roles in the organization.

4. Create clear AI policies

Create written policies that describe:

  • What is considered acceptable AI usage
  • Approval processes for new AI systems
  • Risk assessments of AI
  • Escalation and reporting procedures

Policies should be living documents that are updated as your use of AI or regulation changes.

5. Establish cross-functional AI governance

AI governance cannot live in isolation in either legal or IT. Set up a multidisciplinary working group with representatives from legal, compliance, IT, HR, operations and relevant business areas. On an ongoing basis, the group should:

  • Evaluate new AI implementations
  • Check compliance with the AI Act
  • Identify and close AI knowledge gaps
  • Respond to regulatory updates

6. Consider appointing an AI officer

Many organizations are now creating a dedicated AI Officer role to coordinate AI projects, ensure compliance and drive AI literacy initiatives. This person acts as a liaison between technical teams, management and compliance to ensure the organization’s approach to AI is cohesive.

7. Document everything

When regulators come knocking, documentation is your strongest defense. Make sure you keep detailed records of:

  • Training activities
  • Assessments of AI systems
  • Policies and decisions around AI
  • Compliance measures implemented

For high-risk systems, documentation is directly required. For all systems, solid documentation shows that the organization is acting in good faith and is serious about compliance.
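
One lightweight way to keep such records is an append-only log that gives you a chronological trail of assessments and decisions. The sketch below is illustrative; the event fields are our assumptions, not a regulatory schema:

```python
import json
from datetime import datetime, timezone

def log_compliance_event(path: str, event: dict) -> None:
    """Append one compliance event to a JSON-lines audit log.

    An append-only file preserves a chronological trail; the field
    names are illustrative, not a regulatory format.
    """
    event = {"timestamp": datetime.now(timezone.utc).isoformat(), **event}
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

log_compliance_event("ai_compliance_log.jsonl", {
    "type": "risk_assessment",
    "system": "CV screening module",         # hypothetical system name
    "outcome": "classified high risk; human review step added",
    "owner": "compliance team",
})
```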

From burden to competitive advantage

Here lies an often-missed opportunity: AI compliance is not just about avoiding fines. Companies that actively build AI competencies and governance are positioning themselves for sustainable, responsible AI use.

  • When your entire organization understands both the opportunities and limitations of AI, you make better decisions about which systems to implement.
  • When employees can identify AI-related risks, you catch problems before they become crises.
  • When governance and decision pathways are clear, you can move faster on new AI opportunities because approval and assessment processes are already in place.

Organizations that wait until August 2026 to take high-risk AI seriously will be left behind. Those that invest in AI literacy and governance now will find that compliance becomes easier – and that they get far more out of their AI investments.

How to get started: a simple action plan

If it all feels overwhelming, start small – but start now:

This month:

  • Make an initial inventory of AI systems
  • Assess which systems may be high-risk under the AI Act
  • Survey a sample of employees to uncover their current AI knowledge

The EU AI Act is a fundamental shift in the way organizations must work with artificial intelligence. But unlike many other regulations, its key requirement – building AI capabilities across the organization – is something that creates real value.

Employees who understand AI make better decisions, spot risks faster and use the tools more effectively.

The organizations that will thrive under the AI Act are not those that see the regulation as a necessary evil, but those that view AI literacy as a strategic capability – and use the requirements as a foundation for responsible and effective AI use.

Are you ready to build AI capabilities across your organization? For example, you can explore how AI Awareness Training can help your teams understand the AI compliance requirements that apply to their roles – from HR managers using recruitment AI to senior leaders responsible for AI governance.

The EU AI Act is here. The question is: Is your organization ready?