February 11, 2026

EU AI Act 2025: What German Businesses Need to Know Now

Last updated: December 2025 | Reading time: 15 minutes

A law that learns from mistakes

When Amazon had to shut down its AI-powered recruiting tool in 2018, it was more than just a glitch. The system had systematically discriminated against women — not because it was programmed that way, but because it had learned from ten years of application data that came predominantly from men. Simply having the word "women's" on a CV — say, through membership in a women's chess club — was enough to trigger a lower score.

Amazon is not an isolated case. HireVue, a provider of AI-powered video interviews, spent years analysing candidates' facial expressions, voice, and gestures — until critics demonstrated that people with accents, neurodiverse individuals, or applicants with disabilities were being systematically disadvantaged. The company eventually pulled parts of the technology. And in May 2025, a class-action lawsuit against Workday was allowed to proceed: IT professional Derek Mobley alleges that the HR software giant's AI system rejected him across more than 100 applications — because he is over 40 and African American.

These cases illustrate why the European Union took action. The EU AI Act is the world's first comprehensive law regulating artificial intelligence. And since August 2025, it's no longer theoretical: violations can be penalised with fines of up to €35 million.

Why now?

The numbers speak for themselves: 36 percent of German companies already use AI — double the figure from the previous year. At the same time, many don't actually know what they're using. If your employees use ChatGPT for emails, your HR department pre-screens applications with software, or your marketing team posts AI-generated images — the AI Act applies to you.

The problem: AI makes decisions that directly affect people. And unlike human errors, it's often impossible to understand why an algorithm reached a particular conclusion. The Dutch childcare benefits scandal shows just how devastating this can be: from 2013 onwards, an algorithmic fraud-detection system falsely accused thousands of families, predominantly people with immigrant backgrounds, of benefits fraud. The affair ultimately forced the entire government to resign in 2021.

The EU wants to prevent scenarios like these. The AI Act establishes clear rules — not to stifle innovation, but to build trust. Because customers, partners, and employees will increasingly ask: how does your company handle AI?

The timeline: what applies when?

Contrary to what many assume, the AI Act is not a future project. The first obligations are already in effect — and others are just around the corner.

Already in force (since 1 August 2024): The law itself is effective. Companies should have started preparing by now at the latest.

Since 2 February 2025: Two crucial rules already apply. First: certain AI practices are prohibited — more on that shortly. Second: companies must ensure their employees have sufficient AI literacy. Yes, this training obligation is already in effect. And yes, it most likely applies to your business too.

Since 2 August 2025: Authorities have been designated and sanctions can be imposed. The Federal Network Agency (Bundesnetzagentur) is the central contact point in Germany. Additionally, obligations for providers of general-purpose AI models now apply — primarily affecting major players like OpenAI, Google, and Anthropic.

From 2 August 2026: The comprehensive rules for high-risk AI systems take effect. If you use AI in recruitment, credit decisions, or medical diagnostics, mark this date in red on your calendar.

From 2 August 2027: The final stage — rules will then also apply to AI systems embedded as safety components in products.

What's actually prohibited?

The AI Act takes a risk-based approach. This means: the more dangerous an AI application potentially is, the stricter the rules. And some applications are so dangerous that they're banned outright.

Prohibited: Social scoring

Imagine an insurer setting your premiums based on your social media behaviour. Or a landlord rejecting you because an algorithm analysed your online activity. In China, that's reality — in the EU, it's been banned since February 2025.

Prohibited: Emotion recognition in the workplace

A camera system that monitors your employees' moods? A tool that analyses whether students are paying attention? Prohibited (with narrow exceptions for medical or safety reasons). The EU is protecting a core aspect of human dignity: the right not to have your emotions exposed.

Prohibited: Biometric categorisation by sensitive characteristics

AI that infers religion, political beliefs, or sexual orientation from facial features is prohibited. It may sound like science fiction, but it's already technically possible — which is precisely why it's explicitly banned.

Prohibited: Subliminal manipulation

AI systems that influence people's behaviour without their awareness are prohibited. This covers, for example, advertising that deliberately exploits vulnerabilities — such as targeting elderly people or those in financial difficulty.

Prohibited: Untargeted scraping of facial images

The mass collection of facial images from the internet to build biometric databases is prohibited. Companies like Clearview AI, which did exactly that, would no longer be able to operate in the EU.

The penalties for these violations are severe: up to €35 million or 7 percent of global annual turnover — whichever is higher.
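The "whichever is higher" rule can be made concrete with a few lines of arithmetic. A minimal sketch; the turnover figure below is a made-up example, not drawn from any real case:

```python
# Sketch: the fine cap for prohibited practices is the higher of a fixed
# amount and a share of global annual turnover, as described above.
FIXED_CAP_EUR = 35_000_000   # 35 million euros
TURNOVER_SHARE = 0.07        # 7 percent

def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-practice violation."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Hypothetical company with 1 billion euros global turnover:
# 7 percent of turnover (70 million) exceeds the 35 million floor.
print(max_fine(1_000_000_000))
```

For smaller companies the fixed cap dominates; for large groups the turnover share quickly becomes the binding figure.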

The underestimated obligation: AI literacy

Many companies focus on the headline-grabbing prohibitions and overlook an obligation that already applies and affects virtually everyone: Article 4 of the AI Act requires companies to ensure their employees have "sufficient AI literacy."

What does that mean in practice? Every employee who works with AI — even if it's just using ChatGPT for emails — must understand what they're doing. They need to know where the technology's limitations lie, what risks exist, and how to use it responsibly.

The good news: formal certification isn't required. The bad news: you need to document that training has taken place. Who was trained, when, and on what content — all of this should be verifiable. Because if damage occurs due to faulty AI use, a lack of training could be deemed a breach of duty of care.
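A lightweight way to keep the "who, when, what" record described above might look like the following. This is a sketch, not a prescribed format; the field names and entries are our own illustration:

```python
from dataclasses import dataclass, asdict
from datetime import date
import csv, io

@dataclass
class TrainingRecord:
    employee: str        # who was trained
    training_date: date  # when
    topic: str           # what content
    trainer: str

# Example entries for the verifiable training log (fictional names)
records = [
    TrainingRecord("A. Schmidt", date(2025, 3, 4), "AI basics and limitations", "internal"),
    TrainingRecord("B. Yilmaz", date(2025, 3, 4), "Responsible use of ChatGPT", "internal"),
]

# Export as CSV so the evidence can be archived and produced on request
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["employee", "training_date", "topic", "trainer"])
writer.writeheader()
for r in records:
    writer.writerow(asdict(r))
print(buf.getvalue())
```

Any format works as long as it answers the three questions; the point is that the record exists before anyone asks for it.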

The Federal Network Agency reports that it receives a particularly high volume of enquiries on precisely this topic. Companies are unsettled. Yet the approach is fundamentally sensible: before you hand your employees a new tool, make sure they know how to use it.

High-risk AI: where the rules get strict

The AI Act designates certain application areas as "high-risk." These are areas where AI errors can have particularly severe consequences — because they directly determine people's life chances.

Recruitment and HR

AI that sorts CVs, evaluates candidates, or assists with promotion decisions is classified as a high-risk system. The cases of Amazon, HireVue, and Workday show why: when an algorithm discriminates, thousands of people can be affected — without realising it and with no way to fight back.

Credit scoring

When a bank uses AI to decide whether you get a loan, it must be able to demonstrate that the system operates fairly and transparently. This also applies to scoring systems that assess customer creditworthiness.

Education and vocational training

AI systems that grade exams, decide on admissions, or influence access to education are subject to strict requirements. These decisions shape young people's life prospects.

Critical infrastructure

AI in electricity and water supply, transport, or other critical sectors must meet particularly high safety standards. A failure here can endanger many people simultaneously.

Law enforcement and justice

When AI is used by police or in courts, the strictest rules apply. Nothing less than the rule of law is at stake.

From August 2026, all these high-risk systems will be subject to extensive obligations: risk management systems, technical documentation, data quality requirements, human oversight, conformity assessments, and registration in an EU database.

The grey area: AI with "limited risk"

Between the prohibited practices and high-risk systems lies a grey area: AI applications with limited risk. Here, transparency obligations are the primary concern.

Chatbots must identify themselves

When your customers interact with a chatbot, they need to know it's an AI. That sounds obvious, but it's often overlooked. That friendly "customer advisor Max" on your website? If he's an AI, that must be made clear.

Deepfakes must be labelled

AI-generated images, videos, and audio recordings must be identifiable as such — in a machine-readable format. This means a watermark or label that can also be read automatically. If you use AI-generated images in your advertising, you should check whether this requirement is met.

Synthetic text in certain contexts

When AI-generated texts serve to inform the public on matters of public interest, they must be labelled. This applies, for example, to automatically generated news articles.

The irony: many companies use AI-generated content without even knowing it. If your marketing team uses images from stock platforms, they could be AI-generated. It's worth checking.

The Federal Network Agency: your new point of contact

Since July 2025, Germany has had a central contact point for all questions about the AI Act: the AI Service Desk of the Federal Network Agency (Bundesnetzagentur). The authority you may have previously known only from telecommunications and energy is now becoming the AI regulator.

The Service Desk offers four key resources:

The Compliance Compass: An online tool that helps you determine whether and how the AI Act applies to your AI systems. You answer a few questions, and the system shows you which obligations you face. It's no substitute for legal advice, but it's a good starting point.

Information resources: FAQs, examples, and explanations covering all aspects of the AI Act. Particularly valuable: concrete scenarios that show where the line between permitted and prohibited lies.

Direct contact: You can submit questions directly to the authority. This is especially useful for companies operating in a grey area who need clarity.

Training guidance: Links to free training resources that can help fulfil the AI literacy obligation.

You can find the website at: www.bundesnetzagentur.de/ki

What you should do right now

Enough theory. What does all this mean for your business? Here are the key steps:

1. Conduct an inventory

Do you actually know which AI systems are in use across your organisation? Not just the official ones, but also those employees use on their own initiative? ChatGPT, Copilot, Midjourney, Claude — the list is long. Create an inventory. It's the only way to assess which obligations apply to you.
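An inventory can start as something this simple. The entries and risk labels below are illustrative placeholders, not legal assessments; each system's actual classification needs to be checked, for instance with the Compliance Compass:

```python
# Minimal AI-system inventory: tool, provider, our role, and a
# preliminary (non-binding) risk note per system.
inventory = [
    {"tool": "ChatGPT", "provider": "OpenAI", "our_role": "deployer",
     "use": "drafting emails", "risk": "minimal/limited"},
    {"tool": "CV screening software", "provider": "external vendor", "our_role": "deployer",
     "use": "pre-screening applicants", "risk": "high-risk (recruitment)"},
    {"tool": "Midjourney", "provider": "Midjourney Inc.", "our_role": "deployer",
     "use": "marketing images", "risk": "limited (labelling duty)"},
]

# Which systems need attention before August 2026?
high_risk = [s["tool"] for s in inventory if s["risk"].startswith("high-risk")]
print(high_risk)
```

Even a spreadsheet with these five columns puts you ahead of most companies, because it turns "we probably use some AI" into a concrete list of obligations.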

2. Clarify your role

Are you a "provider" of an AI system — meaning you develop AI yourself? Or are you a "deployer," using AI systems from other providers? The obligations differ significantly. Most mid-sized companies are deployers — but verify this.

3. Check for prohibited practices

Do you use emotion recognition anywhere? Do you analyse employee or customer behaviour in ways that could be considered manipulative? Does your HR department use tools that might discriminate? If you're unsure, seek legal advice. The penalties are too steep for guesswork.

4. Start AI training

This is the obligation you can and should fulfil fastest. Educate your employees on the fundamentals of AI — the opportunities, the risks, and responsible use. Document what you do.

5. Prepare for transparency obligations

If you use chatbots or AI-generated content, check whether they are properly labelled. This can usually be implemented with manageable effort.

6. Plan ahead for high-risk AI

If you use AI in recruitment, credit decisions, or other sensitive areas, you have until August 2026 to prepare for the stringent requirements. That sounds like plenty of time, but it isn't. Risk management systems and documentation take time.

The opportunity behind the regulation

It's easy to see the AI Act as a bureaucratic burden. But it's worth stepping back to look at the bigger picture.

The scandals involving Amazon, HireVue, and Workday show what happens when AI is deployed without oversight: people are discriminated against without knowing it. Decisions are made by algorithms nobody understands. Companies delegate responsibility to machines and wash their hands of the consequences.

The AI Act compels companies to take responsibility. That's work, yes. But it's also an opportunity. Companies that are transparent about their AI use, document their systems, and train their employees will earn trust — from customers, partners, and the public at large.

The alternative is a wild west where the next corporate scandal is just a matter of time. And then the calls for even stricter regulation will come.

Better to be prepared.

Conclusion: act now

The EU AI Act is not some distant prospect. The first obligations are already in effect. Sanctions can be imposed. And the authorities are up and running.

At the same time, there's no need to panic. Start with the basics: inventory, training, transparency. Use the resources provided by the Federal Network Agency. And if you're unsure, get professional support.

The good news: you don't have to do everything at once. The deadlines are staggered, and the authorities have been pragmatic so far. But don't wait too long. Because a €35 million fine is a steep price to pay for hesitation.


Need support implementing the AI Act? hypescale helps companies with strategic AI implementation — from initial assessment through to full compliance. We help you deploy AI responsibly and meet regulatory requirements.

Contact us for a no-obligation consultation

Martin Kogut

is a founder and product developer focused on AI-powered customer service and automation. He holds an MIT certificate in Building and Designing AI Products and builds intelligent, scalable solutions every day that help companies make their processes more efficient and raise their service quality.

Connect on LinkedIn