
The AI Act: countdown to 2 August 2026 (and why planning must start in 2026)

2026-01-22
10 min read
Davoox Team
ai-act, eu, compliance, governance, risk-management

A clear timeline and a 2026 plan: inventory your AI use cases, classify risk, get vendors/GPAI under control, set up governance and change control, and launch AI literacy.

If you deploy AI in HR, customer service, fraud detection, health, finance, or critical infrastructure, 2026 is not “later.” It’s deadline territory.

The EU AI Act is best treated like other operational compliance programs: you need an inventory, governance, vendor controls, and evidence—before you need the perfect legal interpretation of every edge case.

Note: This article is implementation guidance, not legal advice. Confirm classification and obligations with legal/compliance specialists.

The timeline your teams need on one page

The easiest way to get aligned internally is to be explicit about “what applies when.”

As commonly summarized in European Commission materials and industry briefings:

  • Entered into force: 1 August 2024
  • Already applicable since 2 February 2025: prohibited AI practices + AI literacy obligations
  • Applicable since 2 August 2025: governance rules and obligations for general-purpose AI (GPAI) models
  • Fully applicable: 2 August 2026
  • Extended transition (to 2 August 2027): some high-risk AI systems embedded in regulated products

Even if you disagree on details, the operational message is consistent: 2026 requires readiness work that starts now.

The “latest news” angle: the Digital Omnibus discussion

There has been discussion about a “Digital Omnibus” initiative intended to streamline parts of the EU digital compliance stack. Some commentary frames this as simplification; other coverage speculates about delays or softening of obligations.

Treat this as a planning risk, not a strategy input:

  • Don’t pause your program based on rumors.
  • Build the fundamentals that you will need regardless (inventory, governance, vendor controls, evidence).

Practical guidance for 2026: plan for the law you have

1) Inventory AI use cases (this is always step zero)

Create a central register covering:

  • Internal tools (e.g., HR screening, productivity copilots)
  • Customer-facing AI (chat, recommendation, decisioning)
  • Embedded AI in products
  • Vendor-provided AI features (SaaS “AI assistants”)

Capture for each use case:

  • Purpose and business owner
  • Data types used (including personal/sensitive data)
  • Model/provider (including GPAI dependencies)
  • Where decisions are made and how outputs are used
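A lightweight way to keep that register consistent across teams is a shared schema. A minimal sketch in Python (the field names are illustrative assumptions, not terms prescribed by the Act):

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in a central AI use-case register (illustrative schema)."""
    name: str
    purpose: str
    business_owner: str
    data_types: list[str]      # e.g. ["personal", "sensitive"]
    model_provider: str        # including any GPAI dependency
    decision_context: str      # where decisions are made, how outputs are used
    customer_facing: bool = False

# Example entry: a vendor-powered HR screening tool.
register: list[AIUseCase] = [
    AIUseCase(
        name="cv-screening",
        purpose="Pre-filter job applications",
        business_owner="HR",
        data_types=["personal"],
        model_provider="vendor LLM (GPAI dependency)",
        decision_context="Recommends a shortlist; recruiter makes the final call",
    ),
]
```

Even a spreadsheet with these columns works; the point is that every use case answers the same questions.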

2) Classify risk (don’t boil the ocean)

Even a lightweight classification is better than none:

  • Potentially prohibited practices
  • High-risk vs not high-risk
  • GPAI dependency (and whether you are a provider or deployer)

If classification is complex, start with “top risk first”:

  • HR and employment decisions
  • Credit/fraud/financial decisioning
  • Health and safety
  • Critical infrastructure operations
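That "top risk first" ordering can be made mechanical so the backlog sorts itself. A sketch of a crude triage rule (the domain-to-priority mapping is an assumption for illustration, not a legal classification):

```python
# Illustrative triage only: real classification needs legal review.
TOP_RISK_DOMAINS = {
    "hr", "employment", "credit", "fraud", "finance",
    "health", "safety", "critical-infrastructure",
}

def triage(domains: set[str]) -> str:
    """Return a rough review priority from a use case's domain tags."""
    if domains & TOP_RISK_DOMAINS:
        return "review-first"   # candidate high-risk: classify formally, soon
    return "review-later"       # still inventoried, lower urgency
```

For example, `triage({"hr"})` returns `"review-first"` while `triage({"marketing"})` returns `"review-later"`; the output is a queue order, not a verdict.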

3) Vendor and procurement controls (where many programs break)

If you buy AI-enabled software, you still own outcomes.

Add procurement controls:

  • Disclosure of AI functionality and model dependencies
  • Documentation and auditability commitments
  • Security and incident notification requirements
  • Change notification for model updates
  • Support for human oversight and logging

4) Governance and change control (make it operational)

Define who owns:

  • AI risk acceptance
  • Model/tool approval
  • Major changes (retraining, vendor model upgrades)
  • Incident escalation for harmful outputs

Practical artifacts that help:

  • AI policy (short, operational)
  • AI change process (what triggers review)
  • “Human oversight” guidelines for teams using AI outputs
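The change process is easiest to enforce when "what triggers review" is an explicit list rather than judgment in the moment. A sketch (the trigger names are assumptions for illustration):

```python
# Illustrative change-control gate: events that should trigger a review.
REVIEW_TRIGGERS = {
    "model_retrained",
    "vendor_model_upgraded",
    "new_data_source",
    "purpose_changed",
}

def needs_review(change_events: set[str]) -> bool:
    """True if any event in a proposed change requires governance review."""
    return bool(change_events & REVIEW_TRIGGERS)
```

A vendor silently upgrading its model (`needs_review({"vendor_model_upgraded"})`) should route through the same gate as an internal retrain.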

5) AI literacy (already a live obligation in many summaries)

Treat AI literacy like security awareness—but specific:

  • What AI can and cannot do
  • How to verify outputs
  • How to handle sensitive data
  • When to escalate concerns

Start small: 30–60 minutes of training for the highest-risk teams first.

6) Evidence as you go

Audits don’t fail because you did nothing—they fail because you can’t prove what you did.

Build evidence into operations:

  • Register of AI use cases (versioned)
  • Training records
  • Vendor assessments
  • Logs/monitoring for AI-enabled features (where appropriate)
  • Incident records and follow-ups
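Versioning the register can be as simple as timestamped, append-only snapshots. A minimal sketch (the file layout and directory name are assumptions):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def snapshot_register(register: list[dict], out_dir: str = "evidence") -> Path:
    """Write a timestamped snapshot of the AI use-case register to disk."""
    Path(out_dir).mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(out_dir) / f"ai-register-{stamp}.json"
    path.write_text(json.dumps(register, indent=2))
    return path

# Example: snapshot a one-entry register before a quarterly review.
p = snapshot_register([{"name": "cv-screening", "owner": "HR"}])
```

Each snapshot is a dated artifact you can hand to an auditor; the same pattern works for training records and vendor assessments.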

Key dates (summary)

  • 2 February 2025: prohibited practices + AI literacy obligations apply (commonly summarized milestone)
  • 2 August 2025: GPAI + governance obligations apply (commonly summarized milestone)
  • 2 August 2026: AI Act fully applicable (main deadline for most organizations)

Final thought

Even if future simplifications arrive, the organizations that win in 2026 will be the ones that treated AI like a managed operational risk: clear inventory, clear ownership, disciplined vendor controls, and evidence by design.

