
AI Act

Published on September 29, 2025

The AI Act: everything you need to know about Europe’s first artificial intelligence regulation

Introduction

In 2024, the European Union adopted the AI Act (Artificial Intelligence Act), a major piece of legislation framing the development and use of artificial intelligence (AI) in Europe. It is the world’s first legislation dedicated to AI and aims to become an international reference standard.

The aim is twofold: to encourage innovation while guaranteeing security and the protection of citizens’ fundamental rights.

In this article, we explain in detail what the AI Act is, its main provisions, its impacts for businesses, and why it represents a turning point in AI regulation.


The AI Act (Artificial Intelligence Act) is the first European regulation entirely dedicated to artificial intelligence. Unlike a directive, this regulation is directly applicable in all EU member states, without the need for transposition into national law. This guarantees Europe-wide harmonization of rules, avoiding disparate legislation from one country to another.

Its main objective is to create a clear legal framework for the design, development, deployment and use of artificial intelligence systems. This framework aims to reconcile two strategic priorities:

  1. Encourage innovation and the competitiveness of European companies in the field of AI.

  2. Protect citizens and their fundamental rights against potential abuses linked to certain AI applications (discrimination, intrusive surveillance, manipulation).

A risk-based approach

The AI Act is based on a proportionate, risk-based approach. The more dangerous an artificial intelligence system is deemed to be for the safety, health or rights of citizens, the stricter the rules to be respected.

In concrete terms:

  • Systems that present an unacceptable risk (such as real-time facial recognition in public spaces or social scoring inspired by the Chinese model) are prohibited outright.

  • High-risk systems (e.g. AI medical diagnostics, algorithms used in recruitment, critical infrastructures) are permitted, but subject to very strict obligations of transparency, documentation and human supervision.

  • Limited-risk systems must comply with transparency obligations (for example, informing users that they are interacting with a chatbot).

  • Minimal-risk systems (video games, spam filters, office tools) are not subject to any particular constraints.
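As a rough sketch, the four-tier logic described above can be expressed as a simple mapping. This is purely illustrative: the names and example assignments below are ours, and the Act’s actual legal classification depends on the use cases listed in its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from strictest to lightest."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted under strict obligations"
    LIMITED = "transparency obligations only"
    MINIMAL = "no specific constraints"

# Illustrative mapping of the article's examples to tiers --
# NOT a legal classification.
EXAMPLES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "social scoring": RiskTier.UNACCEPTABLE,
    "AI-assisted medical diagnostics": RiskTier.HIGH,
    "recruitment algorithms": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.value}")
```

The point of the tiering is that obligations attach to the tier, not to the technology itself: the same underlying model can fall into different tiers depending on how it is deployed.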

A very broad field of application

The AI Act does not just concern large technology groups. It applies to any organization developing or using artificial intelligence in the European Union:

  • Major international companies offering AI-based solutions.

  • European SMEs and startups, which will have to integrate these new rules into the design of their products.

  • Public players (administrations, hospitals, local authorities) using AI systems in their services.

The ambition is to lay the foundations for trustworthy AI that is both innovative and respectful of human rights, by positioning Europe as a world leader in the ethical regulation of artificial intelligence.


The four levels of risk defined by the AI Act

The AI Act is based on a classification of artificial intelligence systems into four risk categories. This approach makes it possible to tailor legal obligations according to the potential danger posed by AI to citizens and society. The higher the risk, the stricter the regulatory requirements.

1. Unacceptable risk: prohibited uses

Some AI applications are deemed to be contrary to fundamental rights and are therefore formally prohibited in the European Union.
These practices represent a direct threat to the freedom, dignity and privacy of individuals.

Concrete examples:

  • Real-time mass biometric surveillance, notably via facial recognition in public places.

  • Social scoring systems, such as those used in China, which classify individuals according to their behavior or creditworthiness.

  • Large-scale behavioral or psychological manipulation, aimed at exploiting people’s vulnerabilities (for example, targeting children with manipulative advertising).

The ban on these uses positions Europe as a pioneer in the protection of individual liberties in the face of technological excesses.


2. High-risk: strictly controlled uses

So-called “high-risk” AI systems are authorized, but subject to rigorous controls. These uses are considered essential in certain sectors (health, transport, education, etc.), but can have serious consequences in the event of failure or bias.

Concrete examples:

  • AI used in recruitment and human resources management, where a biased algorithm can lead to discrimination.

  • Medical diagnostic systems supported by artificial intelligence, which must guarantee maximum reliability to protect patients’ health.

  • Algorithms linked to critical infrastructures such as transport, energy or security, where an error could have massive consequences.

Main obligations for companies:

  • Data transparency: the origin and quality of the data used must be documented.

  • Detailed technical documentation: each system must be accompanied by a clear description of how it works.

  • Compulsory human supervision: humans must remain in the decision-making loop to avoid drift or automatic errors.

This category is undoubtedly the most restrictive for companies, but it is essential for building trustworthy AI.


3. Limited risk: mandatory transparency

Limited-risk systems don’t have as many technical constraints as high-risk systems, but they do have to comply with rules of transparency for users.

Concrete examples:

  • Customer service chatbots: users must be informed that they are dealing with an AI, not a human.

  • Content generation tools (text, image, video) such as ChatGPT, DALL-E or MidJourney: they must clearly indicate that content has been produced by an AI.

The aim is to prevent any risk of confusion or manipulation, and to ensure that users can interact with full knowledge of the facts.


4. Minimal risk: unrestricted use

The vast majority of AI applications fall into the minimal risk category. These systems are considered harmless to citizens, and therefore require no special obligations.

Concrete examples:

  • Video games using AI to enhance the user experience.

  • E-mail spam filters.

  • Music and movie recommendation systems, used by platforms such as Spotify and Netflix.

This category illustrates the AI Act’s determination not to hold back innovation unnecessarily, by giving companies a great deal of freedom for applications considered safe.


Key obligations for companies

The AI Act introduces a strict legal framework for organizations. Here are the major points to remember:

  1. Mandatory registration: high-risk AI systems will have to be registered in a European database.

  2. Transparency and traceability: obligation to provide clear information on how AI works.

  3. Human supervision: the human element must remain in the decision-making loop to avoid any drift.

  4. Conformity assessment: companies will have to prove that their AI complies with regulations.

  5. Financial penalties: fines of up to 35 million euros or 7% of worldwide annual turnover for non-compliance.
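The fine ceiling in point 5 is the greater of two amounts, not their sum. A minimal sketch of that arithmetic (the function name is ours for illustration, and this ceiling applies only to the most serious categories of violation):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling of an AI Act fine for the most serious violations:
    the greater of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a group with EUR 2 billion in turnover, 7% exceeds the flat amount:
print(max_fine_eur(2_000_000_000))  # 140000000.0
# For a smaller firm, the EUR 35 million floor applies:
print(max_fine_eur(100_000_000))    # 35000000.0
```

Because the percentage branch dominates for large groups, the deterrent scales with company size rather than being capped at a fixed amount.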


Implementation schedule

The regulation was adopted in 2024, but its application will be gradual:

  • 2025: ban on unacceptably risky systems.

  • 2026: obligations come into force for high-risk systems.

  • 2027: full roll-out and application of the regulation.

This gives companies time to prepare and bring their AI solutions into compliance.


What are the implications for companies?

The AI Act will have a major impact on economic players.

Opportunities

  • Increased user and customer confidence.

  • Harmonization of rules in Europe, facilitating international deployment.

  • Competitive advantage for compliant companies (AI governance).

Challenges

  • High compliance costs for SMEs and startups.

  • Increased need for legal and technical expertise.

  • Risk of slowing down innovation in the face of red tape.


Generative AI and the AI Act

The AI Act pays special attention to generative AI (ChatGPT, DALL-E, Gemini, Mistral AI, etc.).

Obligations include:

  • Clear indication when content is generated by AI (texts, images, videos).

  • Documentation of training data used.

  • Measures to prevent the generation of illegal or discriminatory content.

These rules are designed to protect users against deepfakes, misinformation and algorithmic bias.


The AI Act in European strategy

The AI Act reflects Europe’s ambition to chart a third way between:

  • the United States (rather permissive, with regulation based on corporate self-regulation);

  • China (centralized, highly intrusive model).

With this regulation, the EU intends to impose an ethical and responsible framework that could become a global standard, as was the case with the GDPR (General Data Protection Regulation).


Conclusion

The AI Act marks a decisive step in the regulation of artificial intelligence. It takes a balanced approach, seeking to reconcile technological innovation with the protection of fundamental rights.

For companies, it represents both a challenge (compliance, costs, organization) and an opportunity (customer confidence, brand image, European expansion).

The future of AI in Europe will largely depend on the ability of economic players to adapt quickly to this new regulatory framework.
