Halfway through the Implementation of the AI Regulation?
A year ago, on August 1st 2024, the implementation of the AI Act began. After years of negotiations, agreement was finally reached, and the first international law regulating AI is being phased in over two years. In this article, we look back at what has happened since then and ahead to what is still to come.
- August 1st 2025
Steps taken since August 1st 2024
- In November 2024, the proposed supervisory authorities were announced. In the Netherlands, the designated supervisory authorities are the Netherlands Institute for Human Rights (het College voor de Rechten van de Mens), responsible for overseeing the protection of fundamental rights, and the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), responsible for oversight in the area of data protection law. In addition, several bodies have been designated to supervise AI used in the exercise of judicial functions, namely the Procurator General at the Supreme Court, the Chair of the Administrative Jurisdiction Division of the Council of State, and the Judicial Board of the Trade and Industry Appeals Tribunal;
- In February 2025, the provisions banning certain AI applications came into effect. This means that AI applications using predictive policing, social scoring, and biometric identification in public spaces are now prohibited, at least in principle. Unfortunately, the AI Act includes more exceptions to these rules than we would prefer;
- The Codes of Practice, guidelines that spell out how to meet the requirements of the AI Act, were supposed to be finalized by May 2025. That deadline was not fully met: the Code of Practice for generative AI applications, for example, was not published until July.
Parts of the AI Regulation that take effect today
- Generative AI applications already on the market, such as ChatGPT, must comply with the AI Regulation. The rules governing generative AI are similar to those for high-risk applications, so a wide range of transparency and accountability requirements now apply to these AI systems;
- Member States must report to the European Commission on the financial and human resources available to their supervisory authorities;
- Member States must also have established and implemented rules on fines and notified these to the Commission.
Final provisions taking effect in the coming year
- In February 2026, the European Commission will issue guidelines on the practical implementation of the rules for high-risk applications, which include applications that can affect your human rights or health;
- Member States must ensure that supervisory authorities have established a regulatory sandbox. This creates room for innovation while allowing authorities to advise developers on complying with the AI Regulation;
- By August 2026, the remaining provisions of the AI Act will come into force, and all high-risk AI systems must be fully compliant.
Timeline challenges
If all goes according to plan, the AI Regulation will be fully implemented by August 1st 2026. However, rumors have circulated that this deadline might be pushed back. In June, the European Commission's technology chief said she could not rule out postponing parts of the Regulation if the standards and guidelines were not finalized in time. We joined EDRi in urging the European Commission to stick to the original timeline and the adopted legal text. The timeline is not the only point of contention: there are also attempts to simplify various European laws, including the AI Act, even before the law has fully taken effect.
In July, however, the European Commission confirmed that there would be no delay and that the original timeline remains in place. After a tumultuous legislative process, the AI Act has yet to settle into calmer waters. We will continue monitoring developments and advocating for AI legislation and enforcement that best protect your rights.