Brussels, Belgium – April 17, 2025 – The European Commission today unveiled a significant legislative proposal aimed at enhancing transparency and accountability for the most powerful artificial intelligence systems and online platforms operating within the bloc. The “EU Artificial Intelligence Transparency and Governance Act” (EU-AITG Act), formally introduced by Commissioner for Digital Affairs Elara Vance, marks a pivotal step in the EU’s ongoing efforts to regulate digital technologies.
The comprehensive package, officially designated as Proposal 2025/0017 (COD), imposes stringent new obligations specifically targeting “Very Large Generative AI Models” (VLGAMs). The proposed law defines these by training compute: the regulations apply to models trained using more than 10^24 floating-point operations (FLOPs). The legislation also expands existing requirements for Very Large Online Platforms (VLOPs), integrating them further into the new governance framework for AI.
Key Provisions Mandating Disclosure and Audits
At the heart of the EU-AITG Act are provisions demanding unprecedented levels of insight into the inner workings of advanced AI. A key requirement is the mandatory disclosure of training data sources used for VLGAMs. This obligation is intended to provide researchers, regulators, and the public with a clearer understanding of the datasets that shape the capabilities and potential biases of these powerful models. Transparency in training data is viewed by proponents as crucial for identifying and mitigating risks associated with copyrighted material, biased information, or other problematic content.
Furthermore, the Act introduces a requirement for rigorous annual algorithmic audits. Companies operating VLGAMs will be mandated to subject their systems to independent, third-party evaluations every year. These audits are designed to scrutinize the performance, safety, fairness, and compliance of the AI models, ensuring they adhere to the standards set out in the legislation. The results are expected to inform regulatory oversight and bolster public trust.
Enforcement and Penalties
To ensure compliance with the demanding new requirements, the EU-AITG Act proposes significant penalties for non-adherence. Companies found to be in breach of the regulations could face fines of up to 7% of their global annual turnover. The scale of the penalty underscores how seriously the Commission views compliance and reflects a growing global trend of imposing substantial financial consequences on large technology companies for regulatory violations.
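For illustration only: at the 7% ceiling, a company with a global annual turnover of €50 billion could face a maximum fine of €3.5 billion.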
The Commission emphasized that the penalty structure is designed to be a credible deterrent and to ensure that compliance is a financial priority for multinational corporations operating these large-scale AI systems and platforms.
Timeline for Implementation and Debate
Recognizing the complexity of the new mandates, the proposal includes a phased approach to implementation. Companies would have 18 months from the Act’s formal enactment to comply fully with its obligations. This transition period is intended to give firms adequate time to adapt their technical infrastructure, data management practices, and audit processes to the legislative standards.
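By way of illustration only, since no enactment date has been set: an Act formally enacted in January 2027 would require full compliance by July 2028.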
Following its formal introduction by the Commission, the EU-AITG Act will now enter the European Union’s legislative process. Initial debates within the European Parliament are anticipated to commence as early as May 2025. The proposal will then undergo scrutiny and potential amendments by both the Parliament and the Council of the European Union before its final approval and enactment into law. The legislative journey is expected to involve extensive discussions among member states, industry stakeholders, and civil society groups.
Context and Broader Implications
The introduction of the EU-AITG Act comes amidst rapid advancements in generative AI technology and increasing global debate over its potential societal impact, risks, and governance. The EU has positioned itself as a leading regulator in the digital space, having previously enacted landmark legislation such as the Digital Markets Act (DMA) and the Digital Services Act (DSA), which likewise place specific obligations on VLOPs.
This latest proposal specifically addresses the unique challenges posed by generative AI, particularly the largest and most powerful models whose influence is rapidly expanding across various sectors. By focusing on transparency in training data and mandatory algorithmic audits, the EU aims to establish a framework that fosters innovation while mitigating potential harms, such as the spread of disinformation, algorithmic bias, or the creation of synthetic content without proper disclosure.
The Act’s focus on VLGAMs based on computational thresholds highlights the regulatory challenge of defining and targeting the most impactful AI systems in a rapidly evolving technological landscape. The 10^24 FLOPs threshold is intended to capture only the very largest models currently in existence or foreseeable in the near future, focusing regulatory burden on entities with the greatest potential systemic impact.
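To put the threshold in perspective, here is a minimal sketch, assuming the widely used rule of thumb from the scaling-law literature that training a dense transformer takes roughly 6 × N × D floating-point operations for N parameters and D training tokens. The heuristic and the model sizes below are illustrative assumptions, not part of the proposal text:

```python
# Illustrative only: the 6*N*D estimate is a common scaling-law heuristic,
# not a formula from the EU-AITG proposal; model sizes are hypothetical.

VLGAM_THRESHOLD_FLOPS = 1e24  # compute threshold in the proposed Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer:
    ~6 floating-point operations per parameter per training token."""
    return 6.0 * n_params * n_tokens

def is_vlgam(n_params: float, n_tokens: float) -> bool:
    """Compare the estimate against the proposed 10^24 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) > VLGAM_THRESHOLD_FLOPS

# A 70B-parameter model on 2T tokens lands near 8.4e23 FLOPs (below the line);
# a 200B-parameter model on the same data reaches ~2.4e24 FLOPs (above it).
for params, tokens in [(70e9, 2e12), (200e9, 2e12)]:
    flops = estimated_training_flops(params, tokens)
    print(f"{params:.0e} params, {tokens:.0e} tokens -> "
          f"{flops:.1e} FLOPs, VLGAM: {is_vlgam(params, tokens)}")
```

On this rough accounting, the 10^24 FLOP line would indeed catch only frontier-scale training runs, consistent with the Commission’s stated intent of focusing the regulatory burden on the largest systems.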
The proposed legislation signals the EU’s determination to maintain a proactive stance in shaping the development and deployment of artificial intelligence, aiming to set global standards for responsible AI governance. Its passage and implementation are expected to have significant implications for major technology companies worldwide, particularly those developing and deploying frontier AI models and large online services used by European citizens.