EU Proposes New Law to Tackle Advanced AI Risks, Mandate Deepfake Labels and Foundation Model Transparency

Brussels, Belgium – In a significant legislative move targeting the evolving landscape of artificial intelligence, the European Commission officially proposed the “Digital Integrity and Transparency Act” (DITA) on March 18, 2025. The new framework is designed as a follow-up to the bloc’s landmark AI Act, aiming to address advanced AI capabilities, particularly large-scale foundation models and synthetic media such as deepfakes, which were not fully covered by the initial legislation.

The proposal, championed by Commissioner Thierry Breton and spearheaded by the Directorate-General for Communications Networks, Content and Technology (DG CONNECT), underscores the European Union’s proactive stance in regulating cutting-edge AI technologies. The Commission views DITA as essential for maintaining public trust, ensuring digital security, and fostering responsible innovation in the digital sphere.

Strengthening Accountability for Foundation Models

A central pillar of the proposed Digital Integrity and Transparency Act is its focus on large-scale foundation models. These powerful AI systems, trained on vast datasets and capable of generating diverse outputs, are seen as fundamental building blocks for many AI applications. Recognizing their increasing prevalence and potential impact, DITA introduces mandatory requirements for their developers.

Under the proposed act, developers of these foundation models would be required to provide detailed disclosure and documentation of their training data sources and methodologies. This requirement aims to shed light on how these models are built, their potential biases, and the origin of the data they learn from. Proponents argue that greater transparency in this area is vital for understanding the capabilities and limitations of foundation models, enabling better risk assessment, and fostering a more responsible development ecosystem.

The documentation is expected to cover aspects such as the type and scale of data used, methods for data collection and cleaning, and details about the training process itself. This level of scrutiny is intended to empower regulators, researchers, and the public to better understand the complex nature of these foundational AI systems.
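The proposal text does not prescribe a specific documentation format, but to make the requirement more concrete, the sketch below shows one way such a record might be structured as machine-readable data. Everything in it, including the TrainingDocumentation and DataSource classes, the field names, and the example values, is an illustrative assumption rather than anything drawn from DITA itself.

```python
# Hypothetical sketch: one way a developer might structure the kind of
# training-data documentation DITA describes. Field names and format are
# illustrative assumptions, not taken from the proposal text.
import json
from dataclasses import dataclass, field, asdict
from typing import List


@dataclass
class DataSource:
    name: str                  # e.g. a public corpus or a licensed dataset
    modality: str              # "text", "image", "audio", ...
    approximate_size: str      # scale of the data, e.g. "500B tokens"
    collection_method: str     # how the data was obtained
    cleaning_steps: List[str] = field(default_factory=list)


@dataclass
class TrainingDocumentation:
    model_name: str
    developer: str
    training_period: str
    data_sources: List[DataSource]
    training_process_notes: str  # free-text summary of the training methodology

    def to_json(self) -> str:
        """Serialise the record so it could be filed with a regulator or published."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    doc = TrainingDocumentation(
        model_name="example-foundation-model-v1",
        developer="Example AI Labs",
        training_period="2024-06 to 2024-11",
        data_sources=[
            DataSource(
                name="Public web crawl (hypothetical)",
                modality="text",
                approximate_size="500B tokens",
                collection_method="automated crawling of publicly available pages",
                cleaning_steps=["deduplication", "language filtering", "PII removal"],
            )
        ],
        training_process_notes="Pre-training followed by supervised fine-tuning.",
    )
    print(doc.to_json())
```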

Combating the Spread of Synthetic Content and Deepfakes

Another critical aspect of DITA addresses the proliferation of AI-generated synthetic content, commonly known as deepfakes. The rapid advancement of generative AI has made it increasingly easy to create highly realistic but fabricated images, audio, and video, raising concerns about misinformation, manipulation, and reputational damage.

To counter these risks, the act includes provisions requiring clear digital watermarking and explicit labeling of such content. Creators and platforms distributing AI-generated media would have to indicate its synthetic nature clearly, giving users immediate cues to distinguish authentic from artificially created content and helping to curb the spread of deceptive deepfakes.
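DITA does not specify a particular watermarking or labeling scheme, and the technical details would presumably be settled later in implementing measures or standards. Purely as an illustration of what an explicit, machine-readable label could look like, the sketch below builds a small provenance record for a generated file; the make_synthetic_content_label function, its fields, and the hash-binding approach are assumptions, not anything mandated by the proposal.

```python
# Hypothetical sketch of an explicit machine-readable label for AI-generated
# media. DITA does not prescribe this scheme; the fields and the hash binding
# below are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone


def make_synthetic_content_label(media_path: str, generator: str) -> dict:
    """Build a label declaring that a media file is AI-generated.

    The SHA-256 digest ties the label to the exact bytes of the file, so a
    platform could detect whether labeled content has since been altered.
    """
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    return {
        "synthetic": True,                       # explicit disclosure of AI generation
        "generator": generator,                  # tool or model that produced the media
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,                        # binds the label to this specific file
    }


if __name__ == "__main__":
    # Create a small placeholder file so the example runs end to end.
    with open("generated_image.png", "wb") as f:
        f.write(b"placeholder bytes standing in for generated media")

    label = make_synthetic_content_label("generated_image.png", "example-image-model")
    print(json.dumps(label, indent=2))
```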

Beyond labeling, DITA also seeks to establish concrete accountability mechanisms for the creators of such media. While the specifics of these mechanisms are subject to further legislative development, the intent is to ensure that individuals or entities responsible for creating and distributing harmful or misleading synthetic content can be identified and held responsible for their actions, particularly when such content violates existing laws or causes harm.

Enforcement and Potential Penalties

To ensure compliance with the stringent requirements of the Digital Integrity and Transparency Act, the proposal includes significant deterrents. Non-compliance under DITA could result in substantial penalties, reflecting the EU’s commitment to effective enforcement.

The proposed fines for violations are particularly noteworthy, potentially reaching up to 7% of a company’s global annual turnover. This ceiling mirrors the fine structure of the AI Act itself and follows the turnover-based approach of other major EU digital regulations such as the Digital Markets Act (DMA), signaling the seriousness with which the Commission views breaches of AI integrity and transparency rules. For a large technology company with, say, €100 billion in annual turnover, the maximum fine would amount to €7 billion, providing a powerful incentive to adhere to the new requirements regarding foundation models and synthetic content.

The Legislative Path Ahead

With the Commission’s proposal formally on the table, the Digital Integrity and Transparency Act now moves to the European Parliament and the Council of the European Union for comprehensive legislative review.

Both institutions will scrutinize the text, propose amendments, and engage in negotiations to finalize the act. This process is typically multi-stage and can involve considerable debate as co-legislators balance various interests, including innovation, safety, fundamental rights, and economic competitiveness. The path from proposal to enacted law can take many months, but the Commission’s move on March 18, 2025, marks a decisive step towards establishing a more transparent and accountable framework for advanced AI within the European Union.

The DITA proposal signals the EU’s intent to build upon its existing AI regulatory framework, adapting it to the rapid technological advancements in foundation models and generative AI. By focusing on transparency at the source and clear identification of synthetic content, Brussels aims to enhance digital trust and mitigate potential harms in an increasingly AI-driven information environment.

Author

  • Ben Hardy

    Hello, I'm Ben Hardy, a dedicated journalist for Willamette Weekly in Portland, Oregon. I hold a Bachelor's degree in Journalism from the University of Southern California and a Master's degree from Stanford University, where I specialized in multimedia storytelling and data journalism. At 28, I'm passionate about uncovering stories that matter to our community, from investigative pieces to features on Portland's unique culture. In my free time, I love exploring the city, attending local music events, and enjoying a good book at a cozy coffee shop. Thank you for reading my work and engaging with the stories that shape our vibrant community.
