The European AI Act: Balancing Transparency and Corporate Evasion Strategies

The European Union has taken a historic step with the entry into force of the AI Act, the world's first comprehensive legislation on artificial intelligence. The regulation, which puts Europe at the forefront of AI governance, establishes a risk-based framework that aims to balance innovation with the protection of fundamental rights. It is also, however, yet another manifestation of the so-called “Brussels Effect”: the EU's tendency to impose its own rules on a global scale through the power of its market, without necessarily driving technological innovation.

While the US and China are leading AI development with massive public and private investment (45% and 30% of global investment in 2024, respectively), Europe has attracted only 10% of global AI investment. In response, the EU seeks to compensate for its technological lag through regulation, imposing standards that end up influencing the entire global ecosystem.

The central question is: is Europe creating an environment that promotes responsible innovation, or is it simply exporting bureaucracy to a sector where it cannot compete?

The extraterritorial dimension of European regulation

The AI Act applies not only to European companies but also to those operating in the European market or whose AI systems affect people in the EU. This extraterritorial reach is particularly evident in the provisions on GPAI models: Recital 106 states that providers must respect EU copyright law “regardless of the jurisdiction where the training of the models takes place.”

This approach has been strongly criticized by some observers, who see it as an attempt by the EU to impose its own rules on companies that are not based in its territory. According to critics, this could create a rift in the global technology ecosystem, with companies forced to develop separate versions of their products for the European market or adopt European standards for all markets to avoid additional compliance costs.

Multinational technology companies are therefore in a difficult position: ignoring the European market is not a viable option, but complying with the AI Act requires significant investment and could limit opportunities for innovation. This effect is further amplified by the ambitious implementation timeline and the interpretative uncertainty of many provisions.

The implementation timeline and regulatory framework

The AI Act entered into force on August 1, 2024, but its application will follow a phased timetable:

  • February 2, 2025: Application of the prohibitions on AI systems posing unacceptable risks (such as government social scoring) and of the AI literacy requirements

  • May 2, 2025: Deadline for finalizing the Code of Practice for general-purpose AI (GPAI) models

  • August 2, 2025: Application of the rules on general-purpose AI models, governance, and notifying authorities

  • August 2, 2026: Full application of provisions on high-risk systems and transparency obligations

  • August 2, 2027: Application of rules for high-risk systems subject to product safety legislation

The regulation adopts a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk (subject to strict requirements), limited risk (subject to transparency obligations), and minimal or no risk (free use). This classification determines the specific obligations of providers, deployers, and the other actors in the AI value chain.
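
To make the four tiers concrete, here is a minimal Python sketch that models the classification as a simple lookup. The example systems and their tier assignments are hypothetical illustrations only; actual classification follows Article 5, Article 6, and Annex III of the Act and requires case-by-case legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements (conformity assessment, documentation)"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

# Hypothetical example systems; these are illustrations, not legal determinations.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def describe(system: str) -> str:
    """Return the tier and its headline obligation for a known example."""
    tier = EXAMPLE_SYSTEMS.get(system)
    if tier is None:
        return f"{system}: unknown (requires case-by-case assessment)"
    return f"{system}: {tier.name} -> {tier.value}"

for system in EXAMPLE_SYSTEMS:
    print(describe(system))
```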

New transparency requirements: a barrier to innovation?

One of the most significant innovations of the AI Act concerns transparency requirements, which aim to address the “black box” nature of AI systems. These requirements include:

  • The obligation for providers of GPAI models to publish a “sufficiently detailed summary” of the training data, facilitating scrutiny by copyright holders and other interested parties

  • The need for systems that interact with humans to inform users that they are communicating with an AI system

  • The obligation to clearly label content generated or modified by AI, such as deepfakes (a minimal labeling sketch follows this list)

  • The implementation of comprehensive technical documentation for high-risk systems
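
As an illustration of the second and third requirements, the sketch below pairs a plain-language AI-interaction notice with a machine-readable label for generated content. The notice text, field names, and schema are assumptions made for this example; the Act requires disclosure and marking but does not prescribe this particular format, and real systems may instead rely on provenance standards such as C2PA.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical notice; the Act requires disclosure but does not dictate wording.
AI_DISCLOSURE = "You are interacting with an AI system."

@dataclass
class ContentLabel:
    """Hypothetical machine-readable label for AI-generated media."""
    generator: str      # system that produced or modified the content
    ai_generated: bool
    ai_modified: bool
    created_at: str     # ISO 8601 timestamp

def label_output(text: str, generator: str) -> tuple[str, str]:
    """Prefix the disclosure and attach a JSON label to generated text."""
    label = ContentLabel(
        generator=generator,
        ai_generated=True,
        ai_modified=False,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return f"{AI_DISCLOSURE}\n\n{text}", json.dumps(asdict(label))

message, label = label_output("Here is your draft contract...", "example-model-v1")
print(message)
print(label)
```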

These requirements, while designed to protect citizens' rights, could place a significant burden on businesses, particularly innovative startups and SMEs. The need to document development processes, training data, and decision-making logic in detail could slow down innovation cycles and increase development costs, putting European companies at a disadvantage compared to competitors in other regions with less stringent regulations.

Case studies: Evasion in practice

Credit scoring and automated decision-making

The judgment in Case C-203/22 (Dun & Bradstreet Austria) highlights how companies initially resist transparency mandates. The credit-reference agency at the center of the case, whose automated assessment had led a mobile operator to refuse a customer a contract, argued that revealing the logic behind its credit-scoring algorithm would disclose trade secrets and jeopardize its competitive advantage. The CJEU rejected this argument, holding that the GDPR's right of access to “meaningful information about the logic involved” in automated decision-making (Article 15(1)(h), read in light of Article 22) entitles individuals to an intelligible explanation of the criteria underlying such decisions, even in simplified form.

Under the AI Act's two-tier system for GPAI models, most generative AI models fall under the first tier, which requires compliance with EU copyright law and publication of training-data summaries. To head off copyright-infringement claims, companies such as OpenAI have shifted toward synthetic data and licensed content, but documentation gaps remain.

The AI Act contains specific provisions on copyright that extend the EU's regulatory influence far beyond its borders. GPAI model providers must:

  • Respect the rights reservations (text-and-data-mining opt-outs) set out in the Copyright in the Digital Single Market Directive (EU) 2019/790

  • Provide a detailed summary of the content used for training, balancing the protection of trade secrets against copyright holders' ability to enforce their rights

As noted above, Recital 106 requires providers to respect EU copyright law “regardless of the jurisdiction where the training of the models takes place.” This extraterritorial approach raises questions of compatibility with the territoriality principle of copyright and could create regulatory conflicts with other jurisdictions.

Business strategies: evasion or compliance with the “Brussels Effect”?

For global technology companies, the AI Act presents a fundamental strategic choice: adapt to the “Brussels Effect” and comply with European standards globally, or develop differentiated approaches for different markets? Several strategies have emerged:

Evasion and mitigation strategies

  1. The trade secret shield: Many companies seek to limit disclosure by invoking trade-secret protections under the EU Trade Secrets Directive, arguing that detailed disclosure of training data or model architectures would expose proprietary information and undermine their competitiveness. This argument conflates the Act's requirement for a summary of the data with full disclosure.

  2. Technical complexity as a defense: The inherent complexity of modern AI systems offers another avenue for mitigation. Companies produce summaries that are technically compliant but so verbose or jargon-laden that they formally satisfy the legal requirement without permitting meaningful review. A training-data summary could, for example, list broad categories of data (e.g., “publicly available text”) without specifying sources, proportions, or collection methods; see the sketch following this list.

  3. The self-assessment loophole: Article 6 of the AI Act includes a self-assessment mechanism that allows developers to exempt their systems from the high-risk category if they conclude that the systems do not pose a significant risk of harm. This loophole grants companies unilateral authority to sidestep rigorous compliance obligations.

  4. Regulatory forum shopping: The AI Act delegates enforcement to national market-surveillance authorities, creating potential disparities in rigor and expertise. Some companies are strategically locating their European operations in Member States with more permissive approaches to enforcement or fewer resources for oversight.
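
To see why strategy 2 matters in practice, compare two hypothetical training-data summaries, sketched below as Python dictionaries. Both could be presented as “sufficiently detailed,” but only the second enables the scrutiny the Act intends. Every field name, source, and figure here is invented for illustration; the Act prescribes no schema for these summaries.

```python
# Hypothetical summaries; all fields, sources, and figures are invented.

# Formally a "summary," practically unreviewable.
vague_summary = {
    "data_sources": ["publicly available text"],
}

# The same obligation, met in a form rights holders can actually act on.
meaningful_summary = {
    "data_sources": [
        {
            "category": "web crawl",
            "collection_period": "2022-2024",
            "share_of_training_tokens": 0.62,
            "rights_reservations": "robots.txt and TDM opt-outs honored",
        },
        {
            "category": "licensed news archives",
            "licensors": 14,
            "share_of_training_tokens": 0.21,
        },
        {
            "category": "synthetic data",
            "share_of_training_tokens": 0.17,
        },
    ],
}
```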

The “dual model” as a response to the Brussels Effect

Some large technology companies are developing a “dual model” of operation:

  1. “EU-compliant” versions of their AI products with limited functionality but fully compliant with the AI Act

  2. More advanced “global” versions available in markets with less stringent regulations

This approach, although costly, allows companies to maintain a presence in the European market without compromising global innovation. However, this fragmentation could lead to a growing technological divide, with European users having access to less advanced technologies than those in other regions.
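
In engineering terms, a “dual model” strategy often comes down to configuration: the same codebase ships with different capabilities per market. The sketch below illustrates the pattern; the feature flags and model names are invented and not based on any specific vendor's setup.

```python
# Hypothetical per-market feature gating for a "dual model" deployment.
# Flag names and model identifiers are invented for illustration.
FEATURE_FLAGS = {
    "eu": {
        "model": "assistant-eu",        # reduced-capability, Act-reviewed build
        "realtime_voice": False,        # held back pending compliance review
        "web_browsing": True,
    },
    "global": {
        "model": "assistant-frontier",  # full-capability build
        "realtime_voice": True,
        "web_browsing": True,
    },
}

def config_for(market: str) -> dict:
    """Resolve a market's configuration, defaulting to the strictest build."""
    return FEATURE_FLAGS.get(market, FEATURE_FLAGS["eu"])

print(config_for("eu"))
print(config_for("us"))  # unknown market falls back to the EU build
```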

Regulatory uncertainty as a barrier to European innovation

The European AI Act represents a turning point in AI regulation, but its complexity and interpretative ambiguities create a climate of uncertainty that could negatively affect innovation and investment in the sector. Companies face several challenges:

Regulatory uncertainty as a business risk

The ever-changing regulatory landscape poses a significant risk for companies. The interpretation of key concepts such as “sufficiently detailed summary” or the classification of “high-risk” systems remains ambiguous. This uncertainty could result in:

  1. Unpredictable compliance costs: Companies must allocate significant resources to compliance without having full certainty about the final requirements.

  2. Prudent market strategies: Regulatory uncertainty could lead to more conservative investment decisions and delays in the development of new technologies, particularly in Europe.

  3. Fragmentation of the European digital market: The inconsistent interpretation of rules across Member States risks creating a regulatory patchwork that is difficult for businesses to navigate.

  4. Asymmetric global competition: European companies may find themselves operating under stricter constraints than their competitors in other regions, affecting their global competitiveness.

The innovation gap and technological sovereignty

The debate on the “Brussels Effect” is part of the broader context of European technological sovereignty. The EU finds itself in the difficult position of having to balance the need to promote internal innovation with the need to regulate technologies developed mainly by non-European actors.

In 2024, European companies attracted only 10% of global investment in AI, while the US and China dominated the sector with a combination of massive public and private investment, innovation-friendly policies, and access to large amounts of data. Europe, with its linguistic, cultural, and regulatory fragmentation, struggles to produce technological “champions” capable of competing globally.

Critics argue that Europe's regulation-focused approach risks further stifling innovation and deterring investment, while supporters believe that creating a reliable regulatory framework can actually stimulate the development of ethical and secure AI “by design,” creating a long-term competitive advantage.

Conclusion: regulation without innovation?

The “Brussels Effect” of the AI Act highlights a fundamental tension in the European approach to technology: the ability to set global standards through regulation is not matched by corresponding leadership in technological innovation. This asymmetry raises questions about the long-term sustainability of this approach.

If Europe continues to regulate technologies that it does not develop, it risks finding itself in a position of increasing technological dependence, where its rules could become increasingly irrelevant in a rapidly evolving global ecosystem. Furthermore, non-European companies could gradually withdraw from the European market or offer limited versions of their products, creating a “digital fortress Europe” increasingly isolated from global progress.

On the other hand, if the EU manages to balance its regulatory approach with an effective strategy to promote innovation, it could genuinely define a “third way” between American capitalism and Chinese state control, placing human rights and democratic values at the heart of technological development. Vaste programme (“a vast undertaking”), as they would say in France.

The future of AI in Europe will depend not only on the effectiveness of the AI Act in protecting fundamental rights, but also on Europe's ability to accompany regulation with adequate investment in innovation and to simplify the regulatory framework to make it less burdensome. Otherwise, Europe risks finding itself in a paradoxical situation: a world leader in AI regulation, but marginal in its development and application.

References and Sources

  1. European Commission. (2024). “Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence.” Official Journal of the European Union.

  2. European AI Office. (2025, April). “Preliminary guidelines on obligations for suppliers of GPAI models.” European Commission.

  3. Court of Justice of the European Union. (2025, February). “Judgment in Case C-203/22 Dun & Bradstreet Austria.” CJEU.

  4. Warso, Z., & Gahntz, M. (2024, December). “How the EU AI Act Can Increase Transparency Around AI Training Data.” TechPolicy.Press. https://www.techpolicy.press/how-the-eu-ai-act-can-increase-transparency-around-ai-training-data/

  5. Wachter, S. (2024). “Limitations and Loopholes in the EU AI Act and AI Liability Directives.” Yale Journal of Law & Technology, 26(3). https://yjolt.org/limitations-and-loopholes-eu-ai-act-and-ai-liability-directives-what-means-european-union-united

  6. European Digital Rights (EDRi). (2023, September). “EU legislators must close dangerous loophole in AI Act.” https://www.amnesty.eu/news/eu-legislators-must-close-dangerous-loophole-in-ai-act/

  7. Future of Life Institute. (2025). “AI Act Compliance Checker.” https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/

  8. Dumont, D. (2025, February). “Understanding the AI Act and its compliance challenges.” Help Net Security. https://www.helpnetsecurity.com/2025/02/28/david-dumont-hunton-andrews-kurth-eu-ai-act-compliance/

  9. Guadamuz, A. (2025). “The EU's Artificial Intelligence Act and copyright.” The Journal of World Intellectual Property. https://onlinelibrary.wiley.com/doi/full/10.1111/jwip.12330

  10. White & Case LLP. (2024, July). “Long awaited EU AI Act becomes law after publication in the EU's Official Journal.” https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal
