AI is transforming everything — and Europe just changed the rules again. Here’s what you need to know.

On November 19, 2025, the European Commission did what many companies had been requesting for months: it proposed postponing the most complex parts of the AI Act. The rules for high-risk systems have been pushed back from August 2026 to December 2027. That's sixteen extra months to comply.

Is this a relief or a new problem? The short answer: both. It depends on how you move now.

Why Brussels put the brakes on

The official reason is simple: the technical standards are not ready. CEN-CENELEC did not complete the work by April 2025.

But that's not the only reason.

  • The Draghi Report (2024) painted an alarming picture:
    – none of the world's top 10 AI companies are European
    – EU investment in AI: €8 billion vs. €68 billion in the US
    – only 11% of European companies actually use AI

  • On a practical level, two-thirds of Member States had not designated their supervisory authorities by the August 2, 2025 deadline. Without those authorities, the rules cannot be enforced.

  • And there was political pressure. At the Berlin summit on November 18, Macron was explicit: “The AI Act comes with too many uncertainties. The US and China are leading the race. We cannot hinder the innovation of our companies.”

Italy's competitive advantage

While two-thirds of European states are behind schedule in designating competent authorities, Italy has taken the lead. With Law 132/2025 (which came into force on October 10, 2025), Italy designated the ACN (National Cybersecurity Agency) as the supervisory authority and AgID as the notification authority, becoming the first Member State with a comprehensive national legislative framework.

This means that Italian companies have a real competitive advantage: they already know who they will have to deal with (ACN and AgID), they have certainty about the requirements while other countries are still in regulatory limbo, and they can access the first regulatory sandboxes that Italy will activate by August 2026.

For an Italian company, this means being able to say to an international customer:

“We already comply with Italian authorities.”

(In Italian: “Siamo già conformi per le autorità italiane.”)

That is no small thing. In a market where regulatory uncertainty is the main problem, having clarity about the rules of the game is a strategic asset.

The new deadlines to mark in your calendar

Some parts of the AI Act are already in force. Others have been postponed. Please note: the new dates apply only if the European Parliament and the Council approve the proposal; if they do not, the original deadlines stand.

Already in force (no change):

  • February 2025: bans on the most dangerous AI practices (cognitive manipulation, social scoring)

  • August 2025: obligations for providers of general-purpose AI models (the technology behind tools such as ChatGPT)

Postponed by the new proposal:

  • High-risk AI systems: from August 2026 to December 2027 (+16 months)

  • AI in regulated products (medical devices, cars, etc.): from August 2027 to August 2028 (+12 months)

  • Transparency requirements: extended until February 2027

How much does it really cost to comply?

Let's talk numbers. According to the CEPS study, bringing a high-risk AI system into compliance costs between €193,000 and €330,000 in initial setup, plus €71,400 per year in maintenance.

For an SME, this can mean up to €400,000 for a single AI product.
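If you want to sanity-check these figures for your own planning, here is a minimal back-of-the-envelope sketch in Python. It simply combines the CEPS ranges quoted above; the three-year horizon is an assumption added for illustration, not part of the study.

```python
# Back-of-the-envelope compliance cost for ONE high-risk AI system,
# based on the CEPS figures quoted above.
# The 3-year planning horizon is an illustrative assumption.

SETUP_MIN_EUR = 193_000            # initial setup, lower bound (CEPS)
SETUP_MAX_EUR = 330_000            # initial setup, upper bound (CEPS)
MAINTENANCE_EUR_PER_YEAR = 71_400  # annual maintenance (CEPS)
YEARS = 3                          # assumed horizon, adjust to your plan

low = SETUP_MIN_EUR + MAINTENANCE_EUR_PER_YEAR * YEARS
high = SETUP_MAX_EUR + MAINTENANCE_EUR_PER_YEAR * YEARS

print(f"Estimated total over {YEARS} years: EUR {low:,} - EUR {high:,}")
# Output: Estimated total over 3 years: EUR 407,200 - EUR 544,200
```

Note that the upper-bound setup plus a single year of maintenance already comes to roughly €401,000, which is consistent with the “up to €400,000” figure cited above for a single SME product.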

The penalties? Up to €35 million or 7% of global turnover for the most serious violations. For minor non-compliance, €15 million or 3% of turnover.

These figures explain why many small businesses have asked for more time.

Big Tech divided between signatories and protesters

The reactions of large technology companies show how divisive this regulation is.

Meta refused to sign the voluntary Code of Practice for general-purpose AI models, calling the European legislation a source of “unacceptable legal uncertainty.”

Google signed, but with explicit reservations about possible slowdowns in innovation in Europe.

OpenAI and Anthropic signed the code without public reservations.

European AI startups—Mistral, Aleph Alpha—got many of the changes they wanted and now say that “the AI Act is perfectly manageable.”

Draghi: “Europe has stalled”

The day after the Commission's proposed postponement, Mario Draghi chose a symbolic audience to launch his harshest warning about the European approach to AI: the inauguration of the 163rd academic year of the Politecnico di Milano on December 1, 2025.

“Europe has stalled,” he said bluntly. The problem is not technical, it is methodological: “We have treated initial and provisional assessments as if they were established doctrine, incorporating them into laws that are extremely difficult to change.”

This is a direct criticism of the way the AI Act was constructed. According to Draghi, Europe has adopted a “cautious approach, rooted in the precautionary principle,” when what was needed was “adaptability: reviewing assumptions and quickly adjusting the rules as concrete evidence emerges.”

The figures he cited are clear: if Europe adopted AI at the same speed as the American digital boom of the 1990s, productivity growth could increase by 0.8 percentage points per year. But time is running out: “The gap between countries that embrace innovation and those that hesitate will widen significantly and rapidly.”

The risk? “If we don't close this gap, Europe risks a future of stagnation, with all its consequences.” And if the EU merely maintains the average productivity growth rate of the last decade, in 25 years the European economy will be no larger than it is today.

Who applauds? Who protests?

Not everyone applauds. Over 120 civil society organisations have called this proposal “the biggest setback for digital rights in the history of the EU.”

BEUC¹, the European organisation representing 45 consumer associations, is blunt: “Consumers were promised simplification. But this proposal reads like deregulation almost exclusively for the benefit of Big Tech.”

This position ignores the fact that European companies have to compete with American and Chinese companies that do not have these constraints.

The Netherlands would prefer “greater clarity on enforcement” to delays. Spain, which created Europe's first AI supervisory agency (AESIA) back in 2023, fears that the postponement will weaken protections.

Three different worlds: Europe, the US, and China

The global context tells a completely different story.

🌍 Three models of AI regulation

🇺🇸 US → Deregulation • Massive private investment

🇨🇳 China → State control • Rules for specific issues

🇪🇺 EU → Safety • Compliance as a competitive lever

The US has no federal AI law. Biden's 2023 Executive Order relied largely on voluntary commitments, and Trump pushed further toward deregulation in 2025. The result: private AI investment of $67.2 billion in 2023, roughly seven times the European figure.

China regulates by specific topic: recommendation algorithms, deepfakes, generative AI. Everything must align with “fundamental socialist values.” Many applications require state pre-approval.

Europe is focusing on security as a competitive advantage. But the famous “Brussels Effect”—when the world adopts European standards—is not materialising for AI. Only Brazil, Canada, and Peru are showing interest in copying the AI Act.

If you are outside the EU: what you need to know

Are you based in the US, Asia, or elsewhere, but sell AI services to European customers? The AI Act still applies to you.

The rule is simple: if your system has an impact in the EU—even if you do not have a physical presence here—you must comply with the same rules as a European company. This means:

  • Designate an authorized representative in the EU if your system is high-risk

  • Ensure technical compliance with European standards

  • Submit to audits by national authorities (such as the ACN in Italy)

  • Maintain documentation and records in accordance with EU requirements

The good news is that you have 16 more months to get organised. The bad news is that the complexity remains: 27 member states, different authorities, local interpretations of the same requirements.

Our advice? Consider partnering with a European company that can act as your authorized representative and navigate the local framework for you. It's more efficient than opening an office or hiring a legal team dedicated to the EU.

For European companies, this regulatory complexity is a problem. But it's also a barrier to entry for those who don't have the resources to navigate it.

Three immediate actions to make the most of the 16 months

Although the postponement gives you an extra 16 months, you will only benefit if you use this time wisely.

Here's what I recommend:

  1. Map your AI systems - Which of your AI applications could fall into the “high-risk” categories? These include systems for hiring, credit assessment, critical infrastructure management, and access to essential services (a minimal sketch of such an inventory follows this list).

  2. Monitor technical standards - CEN-CENELEC is progressively publishing harmonised standards. Subscribe to industry newsletters or ask your advisor to keep you updated.

  3. Consider regulatory sandboxes - When active, sandboxes will allow you to test compliance in a protected environment, with direct support from the authorities. The UK experience shows that participating companies receive 6.6 times more investment, and Italy will have to activate its own sandboxes by August 2026: an opportunity not to be missed.
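For point 1, a minimal sketch of what such a mapping could look like in practice is shown below. The system names and category labels are invented for illustration; the actual high-risk categories are defined in Annex III of the AI Act, and the classification should be confirmed with your legal advisor.

```python
# Illustrative AI-system inventory: flags applications whose area
# matches one of the high-risk categories mentioned in this article.
# Names and labels are hypothetical; this is not a legal assessment.

HIGH_RISK_AREAS = {
    "hiring",                     # recruitment and employee evaluation
    "credit_assessment",          # creditworthiness scoring
    "critical_infrastructure",    # e.g. energy or transport management
    "essential_services_access",  # access to essential public services
}

inventory = [
    {"name": "CV screening assistant",   "area": "hiring"},
    {"name": "Marketing copy generator", "area": "content_generation"},
    {"name": "Loan pre-scoring model",   "area": "credit_assessment"},
]

for system in inventory:
    if system["area"] in HIGH_RISK_AREAS:
        status = "REVIEW: potentially high-risk"
    else:
        status = "likely lower risk"
    print(f"{system['name']:28} -> {status}")
```

Even a spreadsheet with the same columns (system, purpose, area, risk flag) serves the purpose; the point is to have a living inventory in place before the harmonised standards arrive.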

The Commission estimates that the proposed simplifications will save businesses up to €5 billion in administrative costs by 2029.

Your next move

The postponement of the AI Act is not a green light to ignore the issue. It is an opportunity to turn compliance into a competitive advantage.

Those who prepare now will be ready when the rules come into force—and will be able to tell their customers that their AI is “compliant by design.” Those who wait until the last minute will find themselves playing catch-up, with higher costs and less room for manoeuvre.

Uncertainty exists, it's true. But it's part of the price of operating in a market that is trying to balance innovation and protection.

The best time to prepare was yesterday. The second-best time is now.

P.S. And what about ELECTE? We are doing exactly what I advised you to do: mapping, monitoring, preparing.

Sources

This article is based on research conducted on institutional sources and verified analyses:

  • European Commission: official press release “Simpler digital rules to help EU businesses grow” (November 19, 2025) and documentation from the Digital Package

  • Italian Law 132/2025: “Provisions on artificial intelligence” (Official Gazette no. 223 of September 25, 2025)

  • AgID and ACN: official documentation on the competences designated for the implementation of the AI Act in Italy

  • Mario Draghi: inaugural speech for the 163rd academic year of the Politecnico di Milano (December 1, 2025)

  • Euronews: coverage of Macron's statements and the postponement proposed by the Commission

  • CEPS (Centre for European Policy Studies) study: “Clarifying the costs for the EU's AI Act” - analysis of compliance costs for SMEs

  • Draghi report on European competitiveness (September 2024)

  • Tech Policy Press: “What's Driving the EU's AI Act Shake-Up?” - analysis of political and industrial pressures

  • MLex and Bloomberg: coverage of France and Germany's positions on the postponement

  • European Law Firm and Cooley LLP: legal analysis of the Digital Omnibus and implementation timeline

  • Official EU documents: text of the AI Act (Regulation 2024/1689) and European Parliament implementation timeline

For further information: digital-strategy.ec.europa.eu

Fabio Lauria

CEO & Founder, ELECTE S.R.L.

P.S. If you are interested in learning more about how AI is transforming not only Mars but also business on Earth, keep following this newsletter.

Welcome to the Electe Newsletter

This newsletter explores the fascinating world of artificial intelligence, explaining how it is transforming the way we live and work. We share engaging stories and surprising discoveries about AI: from the most creative applications to new emerging tools, right up to the impact these changes have on our daily lives.

You don't need to be a tech expert: through clear language and concrete examples, we transform complex concepts into compelling stories. Whether you're interested in the latest AI discoveries, the most surprising innovations, or simply want to stay up to date on technology trends, this newsletter will guide you through the wonders of artificial intelligence.

It's like having a curious and passionate guide who takes you on a weekly journey to discover the most interesting and unexpected developments in the world of AI, told in an engaging way that is accessible to everyone.

Sign up now to access the complete newsletter archive. Join a community of curious minds and explorers of the future.

1 BEUC (Bureau Européen des Unions de Consommateurs) is the umbrella organisation representing 45 independent consumer associations from 32 European countries. Founded in 1962, it is one of the first lobbying organisations in Brussels and receives funding from the EU Commission itself. In short, they are “the official voice of consumers” in the EU.