The protagonist is Claude, the model developed by Anthropic.

On 27 February 2026, the Pentagon placed the company on a blacklist previously reserved for Kaspersky, Huawei and suppliers linked to rival powers. The official justification: “risk to the national security supply chain”, pursuant to 10 U.S.C. § 3252 — a provision designed to counter foreign sabotage.

Never before used against an American company.

The reason? Anthropic refused to remove two clauses from its $200 million contract with the Department of Defense. The first prohibited the use of Claude for mass surveillance of American citizens. The second prohibited its use in fully autonomous weapons without human supervision.

On 26 February, CEO Dario Amodei had written:

‘Fully autonomous weapons could also prove crucial to our national defence. But at present, cutting-edge AI systems are simply not reliable enough to power them. We will not knowingly supply a product that puts American servicemen and civilians at risk.’

The day after the blacklisting, the United States launched air strikes against Iran as part of Operation Epic Fury.

And Claude was still there.

Active inside classified military systems, it was set to remain there for months, thanks to the 180-day removal window built into the designation itself. According to several media reports, the system contributed to classifying and selecting military targets.
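
How long is that in practice? A quick back-of-the-envelope check, assuming the 180-day clock starts on the designation date itself (an assumption; the article's sources do not specify):

```python
from datetime import date, timedelta

# Assumption: the 180-day removal window runs from the designation date.
designation = date(2026, 2, 27)
removal_deadline = designation + timedelta(days=180)
print(removal_deadline)  # 2026-08-26: roughly six months of continued access
```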

CENTCOM confirmed the use of “advanced AI tools.” The Pentagon’s CIO, Kirsten Davies, told the Senate:

‘The system is currently active.’

Essential and dangerous. Same system. Same week.

This isn’t just a story about a contract gone wrong.

The question with no owner

It is the story of a question that no institution in the world today has the formal right to resolve:

Who decides what an artificial intelligence system can do?

The company that built it? The government that bought it? Parliament? An international treaty? No one?

In practice, the answer is far less noble than the question:

It depends on the contract.

Case by case. Clause by clause. Negotiation by negotiation.

Without transparency, without binding precedents, without citizens having a say.

The Anthropic-Pentagon case has simply brought to light what until yesterday remained in the shadows.

The manufacturer says: “It’s not ready”

Anthropic’s argument is technical rather than ethical.

An AI system has measurable error rates. If those rates are incompatible with a particular application — for example, weapons that select and engage targets without human intervention — the manufacturer has an engineering responsibility to say so.
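
To make the engineering logic concrete, here is a minimal sketch. Every number in it is hypothetical, invented for illustration; none comes from Anthropic or the Pentagon. The point is only the shape of the decision: one measured error rate, several application thresholds.

```python
# Illustrative only: every error rate and threshold below is hypothetical,
# invented for this sketch; none is a real figure for Claude or any system.

# Maximum tolerable error rate per application, set by the manufacturer.
MAX_ERROR_RATE = {
    "document_summarisation": 0.05,          # mistakes are cheap to correct
    "foreign_intelligence_triage": 0.02,     # mistakes are costly but reviewable
    "autonomous_target_engagement": 0.0001,  # mistakes are irreversible
}

def certified_for(application: str, measured_error_rate: float) -> bool:
    """True only if the measured error rate meets the manufacturer's
    threshold for this application."""
    return measured_error_rate <= MAX_ERROR_RATE[application]

# A model measured at a 1% error rate clears the first two profiles
# and fails the third: a calibrated refusal, not a blanket one.
measured = 0.01
for application in MAX_ERROR_RATE:
    verdict = "certified" if certified_for(application, measured) else "not certified"
    print(f"{application}: {verdict}")
```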

If Boeing were to inform the Pentagon that an aircraft is not certified for a certain mission profile, the normal response would be a technical review. Not a blacklist.

Anthropic has done the equivalent, in AI terms.

But the story doesn’t end there.

Boeing would not time a technical communication to coincide with the deadline of a political ultimatum. It would not present it as a battle for ‘democratic values’. And it would hardly see its own product leap to the top of the App Store the very next day.

Furthermore, the limits declared by Anthropic are selective, as revealed by the affidavits filed in the federal litigation. The company accepts the use of Claude for foreign intelligence. It accepts partially autonomous systems. The refusal concerns two specific red lines:

  • Mass domestic surveillance

  • Full lethal autonomy without human supervision

It is not a refusal of military use. It is a calibrated refusal.

The government says: “We decide”

The Pentagon’s position follows a mirror-image logic.

In a democracy, military operations are authorised by Congress, directed by the President and subject to judicial review. The required standard — ‘any lawful use’ — means: any use permitted by current law.

The strategic memorandum of 9 January 2026 formalised this approach.

Those who accepted it signed on.

OpenAI signed a classified contract just hours after Anthropic was blacklisted. xAI had already signed on 23 February.

The difference is not in principle. Both declare limits.

The difference lies in the mechanism.

Anthropic wanted binding contractual prohibitions. OpenAI’s safeguards refer to existing law. And this means that the government can use those models for anything that is not already illegal.

‘We’ve essentially gone back to square one: allowing the Pentagon to use its own AI for any legitimate purpose.’

But “legitimate” is not a universal category.

It varies from country to country.
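
The difference between the two mechanisms is easier to see as code than as prose. A minimal sketch, in which the clause names and the “what is illegal where” table are invented for illustration: a contractual prohibition is a denylist that travels with the product; an “any lawful use” safeguard delegates the decision to whatever local law currently forbids.

```python
# Illustrative analogy only: the clause names and the ILLEGAL_UNDER table
# are invented for this sketch, not the actual contract or legal position.

# Anthropic's mechanism: prohibitions written into the contract itself.
CONTRACTUAL_PROHIBITIONS = {
    "mass_domestic_surveillance",
    "fully_autonomous_lethal_force",
}

# The "any lawful use" mechanism: whatever local law forbids today.
# Hypothetical entries; the table changes whenever the law changes.
ILLEGAL_UNDER = {
    "US": set(),                           # assume FISA-era law permits it
    "EU": {"mass_domestic_surveillance"},  # assume GDPR / AI Act forbids it
}

def allowed_by_contract(use: str) -> bool:
    # Binds everywhere, permanently, regardless of jurisdiction.
    return use not in CONTRACTUAL_PROHIBITIONS

def allowed_by_law(use: str, jurisdiction: str) -> bool:
    # Permitted unless the local law already forbids it.
    return use not in ILLEGAL_UNDER[jurisdiction]

use = "mass_domestic_surveillance"
print(allowed_by_contract(use))   # False: blocked everywhere, always
print(allowed_by_law(use, "US"))  # True: not already illegal there
print(allowed_by_law(use, "EU"))  # False: "lawful" differs by country
```

Same use, three different answers. The dispute is over which of the two functions ships with the product.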

The global regulatory vacuum

The dispute between Anthropic and the Pentagon cannot be resolved within the current regulatory framework. There is no clear system for determining when a military AI is ‘safe enough’. DoD Directive 3000.09 requires ‘appropriate levels of human judgement’ in the use of force. But it was not written with cutting-edge language models in mind.

And according to the affidavits filed in court, once installed in a classified and air-gapped environment, Claude cannot be modified remotely. No direct access. No kill switch. No intervention without Pentagon authorisation.

A private company ends up having de facto power to limit what a state can do with legally acquired technology. A state may blacklist a company for imposing those limits. And no law says who is in the right.

The problem is not just an American one.

What is lawful under the US FISA may be incompatible with the European GDPR. The EU AI Act prohibits certain forms of mass biometric surveillance.

And this is just the Euro-American front. The global picture is worse.

| Summit | Year | Signatories | USA | China | Type of commitment |
|---|---|---|---|---|---|
| REAIM (The Hague) | 2023 | 50+ | ✓ | ✓ | Call to Action (non-binding) |
| REAIM (Seoul) | 2024 | ~60 | ✓ | ✗ | Blueprint for Action (non-binding) |
| REAIM | 2026 | 35 | ✗ | ✗ | Pathways for Action (non-binding) |

Fewer signatories. Fewer commitments. Less consensus. Just as technology is accelerating.

It is true that the United States and China continue to engage in dialogue in smaller-scale formats. But the practical outcome remains the same: there is no shared framework.

At the UN, the debate on lethal autonomous weapons has been ongoing for nine years. Secretary-General Guterres has described them as ‘morally repugnant’ and has called for a treaty by 2026. In December 2024, 166 countries voted in favour of a resolution on LAWS. Opposed: Belarus, North Korea and Russia.

But the negotiating group operates by consensus. A single state is enough to block everything.

In the absence of specific treaties, the rules governing military AI are laid down in commercial negotiations.

Not in parliaments. Not in treaties. In contracts.

The European silence

No European institution has formally commented on the case.

Yet the issue affects anyone building critical infrastructure on AI platforms negotiated in Washington.

The EU AI Act regulates the use of AI in Europe. It does not regulate how those models are designed in the United States.

If a system is designed to accept ‘any lawful use’, Europe can only intervene afterwards.

Ex-post control. Not architectural constraints.

And if your company uses AI, this story affects you more than you might think.

Where we are now

On 9 March 2026, Anthropic filed two federal lawsuits: one in the Northern District of California, alleging violations of the First and Fifth Amendments and the Administrative Procedure Act; the other before the D.C. Circuit Court of Appeals.

On 24 March, at the preliminary hearing before Judge Rita F. Lin, the tone was clear.

Lin described the designation as “an attempt to cripple Anthropic.” She asked the government why, if the issue was the integrity of the chain of command, the Pentagon had not simply stopped using Claude — instead of invoking a rule designed for hostile foreign suppliers.

When the government invoked national security, the judge replied:

If an IT supplier merely needs to be “stubborn” over contractual terms to be declared a risk, then the threshold is dangerously low.

Microsoft filed an amicus brief (a voluntary submission in support of one of the parties to a case), citing serious consequences for the entire technology sector. Around 50 employees from OpenAI and Google DeepMind did the same in a personal capacity. Senator Elizabeth Warren described the designation as “retaliation.”

A preliminary ruling is expected shortly. But even if the judge grants an injunction, the designation would only be suspended. Not overturned.

The trial on the merits could last a year or more.

The question that remains unanswered

One company said: ‘Our system isn’t ready for this.’

It was blacklisted.

The system remained active in combat because it was too useful.

The United Nations debates without producing binding agreements.

A federal judge is deciding whether a private company can technically restrict its own product.

No institution was designed to answer the question:

Who controls AI?

The Pentagon is designed to win wars. Companies are built to sell technology. The UN is designed to build consensus. The courts are designed to enforce the law.

The question exists in the space between these institutions.

Currently, there is no one in that space.

AI governance is not decided in parliaments. Nor in treaties. Nor in courts.

It is decided in contracts.

Technology moves at the speed of capital. Rules, however, move at the speed of political consensus.

The gap between the two is becoming the place where the future is decided.

The next time you choose an AI model for your company, you are not just choosing a supplier.

You are also deciding which concept of power will enter your infrastructure.

The newsletter will be taking a break next week for Easter. We’ll be back on 9 April.

Fabio Lauria
CEO & Founder, ELECTE

Every week, we explore AI without the hype — using data, analysis and an independent perspective.

Sources

Anthropic-Pentagon case

US Regulatory Framework

  1. 10 U.S.C. § 3252 — Supply Chain Risk Statute

  2. DoD Directive 3000.09 — Autonomy in Weapon Systems (2012, updated 2023)

Autonomous Weapons and International Governance

REAIM Summit

If you found this analysis useful, please share it with someone who might be interested. And if you’d like to see how ELECTE uses AI to automate data analysis and reporting, visit electe.net.
