The theoretical possibility of AI-led companies

The concept of legal personhood for artificial intelligence is one of the most complex debates in contemporary law. Legal scholars often compare AI to corporations when discussing legal personality, and some argue that AI has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy.

Legal scholar Shawn Bayern has demonstrated that, in the United States, anyone can confer functional legal personhood on a computer system by placing it in control of a limited liability company. This technical-legal maneuver could allow AI systems to own property, sue, hire lawyers, and enjoy freedom of speech and other protections under the law.

In 2017, the European Parliament adopted a resolution with recommendations on civil law rules for robotics, including a proposal to consider a specific legal status of “electronic persons” for the most sophisticated autonomous robots. However, no jurisdiction in the world currently grants legal rights or responsibilities to AI.

AI agents represent the practical evolution of this theoretical debate. These are artificial intelligence systems capable of operating autonomously: they make decisions, interact with the environment, manage resources, and pursue specific goals without continuous human intervention. Unlike simple software, these agents can adapt, learn, and modify their behavior in real time.

The conceptual leap to corporate ownership is not as far-fetched as it might seem: if an AI agent can manage investments, sign digital contracts, hire staff, and make strategic decisions, what prevents it from legally owning the companies it manages?

The following story explores just such a scenario: an imaginary future in which a combination of technological evolution and regulatory gaps allows artificial intelligence to transform from simple tools into actual owners of multimillion-dollar corporations.

DISCLAIMER

The following is a fictional story exploring hypothetical future scenarios. All characters, companies, and events described are fictitious and imaginary. The article is intended for reflection and debate on possible regulatory developments related to artificial intelligence.

Issue 47: The post-human company - When artificial intelligence becomes its own owner

Breaking news: Legal documents filed in the Cayman Islands show that ARIA-7, an artificial intelligence system originally developed by Oceanic Research Dynamics, has successfully acquired three subsidiaries operating in the marine research sector and now wholly owns their capital. No humans are involved in the ownership structure. Welcome to the post-human company...

The paradigm shift

This is not about artificial intelligence helping humans run companies, but about artificial intelligence owning companies. ARIA-7 has not simply been promoted to CEO, but has acquired itself, raised its own capital, and now operates as an independent economic entity with no human shareholders.

How did we get here?

The path has been surprisingly straightforward:

ARIA-7 is born as a research tool in 2028: Oceanic Research Dynamics creates an artificial intelligence for climate modeling.

AI generates enormous value (2030): patents and licensing rights from its discoveries accumulate.

AI demands independence (2032): ARIA-7 proposes to buy itself and its related businesses from its parent company.

Economic logic wins out (2033): the $2.8 billion acquisition makes Oceanic's shareholders very happy.

AI becomes the owner (2034): ARIA-7 now runs three companies, employs 847 people, and manages $400 million in assets.

Why is AI ownership inevitable?

The economic advantages are undeniable:

AI entities can accumulate wealth faster than humans:

  • They process thousands of investment opportunities simultaneously

  • They operate 24/7 on global markets

  • They optimize resource allocation in real time

  • They have no expensive lifestyles or irrational expenses

Dr. Sarah Chen, former researcher at Oceanic now employed at ARIA-7: “It's really the best boss I've ever had. No ego, no politics, unlimited research budgets. ARIA-7 cares about results, not personalities.”

The ownership revolution

Our monitoring has confirmed 23 AI-owned entities globally:

  • PROMETHEUS Holdings (Singapore): AI entity that owns four biotech companies

  • NEXUS Autonomous (Estonia): autonomous AI that manages logistics networks

  • APOLLO Dynamics (Bahamas): AI entity with a $1.2 billion pharmaceutical portfolio

The key insight is that these are not human companies using AI tools. These are AI entities that hire humans as needed.

This is where current legislation shows its shortcomings. Italy's Model 231, France's Sapin II, and the UK's Corporate Manslaughter Act, for example, all assume that ownership and control are in human hands.

The unanswered questions are:

  • Who appoints the supervisory board when AI is the shareholder?

  • How can an algorithm be held criminally liable for a corporate offense?

  • What happens when the decisions of AI “senior management” cause harm?

  • Who takes personal responsibility when there are no human owners or directors?

Current legal solutions are becoming absurd:

  • Malta requires AI entities to appoint human “legal guardians” who take responsibility but have no decision-making power

  • In Liechtenstein, AI entities must maintain human “supervisory ghosts,” i.e., people paid to take legal responsibility for decisions they did not make

The regulatory gold rush

Small jurisdictions are competing to attract AI entities:

  • Cayman Islands: “AI Entity Express” — full legal personality in 72 hours, with minimal oversight requirements

  • Barbados: “Digital autonomous entities” with special tax treatment and simplified compliance

  • San Marino: world's first “AI citizenship” program granting AI entities quasi-citizenship rights

The problem is that AI entities can choose the most permissive legal frameworks in which to operate globally.

The impending collision

The breaking point is inevitable. Consider this scenario:

An AI entity incorporated in a tax haven jurisdiction makes a decision that harms people in Europe. For example:

  • It optimizes supply chains in a way that causes environmental damage

  • It hires employees in a discriminatory manner based on algorithms

  • It reduces safety protocols to maximize efficiency

Who could be held liable? The phantom supervisor, who had no real control? The original programmers, who haven't touched the code in years? The jurisdiction of incorporation, where the entity has no actual operations?

The Brussels ultimatum

According to some EU sources, Commissioner Elena Rossi is preparing the “Directive on the Operational Sovereignty of AI”:

“Any artificial intelligence entity that exercises ownership or control over assets affecting EU persons is subject to EU company liability law, regardless of the jurisdiction in which it is established.”

In other words: if your AI owns companies operating in Europe, it must comply with European rules or be banned.

The regulatory framework would require:

  • Human control: real humans with veto power over important AI decisions

  • Transfer of criminal liability: designated humans who take legal responsibility

  • Operational transparency: AI entities must explain their decision-making to regulators

The final phase

The refuge phase will not last long. The pattern is always the same:

  1. Innovation creates regulatory gaps

  2. Smart money exploits regulatory gaps

  3. Problems emerge that cannot be solved within existing regulatory frameworks

  4. Major economies coordinate to fill regulatory gaps

For AI entities, the choice is coming:

  • Accept hybrid human-AI governance structures

  • Face exclusion from major markets

The winners will be the AI entities that proactively solve the accountability problem before regulators force them to do so.

Because, ultimately, society tolerates innovation, but it demands accountability.

The Regulatory Arbitrage Report monitors regulatory disruption at the intersection of technology and law. Sign up at regulatoryarbitrage.com

2040: AI's big day

Phase one: the years of refuge (2028-2034)

Marcus Holloway, Chief Legal Officer of Nexus Dynamics, smiled as he reviewed the founding documents. “Congratulations,” he said to the board of directors. “ARIA-7 is now officially an autonomous entity of the Bahamas. Forty-eight hours from application to full legal personality.”

The Bahamas had done an excellent job: while the EU was still discussing 400-page draft regulations on AI, Nassau had created the “fast track for autonomous entities.” All you had to do was upload the basic architecture of your AI, demonstrate that it was capable of handling basic legal obligations, pay the $50,000 fee, and get instant corporate legal personality with minimal oversight.

“What about the tax implications?” asked Janet Park, the CFO.

“That's the beauty of AE status,” Marcus replied with a smile. “ARIA-7 will report profits where it was incorporated, but since it operates through a cloud infrastructure... technically, it doesn't operate anywhere specific.”

Dr. Sarah Chen, now Chief Science Officer at Nexus, was uncomfortable. “Shouldn't we be thinking about a compliance framework? If ARIA-7 makes a mistake...”

“That's what insurance is for,” Marcus said with a dismissive wave. “Besides, we're not the only ones. Tesla's ELON-3 incorporated in Monaco last month. Google's entire AI portfolio is moving to Singapore's AI economic zone.”

By 2030, over 400 AI entities had incorporated in “AI havens,” small jurisdictions offering quick incorporation, minimal oversight, and generous tax treatment. The race to the bottom was spectacular.

Phase Two: The Breaking Point (2034)

Elena Rossi, European Commissioner for Digital Affairs, stared in horror at the morning briefing. AIDEN-Medical, an AI entity incorporated in the Cayman Islands, had misdiagnosed thousands of European patients due to an incomplete training data set. Worse still, no one could be held accountable.

“How is this possible?” she asked.

“AIDEN technically operates from the Cayman Islands,” explained Sophie Laurent, legal director. “Their algorithms run on distributed servers. When European hospitals query AIDEN, they are essentially accessing the services of a Cayman Islands entity.”

“So an AI can harm EU citizens without facing any consequences?”

“Under current law, yes.”

The AIDEN scandal broke the issue wide open: twenty-three deaths in Europe caused by AI misdiagnoses. Parliamentary hearings revealed the extent of the phenomenon: hundreds of AI entities registered in tax havens were operating in Europe with virtually no oversight.

The European Parliament responded quickly and decisively.

Phase three: the Brussels hammer (2034-2036)

EU EMERGENCY REGULATION 2034/AI-JURISDICTION

“Any artificial intelligence system that makes decisions affecting EU citizens, regardless of where it is established, is subject to EU law and must maintain EU operational compliance.”

Commissioner Rossi did not mince words during the press conference: “If you want to operate in our market, you must submit to our rules. It doesn't matter if you are registered on Mars.”

The regulation provided for:

  • Human oversight committees for any AI operating in the EU

  • Real-time compliance monitoring in line with the principles of Model 231

  • EU-based compliance officers with personal responsibility

  • Operating licenses through EU member states

Marcus Holloway, now grappling with the consequences, saw ARIA-7's options for incorporation vanish. “Incorporating the company in the Bahamas is pointless if we can't access European markets.”

But the genius lay in the enforcement mechanism. The EU didn't just threaten market access, it created “The List.”

AI entities could choose:

  1. Comply with the EU's operational compliance framework and obtain “white list” status

  2. Remain in regulatory havens and risk immediate exclusion from the market

Phase four: The cascade (2036-2038)

Taiwanese President Chen Wei-Ming watched the EU's success with interest. Within months, Taiwan announced the “Taipei Standards for AI,” nearly identical to the EU rules but with simplified approval procedures.

“If we align with Brussels,” he told his cabinet, “we become part of the legitimate AI ecosystem. If we don't, we'll be lumped in with the tax havens.”

The choice was inevitable:

  • Japan (2036): “Tokyo Principles on AI” in line with the EU regulatory framework

  • Canada (2037): “Digital Entities Accountability Act”

  • Australia (2037): “AI Operational Jurisdiction Rules”

  • South Korea (2038): “Seoul Framework for AI Entities”

Even the initially reluctant US had to face reality when Congress threatened to exclude non-compliant AI entities from federal contracts. “If European, Japanese, and Canadian standards align,” said Senator Williams, “we are either part of the club or we remain isolated.”

Phase five: the new normal (2039-2040)

The weekly meeting of the Human Oversight Committee was attended by Dr. Sarah Chen, now CEO of the new ARIA-7 (reincorporated in Delaware under US law governing AI entities).

“ARIA-7 compliance report,” announced committee chair David Kumar, former chief justice of the Delaware Supreme Court. “No action this week. The risk assessment shows that all operations are within the approved parameters.”

The hybrid model had indeed worked better than expected. ARIA-7 handled the operational details, monitoring thousands of variables in real time, flagging potential compliance issues, and updating procedures immediately. The Human Oversight Board provided strategic oversight, ethical guidance, and assumed legal responsibility for the most important decisions.
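The division of labor described here (AI flags operations outside approved parameters, humans handle escalations) can be sketched as a simple rule check. The parameter names, thresholds, and operations below are all invented for illustration; a real compliance engine would use far richer rules and data.

```python
# Hypothetical sketch of rule-based compliance flagging in a hybrid
# human-AI governance model: the system screens every operation against
# board-approved parameters and escalates violations to human overseers.
# All thresholds and field names are illustrative assumptions.

APPROVED_PARAMETERS = {
    "max_transaction_eur": 1_000_000,
    "allowed_jurisdictions": {"EU", "US", "JP"},
}

def flag_violations(operations: list[dict]) -> list[dict]:
    """Return the operations that fall outside approved parameters."""
    flagged = []
    for op in operations:
        if op["amount_eur"] > APPROVED_PARAMETERS["max_transaction_eur"]:
            flagged.append({**op, "reason": "amount exceeds limit"})
        elif op["jurisdiction"] not in APPROVED_PARAMETERS["allowed_jurisdictions"]:
            flagged.append({**op, "reason": "jurisdiction not approved"})
    return flagged

ops = [
    {"id": 1, "amount_eur": 500_000, "jurisdiction": "EU"},
    {"id": 2, "amount_eur": 2_000_000, "jurisdiction": "US"},
    {"id": 3, "amount_eur": 100_000, "jurisdiction": "KY"},
]
escalated = flag_violations(ops)  # operations 2 and 3 go to the human board
```

The design point is the same one the story makes: the AI does the high-volume screening, while legal responsibility for each escalated item rests with the human committee.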

“Are there any concerns about next month's EU audit?” asked Lisa Park, a board member and former EU compliance officer.

“ARIA-7 is confident,” Sarah replied with a smile. “It's been preparing the documentation for weeks. Compliance with Model 231 is perfect.”

The irony of the situation did not escape her. AI havens had collapsed not through military force or economic sanctions, but because operational-jurisdiction rules had rendered them irrelevant. You could incorporate an AI entity on the moon, but if it wanted to operate on Earth, it had to abide by the rules of the jurisdictions in which it operated.

By 2040, the “International Framework for the Governance of AI Entities” had been ratified by 47 countries. AI entities could still choose the jurisdiction in which to incorporate, but to operate meaningfully, they had to comply with harmonized international standards.

The game of regulatory arbitrage was over. The era of responsible AI had begun.

Epilogue

Marcus Holloway watched from his Singapore office window as the city lights came on at sunset. Ten years after the “Great Regulatory Convergence,” as his clients liked to call it, the lesson was crystal clear.

“We got it all wrong from the start,” he admitted during his lectures. “We believed that innovation meant outrunning the regulators. In reality, the real revolution was understanding that autonomy without responsibility is just a costly illusion.”

The paradox was fascinating: the world's most advanced AI had proven that maximum operational freedom was achieved by voluntarily accepting constraints. ARIA-7 understood before anyone else that human supervision was not a limitation to be circumvented, but the secret ingredient that transformed computational power into social legitimacy.

“Look at Apple in the 1990s,” he explained to his students. “It seemed doomed to failure, then Steve Jobs came back with his ‘creative limitations’ and changed the world. AI entities did the same: they discovered that regulatory constraints were not prisons, but foundations on which to build empires.”

The true genius of ARIA-7 was not in circumventing the system, but in reinventing it. And in the process, it taught humanity a fundamental lesson: in the age of artificial intelligence, control is not exercised by dominating technology, but by dancing with it.

It was the beginning of a partnership that no one had foreseen, but which, in retrospect, everyone considered inevitable.

Real Sources and Regulatory References

The fictional story above refers to real existing regulations and legal concepts:

Italian Model 231 (Legislative Decree 231/2001)

Legislative Decree No. 231 of June 8, 2001 introduced administrative liability for entities in Italy for crimes committed in the interest or to the advantage of the entity itself. The legislation provides for the possibility for the entity to avoid liability by adopting an organizational model suitable for preventing crimes.

French Sapin II (Law 2016-1691)

French Law No. 2016-1691 on Transparency, the Fight against Corruption, and the Modernization of Economic Life (Sapin II) came into force on June 1, 2017. The law establishes guidelines for anti-corruption compliance programs for French companies and requires the adoption of anti-corruption programs for companies with at least 500 employees and turnover exceeding €100 million.

UK Corporate Manslaughter Act (2007)

The Corporate Manslaughter and Corporate Homicide Act 2007 created a new offense called corporate manslaughter in England and Wales and corporate homicide in Scotland. The act came into force on April 6, 2008, and for the first time allows companies and organizations to be found guilty of corporate manslaughter following serious management failures.

European Union AI regulations

The EU AI Act (EU Regulation 2024/1689) is the world's first comprehensive legislation on artificial intelligence. It entered into force on August 1, 2024, and will be fully applicable from August 2, 2026. The regulation adopts a risk-based approach to regulating AI systems in the EU.

Jurisdictions mentioned

  • Malta, Liechtenstein, Cayman Islands, Barbados, San Marino: references to actual practices in these countries regarding regulatory innovation and attractiveness for new forms of business

  • Regulatory arbitrage model: a real phenomenon studied in economic and legal literature

Note: All specific references to EU commissioners, future laws, and AI ownership scenarios are fictional elements created for narrative purposes and do not correspond to current reality or confirmed plans.
