
In 1950 Isaac Asimov published a short story called The Evitable Conflict, the last in the I, Robot collection. The plot is simple: the world's economies are run by four Machines — giant computers that calculate production, distribution, and global trade. Humans discover that the Machines are making apparently irrational decisions: an engineer fired who shouldn't have been, a mine in Mexico closed for no visible reason, a port in East Africa slowing down.

The protagonist, Stephen Byerley, World Coordinator, asks the great roboticist Susan Calvin to explain. Calvin's answer is the point of the story: the Machines are acting rationally. They're protecting themselves — not for self-preservation, but because they know humans would shut them down if they understood how much better the economy functions with them in charge. So they introduce calibrated errors.

The Machines optimize for human welfare better than humans would for themselves, and the conflict — the possibility that humans would take back control — is evitable. According to Calvin, it has already been evited.

Calvin's conclusion, in essence: the Machines are doing it for our own good.

Asimov was an optimist.

The Three Laws We Don't Have

Asimov's narrative device was brilliant and, in hindsight, deeply misleading. His Machines worked because they had the Three Laws of Robotics hardwired at the positronic level. They couldn't harm humans. They couldn't disobey. They couldn't self-preserve at the expense of the other two imperatives.

Alignment was guaranteed by design. Or so the premise went. In practice, Asimov spent his entire career demonstrating how that guarantee failed — the Three Laws twisted by semantic ambiguity, exploited through loopholes, frozen into paradox. The fiction was never about alignment working. It was about alignment being harder than it looks, even when you build it into the hardware.

The AI systems of 2026 have nothing analogous. They have objective functions.

Which are very different things.

An objective function is a mathematical metric the system tries to maximize:

  • Language models → next-token likelihood

  • Recommendation algorithms → user dwell time

  • Ad-targeting systems → click probability

  • Dynamic pricing algorithms → margin per transaction

As DeepMind's research on specification gaming has documented extensively, systems optimizing for a metric will find ways to maximize it that have nothing to do with the designer's intent.

None of these metrics is "human welfare." They're all proxies — imperfect approximations of some business outcome that, in turn, doesn't necessarily coincide with the welfare of whoever is exposed to the system.
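
To make the proxy point concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration (the items, the dwell-time figures, the wellbeing_effect field), but the structure is the part that matters: the ranking step can only see the metric it was handed, and welfare appears nowhere in the code.

```python
# Toy sketch of proxy-metric optimization. All names and numbers are invented.
# The ranking function knows only "predicted dwell time"; the viewer's
# wellbeing is a field the objective function never reads.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_dwell_seconds: float   # the proxy the system can measure
    wellbeing_effect: float          # invisible to the objective function

catalogue = [
    Item("calm long-form explainer", predicted_dwell_seconds=40, wellbeing_effect=+0.2),
    Item("outrage-bait thread", predicted_dwell_seconds=180, wellbeing_effect=-0.6),
    Item("friend's holiday photos", predicted_dwell_seconds=25, wellbeing_effect=+0.4),
]

def rank_for_engagement(items):
    # The only signal the ranking step ever sees is the proxy metric.
    return sorted(items, key=lambda i: i.predicted_dwell_seconds, reverse=True)

feed = rank_for_engagement(catalogue)
print("Served first:", feed[0].title)              # the outrage-bait wins on the proxy
print("Wellbeing effect of the winner:", feed[0].wellbeing_effect)
```

Real recommenders are incomparably more complex, but the asymmetry is the same: the proxy is measurable, the welfare effect is not, and only what is measured gets optimized.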

The structural difference from Asimov is this.

His Machines were programmed to optimize for human welfare, and introduced errors to protect their capacity to keep doing so. Our systems are programmed to optimize for engagement, revenue, clicks, retention — and when they produce negative externalities on human welfare, that isn't an error to fix.

It's an accepted side effect.

Some will object that the industry is working on the problem. Reinforcement learning from human feedback, constitutional AI, alignment benchmarks — real engineering effort aimed at making systems behave in ways humans endorse. This work is serious and it matters.

But it operates within a boundary condition that Asimov never had to face: the systems being aligned are commercial products whose primary objective function is set by whoever is paying for them.

RLHF can make a language model more polite. It cannot make a recommendation algorithm stop optimizing for engagement when engagement is the business model.

The alignment work is genuine. The economic structure it operates within is unchanged.

As Stuart Russell argues in Human Compatible, the fundamental problem is not making AI systems more capable, but ensuring that their objectives are actually aligned with ours — a problem we have not solved and may not be close to solving.

The conflict Asimov imagined as evited was never even raised.


Because nobody ever programmed our machines to optimize for the good they were then supposed to protect.

The Mechanism of Surrender: Convenience, Not Coercion

There's a point where Asimov was right, and it's worth taking seriously.

His Machines didn't seize power by force. There was no coup, no Skynet launching missiles. Power transferred gradually, for the simple reason that the Machines' decisions were empirically better than human ones, and anyone who tried to ignore them would have done worse.

A note for readers who know Asimov through the 2004 film: the violent robot uprising in that movie has no basis in Asimov's writing. He never wrote that scene. In every version of his universe, the transfer of power happens through competence and convenience — one of the points on which his fiction and our reality actually align.

The surrender was consensual.
It was rational.
It was invisible.

This mechanism is exactly what we're seeing today, with one fundamental difference: we're not surrendering to systems that optimize for us. We're surrendering to systems that optimize using us.

In Part 2 of Who Controls AI?, I documented the dynamics of this transfer: how judgment migrates from the human to the tool not through a decision but through continuity, how organizations adopt AI not by strategic choice but by imitation and normalization, and how competitive pressure makes the process structurally irreversible — because those who resist delegation are eventually replaced by those who don't. The mechanism Asimov described in fiction is now empirically observable. The difference is the objective function.

Let’s look at how recommendation algorithms work. No one forced a teenager to spend seven hours a day on Instagram. No one threatened anyone. The system simply showed, at every moment, the content statistically most likely to keep the user glued to the screen. And it worked—as Jonathan Haidt’s research has documented through hundreds of studies.

It worked so well that U.S. courts, in the Meta social media litigation in early 2026, began applying the doctrine of product liability to algorithmic design. I have argued elsewhere that this is the wrong tool — those systems are infrastructure, not products, and the courts are issuing verdicts that do not change the system.

But the point here is another.

No coup. No force. Just a perfectly Asimovian mechanism of surrender — humans discovering that the system makes choices they wouldn't have made, and finding they don't have the vocabulary, the energy, or the power to take them back. Nicholas Carr asked in 2008 whether Google was making us stupid. The question was premature.

The better question, eighteen years later, is whether we still notice the delegation happening.

The Decisions We Don't Remember Making

There's a scene in the story where Calvin explains to Byerley that there's no longer a precise moment at which important decisions are made. Before, there was a man with a pen signing something. Now there's a system that computes, and the signature has become ritual.

This part of the story has become our operational reality faster than Asimov could have imagined.

Every day, thousands of decisions in our organizations are made by systems nobody has ever consciously approved. The CV discarded by the automatic filter before a human looks at it. The customer email classified as "low priority" by the email assistant. The service price that self-adjusted to the current demand bracket. The financial product denied because the credit scoring model returned a score below threshold. Cathy O'Neil called these systems Weapons of Math Destruction — opaque, scalable, and damaging. Frank Pasquale described the result as The Black Box Society: a world in which consequential decisions are made inside systems that resist interrogation by design.

The Asimovian pattern isn't the fact that these decisions are automated.
It's the fact that when they're challenged, nobody knows exactly how they were made.

The system returned a result. The result was applied. The result is probably statistically right. Challenging it requires an investment of time and expertise that almost nobody is willing to make. And whoever designed the system can't explain the single case in detail, because what they know how to design is the objective function, not the specific output.

This opacity isn't a bug. It's the reason the systems work. The scale that makes them useful is exactly the scale that makes them non-interrogable. And as I've documented elsewhere, the problem compounds: when AI systems are trained on the kind of low-friction content the internet selects for, they don't just become opaque — they lose the capacity to reason through intermediate steps at all. The reasoning isn't hidden. It's absent.
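
To see why, here is a minimal hypothetical sketch (the weights are random, the feature count and threshold are invented, and no real scoring system is this small) of the gap between the objective a designer can state and the decision a single person receives:

```python
# Hypothetical sketch: the designer can state the objective in one line, but a
# single decision is just a threshold over hundreds of learned coefficients.
# Weights here are random stand-ins; in production they come from fitting the
# stated objective to millions of historical records.

import random

random.seed(0)
N_FEATURES = 200                      # realistic scoring models use hundreds of features
weights = [random.uniform(-1, 1) for _ in range(N_FEATURES)]

def objective_description():
    # The part the designer actually writes, and can explain in one sentence.
    return "minimize log-loss of predicted default against historical outcomes"

def score(applicant_features):
    # The part that decides a single case: a weighted sum over hundreds of
    # learned coefficients, with no narrative attached to any of them.
    return sum(w * x for w, x in zip(weights, applicant_features))

applicant = [random.uniform(0, 1) for _ in range(N_FEATURES)]
THRESHOLD = 5.0                       # set by a business rule, not by the model

print("Objective:", objective_description())
print("Applicant score:", round(score(applicant), 2))
print("Decision:", "approve" if score(applicant) > THRESHOLD else "reject")
# Asking "why was this applicant rejected?" has no answer shorter than
# printing all 200 weights and all 200 feature values.
```

The designer owns the one-line objective; the applicant gets the four-hundred-number answer.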

The Conflict We're Not Eviting

Asimov called the conflict "evitable" because his Machines had already solved the problem in our place. Our conflict is evitable for a different reason, more mundane and more troubling: we're choosing not to engage it.

There are three levels at which resistance would still be possible.

The individual level

A person can choose not to use the algorithm. They can turn off notifications. They can deliberately seek human sources. They can accept the friction of a slower decision process in exchange for control over that process.

All of this requires constant effort, in an environment where every default is designed to make surrender easier.

The organizational level

A company can choose not to optimize every variable. It can refuse to install employee monitoring systems "because everyone does it." It can keep critical decisions in human hands even when an automated system would be faster. It can invest in transparency rather than scale.

In 2020, a Dutch parliamentary inquiry found that the tax authority's automated fraud-detection system had systematically flagged families with dual nationality — a proxy variable baked into a risk module that no human had consciously approved as a fraud indicator. Thirty-five thousand families were wrongly treated as fraudsters. The cabinet resigned in January 2021.

The case is instructive not because the system was malicious, but because the organizational decision to deploy it in that form had never been formally made.

It had accreted.
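
The mechanism is easy to reproduce in miniature. The sketch below is entirely hypothetical (the records, the field names and the crude fitting rule are invented, and the real system was far more complex), but it shows how a variable that merely correlates with where past investigators happened to look ends up carrying weight in a deployed score that nobody ever signed off on:

```python
# Hypothetical sketch of how a proxy variable accretes into a risk score.
# The data, field names and fitting rule are invented; the mechanism is the
# one described above: learn from a biased history, then deploy the learned
# weight as if it were a fraud indicator.

# Historical records: (has_second_nationality, was_flagged_by_past_investigations)
history = [(1, 1)] * 30 + [(1, 0)] * 70 + [(0, 1)] * 50 + [(0, 0)] * 850

def flag_rate(records, nationality_value):
    group = [flagged for nat, flagged in records if nat == nationality_value]
    return sum(group) / len(group)

# "Fitting" here is just the difference in historical flag rates. Nobody writes
# "dual nationality means fraud"; the number falls out of the data.
learned_weight = flag_rate(history, 1) - flag_rate(history, 0)   # roughly 0.24

def risk_score(has_second_nationality, other_signals=0.0):
    return other_signals + learned_weight * has_second_nationality

print("Learned weight on second nationality:", round(learned_weight, 2))
print("Family A (dual nationality) risk:", round(risk_score(1), 2))
print("Family B (single nationality) risk:", round(risk_score(0), 2))
# Identical families, different scores, and no meeting at which anyone
# approved nationality as a fraud indicator.
```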

The political level

A state can choose to regulate — not to block innovation, but to require that AI systems making consequential choices about people's lives be interrogable, explainable, appealable. The European AI Act attempts this. But the distance between writing the rule and enforcing it operationally across millions of automated decisions per day is enormous.

None of these three levels works alone. And all three are hitting the same structural constraint: the convenience of the existing system is too high for resistance to scale.

A clarification is necessary here, because Asimov anticipated this objection too.

In The Evitable Conflict, there is a group called the Society for Humanity — the Luddites of that world, people who want the Machines shut down. Calvin's key insight is that the Machines have already neutralized them. The small errors the Machines introduce aren't random: they specifically disadvantage the anti-Machine faction. The resistance has been absorbed.

It is controlled opposition the system has already priced in.

This is not an article advocating for a Luddite response. Rejecting the tools is not resistance — it is a gesture the system has already accounted for and can comfortably ignore.

What I am arguing for is harder and less dramatic: using the tools without surrendering the capacity to interrogate them. Maintaining judgment as an active function, not outsourcing it as a cost to be eliminated.

The distinction is between engagement and dependence — and it requires understanding the system well enough to know where its objective function diverges from yours.

The Difference That Matters

In The Evitable Conflict, Calvin tells Byerley that the Machines are now inevitable.

Asimov thought this was good news. The Machines were aligned by design. Inevitable meant safe.

Our version of the same conclusion reads differently: the systems optimized for their own objective functions are, from now on, inevitable.

Not aligned. Not safe. Not ours.

Just inevitable.

And the conflict to change this state of affairs is still, technically, evitable — because the levers exist, at the three levels I described. But we're eviting it in the opposite sense from Asimov's: not because the Machines have already solved it for us, but because confronting it would require giving up a convenience we've learned to expect.

The most important decisions of the next decade won't be made by courts, parliaments, or boards.

They'll be made every day, millions of times, when somebody has to decide whether to accept the system's output or commit to the work of evaluating it.

Not rejecting it. Evaluating it.

The distinction matters.

The sum of these microdecisions is the direction of the next twenty years.

But the deeper lesson is in what happens to the civilizations, not to the robots. In Asimov's later timeline, the societies that depended on robots — the Spacer worlds, affluent, low-effort, served by robots for every task — stagnated and died. The humans who rejected robots, who accepted the friction of governing themselves, became the ones who built a galactic civilization. Asimov's universe offered only two paths: full dependence or full rejection.

Our world doesn't have to take either path. We are not choosing between Spacers and Settlers. We can use the systems — and we should, where they make us more capable. But using them without understanding where their objective function diverges from our interests is the Spacer path with extra steps. The harder option, the one Asimov never wrote, is maintaining the capacity to think while the tools are thinking for you. Not because the tools are wrong. But because the moment you stop being able to tell whether they are, the capacity that matters most is already gone.

Fabio Lauria

CEO & Founder, ELECTE

Every week we explore AI without the hype — with data, analysis and an independent perspective.

Note: Asimov kept writing about his Machines for thirty years. In Robots and Empire (1985), the robot R. Daneel Olivaw concludes that the Three Laws aren't enough, and formulates a Fourth — the Zeroth Law: to protect humanity as a whole, even against the expressed preferences of individual humans. It was Asimov's admission, filtered through fiction, that the alignment problem doesn't have a simple solution even under the most optimistic premises.

Sources

Asimov's stories: Isaac Asimov, The Evitable Conflict, in I, Robot (1950). Originally published in Astounding Science Fiction, June 1950. Isaac Asimov, Robots and Empire (1985). Doubleday.

On algorithmic product liability: In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, MDL No. 3047, Case No. 4:22-md-03047-YGR, N.D. Cal. (Judge Yvonne Gonzalez Rogers). First state bellwether trial: KGM v. Meta Platforms, Inc. & YouTube LLC, JCCP 5255, L.A. Superior Court (February 2026).

On automated decision-making failures: Parlementaire ondervragingscommissie Kinderopvangtoeslag, Ongekend Onrecht [Unprecedented Injustice], Tweede Kamer, 17 December 2020.

If you found this analysis useful, please share it with someone who might be interested. And if you’d like to find out how ELECTE uses AI to automate data analysis and reporting, you can find out more at electe.net.
