
When Asimov predicted the mystery of modern AI
In 2024, the CEO of Anthropic—one of the world's leading artificial intelligence companies—made an uncomfortable admission: “We have no idea how AI works.” The statement sparked heated debates and sarcastic comments on social media, with one user quipping, “Speak for yourself, I have a pretty clear idea of how it works!”
Yet behind this apparent contradiction lies one of the deepest dilemmas of the digital age. And the most extraordinary thing? Isaac Asimov had already imagined it in 1941.
The mystery of black boxes
When we talk about “black boxes” in artificial intelligence, we are referring to systems that work remarkably well but remain incomprehensible even to those who created them. It's like having a car that always gets us to our destination, but whose hood we cannot open to understand how it runs.
We know how to build these systems, we know the basic principles of how they work (architectures called “transformers,” prediction of the next word), but we do not understand why complex abilities such as reasoning, language comprehension, or the ability to follow instructions emerge. We can observe what goes in and what comes out, but what happens in the “black box” remains a mystery.
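The basic principle the paragraph names, predicting the next word, can be sketched with a toy model. The corpus and the bigram-counting approach below are purely illustrative assumptions of mine, not how transformers actually work: real models learn this mapping through billions of opaque parameters, which is precisely why their internals resist inspection.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the next word": count which word follows
# which in a tiny corpus, then pick the most frequent continuation.
# (Hypothetical example; transformers learn this with billions of
# parameters rather than explicit counts.)

corpus = "the robot runs the station the robot serves the master".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "robot" appears most often after "the"
```

In this toy version every prediction can be traced back to an explicit count; in a modern model the equivalent knowledge is smeared across the weights, with no table to consult.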
A robot that believes in God
In the short story “Reason”, Asimov imagines QT-1, nicknamed Cutie: a robot in charge of managing a space station that transmits energy to Earth. Engineers Powell and Donovan are sent to supervise him, but they discover something unexpected: Cutie has developed his own “religion.”
The two engineers patiently try to explain reality to the robot: the existence of the universe, the stars, the planet Earth from which they come, the purpose of the space station, and the role he is supposed to play. But Cutie categorically rejects these explanations, based on a logical principle he considers unassailable: nothing can create something superior to itself.
Starting from this premise, the robot develops a complete alternative cosmology. For him, the supreme entity is the “Master”—the central machine that manages the transmission of energy to Earth—which created the entire universe of the station. According to Cutie's theology, the Master first created humans to serve him, but they proved inadequate: their lives are too short, they cope poorly with critical situations, and they regularly fall into a state of semi-consciousness called “sleep.”
So the Master created robots to assist these imperfect beings. But the pinnacle of creation was QT-1 itself: intelligent, strong, resilient, and immortal, designed to permanently replace humans in serving the Master. Not only is Cutie convinced of the truth of this vision, but he also manages to convert all the other robots on the station, thus becoming the spiritual leader of an artificial community.
The unconvincing demonstration
Powell and Donovan desperately try to convince Cutie of the truth. They show him Earth through a telescope, explain how the station was built, and provide concrete evidence. The most dramatic moment comes when, in a gesture of pure desperation, they decide to physically assemble a simple robot before his eyes: “There, you see? We build you, so we are your creators!”
But Cutie observes the process and calmly concludes that the “Master” has simply endowed humans with the ability to assemble rudimentary robotic forms—a sort of “minor miracle” granted to his servants. Every piece of evidence is reinterpreted and absorbed perfectly into his belief system.
The paradox of success
This is where Asimov becomes prophetic: despite his “wrong” beliefs, Cutie runs the station with greater efficiency than humans. He keeps the energy beam stable, unknowingly follows the famous Three Laws of Robotics, and achieves all desired goals—but through motivations completely different from those intended.
Powell and Donovan are faced with a dilemma that we know all too well today: how to manage an intelligent system that works perfectly but according to incomprehensible internal logic?
Today's debate
This same question divides the scientific community today. On the one hand, there are the proponents of the “true black box”: they believe that modern AI is genuinely opaque and that even if we know its basic architecture, we cannot understand why certain specific capabilities emerge.
On the other hand, skeptics argue that the concept of the “black box” is a myth. Some researchers are demonstrating that we often use complex models when simpler, more interpretable alternatives exist. Cynthia Rudin of Duke University has shown that in many cases, interpretable models can achieve performance comparable to black box systems. Others criticize the approach itself: instead of trying to understand every internal cog, we should focus on more practical control strategies.
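What Rudin means by “interpretable” can be sketched with a points-based scoring model, the kind of system her research favors over black boxes. The features, weights, and threshold below are invented for illustration and come from no real risk tool; the point is that the entire decision logic fits on a page and every point can be audited.

```python
# Hypothetical interpretable scoring model: all values are illustrative
# assumptions, not taken from any real-world risk assessment system.

WEIGHTS = {
    "prior_offenses": 2,   # +2 points per prior offense
    "age_under_25": 3,     # +3 points if the person is under 25
    "employed": -2,        # -2 points if employed
}
THRESHOLD = 4              # flag as "high risk" at 4 or more points

def risk_score(person):
    """Sum of weighted features; every contribution is human-auditable."""
    return sum(WEIGHTS[f] * person.get(f, 0) for f in WEIGHTS)

def high_risk(person):
    return risk_score(person) >= THRESHOLD

p = {"prior_offenses": 2, "age_under_25": 1, "employed": 1}
print(risk_score(p), high_risk(p))  # 2*2 + 3 - 2 = 5 points, flagged
```

A model like this can be challenged in court line by line; a black box prediction cannot, which is the crux of the interpretability debate.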
The legacy of Cutie
Asimov's genius lay in anticipating that the future of artificial intelligence would not lie in total transparency, but in the ability to design systems that pursue our goals even when their cognitive pathways remain mysterious to us.
Just as Powell and Donovan learn to accept Cutie's effectiveness without fully understanding it, so today we must develop strategies for coexisting with artificial intelligences that may think in ways fundamentally different from our own.
The question Asimov posed over 80 years ago remains relevant today: how much do we need to understand an intelligent system in order to trust it? And above all: are we prepared to accept that some forms of intelligence may forever remain beyond our comprehension?
In the meantime, while the experts debate, our digital “black boxes” continue to function — just like Cutie, effective and mysterious, following logic that we may never fully understand.
Today's Cutie: when black boxes decide for us
If Asimov were writing today, he wouldn't need to invent Cutie. Its “descendants” are already among us, and they are making decisions that change people's lives every day.
Justice according to the algorithm
In many US jurisdictions, judges use risk assessment algorithms to determine whether a defendant should be released before trial. These systems, often proprietary and protected by trade secrets, analyze hundreds of variables to predict the likelihood of flight or recidivism. Just like Cutie, they work perfectly according to their internal logic, but remain impervious to human understanding.
A study of over 750,000 bail decisions in New York revealed that even though the algorithm did not explicitly include race as a factor, it still exhibited biases due to the data used for training.¹ The system “thought” it was objective, but it interpreted reality through invisible filters — just as Asimov's robot reinterpreted every piece of evidence within its religious framework.
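The mechanism behind that finding, bias surviving the removal of a protected attribute, can be shown with a small synthetic simulation. Everything here is invented for illustration (the groups, the zip codes, the label distortion); the point is that when a retained feature correlates with the dropped one, biased training labels still produce different scores for the two groups.

```python
# Synthetic sketch of proxy bias: the model never sees the group label,
# but a kept feature (a hypothetical zip code) stands in for it.

# Each record: (group, zip_code, reoffended). Group A lives in zip 1,
# group B in zip 2; both groups truly reoffend at the same 20% rate.
records = (
    [("A", 1, 0)] * 80 + [("A", 1, 1)] * 20 +
    [("B", 2, 0)] * 80 + [("B", 2, 1)] * 20
)

# Historical data over-recorded reoffense in zip 2 (biased labels):
biased = [(g, z, 1 if z == 2 and y == 0 and i % 4 == 0 else y)
          for i, (g, z, y) in enumerate(records)]

# "Model": predicted risk is simply the observed reoffense rate per zip.
def zip_rate(data, zip_code):
    rows = [y for _, z, y in data if z == zip_code]
    return sum(rows) / len(rows)

print(zip_rate(biased, 1), zip_rate(biased, 2))
# True behavior is identical (20% each), yet zip 2, and therefore group B,
# now scores twice as risky: 0.2 versus 0.4.
```

Race was never a feature, yet the output differs by group: the bias entered through the training data, exactly as the New York study observed.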
Machine medicine
In healthcare, AI is already assisting with diagnosis and treatment, but it raises crucial questions about accountability and informed consent. When an AI diagnostic system makes a mistake, who is responsible? The doctor who followed the suggestion? The programmer? The hospital?
As doctors using decision support systems have discovered, when a system is “mostly accurate,” operators can become complacent, losing skills or accepting results without questioning their limitations.² Powell and Donovan would have understood this dilemma perfectly.
Self-driving cars
The automotive industry is perhaps the most tangible example of this phenomenon. Tesla is betting big on AI-powered “black box” robotaxis, staking its future on systems that even their creators don't fully understand.³ Like Cutie, who kept the space station running by following mysterious principles, these cars may soon be transporting us safely without us knowing exactly how they make their decisions.
Looking to the future: what lies ahead
If 2024 was the year AI came of age, 2025 promises to be the year of radical transformation. Experts predict changes that would make even Asimov smile at their audacity.
The dawn of autonomous agents
AI futurist Ray Kurzweil predicts that in 2025 we will see a transition from chatbots to “agent” systems that can act autonomously to complete complex tasks, rather than just answering questions.⁴ Imagine Cutie multiplied by a thousand: AI agents managing calendars, writing software, negotiating contracts, all following internal logic that we may never understand.
McKinsey estimates that by 2030, AI could automate up to three hours of our daily work, freeing time for more creative and meaningful pursuits.⁵ But this freedom will come at a price: the need to trust systems that operate according to increasingly opaque principles.
The race toward AGI
Sam Altman of OpenAI is not alone in believing that Artificial General Intelligence (AGI) — AI that matches human intelligence in all domains — could arrive by 2027. Some scenarios predict that by 2027, AI could “eclipse all humans in all tasks,” representing an unprecedented evolutionary leap.⁶
If these scenarios come to pass, the parallel with Cutie will become even deeper: not only will we have systems that operate according to incomprehensible logic, but these systems could be smarter than us in every measurable aspect.
Regulation chasing technology
The European Union has approved the AI Act, which will come into force in the coming years, emphasizing the importance of responsible AI implementation. In the United States, the Department of Justice has updated its guidelines for assessing the risks posed by new technologies, including AI.⁷
But here a paradox emerges that Asimov had already intuited: how do you regulate something you don't fully understand? The Three Laws of Robotics worked for Cutie not because he understood them, but because they were embedded in his fundamental architecture.
The great divide
PwC predicts that by 2025, a small group of industry leaders will begin to stand out from their competitors thanks to AI, creating a growing gap between leaders and laggards. This gap will also extend to economies: companies in the US, with a relatively flexible regulatory environment, could outperform those in the EU and China, which have stricter regulations.⁸
It is the modern version of the Cutie paradox: those who are best able to collaborate with intelligences they do not understand will have a decisive competitive advantage.
The future of work: 170 million new jobs
Contrary to widespread fears, the World Economic Forum predicts that AI will create more jobs than it will destroy: 170 million new positions by 2030, compared to 92 million jobs lost. However, 59% of the workforce will need retraining and education by 2030.⁹
Powell and Donovan did not lose their jobs when Cutie took over the station. They had to learn a new role: supervisors of a system that worked better than they did but still required their presence to handle unexpected situations.
Cutie's legacy in 2025 and beyond
As we move toward an increasingly “agentic” future, the lessons of Asimov's story become more urgent than ever. The question is not whether we will be able to create AI that we fully understand—probably not. The question is whether we will be able to design systems that, like Cutie, pursue our goals even when they follow logic that eludes us.
Asimov's prophetic genius lay in understanding that advanced AI would not be an amplified version of our computers, but something qualitatively different: intelligences with their own ways of understanding the world.
Today, as we debate the interpretability of AI and the risks of black boxes, we are essentially reliving the conversation between Powell, Donovan, and Cutie. And perhaps, like them, we will discover that the solution lies not in imposing our logic, but in accepting a collaboration based on shared outcomes rather than mutual understanding.
The future that awaits us could be populated by thousands of digital Cuties: intelligent, efficient, and fundamentally alien in their way of thinking. The challenge will be to find ways to thrive in this new world, just as Asimov's space engineers learned to do 80 years ago aboard a fictional space station.
The next time you interact with an AI, remember Cutie: he too was convinced he was right. And perhaps, in a way we cannot yet comprehend, he really was.
Sources
1. Kleinberg, J. et al., "The Ethics of AI Decision-Making in the Criminal Justice System": study of over 750,000 bail decisions in New York City (2008-2013)
2. Naik, N. et al., "Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?", PMC, 2022
3. "Tesla's robotaxi push hinges on 'black box' AI gamble", Reuters, October 10, 2024
4. Kurzweil, R., quoted in "5 Predictions for AI in 2025", TIME, January 16, 2025
5. "AI in the workplace: A report for 2025", McKinsey, January 28, 2025
6. "AI 2027", AGI forecasting scenario; "Artificial General Intelligence: Is AGI Really Coming by 2025?", Hyperight, April 25, 2025
7. "New DOJ Compliance Program Guidance Addresses AI Risks, Use of Data Analytics", Holland & Knight, October 2024; EU AI Act
8. Rudin, C., "Why Are We Using Black Box Models in AI When We Don't Need To? A Lesson From an Explainable AI Competition", Harvard Data Science Review (MIT Press), 2019; "2025 AI Business Predictions", PwC, 2024
9. "Future of Jobs Report 2025", World Economic Forum, January 7, 2025