
Harari's Thinking on AI: A Critical Analysis
Yuval Noah Harari, historian and philosopher known for his analyses of human civilization, has become an important voice in the debate on the risks of artificial intelligence. In his book “Nexus” and in public speeches, such as the one he gave at the United Nations in 2023, Harari argues that AI threatens democratic institutions, human intimacy and the collective search for truth. But how well-founded are these concerns?
Historical analogies: useful but limited
Harari often compares AI to the printing press, noting how both technologies initially destabilized society. He recalls how the printing press facilitated the spread of texts such as the “Malleus Maleficarum”, fueling the witch hunts, much as AI could spread harmful ideologies.
This comparison, however, overlooks a fundamental difference: while the impact of the printing press unfolded over centuries, the integration of AI into society is happening at unprecedented speed. Moreover, rapid institutional responses already exist today, such as the European Union's recent AI Act.
AI is not an autonomous actor
One important criticism of Harari's analysis concerns his tendency to anthropomorphize artificial intelligence. When he talks about systems that “form intimate relationships” to manipulate our beliefs or “create cultures and religions,” Harari attributes intentionality to tools that, in reality, merely optimize over statistical patterns in their training data.
The problems associated with social media algorithms, for example, do not arise because AI has “discovered” our susceptibility to outrage, but from the choices of companies that inevitably respond to market logic.
Regulation: the analogy with the FDA is insufficient
Harari proposes an AI regulatory body similar to the American Food and Drug Administration, which requires safety tests before public release. Although the idea is intuitively valid, it has its limitations:
Drugs follow linear development paths with clear risk parameters, while AI systems show emergent behaviors that are difficult to predict.
The development of AI is globally decentralized, making a single regulatory authority impractical.
The FDA model itself has limitations, such as delays in approvals and susceptibility to industry pressures.
Human resilience underestimated
In his dystopian narratives, Harari often portrays societies passively succumbing to AI-driven illusions. He fears that humans may “trust these divine technologies as infallible”, eroding critical thinking.
This view underestimates human adaptability. Studies show that media literacy programs significantly improve skepticism towards disinformation, suggesting that education, not just regulation, could counter the risks of AI.
Beyond the technology-economy dichotomy
A significant limitation in Harari's analysis lies in his binary view of the dynamics driving AI development. Rather than exploring the complex ecosystem in which the technology evolves, he tends to isolate AI as an independent variable from the economic context.
Market incentives are not inherently problematic, but they inevitably shape how AI is developed and implemented. Tech companies respond to market signals that reward speed of implementation and growth in engagement, not necessarily prudence or long-term social value.
The case of OpenAI illustrates this tension: the transition from a non-profit organization to an entity with a commercial component does not represent a moral “succumbing”, but a pragmatic recognition that innovation requires substantial capital.
A more balanced perspective would recognize that demonizing neither the technology nor the economic system offers concrete solutions.
Towards a more balanced discourse on AI
Harari's criticisms provide important warnings, but they suffer from deterministic thinking. By presenting AI as an existential threat, he risks promoting reactionary policies that stifle innovation.
History reminds us that humanity's greatest challenges have stimulated the most profound innovations. While we heed Harari's warnings, we must not exclude the possibility of a future in which AI amplifies human ingenuity rather than subverting it.
P.S. I still liked the book. The historical discussion is interesting, and the potential AI threats it presents (they're not real!) will appeal to science-fiction fans.