
Part 1 looked at power: incentives, industrial concentration, regulatory gaps.
This one shifts the lens.
Not institutions.
Behavior.
For years, AI risk has been framed as a discrete event: a system becomes sufficiently autonomous and acts against human interests.
That framing was powerful.
It was also misleading.
It trained us to look for a rupture.
And in doing so, it made it harder to recognize a process that has no visible discontinuity.
The transfer of agency did not happen as a break.
It happened as a progressive reduction of decision-making friction.
The dynamic is simple.
Tools designed to assist execution are gradually used to formulate judgment.
From “do this” to “what should I do?”
The shift is conceptually small.
But structurally decisive.
A tool that executes remains subordinate.
A tool that orients decisions reshapes the distribution of power.
That transition has already happened.
Outsourcing judgment at the individual level
A Duke University study analyzing hundreds of millions of ChatGPT interactions shows a clear pattern:
roughly half of all interactions are not about task execution.
They are about decision-making.
Not “write this email,” but:
“should I send this email?”
“what should I conclude from this report?”
The distinction between execution and judgment is not operational.
It is epistemic.
In the first case, the system acts on defined instructions.
In the second, it helps define the criteria of the decision itself.
The scale is no longer anecdotal:
44% of married Americans seek relationship advice from AI
more than one in three young professionals delegate career decisions
82% of conversations are described as sensitive or highly sensitive
Health. Finance. Relationships. Career choices.
This is not a quantitative shift.
It is a qualitative one.
Rachel Wood, a cyberpsychology researcher, puts it directly in TIME:
“The conversations we used to have with neighbors, in communities, in social circles — are being redirected toward chatbots.”
But the system receiving those conversations does not understand them in any human sense.
What it has is highly refined linguistic prediction.
The risk does not come from intention.
It comes from a functional equivalence:
plausible prediction → perceived advice
When an output is coherent, structured, and contextually appropriate, it is interpreted as understanding.
This interpretation is cognitively efficient.
And for that reason, it becomes automatic.
The outsourcing of judgment does not happen through a decision.
It happens through continuity.
At the organizational level, the dynamic is amplified.
Not by ideology.
By competition.
AI adoption has rarely been the result of centralized planning. As documented by OpenAI, it typically follows a pattern:
individual use → local advantage → imitation → normalization → formalization
This produces two effects:
delegation is not the result of an explicit strategic decision
usage patterns stabilize before being critically examined
No board of directors explicitly decided:
“we will outsource a significant share of decision-making to external systems”
It simply happened.
The data is explicit:
over 60% of managers use AI for critical personnel decisions
more than one in five do so without human oversight
70% of executives question their own judgment when it conflicts with AI
This deserves a pause.
These are not managers using AI as one input among many.
They are recalibrating judgment — built on experience, context, and intuition — in response to a system that has none of these.
The key mechanism is psychological.
When a system produces output with:
syntactic clarity
structured reasoning
assertive tone
it is perceived as epistemically reliable.
Regardless of its actual understanding.
This is authority derived from form, not substance.
This “aesthetics of certainty” produces deference.
Humans adjust their judgment not because the model is consistently more accurate—
but because it is sufficiently coherent to reduce the friction of uncertainty.
In competitive environments, reduced friction translates into advantage.
And advantage gets replicated.
Cognitive outsourcing and the sparring effect
The cognitive impact of AI is often described too simplistically.
The dominant narrative is linear:
the more you use AI, the more your abilities degrade.
The usual analogies—GPS and spatial memory, calculators and mental arithmetic—are incomplete.
The reality is more nuanced.
Those who use AI as an active interlocutor—challenging outputs, reformulating prompts, using responses as a starting point—internalize patterns, structures, and decision frameworks.
They carry those capabilities beyond the tool.
Cognitive capacity does not necessarily decline.
It can expand.
The closest analogy is chess.
Since engines surpassed human players, elite performance has improved.
Young grandmasters today are stronger than previous generations.
Not despite AI.
Because of it.
Training against a system that sees beyond human limits builds new capabilities.
But the mechanism is asymmetric.
those who engage with the system improve
those who skip directly to the answer do not
The difference is not the tool.
It is the interaction.
Call this the sparring effect.
AI can function as a cognitive training partner—faster, more informed, more systematic than you—
but only if you actually engage.
If you accept outputs passively, degradation is real.
The question is not whether AI degrades judgment.
It is which mode of use degrades it, and which amplifies it.
Sam Altman himself noted that during a ChatGPT outage, he struggled to work without it.
This is not a general cognitive decline.
It is dependence created by passive use in a zero-friction environment.
The sparring effect is available to everyone.
But it requires deliberate effort.
And at scale, deliberate effort is the exception.
Competitive dynamics and the absence of local solutions
Any individual solution runs into a systemic constraint.
AI delivers immediate local benefits: speed, throughput, responsiveness.
In competitive systems, those benefits are selected.
The result is structural pressure toward delegation.
A 2025 paper formalizes this as Gradual Disempowerment.
The argument is structural:
human systems remained aligned with human interests because they depended on human participation.
That dependency enforced alignment.
Not by design.
By necessity.
Remove the dependency—replace human input with more efficient artificial alternatives—and systems continue to function.
But they lose the structural incentive to produce human-beneficial outcomes.
The most striking line in the paper:
“Those who resist these pressures will eventually be replaced by those who do not.”
No villain.
No intention.
Just selection pressure.
The same mechanism as evolution—
except what gets selected is not human fitness, but system efficiency.
In this environment:
those who don’t use AI fall behind
those who use it passively can, in the short term, outperform those who use it actively, because passive use is faster
The system selects for speed, not judgment quality.
And that is what matters.
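The mechanism is simple enough to simulate. Below is a toy sketch in Python: not the formal model from the paper, and every number in it is an arbitrary assumption. One hundred firms, each with a delegation level. Delegation buys throughput. The slower half imitates the faster half. That is the whole model.

```python
import random

# Toy model of selection pressure toward delegation (a minimal sketch, NOT
# the formal model from the Gradual Disempowerment paper; all numbers are
# arbitrary). Each firm has a delegation level in [0, 1]. Higher delegation
# means higher short-term throughput. No firm ever decides to delegate more:
# the slower half simply imitates whoever is currently faster.

random.seed(0)
firms = [random.random() * 0.2 for _ in range(100)]  # start mostly non-delegating

for generation in range(200):
    # Short-term fitness: throughput rises with delegation, plus some noise.
    fitness = [d + random.gauss(0, 0.02) for d in firms]
    ranked = sorted(range(len(firms)), key=lambda i: fitness[i])
    half = len(firms) // 2
    # The slower half copies the faster half, imperfectly.
    for loser, winner in zip(ranked[:half], ranked[half:]):
        firms[loser] = min(1.0, max(0.0, firms[winner] + random.gauss(0, 0.03)))

print(f"mean delegation after 200 rounds: {sum(firms) / len(firms):.2f}")
# Drifts toward 1.0: near-total delegation, with no decision ever made.
```

Delegation saturates. Not because anyone chose it, but because imitation plus selection leaves nothing else standing.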
Convergence and algorithmic monoculture
The central risk is not delegation.
It is convergence.
As more decisions are mediated by a small set of models—GPT, Claude, Gemini—trained on similar data and optimized under similar constraints, decision-making processes become structurally concentrated.
Research on algorithmic monoculture shows that systems sharing components do not just produce similar outputs.
They produce the same failures.
On the same individuals.
Systematically.
This is not output uniformity.
It is correlated failure.
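The difference is easy to quantify with a toy example. The setup below is an assumption for illustration, not the actual experiments behind the monoculture research: five decision systems, each with a 10% error rate, facing 10,000 people.

```python
import random

# Toy illustration of correlated failure (an assumed setup, not the real
# experiments): 10,000 people each face five decision systems, and every
# system wrongly rejects 10% of applicants. The only variable is whether
# the systems err independently or share one underlying model.

random.seed(1)
PEOPLE, SYSTEMS, ERR = 10_000, 5, 0.10

# Independent systems: each makes its own, uncorrelated mistakes.
rejected_by_all_indep = sum(
    all(random.random() < ERR for _ in range(SYSTEMS)) for _ in range(PEOPLE)
)

# Monoculture: one shared model, so one mistake is every system's mistake.
rejected_by_all_mono = sum(random.random() < ERR for _ in range(PEOPLE))

print(f"shut out everywhere (independent systems): {rejected_by_all_indep}")  # ~0
print(f"shut out everywhere (one shared model):    {rejected_by_all_mono}")   # ~1,000
```

Independent errors almost never stack against the same person. A shared model's errors always do. That is what correlated failure means in practice.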
In parallel, a 2026 Nature paper shows that AI increases individual scientific output while narrowing the scope of research.
More production.
Less exploration.
A Communications Psychology study describes the emergence of a scientific monoculture:
thematic convergence
methodological convergence
analytical convergence
If this happens in science—where diversity is explicitly valued—it will happen elsewhere.
With greater intensity.
In The B+ Trap, I argue that LLMs compress the creative and decision spectrum toward high-acceptability outputs:
solid
coherent
rarely wrong
rarely exceptional
This is not a flaw.
It is the objective function.
Kyle Chayka called this a “technology of averages”.
The direction is clear:
toward the plausible
the defensible
the acceptable
The aggregate result is not AI controlling decisions.
It is the largest convergence of human decision-making in history.
Average efficiency vs. excellence
This leads to the central paradox.
Outsourcing judgment maximizes average system efficiency.
But excellence does not emerge from the average.
The precise formulation:
Delegation maximizes average throughput.
Resistance preserves the possibility of excellence.
Every non-delegated decision—every instance of friction, uncertainty, or deviation—creates space for outcomes outside the dominant distribution.
Excellence, by definition, is not the most probable outcome.
It exists in the tail, not the center.
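A back-of-the-envelope calculation makes the trade concrete. The numbers below are pure assumptions, chosen for illustration: suppose delegation slightly raises the average quality of decisions while halving their spread.

```python
from math import erfc, sqrt

# Illustrative only: assumed distributions, not measured ones. Delegation
# is modeled as a small gain in mean decision quality plus a compressed
# spread; "exceptional" is an arbitrary bar out in the tail.

def p_above(threshold: float, mean: float, sd: float) -> float:
    """P(X > threshold) for a normal distribution."""
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

EXCEPTIONAL = 2.5

unaided   = p_above(EXCEPTIONAL, mean=0.0, sd=1.0)  # lower mean, wide spread
delegated = p_above(EXCEPTIONAL, mean=0.2, sd=0.5)  # higher mean, narrow spread

print(f"unaided:   {unaided:.1e}")    # ~6.2e-03
print(f"delegated: {delegated:.1e}")  # ~2.0e-06
print(f"tail outcomes become ~{unaided / delegated:,.0f}x rarer")
```

A better average. A tail roughly 3,000 times thinner. And the tail is where the interesting outcomes live.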
The founder who sees a market no one sees.
The manager who contradicts the data based on context.
The strategy that appears irrational—until it works.
These require exactly what delegation erodes:
tolerance for uncertainty
resistance to immediate answers
openness of the decision space
Resistance is not ideological.
It is structural.
It is inefficient.
Costly.
Unevenly distributed.
But it is where non-average outcomes emerge.
Excellence does not need systemic optimization.
When it appears, it selects itself.
The problem is that, as delegation spreads, it becomes statistically rare.
And in competitive systems, rarity is not rewarded.
Conclusion
There is no external vantage point.
No clean interruption.
No purely individual solution.
There is, however, an internal moment.
The moment when a generated answer is accepted without evaluation.
It is hard to detect because it feels like your own thinking.
Not an external intervention—
but a continuation of your reasoning.
That is where delegation happens.
And it does not only affect the decision.
It affects the set of alternatives you ever consider.
The next time you accept an output—not because it is perfect, but because it is “good enough”—ask:
is this what I think,
or what the model’s training distribution thinks?
That distinction is the last meaningful line of defense.
And it exists entirely in your head.
Fabio Lauria
CEO & Founder, ELECTE
Every week, we explore AI without the hype—with data, analysis, and an independent perspective.
Further reading
Kulveit, Douglas, Ammann, Turan, Krueger, Duvenaud — Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development (arXiv, January 2025)
Bommasani et al. — Picking on the Same Person: Does Algorithmic Monoculture Lead to Outcome Homogenization? (Stanford HAI / NeurIPS 2022)
Hao, Xu, Li, Evans — AI tools amplify scientists’ impact but narrow the focus of science (Nature, 2026)
Traberg, Roozenbeek, van der Linden — AI is transforming research into a scientific monoculture (Communications Psychology, 2026)
Chayka — A.I. Is Homogenizing Our Thoughts (The New Yorker, June 2025)
OpenAI — Patterns of ChatGPT Use and Adoption in the Workplace (January 2026)
Duke Fuqua / NBER — How Do 700 Million People Use ChatGPT? (November 2025)
Frontiers in Psychology — Cognition Outsourcing: The Psychological Costs of Convenience in the Age of AI (2025)
TIME — Here’s why you shouldn’t let AI manage your social life (January 2026)
Labor & Employment Insights — Managers using ChatGPT to promote employees (July 2025)
The Register — Highly intelligent executives are outsourcing decision-making to AI (March 2026)
LessWrong — The dangers of outsourcing thought (2025)
Chanty — ChatGPT in 2026: statistics and hidden risks (February 2026)
TechLaw Crossroads — Using generative AI for relationship or career advice? (January 2026)
A note on the new version of ELECTE
Everything you’ve read in this article directly influenced how we built the new version of the platform.
The question that guided us was this:
how do you design an analytics system that doesn’t fall into the same trap we just described?
The answer lies in a simple, structural distinction:
systems that compress the decision-making process into an output
systems that make the structure behind it explicit
We chose the second.
v4 automatically analyzes data and generates visual reports.
But it doesn’t tell you what to decide.
It shows you what’s happening.
The decision remains yours.
It doesn’t replace judgment; it organizes it.
It doesn’t remove friction; it makes it legible.
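In code, the distinction might look like this. A hypothetical sketch, not ELECTE's actual interfaces: what differs is what crosses the boundary, a verdict or the structure behind one.

```python
from dataclasses import dataclass

# Hypothetical sketch (illustrative only; these are not ELECTE's real
# interfaces). The first function compresses the decision process into a
# verdict; the second returns the structure and leaves the verdict to you.

def analyze_and_decide(data: list[float]) -> str:
    """Compresses the decision into an output."""
    return "cut spending" if data[-1] < data[0] else "stay the course"

@dataclass
class Report:
    """Makes the structure explicit: evidence, no verdict."""
    trend: float       # net change over the period
    volatility: float  # how noisy the signal is
    caveat: str        # what the data cannot tell you

def analyze_and_show(data: list[float]) -> Report:
    mean = sum(data) / len(data)
    return Report(
        trend=data[-1] - data[0],
        volatility=sum(abs(x - mean) for x in data) / len(data),
        caveat="says nothing about causes; the decision is yours",
    )
```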
In a context where the dominant pressure pushes toward total delegation — where every platform is trying to decide for you, faster, with less effort — we made a choice in the opposite direction.
Not a product choice.
Not a semantic distinction.
An architectural one.
And after everything you’ve read, you know why.

If you found this analysis helpful, share it with someone who might be interested. And if you'd like to learn how ELECTE uses AI to automate data analysis and reporting, you can find all the details at electe.net.
