
Recent research has highlighted an interesting phenomenon: a “bidirectional” relationship between the biases present in artificial intelligence models and those of human thought. This interaction creates a mechanism that tends to amplify cognitive distortions in both directions.
This research shows that AI systems not only inherit human biases from training data but, once deployed, can intensify them, in turn influencing people's decision-making processes. If not managed correctly, this cycle risks progressively magnifying the initial distortions.
This phenomenon is particularly evident in important sectors such as:
Personnel selection processes
Facial recognition systems and risk analysis
In these areas, small initial distortions can be amplified through repeated interactions between human operators and automated systems, gradually transforming into significant differences in results.
The origins of bias
In human thought
The human mind naturally uses “thought shortcuts” that can introduce systematic errors into our judgments. Dual-process theory distinguishes between:
Fast, intuitive thinking (System 1, prone to stereotypes)
Slow, reflective thinking (System 2, able to correct biases)
For example, in the medical field, doctors tend to give too much weight to initial hypotheses, neglecting contrary evidence. This phenomenon, called “confirmation bias”, is replicated and amplified by AI systems trained on historical diagnostic data.
In AI models
Machine learning models perpetuate biases mainly through three channels:
Unbalanced training data that reflects historical inequalities
Selection of features that incorporate protected attributes (such as gender or ethnicity)
Feedback loops resulting from interactions with already biased human decisions
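The second channel is subtler than it sounds: simply deleting the protected attribute from the training data is not enough if another feature acts as a proxy for it. A minimal sketch of that idea, with entirely synthetic, illustrative data (the variable names and numbers are assumptions for demonstration, not taken from any study):

```python
# Illustrative sketch: removing a protected attribute does not remove bias
# if a remaining feature is strongly correlated with it (a "proxy").
# All data here is synthetic and for demonstration only.

def correlation(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

gender = [0, 0, 0, 0, 1, 1, 1, 1]                   # protected attribute, dropped from training
zip_code_income = [30, 32, 31, 29, 55, 54, 56, 53]  # proxy feature kept in the training set

# → 1.0 (a near-perfect proxy: the model can still "see" the protected attribute)
print(round(correlation(gender, zip_code_income), 2))
```

A model trained on the proxy feature alone can reproduce almost exactly the same group-level disparities as one trained on the protected attribute itself.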
A 2024 UCL study showed that a face-classification system trained on people's emotional judgments inherited a 4.7% tendency to label faces as “sad”, and then amplified this tendency to 11.3% in subsequent interactions with users.
How they amplify each other
Analysis of data from recruitment platforms shows that each cycle of human-algorithm collaboration increases gender bias by 8-14% through mutually reinforcing feedback mechanisms.
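The compounding mechanism can be sketched with a toy simulation. Every number and parameter below is an illustrative assumption, not a value from the studies cited: each cycle, humans partially anchor to the model's recommendations, and the next model is trained on those human decisions.

```python
# Toy simulation of a human-AI feedback loop amplifying an initial bias.
# All parameters are illustrative assumptions, not values from cited studies.

def run_feedback_loop(initial_bias: float, human_anchoring: float, cycles: int) -> list[float]:
    """Each cycle, the model is retrained on human decisions that were
    partly anchored to the model's previous (biased) recommendations."""
    bias = initial_bias
    history = [bias]
    for _ in range(cycles):
        # Humans add the model's skew (scaled by anchoring) to their own baseline.
        bias = initial_bias + human_anchoring * bias
        history.append(bias)
    return history

# Bias grows each cycle, converging toward initial_bias / (1 - human_anchoring).
print(run_feedback_loop(initial_bias=0.05, human_anchoring=0.5, cycles=3))
```

Even this crude model shows the qualitative pattern the studies describe: a modest starting distortion grows with every round of mutual reinforcement, rather than averaging out.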
When HR professionals receive AI-generated candidate shortlists already shaped by historical biases, their subsequent actions (such as the interview questions they choose or the performance evaluations they give) reinforce the model's distorted representations.
A 2025 meta-analysis of 47 studies found that three cycles of human-AI collaboration increased demographic disparities by 1.7–2.3 times in sectors such as healthcare, lending, and education.
Strategies for measuring and mitigating bias
Quantification through machine learning
The framework for measuring bias proposed by Dong et al. (2024) allows for the detection of bias without the need for “absolute truth” labels, by analyzing discrepancies in decision-making patterns between protected groups.
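One common label-free check in this spirit (a simplified sketch, not necessarily the exact metric of Dong et al.) compares positive-decision rates across protected groups. This “selection-rate gap” needs only the decisions themselves, never a ground-truth label:

```python
# Sketch of a label-free bias check: compare positive-decision rates
# across protected groups using only (group, decision) pairs.
from collections import defaultdict

def selection_rate_gap(decisions: list[tuple[str, int]]) -> float:
    """Max minus min positive-decision rate across groups (0 = parity)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, decision in decisions:
        counts[group][0] += decision
        counts[group][1] += 1
    rates = [positives / total for positives, total in counts.values()]
    return max(rates) - min(rates)

data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),  # group A: 75% selected
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]  # group B: 25% selected
print(selection_rate_gap(data))  # → 0.5
```

The appeal of this family of metrics is exactly what the text notes: an auditor can flag a skewed decision pattern without having to establish what the “correct” decision for each individual would have been.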
Cognitive interventions
The “algorithmic mirror” technique developed by UCL researchers reduced gender bias in promotion decisions by 41% by showing managers how their own historical choices would look if an AI system had made them.
Training protocols that alternate between AI assistance and autonomous decision-making are particularly promising, reducing the effects of bias transfer from 17% to 6% in clinical diagnostic studies.
Implications for society
Organizations that implement AI systems without taking into account interactions with human biases face amplified legal and operational risks.
Analysis of workplace discrimination cases shows that claims involving AI-assisted hiring succeed 28% more often than those involving traditional human-led processes, because the traces left by algorithmic decisions provide clearer evidence of disparate impact.
Towards an artificial intelligence that respects freedom and efficiency
The correlation between algorithmic biases and limitations on freedom of choice requires us to rethink technological development from the perspective of individual responsibility and safeguarding market efficiency. It is essential to ensure that AI becomes a tool for expanding opportunities, not for narrowing them.
Promising directions include:
Market solutions that incentivize the development of unbiased algorithms
Greater transparency in automated decision-making processes
Deregulation that favors competition between different technological solutions
Only through responsible self-regulation of the sector, combined with freedom of choice for users, can we ensure that technological innovation remains a driver of prosperity and opportunity for all those willing to put their skills to the test.
Welcome to Electe’s Newsletter - English