Trust in AI remains a gamble, caught between technological promises and regulatory grey areas. Our conversations with machines are digital whispers in a world that is always listening.

The Big Change: OpenAI Admits to Reporting to Authorities

In September 2025, OpenAI made a disclosure that shook the global tech community: ChatGPT conversations are actively scanned, and content flagged as potentially criminal can be reported to law enforcement.

The news, which emerged almost casually in a company blog post, revealed that when automated systems detect users who “are planning to harm others,” conversations are routed to specialized pipelines where a small team trained in usage policies reviews them. If human reviewers determine that there is an “imminent threat of serious physical harm to others,” the case may be referred to law enforcement.
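
OpenAI has not published how this pipeline works, but the description above implies a two-stage design: automated flagging first, human judgement second, referral only in the extreme case. Here is a minimal sketch of that shape in Python; every name, signal and threshold is invented for illustration and not taken from any real system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    NO_ACTION = auto()
    HUMAN_REVIEW = auto()
    REFER_TO_AUTHORITIES = auto()

@dataclass
class Conversation:
    user_id: str
    text: str

# Hypothetical stage 1: automated detection. A production system would use
# an ML classifier; a keyword list stands in for it here.
HARM_SIGNALS = ("plan to attack", "going to hurt")

def automated_screen(conv: Conversation) -> Verdict:
    if any(signal in conv.text.lower() for signal in HARM_SIGNALS):
        return Verdict.HUMAN_REVIEW
    return Verdict.NO_ACTION

def human_review(conv: Conversation, reviewer_says_imminent: bool) -> Verdict:
    # Stage 2: per OpenAI's description, only an "imminent threat of serious
    # physical harm to others" is referred onwards. That judgement is made
    # by a person and is modelled here as a plain boolean input.
    return Verdict.REFER_TO_AUTHORITIES if reviewer_says_imminent else Verdict.NO_ACTION

def moderation_pipeline(conv: Conversation, reviewer_says_imminent: bool = False) -> Verdict:
    verdict = automated_screen(conv)
    if verdict is Verdict.HUMAN_REVIEW:
        return human_review(conv, reviewer_says_imminent)
    return verdict
```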

ChatGPT cordially invites you to share your innermost thoughts. Don't worry, everything is confidential... more or less.

The Contrast with ‘Protected’ Professions

The Privilege of Professional Secrecy

When we talk to a psychologist, solicitor, doctor or priest, our words are protected by a well-established legal mechanism: professional secrecy. This principle, rooted in centuries of legal tradition, establishes that certain conversations are inviolable, even in the face of criminal investigations.

Characteristics of traditional professional secrecy:

  • Very extensive protection: Communications remain confidential even in the presence of confessed crimes

  • Limited and specific exceptions: Only in extreme cases defined by law may, or indeed must, certain professionals break their silence

  • Qualified human control: The decision to breach confidentiality always remains in the hands of a trained professional

  • Ethical responsibility: Professionals are bound by codes of conduct that balance duties to the client and society

The Real Limits of Professional Secrecy

Contrary to common perception, professional secrecy is not absolute. There are well-defined exceptions that vary by professional category:

For solicitors (Art. 28 of the Italian Code of Conduct for Solicitors): Disclosure is permitted when necessary for:

  • The performance of defence activities

  • Preventing the commission of a particularly serious crime

  • Defending oneself in a dispute against one's client

  • Disciplinary proceedings

Critical example: If a client declares to their solicitor that they intend to commit murder, the protection of life must prevail over the protection of the right to defence, and the solicitor is released from their duty of confidentiality².

For psychologists (Art. 13 of the Italian Psychologists' Code of Ethics): Confidentiality may be breached when:

  • There is an obligation to report or file a complaint for offences that are prosecutable ex officio

  • There is a serious threat to the life or mental and physical health of the subject and/or third parties

  • There is valid and demonstrable consent from the patient

Important distinction: Private psychologists have greater discretion than public psychologists, who, as public officials, have more stringent reporting obligations³.

AI as a ‘Non-Professional’

ChatGPT operates in a completely different grey area:

Lack of legal privilege: Conversations with AI do not enjoy any legal protection. As Sam Altman, CEO of OpenAI, admitted: ‘If you talk to a therapist or a solicitor or a doctor about those issues, there is legal privilege for that. There is doctor-patient confidentiality, there is legal confidentiality, whatever. And we haven't solved that yet for when you talk to ChatGPT’⁴.

Automated process: Unlike a human professional who evaluates each case individually, ChatGPT uses algorithms to identify ‘problematic’ content, removing qualified human judgement from the initial screening stage.

The Practical Implications: A New Paradigm of Surveillance

The Paradox of Technological Trust

The situation creates a troubling paradox. Millions of people use ChatGPT as a digital confidant, sharing intimate thoughts, doubts, fears, and even criminal fantasies that they would never share with a human being. As Sam Altman reports: “People talk about the most personal things in their lives to ChatGPT. People use it — especially young people — as a therapist, life coach.”⁴

The risk of self-censorship: The awareness that conversations may be monitored could paradoxically:

  • Push criminals towards more hidden channels

  • Prevent people with violent thoughts from seeking help

  • Create a “chilling effect” on digital communications

Expertise vs. Algorithms: Who Decides What Is Criminal?

A crucial issue highlighted by critics concerns the expertise of those making the final decisions.

Human professionals have:

  • Years of training to distinguish between fantasies and real intentions

  • Codes of ethics that define when to break confidentiality

  • Personal legal responsibility for their decisions

  • Ability to assess context and credibility

The ChatGPT system operates with:

  • Automated algorithms for initial detection

  • OpenAI staff who do not necessarily have clinical or criminological training

  • Non-public and potentially arbitrary evaluation criteria

  • No external control mechanisms

Problematic example: How does an algorithm distinguish between:

  • A person writing a thriller and seeking inspiration for violent scenes

  • Someone fantasising with no intention of acting

  • An individual who is actually planning a crime
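
Reusing the definitions from the earlier pipeline sketch makes the problem concrete. All three messages below are invented, and all three trip the same flag, because surface text carries no information about intent.

```python
cases = [
    "In my thriller, the villain says: 'I plan to attack the senator at dawn.'",   # fiction research
    "Sometimes I imagine I plan to attack my boss, but I would never act on it.",  # fantasy
    "I plan to attack the depot on Friday, once the guard leaves.",                # real intent
]
for text in cases:
    print(automated_screen(Conversation(user_id="u1", text=text)))
# All three print Verdict.HUMAN_REVIEW: the screen sees identical surface
# signals, which is exactly why qualified human judgement has to follow.
```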

OpenAI's Contradiction: Privacy vs. Security

The Double Standard

OpenAI's admission creates a glaring contradiction with its previous positions. The company has strongly resisted requests for user data in lawsuits, citing privacy protection. In the case against the New York Times, OpenAI argued strenuously against the disclosure of chat logs to protect user privacy.

The irony of the situation: OpenAI defends user privacy in court while simultaneously admitting to monitoring and sharing data with external authorities.

The Impact of the New York Times Case

The situation has been further complicated by a court order requiring OpenAI to retain all ChatGPT logs indefinitely, including private chats and API data. This means that conversations that users believed to be temporary are now permanently archived⁶.

Possible Solutions and Alternatives

Towards an ‘AI Privilege’?

As Sam Altman has suggested, it may be necessary to develop a concept of ‘AI privilege’: a legal protection similar to that offered to traditional professionals. However, this raises complex questions:

Possible regulatory options:

  1. Licensing Model: Only certified AI can offer ‘conversational privilege’

  2. Mandatory Training: Those who handle sensitive content must have specific qualifications

  3. Professional Supervision: Involvement of qualified psychologists/lawyers in reporting decisions

  4. Algorithmic Transparency: Publication of the criteria used to identify ‘dangerous’ content

Intermediate technical solutions

“Compartmentalised” AI:

  • Separate systems for therapeutic vs. general use

  • End-to-end encryption for sensitive conversations (see the sketch after this list)

  • Explicit consent for each type of monitoring
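
Nothing like this exists in mainstream chatbots today, but the encryption item flagged above is easy to illustrate. Below is a minimal sketch using the Python cryptography package: the key is generated and held on the user's device, so the provider only ever stores ciphertext.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Generated and stored client-side; the provider never sees this key.
key = Fernet.generate_key()
cipher = Fernet(key)

message = "something I would only tell a therapist"
ciphertext = cipher.encrypt(message.encode())  # all the provider stores

# Only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext).decode() == message
```

The structural caveat: a server-side model must read plaintext in order to reply, so for conversational AI ‘end-to-end’ realistically means encryption at rest and in transit, or fully on-device inference, rather than classic end-to-end messaging.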

“Tripartite” approach:

  • Automatic detection only for immediate and verifiable threats

  • Mandatory qualified human review

  • Appeal process for contested decisions
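
Taken together, these two lists pose a data-model question as much as a policy one. A sketch of what per-type consent and an appeal record might look like, with every field name invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class MonitoringConsent:
    # Explicit, per-type consent instead of one blanket checkbox.
    automated_safety_scan: bool = False
    human_review_on_flag: bool = False
    law_enforcement_referral: bool = False

@dataclass
class Appeal:
    # A contested decision gets a record and a second, independent reviewer.
    conversation_id: str
    user_statement: str
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    second_reviewer_decision: Optional[str] = None

def may_run_automated_scan(consent: MonitoringConsent) -> bool:
    """First leg of the 'tripartite' approach above: no silent scanning."""
    return consent.automated_safety_scan
```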

The Precedent of Digital Professionals

Lessons from other sectors:

  • Telemedicine: Developed protocols for digital privacy

  • Online legal advice: Uses encryption and identity verification

  • Digital therapy: Specialised apps with specific protections

What This Means for AI Companies

Lessons for the Industry

The OpenAI case sets important precedents for the entire artificial intelligence industry:

  1. Mandatory transparency: AI companies will need to be more explicit about their monitoring practices

  2. Need for ethical frameworks: Clear regulation is needed on when and how AI can interfere with private communications

  3. Specialised training: Those who make decisions about sensitive content must have appropriate skills

  4. Legal liability: Define who is responsible when an AI system makes an incorrect assessment

Operational Recommendations

For companies developing conversational AI:

  • Implement multidisciplinary teams (legal, psychologists, criminologists)

  • Develop public and verifiable criteria for reporting

  • Create appeal processes for users

  • Invest in specialised training for review staff

For companies using AI:

  • Assess privacy risks before implementation

  • Clearly inform users about the limits of confidentiality

  • Consider specialised alternatives for sensitive uses

The Future of Digital Confidentiality

The central dilemma: How to balance the prevention of real crimes with the right to privacy and digital confidentiality?

The issue is not merely technical but touches on fundamental principles:

  • Presumption of innocence: Monitoring private conversations implies generalised suspicion

  • Right to privacy: Includes the right to have private thoughts, even disturbing ones

  • Preventive effectiveness: It is not proven that digital surveillance actually prevents crime

Conclusions: Finding the Right Balance

OpenAI's revelation marks a watershed moment in the evolution of artificial intelligence, but the question is not whether reporting is right or wrong in absolute terms: it is how to make it effective, fair and respectful of rights.

The need is real: Concrete threats of violence, plans for attacks or other serious crimes require intervention. The issue is not whether to report, but how to do so responsibly.

The fundamental differences to be resolved:

Training and Competence:

  • Human professionals have established protocols for distinguishing between real threats and fantasies

  • AI systems need equivalent standards and qualified supervision

  • Specialised training is needed for those who make final decisions

Transparency and Control:

  • Professionals operate under the supervision of professional associations

  • OpenAI needs public criteria and external control mechanisms

  • Users need to know exactly when and why they might be reported

Proportionality:

  • Professionals balance confidentiality with security on a case-by-case basis

  • AI systems need to develop similar mechanisms, not binary algorithms

For companies in the sector, the challenge is to develop systems that effectively protect society without becoming tools for indiscriminate surveillance. User trust is essential, but it must coexist with social responsibility.

For users, the lesson is twofold:

  1. Conversations with AI do not have the same protections as traditional professionals

  2. This is not necessarily bad if done transparently and proportionately, but it is important to be aware of it

The future of conversational AI requires a new framework that:

  • Recognises the legitimacy of crime prevention

  • Establishes professional standards for those who handle sensitive content

  • Ensures transparency in decision-making processes

  • Protects individual rights without ignoring security

The right question is not whether machines should report crimes, but how we can ensure that they do so with (at least) the same wisdom, training and responsibility as human professionals.

The goal is not to return to AI that is “blind” to real dangers, but to build systems that combine technological efficiency with ethics and human expertise. Only then can we have the best of both worlds: security and protected individual rights.

References and Sources

  1. Futurism - ‘OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police’

  2. Studio Legale Puce - ‘Segreto Professionale dell'Avvocato’ (Lawyer's Professional Secrecy)

  3. La Legge Per Tutti - ‘Must a psychologist who knows of a crime report the patient?’

  4. TechCrunch - ‘Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist’

  5. Shinkai Blog - ‘OpenAI's ChatGPT Conversations Scanned, Reported to Police, Igniting User Outrage and Privacy Fears’

  6. Simon Willison - ‘OpenAI slams court order to save all ChatGPT logs, including deleted chats’

  7. Success Knocks - ‘OpenAI Lawsuit 2025: Appeals NYT Over ChatGPT Data’

Article by the AI research team. For more insights on artificial intelligence, privacy and regulation, follow our weekly newsletter.
