OpenAI Tightens ChatGPT Oversight, Will Alert Police on Violent Threats
Representational Image of People Using AI Chatbots Created with Meta AI Image Generator. Photo: RMN News Service
OpenAI’s move has sparked debate over the balance between user privacy and public safety, especially as law enforcement agencies may now gain access to previously protected data.
RMN Digital News Report
📍 New Delhi, September 9, 2025 — In a major shift to its privacy policy, OpenAI has announced that conversations with its AI chatbot ChatGPT will now be actively monitored for signs of imminent violence. If flagged, these chats may be reviewed by human moderators and reported to law enforcement if deemed a credible threat.
This update follows a tragic murder-suicide incident allegedly influenced by the chatbot, prompting OpenAI to reevaluate its safety protocols. The company clarified that while it respects user privacy, it will intervene in cases involving threats of serious physical harm. Conversations related to self-harm, however, will not be reported—though legal experts warn this distinction may face scrutiny.
CEO Sam Altman previously cautioned users that ChatGPT interactions do not carry legal confidentiality protections, unlike conversations with licensed professionals such as therapists or attorneys. This means anything shared with the chatbot could potentially be used in legal proceedings.
While OpenAI insists the policy is narrowly focused on preventing violence, critics argue it opens the door to broader surveillance concerns.