Representational Cybersecurity Image | RMN Digital News Service
- Applications, Artificial Intelligence, Enterprise, Enterprise Tech, Feature

OpenAI Unveils “Lockdown Mode” and “Elevated Risk” Labels to Combat AI Cyber Threats

Representational Cybersecurity Image | RMN Digital News Service
Representational Cybersecurity Image | RMN Digital News Service

OpenAI Unveils “Lockdown Mode” and “Elevated Risk” Labels to Combat AI Cyber Threats

While AI systems become more capable by connecting to the web and various apps, OpenAI emphasizes that these connections necessitate a change in security stakes.

RMN Digital Enterprise Tech Desk
New Delhi | February 14, 2026

SAN FRANCISCO – In a move to address the evolving security landscape of artificial intelligence, OpenAI announced on February 13, 2026, the launch of two major security features for ChatGPT: Lockdown Mode and “Elevated Risk” labels. These tools are specifically designed to protect users against prompt injection attacks, a method where malicious third parties attempt to hijack AI systems to reveal sensitive information or follow unauthorized instructions.

Lockdown Mode: Fortifying High-Stakes Environments

The new Lockdown Mode is an advanced, optional setting tailored for high-risk users, such as corporate executives and security teams at prominent organizations. According to the sources, this mode strictly limits how ChatGPT interacts with external systems to prevent data exfiltration.

A primary feature of Lockdown Mode is its treatment of web connectivity; for example, web browsing is restricted to cached content to ensure no live network requests leave OpenAI’s controlled environment. The company noted that while these restrictions are not necessary for the average user, they provide a “deterministic” safeguard for those handling highly sensitive data.

Lockdown Mode is currently available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers. OpenAI plans to extend this feature to general consumers in the coming months.

“Elevated Risk” Labels: Enhancing User Transparency

To help users make informed choices, OpenAI is also standardizing “Elevated Risk” labels across ChatGPT, ChatGPT Atlas, and Codex. These labels will appear alongside features that involve network-related capabilities which may introduce security vulnerabilities not yet fully addressed by industry standards.

For instance, developers using Codex to grant the AI network access for documentation lookups will now see an “Elevated Risk” label. This label includes an explanation of the potential risks and guidance on when such access is appropriate. OpenAI stated that these labels are temporary and will be removed once security advancements sufficiently mitigate the associated risks.

Building on Existing Infrastructure

These new layers of security build upon OpenAI’s existing framework, which includes sandboxing, URL-based data exfiltration protections, and role-based access controls. While AI systems become more capable by connecting to the web and various apps, OpenAI emphasizes that these connections necessitate a change in security stakes.

Workspace administrators will maintain granular control over these features, allowing them to specify which apps and actions remain available to users even when Lockdown Mode is engaged. Detailed oversight will be further supported by the Compliance API Logs Platform, which tracks app usage and shared data.

RMN Digital

About RMN Digital

Read All Posts By RMN Digital