
OpenAI Unveils GPT-5.4 Mini and Nano: A New Era of High-Speed, Efficient AI
In ChatGPT, the mini model is accessible to Free and Go users through the “Thinking” feature, while other tiers will use it as a rate-limit fallback.
RMN Digital AI Desk
New Delhi | March 18, 2026
SAN FRANCISCO — OpenAI announced the release of GPT-5.4 mini and nano on March 17, 2026, marking a significant advancement in the development of compact, high-performance artificial intelligence. These new models are designed to bring the capabilities of the flagship GPT-5.4 to faster, more cost-effective formats tailored for high-volume workloads and real-time applications.
Performance and Speed
The GPT-5.4 mini model offers a substantial performance boost, running more than 2x faster than the previous GPT-5 mini. Despite its reduced size, it closely rivals the larger GPT-5.4 model in specialized benchmarks. For instance, on the SWE-Bench Pro coding evaluation, GPT-5.4 mini achieved a 54.4% accuracy rate, nearing the 57.7% score of the standard GPT-5.4.
For users prioritizing cost above all else, GPT-5.4 nano serves as the smallest and most affordable entry in the lineup. It is specifically optimized for tasks like data extraction, classification, and supporting simpler coding subtasks.
Revolutionizing “Subagents” and Computer Use
A key feature of this release is the models’ suitability for complex AI systems. Developers are encouraged to use a “delegation” pattern where a flagship model like GPT-5.4 handles high-level planning while the faster mini subagents execute narrower tasks in parallel, such as searching codebases or reviewing documents.
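The delegation pattern described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's implementation: `call_model` is a hypothetical stand-in for a real API call, and the fixed subtask list substitutes for a plan the flagship model would actually generate.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(model: str, task: str) -> str:
    # Hypothetical stub for a model API call; a real system would invoke
    # the provider's SDK here. Model names follow the article.
    return f"[{model}] result for: {task}"

def run_delegation(goal: str) -> list[str]:
    # 1. The flagship model handles high-level planning (stubbed here).
    plan = call_model("gpt-5.4", f"plan: {goal}")
    subtasks = ["search codebase", "review documents"]  # illustrative plan output

    # 2. Faster mini subagents execute the narrower subtasks in parallel.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = list(pool.map(lambda t: call_model("gpt-5.4-mini", t), subtasks))

    return [plan] + results
```

The parallel fan-out is the point of the pattern: the expensive model runs once to plan, while cheap, fast subagents do the narrow work concurrently.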
The models also excel in multimodal tasks and “computer use”. GPT-5.4 mini can rapidly interpret dense user interface screenshots, scoring 72.1% on the OSWorld-Verified benchmark, which is nearly on par with the full-sized GPT-5.4’s 75.0%.
Availability and Pricing
OpenAI has made these models accessible across several platforms:
- GPT-5.4 mini: Now available in the API, ChatGPT, and Codex. In the API, it features a 400k context window and is priced at $0.75 per 1M input tokens and $4.50 per 1M output tokens.
- GPT-5.4 nano: Available exclusively via the API for high-efficiency tasks, costing $0.20 per 1M input tokens and $1.25 per 1M output tokens.
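Using the per-million-token prices listed above, a back-of-envelope comparison shows how the two tiers diverge on a high-volume workload. The workload figures below are illustrative, not from the announcement.

```python
# Per-1M-token API prices from the article (USD).
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total USD cost for a workload, given per-1M-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative workload: 10M input tokens, 2M output tokens.
mini_cost = cost("gpt-5.4-mini", 10_000_000, 2_000_000)  # 7.50 + 9.00 = 16.50
nano_cost = cost("gpt-5.4-nano", 10_000_000, 2_000_000)  # 2.00 + 2.50 = 4.50
```

At these rates, nano handles the same token volume for roughly a quarter of mini's price, which is why OpenAI positions it for extraction and classification workloads.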
In ChatGPT, the mini model powers the “Thinking” feature for Free and Go users, while other tiers will use it as a rate-limit fallback. Codex users also benefit: the mini model consumes only 30% of the standard GPT-5.4 quota, allowing more frequent iterations at roughly one-third the cost.