OpenAI Replaces GPT-4o Mini with New GPT-4.1 Series in ChatGPT: Major Gains in Speed, Accuracy, and Coding Power
OpenAI replaces GPT-4o mini with GPT-4.1 and GPT-4.1 mini in ChatGPT, offering better coding, instruction following, and context handling at lower cost and latency.
OpenAI has introduced its latest generation of AI models—GPT-4.1 and GPT-4.1 mini—into ChatGPT, officially phasing out the previous GPT-4o mini. The company claims the new models offer significant improvements in performance, particularly in coding tasks, instruction following, and long-context comprehension, all while reducing cost and latency.
These new models, launched in the API alongside GPT-4.1 nano last month, are now available to ChatGPT Plus, Pro, and Team users through the model selector. Free-tier users will not have access to the GPT-4.1 family, while Enterprise and Edu users are expected to receive the update in the coming weeks.
What’s New in GPT-4.1 and GPT-4.1 Mini?
In an official blog post, OpenAI highlighted that both GPT-4.1 and its smaller counterpart outperform the GPT-4o series, particularly in complex tasks involving software development. The context window has been significantly extended—supporting up to 1 million tokens—allowing for better handling of large and complex documents or conversations.
“These models outperform GPT-4o and GPT-4o mini across the board, with major gains in coding and instruction following,” OpenAI said. “They also have larger context windows and improved long-context comprehension.”
Enhanced Coding Capabilities
OpenAI emphasized that GPT-4.1 is especially powerful in agentic coding tasks, such as:
Reliable frontend development
Accurate diff-based formatting
Reduced extraneous edits
Consistent tool usage
These features make GPT-4.1 a valuable tool for developers and IT teams relying on AI for software engineering workflows.
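For developers who want to experiment with these workflows, a minimal sketch of a diff-oriented request through the OpenAI Python SDK might look like the following. The sample function, system prompt, and review instructions are illustrative assumptions; only the gpt-4.1 model identifier comes from OpenAI's API release.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative source file we want the model to patch (hypothetical example).
buggy_code = """def average(values):
    return sum(values) / len(values)
"""

# Ask GPT-4.1 for a unified diff rather than a full rewrite,
# playing to its improved diff-based formatting.
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "system",
            "content": "You are a code reviewer. Reply only with a unified diff.",
        },
        {
            "role": "user",
            "content": f"Guard against empty input in this function:\n\n{buggy_code}",
        },
    ],
)

print(response.choices[0].message.content)
```

Requesting a diff instead of a rewritten file keeps responses small and makes the model's edits easy to review and apply in an automated pipeline.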
GPT-4.1 Mini: Better Performance at Lower Cost
The GPT-4.1 mini model marks a major leap in small-model performance. According to OpenAI, it:
Matches or outperforms GPT-4o in many intelligence benchmarks
Cuts latency roughly in half compared with GPT-4o
Reduces costs by up to 83%
This makes it an ideal option for users who need high performance without high resource demands.
GPT-4.1 Nano: Speed and Efficiency for API Users
Though not yet integrated into ChatGPT, GPT-4.1 nano is available via the API and is OpenAI's fastest and most affordable model. It supports the same extended 1 million token context window and is positioned for real-time tasks like classification and autocompletion.
It scores:
80.1% on MMLU
50.3% on GPQA
9.8% on Aider polyglot coding
These metrics even surpass those of GPT-4o mini, making nano the go-to model for lightweight, high-speed applications.
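For teams considering that kind of lightweight, high-speed use case, a minimal classification call against the nano model could look like the sketch below. The ticket text, label set, and system prompt are hypothetical; the gpt-4.1-nano identifier is the one OpenAI lists for the API.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical low-latency task: tag an incoming support ticket.
ticket = "My invoice shows the wrong billing address for last month."

response = client.chat.completions.create(
    model="gpt-4.1-nano",
    messages=[
        {
            "role": "system",
            "content": "Classify the ticket as one of: billing, technical, account. "
                       "Reply with the label only.",
        },
        {"role": "user", "content": ticket},
    ],
    max_tokens=5,  # keep the reply short for speed and cost
)

print(response.choices[0].message.content.strip())
```

Because nano is priced and tuned for high-volume, short-response calls, this kind of single-label prompt is where its speed and cost advantages are most visible.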