OpenAI has rolled out a new feature for ChatGPT that gives users more control over how long GPT-5 takes to reason through its responses. The update follows feedback that GPT-5’s “Thinking” mode, while often more accurate, could sometimes feel too slow.
Until now, GPT-5 offered two reasoning options: the standard “Thinking” mode and a faster, lighter “Thinking mini.” With the latest update, premium subscribers can now fine-tune response times by choosing different “thinking speeds.”
Here’s how the breakdown looks across subscription tiers:
- ChatGPT Plus and Business users: Standard and Extended modes
- ChatGPT Pro users: Light, Standard, Extended, and Heavy modes
The Standard setting is now the default across accounts, while Extended, previously the default for Plus subscribers, must be reselected manually. Pro users gain two extra modes: Light for faster responses and Heavy for deeper reasoning. OpenAI has not clarified how Light compares with the earlier “Thinking mini,” which was also designed to speed up responses.
Once selected, the chosen reasoning speed applies to all ongoing chats in the web app until changed again. The feature has not yet rolled out to mobile, though OpenAI says iOS and Android support will arrive in the coming weeks.
The practical difference between the speeds comes down to depth versus efficiency. Light and Standard aim to balance reasoning quality with responsiveness, while Extended and Heavy spend more time on answers that are meant to be more thorough and reliable. For tasks that demand higher precision, such as complex problem-solving or research, the slower modes may be preferable.
Alongside the thinking speed control, OpenAI has also updated ChatGPT’s online search functionality. According to the company, the model now handles queries with shopping intent better and displays products more accurately, while OpenAI continues to refine general search accuracy.
The update underscores OpenAI’s ongoing effort to balance performance, usability, and flexibility in GPT-5, after earlier criticism led to the return of legacy models for premium users. Giving people control over reasoning speed may help smooth over concerns that GPT-5’s deeper thinking comes at the cost of responsiveness.