OpenAI has introduced its latest AI model, o1-pro, an enhanced version of its reasoning model o1, designed for developers seeking improved performance. However, the model comes with a notably high price tag.
According to OpenAI’s announcement, o1-pro uses increased computing power to deliver more consistent and accurate responses. The model also adds new features, including vision support, function calling, structured outputs, and compatibility with the Responses and Batch APIs.
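To make the feature list concrete, here is a minimal sketch of what a structured-outputs request to o1-pro through the Responses API might look like. The JSON schema, prompt, and exact payload shape below are illustrative assumptions, not taken from OpenAI's documentation; the payload is only constructed locally here, and the actual SDK call is shown in a comment.

```python
# Hypothetical request payload for o1-pro via the Responses API.
# The schema and field layout are assumptions for illustration only.
payload = {
    "model": "o1-pro",
    "input": "Extract the city and country from: 'Berlin, Germany'.",
    # Structured outputs: constrain the response to a JSON schema,
    # so the model must return {"city": ..., "country": ...}.
    "text": {
        "format": {
            "type": "json_schema",
            "name": "location",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        }
    },
}

# With the official SDK and an API key configured, this payload
# would be sent roughly as:
#   from openai import OpenAI
#   response = OpenAI().responses.create(**payload)
print(payload["model"])  # o1-pro
```

The point of the sketch is the shape of the request, not the exact field names: structured outputs let a developer pin the model's reply to a schema rather than parsing free text.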
The increased computational demands of o1-pro result in higher costs, with pricing set at $150 per 1 million input tokens and $600 per 1 million output tokens. This pricing structure positions o1-pro at twice the cost of OpenAI’s GPT-4.5 and ten times the cost of the baseline o1, as noted by TechCrunch.
An OpenAI spokesperson clarified that o1-pro is intended to provide more reliable responses to complex problems, fulfilling requests from the developer community.
OpenAI is targeting o1-pro at developers, with availability currently limited to those on tiers 1–5 of its API. Higher-tier developers are permitted larger request volumes within a given timeframe.
The high cost of o1-pro raises questions about developer willingness to adopt the model. User feedback from the model’s earlier rollout as part of ChatGPT Pro was mixed.
Reddit users expressed concerns about the model’s practical utility, with some describing it as “pathetic” and noting discrepancies between benchmark performance and real-world application. Conversely, other users found o1-pro beneficial for programming tasks, particularly when provided with detailed instructions.
The o1-pro model is now accessible on OpenAI’s developer platform to anyone willing to pay its rates. Its features, including vision support and structured outputs, cater to developers requiring advanced functionality; potential users must weigh those benefits against the model’s premium pricing.