OpenAI has rolled out an updated image generation model for ChatGPT, dubbed Images 2, along with fresh enterprise initiatives for its Codex coding tool and minor refinements to its iOS widgets.
The new model addresses longstanding shortcomings in earlier versions, particularly around rendering legible text within images—a persistent frustration for users attempting anything beyond simple visuals. It now supports resolutions up to 2K and a wider range of aspect ratios, from wide panoramic formats to tall vertical ones. OpenAI demonstrated the system creating cohesive layouts resembling full magazine spreads, complete with headlines, body copy, and illustrations generated in consistent styles. A “Thinking” mode adds deeper reasoning steps before output, while an “Instant” version prioritizes speed. The model can also query the web for current information to inform its generations and shows improved handling of non-Latin scripts such as Japanese, Korean, Chinese, Hindi, and Bengali.
These capabilities build on the rapid evolution of consumer-facing AI image tools since the debut of the DALL-E models a few years ago. What once produced charming but error-prone results—garbled signage, inconsistent typography, or stylistic drift across a series—now edges closer to practical design workflows. Still, the suggestion that the model can reliably turn out full magazine-style layouts invites scrutiny; professional layout software and human editors remain essential for anything destined for print or high-stakes publication. The integration of real-time web research is a pragmatic step, though it raises familiar questions about accuracy, copyright, and the blurring line between generated and sourced material.
On the enterprise side, OpenAI continues pushing Codex beyond its coding roots. Recent Mac app updates introduced agentic computer control—allowing the tool to navigate desktop applications, click, type, and reason over screen content—along with built-in image generation and a feature called Chronicle that pulls context from recent activity to reduce repetitive prompting. The new Codex Labs program embeds OpenAI specialists inside organizations for workshops and hands-on integration sessions. The goal is to help teams across departments—not just developers—turn fragmented data into briefs, plans, checklists, and follow-through actions. Partnerships with major consultancies aim to accelerate adoption, reflecting a broader industry shift toward embedding AI agents into everyday business processes rather than treating them as isolated productivity boosters.
A small bit of polish arrived for the ChatGPT widget on iPhone and iPad, with more consistent icons that align better with the rest of the interface. It is a modest refinement, yet one that underscores OpenAI’s attention to the Apple ecosystem, where many users first encounter its tools.
Taken together, these updates illustrate OpenAI’s dual focus: consumer-facing creativity tools that feel incrementally more capable, and deeper enterprise scaffolding designed for repeatable deployment at scale. The image model’s advances in text and layout are welcome, but they also highlight how much of the heavy lifting in professional design still depends on human judgment and established workflows. As Codex expands its reach beyond code, the real test will be whether organizations can integrate these agents without creating new layers of complexity or dependency. For now, the announcements signal steady iteration rather than a fundamental leap, consistent with the measured pace that has defined AI development in 2026.
