Anthropic has introduced Claude Haiku 4.5, the newest version of its compact AI model that emphasizes faster performance, lower costs, and stronger safety measures. Designed as a lightweight alternative within Anthropic’s Claude family, Haiku 4.5 offers roughly double the speed of Claude Sonnet 4 while operating at about a third of the cost. The company has positioned the model as a practical solution for developers and organizations seeking efficient AI systems that maintain reliable safeguards.
According to Anthropic, Haiku 4.5 achieves near-frontier performance on tasks such as coding, comparable to that of the more advanced models in its lineup. It’s currently accessible through Anthropic’s own platforms, as well as Amazon Bedrock and Google Cloud’s Vertex AI. Pricing starts at $1 per million input tokens and $5 per million output tokens, placing it among the more affordable options in the growing AI-as-a-service market.
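For developers weighing those numbers, the pricing maps directly onto per-request costs. The sketch below uses Anthropic's Python SDK to make a minimal call and estimate cost from the published rates; the model identifier string `claude-haiku-4-5` is an assumption for illustration, since the article doesn't specify it.

```python
# Minimal sketch: one request to Haiku 4.5 plus a cost estimate derived from
# the article's stated pricing ($1/M input tokens, $5/M output tokens).
import anthropic

INPUT_COST_PER_MTOK = 1.00   # dollars per million input tokens
OUTPUT_COST_PER_MTOK = 5.00  # dollars per million output tokens

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-haiku-4-5",  # assumed identifier; check Anthropic's docs
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this ticket in two sentences: ..."}],
)

# The API reports per-request token usage, so cost can be computed directly.
usage = message.usage
cost = (
    usage.input_tokens * INPUT_COST_PER_MTOK / 1_000_000
    + usage.output_tokens * OUTPUT_COST_PER_MTOK / 1_000_000
)
print(message.content[0].text)
print(f"~${cost:.6f} for {usage.input_tokens} input / {usage.output_tokens} output tokens")
```

At those rates, a request consuming 2,000 input tokens and 500 output tokens would run roughly $0.0045, the kind of margin that matters for the high-volume, real-time workloads Anthropic is targeting.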
Anthropic also claims this is its safest model to date, citing internal evaluations that found fewer misalignment issues than in any of the company’s previous releases. Haiku 4.5 has been classified at AI Safety Level 2 (ASL-2), a mid-tier designation reflecting moderate risk and carrying fewer operational restrictions than the ASL-3 designation assigned to its more powerful models. The company suggests this balance between safety and performance makes Haiku particularly well suited for real-time applications like chat assistants, coding copilots, and automated customer service tools.
The launch arrives amid heightened scrutiny of AI firms’ policy stances. Earlier this week, Bloomberg reported that Anthropic had drawn criticism from David Sacks, the Trump administration’s AI policy head, who accused the company of “regulatory capture” over its support for California’s new AI transparency law. Anthropic cofounder Jack Clark responded by reaffirming the company’s commitment to responsible regulation and noting alignment with federal goals on most AI policy matters.
In this context, Claude Haiku 4.5 serves not just as a technical update but as a statement of intent. Anthropic appears to be positioning itself as a company trying to reconcile rapid AI deployment with practical safety standards — a balance increasingly under debate as industry and government wrestle over how much oversight is enough.