Anthropic is expanding the capabilities of its coding assistant with the introduction of Code Review, a new feature built into its Claude Code platform that aims to help developers identify potential problems in software changes before they are merged into production. The update reflects a broader trend in AI-assisted development tools that attempt to automate parts of the traditional code review process, particularly as the volume of AI-generated code continues to grow.
Code Review works by automatically analyzing pull requests once they are opened. Instead of relying on a single automated pass, the system launches multiple AI agents that examine the code simultaneously. Each agent searches for possible bugs, problematic logic, or risky changes. The system then cross-checks those findings to filter out false positives and prioritizes the most significant issues.
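Anthropic has not published the internals of this pipeline, but the cross-checking step it describes — keeping only findings that more than one agent independently reports, then ranking survivors by severity — can be sketched in a few lines of Python. All names, the `Finding` structure, and the agreement threshold here are illustrative assumptions, not the actual implementation:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One issue reported by a review agent (hypothetical shape)."""
    file: str
    line: int
    severity: int  # higher = more serious
    message: str


def cross_check(agent_findings: list[list[Finding]],
                min_agreement: int = 2) -> list[Finding]:
    """Keep findings reported by at least `min_agreement` agents
    (a simple false-positive filter), then rank the survivors by
    severity so the most significant issues come first."""
    counts = Counter(f for findings in agent_findings for f in findings)
    confirmed = [f for f, n in counts.items() if n >= min_agreement]
    return sorted(confirmed, key=lambda f: f.severity, reverse=True)
```

A finding reported by only one agent is dropped, which trades some recall for a much lower volume of noise — consistent with the stated goal of not flooding the pull request with automated warnings.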
The results appear directly inside the pull request as a summary comment highlighting the most relevant problems, along with detailed inline notes attached to specific sections of code. In practice, the goal is to surface meaningful feedback without forcing developers to sift through large volumes of automated warnings that may not be useful.
One of the distinguishing aspects of the feature is how it adapts its analysis depending on the size and complexity of the code changes. Smaller pull requests receive a lighter inspection, while larger or more complicated updates trigger additional AI agents and deeper analysis. According to Anthropic’s internal testing, the system typically completes a full review in roughly twenty minutes for an average pull request.
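The size-based tiering described above amounts to a simple dispatch on the scale of the diff. Anthropic has not disclosed its thresholds or agent counts, so the cutoffs below are purely illustrative:

```python
def review_tier(lines_changed: int, files_touched: int) -> tuple[str, int]:
    """Map a pull request's size to a review depth and an agent count.
    Thresholds and agent counts are hypothetical examples only."""
    if lines_changed <= 50 and files_touched <= 3:
        return ("light", 1)      # small change: single lightweight pass
    if lines_changed <= 500:
        return ("standard", 3)   # typical PR: a few parallel agents
    return ("deep", 6)           # large or complex: extra agents, deeper analysis
```

The appeal of this shape is that review cost scales with risk: a one-line fix does not pay for the same analysis as a thousand-line refactor.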
The company says the feature was developed partly in response to internal changes in development workflows. Over the past year, the amount of code generated per engineer within the organization reportedly increased by around 200 percent, largely due to AI-assisted programming tools. As the volume of code grows, manual review processes become harder to maintain at scale, creating pressure for automated systems that can assist with early bug detection.
Anthropic now runs the system across most internal pull requests and reports an increase in substantive feedback during reviews. While that does not eliminate the need for human oversight, it may help developers catch issues earlier in the process and reduce the time spent on repetitive inspection tasks.
The Code Review feature is currently rolling out as a research preview for Claude Code Teams and Enterprise plans. It is not a lightweight add-on, however: the service is billed by token usage, and Anthropic estimates that each automated review typically costs between fifteen and twenty-five dollars, depending on the size and complexity of the pull request.

To address potential cost concerns for organizations, the company has introduced administrative controls including monthly usage caps, repository-level restrictions, and an analytics dashboard. These tools allow engineering managers to track how many pull requests are reviewed, monitor acceptance rates of suggested changes, and estimate overall spending.
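A monthly usage cap of the kind described can be modeled as a small ledger that gates new reviews against remaining budget. This is a minimal sketch, not Anthropic's dashboard; the class name, methods, and the flat per-review cost (drawn from the quoted $15–$25 range) are assumptions:

```python
class ReviewBudget:
    """Track monthly automated-review spend against a cap (hypothetical model)."""

    def __init__(self, monthly_cap_usd: float) -> None:
        self.cap = monthly_cap_usd
        self.spent = 0.0
        self.reviews = 0

    def can_review(self, estimated_cost_usd: float) -> bool:
        """Allow a new review only if it fits within the remaining budget."""
        return self.spent + estimated_cost_usd <= self.cap

    def record(self, cost_usd: float) -> None:
        """Log a completed review's actual cost."""
        self.spent += cost_usd
        self.reviews += 1

    def remaining(self) -> float:
        return self.cap - self.spent
```

With a $100 monthly cap and reviews averaging $25, the fourth review exhausts the budget and a fifth is declined until the cap resets — the same accept/deny decision an engineering manager would configure through repository-level restrictions.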
The launch comes as Claude Code continues to expand its presence in the commercial developer tools market. Anthropic reports that the platform's annualized revenue run rate has surpassed 2.5 billion dollars, more than doubling since early 2026. Business subscriptions have also grown significantly during that period, with enterprise customers now accounting for more than half of the platform's total revenue.
AI-assisted coding tools are becoming a routine part of software development workflows, and automated code review is emerging as one of the next areas for experimentation. While systems like this can accelerate debugging and highlight potential issues earlier, their long-term effectiveness will likely depend on how well they integrate with human reviewers rather than attempting to replace them.

