Absolute Geeks UAE

Anthropic adds automated code review feature to Claude Code

MAYA A.
Mar 10

Anthropic is expanding the capabilities of its coding assistant with the introduction of Code Review, a new feature built into its Claude Code platform that aims to help developers identify potential problems in software changes before they are merged into production. The update reflects a broader trend in AI-assisted development tools that attempt to automate parts of the traditional code review process, particularly as the volume of AI-generated code continues to grow.

Code Review works by automatically analyzing pull requests once they are opened. Instead of relying on a single automated pass, the system launches multiple AI agents that examine the code simultaneously. Each agent searches for possible bugs, problematic logic, or risky changes. The system then cross-checks those findings to filter out false positives and prioritizes the most significant issues.
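The fan-out-and-filter flow described above can be sketched in Python. Everything here is hypothetical — the agent logic is a stand-in (a trivial string check in place of an AI pass), and the quorum-vote cross-check is one simple way to discard findings that only a single agent reports:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def run_agent(agent_id: int, diff: str) -> list:
    # Placeholder for one AI agent's pass over a diff. Here each
    # "agent" just flags lines that call eval(); a real agent would
    # look for bugs, risky logic, and so on.
    findings = []
    for lineno, line in enumerate(diff.splitlines(), 1):
        if "eval(" in line:
            findings.append((lineno, "use of eval() on untrusted input"))
    return findings

def review(diff: str, n_agents: int = 3, quorum: int = 2) -> list:
    # Fan out several independent passes in parallel, then keep only
    # findings that at least `quorum` agents agree on -- a simple way
    # to filter out one-off false positives before prioritizing.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        results = list(pool.map(lambda i: run_agent(i, diff), range(n_agents)))
    votes = Counter(f for agent_findings in results for f in agent_findings)
    return sorted(f for f, n in votes.items() if n >= quorum)

issues = review("x = eval(user_input)\nprint(x)\n")
```

Because the placeholder agents are deterministic, all three agree and the single finding survives the quorum filter; with real model-backed agents, the vote would drop hallucinated or low-confidence reports.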

The results appear directly inside the pull request as a summary comment highlighting the most relevant problems, along with detailed inline notes attached to specific sections of code. In practice, the goal is to surface meaningful feedback without forcing developers to sift through large volumes of automated warnings that may not be useful.
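That two-level output — one summary comment plus inline notes pinned to specific lines — maps naturally onto the shape of a pull-request review payload. The sketch below is an assumption about structure, not Anthropic's implementation; the field names loosely mirror GitHub's pull request review API:

```python
def to_review_payload(findings: list, top_n: int = 3) -> dict:
    # findings: list of dicts with "path", "line", "severity", "note".
    # Rank by severity so the summary surfaces the most relevant
    # problems instead of a wall of automated warnings.
    ranked = sorted(findings, key=lambda f: f["severity"], reverse=True)
    summary_lines = ["Automated review found {} issue(s). Top concerns:".format(len(ranked))]
    summary_lines += [
        "- {path}:{line}: {note}".format(**f) for f in ranked[:top_n]
    ]
    # Every finding also becomes an inline comment anchored to its line.
    comments = [
        {"path": f["path"], "line": f["line"], "body": f["note"]}
        for f in ranked
    ]
    return {"body": "\n".join(summary_lines), "event": "COMMENT", "comments": comments}

payload = to_review_payload([
    {"path": "app.py", "line": 3, "severity": 2, "note": "possible None dereference"},
    {"path": "db.py", "line": 17, "severity": 5, "note": "SQL built by string concatenation"},
])
```

Capping the summary at the top few items while keeping every inline note is one way to balance visibility against noise.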

One of the distinguishing aspects of the feature is how it adapts its analysis depending on the size and complexity of the code changes. Smaller pull requests receive a lighter inspection, while larger or more complicated updates trigger additional AI agents and deeper analysis. According to Anthropic’s internal testing, the system typically completes a full review in roughly twenty minutes for an average pull request.
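The size-adaptive behavior amounts to a tiering policy. The thresholds and tier names below are invented for illustration — the article only says small changes get a lighter pass and larger ones trigger more agents and deeper analysis:

```python
def plan_review(lines_changed: int) -> dict:
    # Hypothetical tiering: small diffs get a single light pass,
    # larger or more complex ones fan out to more agents.
    if lines_changed < 50:
        return {"agents": 1, "depth": "light"}
    if lines_changed < 500:
        return {"agents": 3, "depth": "standard"}
    return {"agents": 6, "depth": "deep"}
```

A real system would likely weigh more than raw line count — number of files touched, whether critical paths are affected — but the escalation principle is the same.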

The company says the feature was developed partly in response to internal changes in development workflows. Over the past year, the amount of code generated per engineer within the organization reportedly increased by around 200 percent, largely due to AI-assisted programming tools. As the volume of code grows, manual review processes become harder to maintain at scale, creating pressure for automated systems that can assist with early bug detection.

Anthropic now runs the system across most internal pull requests and reports an increase in substantive feedback during reviews. While that does not eliminate the need for human oversight, it may help developers catch issues earlier in the process and reduce the time spent on repetitive inspection tasks.

The Code Review feature is currently rolling out as a research preview for teams using Claude Code on Team and Enterprise plans. It is not positioned as a lightweight automation tool, however. The service is billed based on token usage, and Anthropic estimates that each automated review typically costs between fifteen and twenty-five dollars, depending on the size and complexity of the pull request.

To address potential cost concerns for organizations, the company has introduced administrative controls including monthly usage caps, repository-level restrictions, and an analytics dashboard. These tools allow engineering managers to track how many pull requests are reviewed, monitor acceptance rates of suggested changes, and estimate overall spending.
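The admin controls described above — monthly caps, usage tracking, acceptance rates — can be modeled with a small budget tracker. This is a sketch under assumptions, not Anthropic's dashboard; the $25 worst-case gate reflects the per-review cost range reported in the article:

```python
class ReviewBudget:
    """Hypothetical tracker mirroring the described admin controls:
    a monthly dollar cap, per-review spend, and acceptance stats."""

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0
        self.reviews = 0
        self.accepted_suggestions = 0
        self.total_suggestions = 0

    def can_review(self, est_cost_usd: float = 25.0) -> bool:
        # Gate new reviews against the worst-case estimate so a
        # single expensive pull request cannot blow the monthly cap.
        return self.spent + est_cost_usd <= self.cap

    def record(self, cost_usd: float, suggestions: int, accepted: int) -> None:
        # Called after each review completes, feeding the dashboard.
        self.spent += cost_usd
        self.reviews += 1
        self.total_suggestions += suggestions
        self.accepted_suggestions += accepted

    def acceptance_rate(self) -> float:
        # Fraction of suggested changes that developers accepted.
        if self.total_suggestions == 0:
            return 0.0
        return self.accepted_suggestions / self.total_suggestions
```

Tracking acceptance rate alongside spend gives managers a rough signal of whether the automated reviews are worth their per-review cost.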

The launch comes as Claude Code continues to expand its presence in the commercial developer tools market. Anthropic reports that the platform’s annualized revenue run rate has exceeded 2.5 billion dollars, more than doubling since early 2026. Business subscriptions have also grown significantly during that period, with enterprise customers now accounting for more than half of the platform’s total revenue.

AI-assisted coding tools are becoming a routine part of software development workflows, and automated code review is emerging as one of the next areas for experimentation. While systems like this can accelerate debugging and highlight potential issues earlier, their long-term effectiveness will likely depend on how well they integrate with human reviewers rather than attempting to replace them.
