MiniMax-M2 sets new benchmark for open-source AI performance

GEEK DESK
Oct 28

MiniMax-M2, the latest large language model from Chinese startup MiniMax, has entered the open-source AI landscape with unusually strong results—particularly in the emerging field of “agentic” tool use, where models independently perform web searches, execute commands, or operate software systems with minimal human direction. Released under the permissive MIT License, MiniMax-M2 can be freely used, modified, and commercialized, giving enterprises a flexible alternative to proprietary models like GPT-5 and Claude Sonnet 4.5.

Independent evaluations from Artificial Analysis place MiniMax-M2 at the top of all open-weight systems worldwide on its Intelligence Index, a composite metric that measures reasoning, coding, and task execution. In benchmarks targeting autonomous behavior, such as τ²-Bench (77.2), BrowseComp (44.0), and FinSearchComp-global (65.5), the model performs close to the best proprietary systems. Its architecture—a Mixture-of-Experts design with 230 billion total parameters but only 10 billion active per inference—balances capability with efficiency, allowing it to deliver near frontier-level reasoning while keeping computational costs moderate.
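
To make the "total versus active parameters" distinction concrete, the toy Python sketch below routes each token through only the top-k of several expert networks, which is the general idea behind Mixture-of-Experts inference. The expert count, hidden size, and k used here are illustrative placeholders, not MiniMax-M2's actual configuration.

```python
# Minimal conceptual sketch of top-k Mixture-of-Experts routing, the
# mechanism behind "230B total / 10B active" figures. Expert count,
# dimensions, and k are toy values, not MiniMax-M2's real configuration.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # illustrative; production MoE models use far more
TOP_K = 2         # experts activated per token
D_MODEL = 16      # hidden size (toy value)

# Each "expert" is a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS))  # routing projection

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                   # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]     # keep only the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Only TOP_K / NUM_EXPERTS of the expert parameters are touched per token,
    # which is why "active" parameters are a small fraction of the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
print(moe_forward(token).shape)  # (16,)
```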

For enterprises, the model’s structure translates to lower deployment barriers. MiniMax-M2 can reportedly run on as few as four NVIDIA H100 GPUs using FP8 precision, an attainable configuration for mid-size organizations. The model supports both OpenAI and Anthropic API formats, easing migration for teams seeking to move away from closed ecosystems.
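
The FP8 figure is plausible on paper: at roughly one byte per parameter, 230 billion weights occupy about 230 GB, which fits within the 320 GB of memory on four 80 GB H100s before accounting for activations and KV cache. On the compatibility claim, a migration could in principle be as small as repointing an existing OpenAI client, as in the hedged sketch below; the endpoint URL and model identifier are placeholders, not confirmed values, so check MiniMax's documentation before relying on them.

```python
# Hedged sketch: calling an OpenAI-compatible MiniMax-M2 endpoint with the
# standard openai Python client. The base_url and model name are placeholders,
# not confirmed values from MiniMax; consult the provider's docs before use.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-minimax-host/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="MiniMax-M2",  # model identifier may differ in practice
    messages=[
        {"role": "user", "content": "List three risks of autonomous tool use."}
    ],
)
print(response.choices[0].message.content)
```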

Benchmark data shows MiniMax-M2 competing closely with leading proprietary systems across coding, reasoning, and multi-step task execution. On SWE-Bench Verified, for example, it scores 69.4 compared to GPT-5’s 74.9. Its strong showing across diverse tests such as GAIA and ArtifactsBench underscores its versatility in automating research, software development, and enterprise operations that rely on complex, tool-augmented workflows.

A distinctive feature of MiniMax-M2 is its “interleaved thinking” mechanism, which retains visible reasoning traces between tags to help maintain continuity across multi-turn interactions. This transparency, combined with structured tool-calling capability via XML-style prompts, gives developers greater control and traceability in agentic systems—qualities valued in regulated industries or mission-critical software environments.
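
As an illustration of how a developer might handle visible reasoning traces, the sketch below separates thinking segments from the final answer, assuming <think>...</think>-style delimiters; the exact tag names and output format are assumptions for illustration, not confirmed details from MiniMax. Per the continuity point above, in a multi-turn agent these traces would typically be retained in the stored conversation history rather than discarded.

```python
# Hedged sketch: splitting reasoning traces from the final answer in a model
# response, assuming <think>...</think> delimiters. The tag name is an
# assumption for illustration; match it to the model's actual output format.
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw: str) -> tuple[list[str], str]:
    """Return (reasoning traces, answer text with traces removed)."""
    traces = THINK_RE.findall(raw)
    answer = THINK_RE.sub("", raw).strip()
    return traces, answer

raw_output = (
    "<think>User wants a short summary; keep it to one line.</think>"
    "MiniMax-M2 is an open-weight Mixture-of-Experts model."
)
traces, answer = split_reasoning(raw_output)
print(traces)  # ['User wants a short summary; keep it to one line.']
print(answer)  # 'MiniMax-M2 is an open-weight Mixture-of-Experts model.'
```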

MiniMax’s rise in the global AI scene has been swift. Backed by Alibaba and Tencent, the company first gained attention with its AI video generation model “video-01” in 2024 before pivoting toward language systems optimized for reasoning and scalability. Its earlier releases, MiniMax-01 and MiniMax-M1, introduced extended context windows and reinforcement-learning refinements at notably low training costs. MiniMax-M2 builds on that foundation, representing a convergence of technical maturity and open-access principles.

The company’s approach stands out for blending cutting-edge research with practical engineering. Open licensing and cost efficiency—API pricing starts at $0.30 per million input tokens—position MiniMax-M2 as a competitive option for organizations seeking control, transparency, and affordability in AI infrastructure.

The model’s debut also reflects a broader trend: Chinese AI research groups are increasingly steering the open-weight movement, producing models that combine high performance with enterprise-oriented flexibility. As closed systems continue to dominate headlines, MiniMax-M2 offers a reminder that open models are catching up fast—and, in some specialized domains, may already be ahead.
