
OpenAI teams up with Broadcom to build custom AI chips for massive compute expansion

GEEK STAFF
Oct 14, 2025

OpenAI is moving deeper into the chipmaking race. The company has signed a major partnership with Broadcom to develop custom AI accelerators and computing systems, expanding its already massive push to secure hardware for large-scale artificial intelligence workloads. The deal, reportedly worth multiple billions of dollars, marks another step in OpenAI’s plan to build unprecedented levels of compute infrastructure.

Under the agreement, OpenAI will design custom AI chips and systems, while Broadcom will handle manufacturing and deployment. The rollout of these specialized systems is set to begin in the second half of 2026, with completion expected by 2029. The companies have reportedly been collaborating for the past 18 months, and the deal covers roughly 10 gigawatts of compute capacity, an enormous addition to OpenAI’s growing hardware footprint.

This latest agreement builds on a string of partnerships with major hardware and cloud providers. NVIDIA recently committed $100 billion in infrastructure investment, supplying 10 gigawatts of capacity to OpenAI, while AMD is providing another six gigawatts in a deal worth tens of billions that could also see OpenAI take up to a 10% stake in AMD. Earlier this year, Oracle joined the list through a 4.5-gigawatt data center agreement under OpenAI’s so-called Stargate Project, which aims to create a distributed global computing network optimized for AI development.
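Taken together, the publicly reported deals add up quickly. A quick tally, using only the gigawatt figures quoted above:

```python
# Compute capacity OpenAI has publicly lined up so far, per the
# deals described above (figures in gigawatts, as reported).
deals_gw = {
    "Broadcom": 10.0,
    "NVIDIA": 10.0,
    "AMD": 6.0,
    "Oracle": 4.5,
}

total_gw = sum(deals_gw.values())
print(f"Announced capacity across partners: {total_gw} GW")  # 30.5 GW
```

That 30.5 GW of announced capacity is still only a fraction of the 250 GW target discussed below.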

These combined deals suggest that OpenAI is strategically diversifying its hardware partnerships to mitigate reliance on any single supplier while securing enough capacity to support increasingly demanding AI models. But they also hint at the staggering scale of the company’s ambitions — and costs. CEO Sam Altman has reportedly told employees that OpenAI’s long-term goal is to build 250 gigawatts of compute power within eight years, up from the roughly 2 gigawatts it expects to operate by the end of 2025. For perspective, that figure represents about one-fifth of the total electricity generation capacity of the United States.

At current prices, building out that level of compute could cost close to $10 trillion, an astronomical sum for a company projecting $13 billion in revenue this year. Altman has suggested that new financing structures will be necessary to fund the expansion, though details remain unclear. The company’s approach seems to rely on forming long-term, multi-stakeholder alliances — combining private capital, infrastructure partnerships, and chip co-development — rather than attempting to build a vertically integrated hardware ecosystem on its own.
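The arithmetic behind those figures is easy to sanity-check. A rough sketch, noting that the ~1,250 GW value for total US generating capacity is our own round-number assumption for illustration, not a figure from the article:

```python
# Back-of-envelope check of the numbers quoted above.
target_gw = 250          # OpenAI's reported long-term compute goal
current_gw = 2           # capacity expected by end of 2025
build_cost_usd = 10e12   # ~$10 trillion build-out estimate
revenue_usd = 13e9       # projected 2025 revenue

# Implied price tag per gigawatt of compute
cost_per_gw = build_cost_usd / target_gw
print(f"Implied cost per gigawatt: ${cost_per_gw / 1e9:.0f}B")  # $40B

# Assumed US total generating capacity (illustrative, ~1,250 GW)
us_capacity_gw = 1250
print(f"Share of US capacity: {target_gw / us_capacity_gw:.0%}")  # 20%

# How far the goal sits above current expected capacity
print(f"Expansion factor: {target_gw / current_gw:.0f}x")  # 125x
```

At roughly $40 billion per gigawatt, the build-out cost dwarfs the company’s projected revenue by about three orders of magnitude, which is why new financing structures come up at all.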

For Broadcom, the partnership represents a significant foothold in the rapidly growing AI chip market, which has been dominated by NVIDIA and, to a lesser degree, AMD. Supplying custom silicon for OpenAI gives Broadcom a high-profile validation of its design and manufacturing capabilities, especially as hyperscale customers seek alternatives to off-the-shelf GPUs to reduce costs and optimize performance.

The move underscores a broader shift across the AI industry: the biggest players are no longer content to rely solely on traditional chip vendors. With workloads for generative AI expanding exponentially, companies like OpenAI, Google, Amazon, and Meta are increasingly pursuing in-house silicon design to secure supply chains, control costs, and tailor performance for their unique models.

In short, OpenAI’s Broadcom deal signals the next phase of its evolution — not just as an AI software company, but as a player in the global computing infrastructure business, one determined to build the physical backbone of the next era of artificial intelligence.

© 2014-2025 Absolute Geeks, a TMT Labs L.L.C-FZ media network - Privacy Policy