Nvidia and OpenAI have announced a sweeping partnership that pairs money with massive computing power, underscoring the escalating costs and scale of AI development. Under the agreement, OpenAI will build at least 10 gigawatts of AI datacenters running on Nvidia systems, representing millions of GPUs dedicated to training and deploying future models.
The financial component is just as significant. Nvidia has committed to investing up to $100 billion in OpenAI, but the investment will be tied to infrastructure milestones—deployed “per gigawatt” as new datacenters come online. This phased approach gives OpenAI the capital and hardware it needs to accelerate toward increasingly large-scale AI systems, while locking in Nvidia as its preferred compute and networking partner.
OpenAI CEO Sam Altman framed the deal in economic terms, calling compute infrastructure the “basis for the economy of the future.” With ChatGPT now reporting 700 million weekly active users, the company’s demand for GPUs is immense, and Nvidia’s hardware will form the backbone of its next wave of model training.
The partnership also highlights shifting dynamics in OpenAI’s alliances. For years, Microsoft was its primary compute provider, having invested more than $13 billion. But earlier this year Microsoft’s exclusivity ended, with its role reduced to a “right of first refusal” on future compute deals. Since then, OpenAI has diversified aggressively—striking a $300 billion cloud deal with Oracle and investing in its own datacenters. Tensions with Microsoft have also been reported, particularly around the “AGI clause” that could limit Microsoft’s financial stake once OpenAI achieves artificial general intelligence.
For Nvidia, the deal further cements its dominance in AI infrastructure. Its GPUs already power most leading AI models, and by tying its investment directly to OpenAI’s growth, it ensures a seat at the center of one of the most closely watched pushes toward advanced AI. For OpenAI, the infusion of compute and capital may prove essential as it competes with rivals scaling up their own supercomputing resources.