AMD and Saudi-based AI firm HUMAIN have entered into a long-term collaboration to deploy large-scale AI infrastructure aimed at supporting the increasing demand for compute power across both public and private sectors. The agreement, valued at up to $10 billion over five years, will focus on building out up to 500 megawatts of AI compute capacity, anchored by AMD’s full hardware portfolio and open-source software ecosystem.
The initiative will be led by HUMAIN, which will oversee the planning, development, and operation of data center infrastructure, including power systems and global connectivity. AMD will provide compute hardware across its product lines—GPUs, CPUs, DPUs, and edge AI chips—along with its ROCm software stack, which supports a wide range of AI development frameworks and tools.
Unlike traditional cloud deployments that centralize compute resources, the collaboration is designed around a hybrid architecture that distributes AI processing across a spectrum from centralized facilities to edge devices. This approach is expected to lower latency, improve power efficiency, and broaden access to AI capabilities across geographies and industries.
The AI infrastructure rollout will stretch from Saudi Arabia to North America, reflecting a strategy to create a globally distributed, open-access compute network. The companies aim to make the system modular and adaptable, offering infrastructure that can serve enterprises, startups, and government entities with different levels of compute needs and regulatory requirements.
The partnership places particular emphasis on openness and interoperability—positioning the infrastructure as an alternative to closed, proprietary AI systems. AMD’s ROCm platform, which supports leading AI frameworks like PyTorch and SGLang, is intended to reduce barriers for developers and institutions looking to run custom AI models on high-performance hardware.
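As a rough illustration of what lowering the barrier for developers looks like in practice, the sketch below assumes a ROCm build of PyTorch is installed on an AMD GPU system; the toy model and its dimensions are placeholders for illustration only, not anything specific to the HUMAIN deployment.

```python
# Minimal sketch: verify a ROCm-backed PyTorch install and run a small
# forward pass on an AMD GPU. ROCm builds of PyTorch expose the device
# through the familiar torch.cuda API (HIP is mapped onto it), so most
# CUDA-style code runs unchanged.
import torch

# torch.version.hip is a version string on ROCm builds and None on CUDA builds.
if torch.version.hip is not None and torch.cuda.is_available():
    device = torch.device("cuda")  # on ROCm, "cuda" refers to the AMD GPU
    print(f"Running on {torch.cuda.get_device_name(0)} (ROCm/HIP {torch.version.hip})")
else:
    device = torch.device("cpu")
    print("No ROCm device found; falling back to CPU.")

# A toy network stands in for whatever custom model a developer might deploy.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).to(device)

with torch.no_grad():
    batch = torch.randn(8, 512, device=device)
    logits = model(batch)

print(logits.shape)  # torch.Size([8, 10])
```

Because ROCm reuses the CUDA-style device API in frameworks like PyTorch, existing training and inference code typically ports with little or no modification, which is the interoperability argument the partnership leans on.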
Initial deployments of the infrastructure are already in progress, with multi-exaflop capacity targeted for activation by early 2026. According to HUMAIN, the platform is also intended to support the growing ecosystem of large language models, including the Arabic LLMs co-developed with Saudi Arabia’s data and AI authority, SDAIA.
The deal is part of Saudi Arabia’s broader plan to establish itself as a major hub for digital infrastructure and AI capability under its Vision 2030 initiative. By coupling domestic investment with partnerships from global technology companies, the Kingdom aims to build not only capacity, but also long-term expertise in data center operations, semiconductor systems, and open AI development.
While such large-scale infrastructure promises benefits in compute availability and geographic diversity, it also raises questions around governance, environmental sustainability, and long-term maintenance. HUMAIN has signaled that the centers will be built with energy efficiency in mind, but specific details on renewable sourcing and emissions impact have yet to be made public.
For AMD, the deal extends its reach into sovereign cloud and AI infrastructure markets at a time when demand for high-performance compute has surged globally. The collaboration may also serve as a proving ground for AMD’s open-source approach to AI development, offering an alternative to the closed ecosystems dominated by a few large players.
With a focus on accessibility and openness, the AMD–HUMAIN partnership aims to support a broader shift toward more decentralized, inclusive AI infrastructure—though its real-world impact will depend on execution, uptake across industries, and sustained policy and talent support in participating countries.