CORSAIR has announced the AI Workstation 300, a 4.4-liter small form factor PC designed to handle advanced AI workloads, large language models (LLMs), and intensive creative applications. Built around AMD’s latest Ryzen AI Max 300 Series processors, the system focuses on delivering high performance for local model inference and AI development without relying solely on cloud resources.
Configurations scale up to the AMD Ryzen AI Max+ 395, which pairs RDNA 3.5 graphics with 40 compute units and an XDNA 2 NPU capable of up to 50 trillion operations per second (TOPS) for AI acceleration. Thanks to the unified memory design of AMD’s “Strix Halo” platform, the workstation supports up to 128GB of LPDDR5X, of which up to 96GB can be dynamically assigned to the GPU as Variable Graphics Memory. This lets it handle AI models that exceed the memory limits of most gaming GPUs, such as the 123-billion-parameter Mistral Large, which needs roughly 92GB of GPU memory.
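To put those numbers in context, the memory needed just to hold a model’s weights scales linearly with parameter count and bytes per parameter. The back-of-the-envelope sketch below uses common precision widths as assumptions (not CORSAIR or AMD figures) to show which precisions of a 123B model fit within 96GB of addressable VRAM.

```python
# Rough estimate of the memory needed to hold a model's weights alone;
# activations and KV cache are ignored. Byte widths are illustrative
# assumptions for common precisions, not vendor-published figures.
BYTES_PER_PARAM = {"bf16": 2.0, "int8": 1.0, "6-bit": 0.75, "4-bit": 0.5}

def weight_memory_gb(params_billions: float, precision: str) -> float:
    """Approximate weight memory in gigabytes at the given precision."""
    # params (billions) * 1e9 params * bytes/param / 1e9 bytes per GB
    return params_billions * BYTES_PER_PARAM[precision]

for precision in BYTES_PER_PARAM:
    gb = weight_memory_gb(123, precision)
    fits = "fits" if gb <= 96 else "does not fit"
    print(f"123B @ {precision}: ~{gb:.0f} GB ({fits} in 96GB of VRAM)")
```

Under these assumptions, a 123-billion-parameter model needs about 246GB at BF16 but roughly 92GB at a 6-bit quantization, which is why a unified memory pool of this size matters for models in that class.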
For developers using Model Context Protocol (MCP) servers or working with extended context lengths, the system’s large unified memory pool reduces paging and keeps performance consistent. That makes it well suited to running local tools such as LM Studio, Stable Diffusion through Amuse, and other resource-intensive AI frameworks.
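For readers who want to try this kind of local workflow, LM Studio exposes an OpenAI-compatible HTTP API when its local server is running. The minimal sketch below assumes the server’s default port (1234) and uses a placeholder model identifier; adjust both to match your setup.

```python
# Minimal sketch: query a model served locally by LM Studio through its
# OpenAI-compatible endpoint. Port 1234 is LM Studio's default; the model
# name is a placeholder for whichever model is currently loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local server, no cloud round-trip
    api_key="not-needed",                 # LM Studio does not check the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier shown in LM Studio
    messages=[{"role": "user", "content": "Why does unified memory help local LLM inference?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

Because the request never leaves the machine, prompts and outputs stay on the workstation, in line with the local-first approach the system is built around.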
Despite its compact footprint, the AI Workstation 300 includes a dual-fan cooling system to maintain stable performance under sustained heavy loads. A built-in Performance Level Selector lets users prioritize efficiency or maximum output depending on the task. The chassis is designed for modular flexibility, allowing integration into desktop setups or portable deployments.
The workstation comes preloaded with the CORSAIR AI Software Suite, which includes tools for AI, engineering, and creative work, as well as security features such as chip-to-cloud protection for safeguarding sensitive data and models. The unit is covered by a 2-year warranty.
While not a replacement for high-end dedicated GPU clusters, the CORSAIR AI Workstation 300 offers a compact, relatively quiet, and high-memory option for researchers, developers, and content creators who want to run AI workloads locally with fewer compromises.

