
Supermicro’s AI Data Center Building Block Solutions power the largest liquid-cooled AI data center deployment in the world.
End-to-End AI Data Center Building Block Solutions powered by NVIDIA Blackwell – delivering breakthrough performance for next-generation AI, HPC, and data center workloads.
In this transformative moment for AI, as evolving scaling laws continue to push the limits of data center capabilities, Supermicro’s latest NVIDIA Blackwell-powered solutions – developed in close collaboration with NVIDIA – deliver unprecedented computational performance, density, and efficiency with next-generation air-cooled and liquid-cooled architectures.
With readily deployable AI Data Center Building Block Solutions, Supermicro is your premier partner for starting your NVIDIA Blackwell journey, providing sustainable, cutting-edge infrastructure that accelerates AI innovation.
Select from a broad range of air-cooled and liquid-cooled systems with multiple CPU options, featuring a full data center management software suite; turnkey rack-level integration with complete networking, cabling, and cluster-level L12 validation; and global delivery, support, and service.
Air-cooled or liquid-cooled, GPU-optimized systems in multiple system and rack form factors, with CPU, storage, and networking options tailored to your needs.
Proven, scalable, and plug-and-play liquid-cooling solutions to sustain the AI revolution. Designed specifically for NVIDIA Blackwell.
Accelerated delivery with global capacity, world-class deployment expertise, and on-site services to bring your AI to production, fast.
The new liquid-cooled 4U NVIDIA HGX B200 8-GPU system features newly developed cold plates and an advanced tubing design paired with the new 250kW coolant distribution unit (CDU), more than doubling the cooling capacity of the previous generation in the same 4U form factor. The new architecture further improves on the efficiency and serviceability of its predecessors designed for NVIDIA HGX H100/H200 8-GPU. Available in 42U, 48U, or 52U configurations, the rack-scale design with the new vertical coolant distribution manifolds (CDM) means that horizontal manifolds no longer occupy valuable rack units. This enables 8 systems, or 64 NVIDIA Blackwell GPUs, in a 42U rack, and up to 12 systems with 96 NVIDIA Blackwell GPUs in a 52U rack.
The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. Up to 4 of the new 10U air-cooled systems can be installed and fully integrated in a rack, matching the density of the previous generation while providing up to 15x the inference and 3x the training performance. All Supermicro NVIDIA HGX B200 systems are equipped with a 1:1 GPU-to-NIC ratio, supporting NVIDIA BlueField®-3 or NVIDIA ConnectX®-7 for scaling across a high-performance compute fabric.
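As a rough aid to the rack-level figures quoted above, the sketch below (Python, illustrative only) derives GPU and NIC counts purely from the numbers stated in this section; how the remaining rack units are allocated to CDUs, switches, or power shelves varies by deployment and is not modeled here.

```python
# Minimal sketch: rack-level GPU density for the HGX B200 configurations
# described above. Only figures quoted in this section are used; allocation
# of the remaining rack units (CDUs, switches, power shelves, etc.) is
# deployment-specific and intentionally not modeled.

GPUS_PER_SYSTEM = 8  # NVIDIA HGX B200 8-GPU system

configurations = [
    # (description, systems per rack) as stated in the text
    ("Liquid-cooled 4U systems in a 42U rack", 8),    # -> 64 Blackwell GPUs
    ("Liquid-cooled 4U systems in a 52U rack", 12),   # -> 96 Blackwell GPUs
    ("Air-cooled 10U systems, up to 4 per rack", 4),  # -> 32 Blackwell GPUs
]

for description, systems in configurations:
    gpus = systems * GPUS_PER_SYSTEM
    nics = gpus  # 1:1 GPU-to-NIC ratio, per the text above
    print(f"{description}: {systems} systems, {gpus} GPUs, {nics} NICs")
```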
The Supermicro NVIDIA GB200 NVL72 SuperCluster features the new advanced in-rack coolant distribution unit (CDU) and custom cold plates designed for the compute trays housing the NVIDIA GB200 Grace™ Blackwell Superchips. The NVIDIA GB200 NVL72 delivers exascale computing capabilities in a single rack with fully integrated liquid cooling. It incorporates 72 NVIDIA Blackwell GPUs and 36 Grace CPUs interconnected by NVIDIA’s largest NVLink™ network to date. The NVLink Switch System facilitates 130 terabytes per second (TB/s) of total GPU communication with low latency, enhancing performance for AI and high-performance computing (HPC) workloads.
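For readers tallying the GB200 NVL72 figures, this small illustrative sketch recomputes the ratios implied by the numbers quoted above; the even per-GPU bandwidth split is a simplification for illustration, not a Supermicro or NVIDIA specification.

```python
# Illustrative sketch using only the GB200 NVL72 figures quoted above.

NVL72_GPUS = 72          # NVIDIA Blackwell GPUs per rack
NVL72_CPUS = 36          # NVIDIA Grace CPUs per rack
NVLINK_TOTAL_TB_S = 130  # total GPU communication via the NVLink Switch System

# Each GB200 Grace Blackwell Superchip pairs Grace CPUs with Blackwell GPUs;
# the GPU-to-CPU ratio follows directly from the quoted totals.
gpus_per_cpu = NVL72_GPUS // NVL72_CPUS  # -> 2

# Even split of the 130 TB/s across GPUs (a simplification for illustration,
# not a performance specification).
avg_tb_s_per_gpu = NVLINK_TOTAL_TB_S / NVL72_GPUS  # ~1.8 TB/s

print(f"Blackwell GPUs per Grace CPU: {gpus_per_cpu}")
print(f"Average NVLink share per GPU: {avg_tb_s_per_gpu:.2f} TB/s")
```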
Total Liquid-Cooling Offerings for a Wide Range of AI Data Center Environments
Supermicro systems can be adapted to a wide range of applications including AI inference and fine-tuning, HPC, 3D rendering, media encoding, and virtualization, with support for the latest generation of NVIDIA PCIe GPUs including NVIDIA RTX PRO™ 6000 Blackwell Server Edition as well as NVIDIA H200 NVL, NVIDIA H100 NVL, L40S, and L4. Supermicro NVIDIA-Certified™ Systems with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs will serve as the building blocks for Enterprise AI Factories, integrating with Spectrum-X networking, NVIDIA-Certified Storage, and NVIDIA AI Enterprise software to create full-stack solutions, accelerating the deployment of on-premises AI.