Next-Gen NVIDIA Blackwell Solutions

End-to-End AI Data Center Building Block Solutions powered by NVIDIA Blackwell – delivering breakthrough performance for next-generation AI, HPC, and data center workloads.

  • 5,000+ Racks Per Month with Worldwide Manufacturing

  • Onsite Deployment Services and Management Software

  • Unmatched Time-to-Online with Extensive Experience

NVIDIA Blackwell liquid-cooled rack

The Most Powerful and Efficient NVIDIA Blackwell Architecture Solutions

At this transformative moment in AI, as evolving scaling laws continue to push the limits of data center capabilities, Supermicro's latest NVIDIA Blackwell-powered solutions, developed in close collaboration with NVIDIA, deliver unprecedented computational performance, density, and efficiency with next-generation air-cooled and liquid-cooled architectures.

With readily deployable AI Data Center Building Block solutions, Supermicro is your premier partner to start your NVIDIA Blackwell journey, providing sustainable, cutting-edge solutions that accelerate AI innovations.

    Why Choose AI Data Center Building Block Solutions?

    Select from a broad range of air-cooled and liquid-cooled systems with multiple CPU options. Solutions include a full data center management software suite; turn-key rack-level integration with complete networking, cabling, and cluster-level L12 validation; and global delivery, support, and service.

    Vast Experience

    Supermicro’s AI Data Center Building Block Solutions power the largest liquid-cooled AI data center deployment in the world.

    Flexible Offerings

    Air- or liquid-cooled, GPU-optimized systems in multiple system and rack form factors, with a choice of CPUs, storage, and networking options tailored to your needs.

    Liquid-Cooling Pioneer

    Proven, scalable, and plug-and-play liquid-cooling solutions to sustain the AI revolution. Designed specifically for NVIDIA Blackwell.

    Fast Time to Online

    Accelerated delivery with global capacity, world-class deployment expertise, and on-site services to bring your AI into production, fast.

    Air- and Liquid-Cooled NVIDIA Blackwell Data Center Solutions

    4U Liquid-Cooled Rack Configuration

    The new liquid-cooled 4U NVIDIA HGX B200 8-GPU system features newly developed cold plates and an advanced tubing design, paired with the new 250kW coolant distribution unit (CDU), more than doubling the cooling capacity of the previous generation in the same 4U form factor. The new architecture further improves on the efficiency and serviceability of its predecessor, which was designed for NVIDIA HGX H100/H200 8-GPU systems. Available in 42U, 48U, or 52U configurations, the rack-scale design with new vertical coolant distribution manifolds (CDMs) means that horizontal manifolds no longer occupy valuable rack units. This enables 8 systems (64 NVIDIA Blackwell GPUs) in a 42U rack, and up to 12 systems (96 NVIDIA GPUs) in a 52U rack.
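As a quick illustration, the rack density figures above follow directly from the 8-GPU-per-system building block; a minimal sketch using only the numbers quoted in this section:

```python
# Rack-density arithmetic for the 4U liquid-cooled HGX B200 configuration:
# each 4U system holds 8 NVIDIA Blackwell GPUs, and the vertical coolant
# distribution manifolds (CDMs) free the rack units that horizontal
# manifolds previously occupied.
GPUS_PER_SYSTEM = 8

def rack_gpu_count(systems_per_rack: int) -> int:
    """Total GPUs in a rack holding the given number of 4U systems."""
    return systems_per_rack * GPUS_PER_SYSTEM

# Figures quoted above: 8 systems in a 42U rack, up to 12 in a 52U rack.
print(rack_gpu_count(8))   # 64 GPUs in a 42U rack
print(rack_gpu_count(12))  # 96 GPUs in a 52U rack
```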

      10U Air-Cooled Rack Configuration

      The new air-cooled 10U NVIDIA HGX B200 system features a redesigned chassis with expanded thermal headroom to accommodate eight 1,000W TDP Blackwell GPUs. Up to 4 of the new 10U air-cooled systems can be installed and fully integrated in a rack, matching the density of the previous generation while providing up to 15x the inference and 3x the training performance. All Supermicro NVIDIA HGX B200 systems are equipped with a 1:1 GPU-to-NIC ratio, supporting NVIDIA BlueField®-3 or NVIDIA ConnectX®-7 for scaling across a high-performance compute fabric.

        NVIDIA GB200 NVL72 Rack Configuration

        The Supermicro NVIDIA GB200 NVL72 SuperCluster features a new advanced in-rack coolant distribution unit (CDU) and custom cold plates designed for the compute trays housing the NVIDIA GB200 Grace™ Blackwell Superchips. The NVIDIA GB200 NVL72 delivers exascale computing capabilities in a single rack with fully integrated liquid cooling. It incorporates 72 NVIDIA Blackwell GPUs and 36 Grace CPUs interconnected by NVIDIA’s largest NVLink™ network to date. The NVLink Switch System facilitates 130 terabytes per second (TB/s) of total GPU communication with low latency, enhancing performance for AI and high-performance computing (HPC) workloads.
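The GPU and CPU totals above follow from the superchip composition. A minimal sketch (note: the one-CPU, two-GPU split per GB200 Superchip is NVIDIA's published configuration, consistent with the 36-CPU/72-GPU totals quoted here):

```python
# NVL72 topology figures quoted above: 72 Blackwell GPUs and 36 Grace CPUs
# in one rack. Each GB200 Grace Blackwell Superchip pairs one Grace CPU
# with two Blackwell GPUs, so the totals follow from the superchip count.
SUPERCHIPS_PER_RACK = 36
CPUS_PER_SUPERCHIP = 1
GPUS_PER_SUPERCHIP = 2

cpus = SUPERCHIPS_PER_RACK * CPUS_PER_SUPERCHIP  # 36 Grace CPUs
gpus = SUPERCHIPS_PER_RACK * GPUS_PER_SUPERCHIP  # 72 Blackwell GPUs
print(cpus, gpus)
```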

          AI Data Center End-to-End Liquid-Cooling

          Total Liquid-Cooling Offerings for a Wide Range of AI Data Center Environments


          Broad Range of PCIe GPU Solutions

          Complete Flexibility for AI and Visual Computing

          Supermicro systems can be adapted to a wide range of applications, including AI inference and fine-tuning, HPC, 3D rendering, media encoding, and virtualization, with support for the latest generation of NVIDIA PCIe GPUs, including the NVIDIA RTX PRO™ 6000 Blackwell Server Edition as well as NVIDIA H200 NVL, NVIDIA H100 NVL, L40S, and L4. Supermicro NVIDIA-Certified™ Systems with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs will serve as the building blocks for Enterprise AI Factories, integrating with NVIDIA Spectrum-X networking, NVIDIA-Certified Storage, and NVIDIA AI Enterprise software to create full-stack solutions that accelerate the deployment of on-premises AI.

            Advance Enterprise AI with Supermicro and NVIDIA RTX PRO 6000 Blackwell:

            • Significantly enhanced performance for Enterprise AI workloads, including AI inference & fine-tuning, AI development, generative AI, AI-driven graphics & rendering, video content & streaming, and game development.
            • More than 20 Supermicro systems are ready to support RTX PRO 6000 Blackwell GPUs, and more than 100 support NVIDIA PCIe GPUs.
            • Wide-ranging workload support to adapt to almost any application, including virtualized and cloud environments with NVIDIA Multi-Instance GPU (MIG).
            • Deploy from data center to edge with a wide range of form factors.

            Talk to an Expert