
GIGABYTE Announces HPC Systems Powered by NVIDIA A100 Tensor Core GPUs

GIGABYTE, a supplier of high-performance computing (HPC) systems, has announced four NVIDIA HGX™ A100 platforms under development, all of which will be available with NVIDIA A100 Tensor Core GPUs. The lineup comprises the G262 series servers, which hold four NVIDIA A100 GPUs, and the G492 series, which hold eight. Each series comes in two models: one supporting 3rd Gen Intel Xeon Scalable processors and one supporting 2nd Gen AMD EPYC processors. The NVIDIA HGX A100 platform is a key element of the NVIDIA accelerated data center concept, bringing massive parallel computing power to customers and helping them accelerate their digital transformation.

GPU computing power for different computing scales

GPU acceleration has become mainstream technology in today's data centers, and scientists, researchers, and engineers are turning to GPU-accelerated HPC and artificial intelligence (AI) to tackle the world's most important challenges. The NVIDIA accelerated data center concept, including GIGABYTE high-performance servers with NVIDIA NVSwitch, NVIDIA NVLink, and NVIDIA A100 GPUs, delivers the GPU computing power required at different computing scales. It also features NVIDIA Mellanox HDR InfiniBand high-speed networking and NVIDIA Magnum IO software with support for GPUDirect RDMA and GPUDirect Storage. With this combination, a single HGX A100 platform can quickly scale from 4 or 8 GPUs to tens of thousands, training the most complex AI networks at the fastest speed.

The A100 also introduces Multi-Instance GPU (MIG) technology, which lets users partition each A100 into as many as seven instances so that GPU utilization matches the computing need. A single A100 Tensor Core GPU split into multiple instances running lightweight workloads improves resource usage across workloads of different scales, accelerating customer insight and shortening time to market for products and services. The A100 Tensor Core GPU is also designed to accelerate all major deep learning frameworks and more than 700 HPC applications, while the NGC catalog of containerized software helps developers get programs up and running more easily.
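As an illustration of how MIG partitioning works in practice, the sketch below drives the standard nvidia-smi tool from Python to split a single A100 into seven instances. The GPU index and profile ID are assumptions for illustration; the available instance profiles depend on the specific A100 SKU and driver version, and the commands require administrative privileges.

```python
# Minimal sketch: partitioning an A100 into MIG instances via nvidia-smi.
# GPU index 0 and profile ID 19 (1g.5gb on an A100-40GB) are assumptions;
# check "nvidia-smi mig -lgip" on the target system for the real profiles.
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Enable MIG mode on GPU 0 (a GPU reset may be required before it takes effect).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# List the GPU instance profiles the driver offers (e.g. 1g.5gb, 2g.10gb, ...).
print(run(["nvidia-smi", "mig", "-lgip"]))

# Create seven GPU instances with the assumed 1g.5gb profile, plus their
# default compute instances (-C).
run(["nvidia-smi", "mig", "-i", "0", "-cgi", ",".join(["19"] * 7), "-C"])

# Show the resulting instances; each appears to CUDA applications as a
# separate device that can be targeted via CUDA_VISIBLE_DEVICES.
print(run(["nvidia-smi", "mig", "-lgi"]))
```

Each resulting instance has its own dedicated compute and memory slice, which is what allows several lightweight workloads to share one physical A100 without contending for resources.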

Supporting the maximum number of GPUs

GIGABYTE has become a leading brand in computer hardware by providing high-performance motherboards and NVIDIA GPU platforms, and is known for excellent product performance and stability. Controlling the full process from product planning to production, GIGABYTE has chosen to work closely with NVIDIA, and years of accumulated design experience allow its platforms to support the maximum number of GPUs in 2U and 4U chassis. A modular design divides the G262/G492 servers into separate GPU and CPU sections, with a barrier between the two areas that forms a larger air tunnel and prevents heat conduction, addressing the thermal challenge. For power, the G262/G492 servers are built with 80 PLUS high-efficiency power supplies in an N+1 redundant configuration to give users a safe data environment. The G262/G492 series servers meet customer needs for HPC and AI, and GIGABYTE will continue to rely on its industry-leading design knowledge to push the boundaries of product performance.

To learn more about GIGABYTE high-performance systems and NVIDIA Tensor Core GPUs, contact vad@asbis.com.
