Supermicro, a leader in IT solutions for AI, Cloud, Storage, and 5G/Edge, is making significant strides in AI technology with upcoming support for NVIDIA HGX H200 Tensor Core GPUs. Announced at the Supercomputing Conference (SC23) in Denver, Colorado, these advancements promise to enhance the performance, scalability, and reliability of AI systems.
Enhanced AI Capabilities
Supermicro’s latest offerings include 8U and 4U Universal GPU Systems that are ready to support the HGX H200 8-GPU and 4-GPU configurations. These new systems feature nearly double the HBM3e memory capacity and 1.4 times the memory bandwidth of the previous NVIDIA H100 Tensor Core GPUs. Additionally, Supermicro’s portfolio now includes NVIDIA MGX systems, which will support the NVIDIA Grace Hopper Superchip with HBM3e memory, further boosting the performance of generative AI, large language model (LLM) training, and high-performance computing (HPC) applications.
Industry-Leading Liquid Cooling Solutions
Supermicro is introducing a revolutionary liquid-cooled 4U server featuring NVIDIA HGX H100 8-GPUs, designed to double computing density per rack and reduce total cost of ownership (TCO). This compact, high-performance GPU server allows data center operators to minimize footprints and energy costs while maximizing AI training capabilities.
Unmatched Customer Support and Warranty
Supermicro’s commitment to customer satisfaction extends beyond delivering superior products. Its service excellence is highlighted by comprehensive warranty plans, including a standard three-year warranty covering a wide range of service needs. This dedication ensures clients receive ongoing support, maintaining the performance and reliability of their IT systems.
First-to-Market Innovation
Supermicro’s strategic partnership with NVIDIA ensures that their systems are among the first to market with the latest technologies. This collaboration allows customers to deploy generative AI faster, thanks to the advanced features of the NVIDIA H200 GPU, which includes NVIDIA® NVLink™ and NVSwitch™ high-speed GPU-GPU interconnects at 900GB/s and up to 1.1TB of high-bandwidth HBM3e memory per node.
Expanding Product Lines
Supermicro’s broad range of AI servers now includes the popular 8U and 4U Universal GPU systems, which are drop-in ready for the H200 GPUs. Each NVIDIA H200 GPU boasts 141GB of memory with a bandwidth of 4.8TB/s, making them ideal for training larger language models more efficiently.
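The per-node and per-GPU figures above are consistent with each other: eight H200 GPUs at 141GB each give roughly 1.1TB of HBM3e per node. As a rough illustration (not part of the announcement), the sketch below checks that arithmetic and estimates whether a model of a given size fits in one GPU for FP16 inference, using a common rule-of-thumb cost of 2 bytes per parameter; the function names and the byte cost are assumptions for illustration, not NVIDIA or Supermicro specifications.

```python
# Back-of-envelope memory arithmetic for the figures quoted in the article.
# Byte-per-parameter cost is a common rule of thumb, not a vendor spec.

H200_MEM_GB = 141      # per-GPU HBM3e capacity, as stated above
GPUS_PER_NODE = 8      # HGX H200 8-GPU configuration

def node_memory_gb(gpus=GPUS_PER_NODE, per_gpu=H200_MEM_GB):
    """Aggregate HBM3e per node: 8 x 141 GB = 1128 GB (~1.1 TB)."""
    return gpus * per_gpu

def fits_for_fp16_inference(params_billions, bytes_per_param=2, mem_gb=H200_MEM_GB):
    """Weights-only estimate: parameters (in billions) x 2 bytes vs. one GPU."""
    return params_billions * bytes_per_param <= mem_gb

print(node_memory_gb())               # 1128 GB, i.e. ~1.1 TB per node
print(fits_for_fp16_inference(70))    # True: 70B x 2 B = 140 GB <= 141 GB
print(fits_for_fp16_inference(175))   # False: 175B x 2 B = 350 GB > 141 GB
```

Real deployments also need memory for activations, the KV cache, and framework overhead, so the weights-only check is an optimistic lower bound.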
Future-Proof AI Solutions
The introduction of the NVIDIA MGX servers with the NVIDIA GH200 Grace Hopper Superchips is a significant step forward. These servers are engineered to integrate the NVIDIA H200 GPU with HBM3e memory, allowing for the acceleration of large language models with hundreds of billions of parameters. This integration facilitates the training of generative AI models in less time and supports multiple larger models in a single system for real-time inference.
Showcasing at SC23
At SC23, Supermicro is showcasing their latest innovations, including a 4U Universal GPU System featuring the eight-way NVIDIA HGX H100 with advanced liquid-cooling technology. This system improves density and efficiency, driving the evolution of AI while reducing data center footprints and energy costs.
A Global Reach
Supermicro (NASDAQ: SMCI) is a global leader in application-optimized total IT solutions. Based in San Jose, California, Supermicro specializes in delivering innovative technologies for enterprise, cloud, AI, and 5G Telco/Edge IT infrastructure. Their comprehensive product portfolio includes servers, AI, storage, IoT, switch systems, software, and support services. With in-house design and manufacturing capabilities across the US, Taiwan, and the Netherlands, Supermicro optimizes for improved TCO and reduced environmental impact.