Nvidia CEO Jensen Huang announced at Computex that the world's leading computer manufacturers are launching systems built on the Nvidia Blackwell architecture. These systems pair Grace CPUs with Nvidia networking solutions, enabling enterprises to build AI factories and advanced data centers.
Nvidia Blackwell GPUs offer up to 25 times better energy efficiency and lower costs for AI processing tasks. The Nvidia GB200 Grace Blackwell Superchip, which integrates multiple chips in a single package, delivers up to 30 times the performance of previous models on large language model (LLM) inference workloads.
With a focus on advancing generative AI, major companies, including ASRock Rack, Asus, Gigabyte, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron, and Wiwynn, are set to deliver cloud, on-premises, embedded, and edge AI systems using Nvidia’s GPUs and advanced networking.
Huang stated, “The next industrial revolution has begun. Companies and nations are partnering with Nvidia to transform trillion-dollar traditional data centers into accelerated computing environments, building a new type of data center — AI factories — to produce artificial intelligence.” He emphasized the industry-wide shift towards Nvidia's Blackwell for driving AI-powered innovation across all fields.
To suit applications of all types, offerings will include single- and multi-GPU configurations, a range of processor options (from x86 to Grace), and both air- and liquid-cooling technologies.
To expedite the development of diverse systems, the Nvidia MGX modular reference design platform now supports Blackwell products, including the GB200 NVL2 platform, designed for high-performance mainstream large language model inference, retrieval-augmented generation, and data processing.
Asus Chairman Jonney Shih stated, “ASUS is collaborating with NVIDIA to elevate enterprise AI with our robust server lineup showcased at COMPUTEX. Utilizing NVIDIA’s MGX and Blackwell platforms, we can create tailored data center solutions to handle a variety of workloads, including training, inference, data analytics, and high-performance computing (HPC).”
The GB200 NVL2 is ideally positioned for emerging market sectors like data analytics, where companies invest billions annually. It harnesses high-bandwidth memory performance from NVLink-C2C interconnects along with Blackwell's dedicated decompression engines for up to 18x faster data processing and 8x improved energy efficiency over x86 CPUs.
Nvidia MGX supplies computer manufacturers with a flexible reference architecture enabling the rapid and cost-effective design of over 100 different system configurations. Manufacturers can start with a basic server architecture and select GPUs, DPUs, and CPUs tailored to specific workloads. Over 90 systems leveraging the MGX architecture from more than 25 partners have been released or are in development, significantly up from 14 systems from six partners last year. This platform can reduce development costs by up to 75% and cut development time to just six months.
AMD and Intel will also support the MGX architecture with their own CPU host processor module designs, including the next-generation AMD Turin platform and Intel Xeon 6 processors featuring P-cores. Server builders can leverage these reference designs to streamline development while ensuring consistent performance.
Nvidia's GB200 NVL2, built on the MGX and Blackwell platforms, features a scale-out, single-node design that supports numerous configurations and seamless networking options, bringing accelerated computing to existing data center infrastructure.
Nvidia Blackwell boasts an impressive 208 billion transistors and benefits from a robust partner ecosystem, including TSMC, the leading semiconductor manufacturer, and other global electronics makers providing essential components for AI factories. Innovations in server racks, power delivery, cooling solutions, and more are being developed by partners like Amphenol, AVC, Cooler Master, CPC, Danfoss, Delta Electronics, and LITEON.
This infrastructure, powered by Blackwell technology, Nvidia Quantum-2 or Quantum-X800 InfiniBand networking, Nvidia Spectrum-X Ethernet networking, and Nvidia BlueField-3 DPUs, enhances data center capabilities to meet the demands of enterprises globally, all integrated into servers from Dell Technologies, Hewlett Packard Enterprise, and Lenovo.
Nvidia's AI Enterprise software platform, which includes NIM inference microservices, allows enterprises to create and run production-grade generative AI applications.
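In practice, NIM microservices expose an OpenAI-compatible REST API once deployed, so an application can query them with ordinary HTTP calls. The sketch below is a minimal illustration of that pattern; the endpoint URL and model name are placeholder assumptions for a hypothetical local deployment, not values from this article.

```python
import json
import urllib.request

# Hypothetical local NIM deployment; NIM services expose an
# OpenAI-compatible chat-completions endpoint. URL and model
# name here are assumptions for illustration only.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama3-8b-instruct"  # example model identifier

def build_request(prompt: str) -> dict:
    """Construct an OpenAI-style chat-completion payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

def query_nim(prompt: str) -> str:
    """POST the payload to the NIM endpoint and return the reply text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        NIM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Because the interface follows the OpenAI schema, existing generative AI client code can typically be pointed at a NIM endpoint by changing only the base URL and model name.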
During his keynote, Huang highlighted Taiwan's rapid adoption of Blackwell technology by key companies. For instance, Chang Gung Memorial Hospital plans to leverage the Blackwell computing platform for advancements in biomedical research and to enhance clinical workflows.
Foxconn CEO Young Liu remarked, “As generative AI reshapes industries, Foxconn is prepared with cutting-edge solutions to meet diverse computing needs. We utilize the latest Blackwell platform in our servers and provide key components for Nvidia, facilitating faster time-to-market for our customers.”
Barry Lam, chairman of Quanta Computer, commented, “In an AI-driven world, Nvidia Blackwell is not just a power source; it ignites this industrial revolution. We proudly collaborate with Nvidia to define the future of generative AI.”
Charles Liang, President and CEO of Supermicro, added, “Our building-block architecture and liquid-cooling solutions enable us to swiftly deliver a variety of Nvidia AI platform-based products. Our high-performance systems optimized for Blackwell architecture will provide customers with exceptional computing options for next-level AI applications.”
C.C. Wei, CEO of TSMC, emphasized, “TSMC collaborates closely with Nvidia to push semiconductor innovation that supports their AI visions. Our advanced manufacturing technologies have been instrumental in shaping Nvidia’s groundbreaking GPUs, including those based on Blackwell architecture.”