In under two years, NVIDIA's H100 chips have become essential infrastructure for AI companies worldwide, powering the large language models behind services like ChatGPT. NVIDIA has now unveiled its next-generation platform, Blackwell, featuring chips the company says are seven to thirty times faster than the H100 while consuming up to 25 times less energy.
"Blackwell GPUs are the engine of this new Industrial Revolution," stated NVIDIA CEO Jensen Huang at the annual GTC event in San Jose, likening its energy to a Taylor Swift concert. Huang emphasized, "Generative AI is the defining technology of our time. By collaborating with the most dynamic companies, we will unlock AI’s potential across all industries."
Named after the renowned mathematician David Harold Blackwell, NVIDIA's Blackwell chips are intended to be the world's most powerful, delivering 20 petaflops of AI compute compared with the H100's 4 petaflops. The leap comes from Blackwell's 208 billion transistors, up from 80 billion in the H100, which NVIDIA achieved by joining two large dies that communicate with each other at up to 10 terabytes per second.
Modern AI's dependence on NVIDIA is underscored by testimonials from the leaders of companies collectively worth trillions of dollars. The roster includes executives from OpenAI, Microsoft, Alphabet, Meta, Google DeepMind, Oracle, Dell, Amazon, and Tesla, all praising NVIDIA hardware for AI workloads. Elon Musk remarked, "There is currently nothing better than NVIDIA hardware for AI." Sam Altman added, "Blackwell offers massive performance leaps, accelerating our ability to deliver leading-edge models."
NVIDIA has not disclosed pricing for Blackwell, but H100 chips sell for roughly $25,000 to $40,000 each, and systems built around them can cost up to $200,000. Even at those prices, demand has outstripped supply, with delivery wait times stretching to 11 months last year. Access to NVIDIA's AI chips has become something of a status symbol among tech companies competing for AI talent. Earlier this year, Mark Zuckerberg highlighted Meta's infrastructure investment, stating, "By the end of this year, we will have approximately 350,000 NVIDIA H100s and around 600,000 equivalents when including other GPUs."