In its Q1 2025 earnings call on Wednesday, Nvidia CEO Jensen Huang underscored the rapid expansion of generative AI (GenAI) startups leveraging Nvidia’s accelerated computing platform.
“There are approximately 15,000 to 20,000 generative AI startups across diverse fields, including multimedia, digital characters, design, application productivity, and digital biology,” Huang noted. He also highlighted the automotive industry’s significant shift toward Nvidia for training the end-to-end models that enhance self-driving cars’ operational capabilities.
Huang emphasized that the demand for Nvidia’s GPUs is “incredible” as various sectors—including consumer internet, enterprise, cloud computing, automotive, and healthcare—invest heavily in “AI factories” built on thousands of Nvidia GPUs.
He stated that the transition to generative AI marks a groundbreaking “full-stack computing platform shift,” moving from mere information retrieval to generating intelligent outputs. “[The computer] is now generating contextually relevant, intelligent answers,” Huang explained, predicting a transformation in computing stacks worldwide, even affecting desktop systems.
To meet rising demand, Nvidia continued ramping shipments of its Hopper-architecture H100 GPUs in Q1 and announced the upcoming “Blackwell” platform, which the company says delivers up to 4x faster AI training and up to 30x faster inference than Hopper. More than 100 Blackwell system configurations from leading computer makers are expected to launch this year to facilitate widespread adoption.
Huang stressed that Nvidia’s comprehensive AI platform provides a significant competitive edge against narrower solutions as AI workloads continue to evolve. He anticipates that demand for Nvidia’s Hopper, Blackwell, and future architectures will outstrip supply into next year amidst the GenAI surge.
Despite posting record Q1 revenue of $26 billion, Nvidia is grappling with demand that far exceeds its capacity to supply AI GPUs. “We’re racing every single day,” Huang remarked, reflecting the relentless pressure to fulfill orders. He acknowledged that demand for the flagship H100 GPU will continue to exceed supply even as production of the Blackwell architecture ramps up.
Huang pointed to the competitive advantage gained by companies that are first to market with innovative AI models, noting, “The next company who reaches the next major plateau gets to announce a groundbreaking AI.” The urgency is palpable, with cloud providers and AI startups striving to secure GPU capacity to outpace competitors. He predicts that the supply crunch will persist well into next year.
Huang revealed how cloud providers can achieve significant financial returns by hosting AI models on Nvidia’s platforms. “For every $1 spent on Nvidia AI infrastructure, cloud providers can earn $5 in hosting revenue over four years,” he explained.
He illustrated this with an example of a 70 billion parameter language model using Nvidia’s latest H200 GPUs, claiming a single server can generate 24,000 tokens per second and support 2,400 concurrent users. “This means that for every $1 spent on Nvidia H200 servers, an API provider can generate $7 in revenue over four years,” Huang stated.
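Huang’s figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below takes the 24,000 tokens-per-second throughput cited on the call and works out a four-year revenue stream; the API token price and the implied server cost are illustrative assumptions, not figures Nvidia disclosed:

```python
# Back-of-the-envelope check of the "$7 per $1 over four years" claim.
# Only the throughput figure comes from the call; the token price is
# a hypothetical assumption for illustration.

TOKENS_PER_SECOND = 24_000                 # single H200 server, per the call
SECONDS_PER_YEAR = 365 * 24 * 3600
YEARS = 4

ASSUMED_PRICE_PER_MILLION_TOKENS = 0.50    # USD, assumed API pricing

total_tokens = TOKENS_PER_SECOND * SECONDS_PER_YEAR * YEARS
revenue = total_tokens / 1e6 * ASSUMED_PRICE_PER_MILLION_TOKENS

print(f"Tokens served over {YEARS} years: {total_tokens:.2e}")
print(f"Revenue at assumed price: ${revenue:,.0f}")
print(f"Implied server cost for a 7x return: ${revenue / 7:,.0f}")
```

At this assumed price, a server running flat-out for four years would serve roughly three trillion tokens and generate about $1.5 million, so a 7:1 return would correspond to a server cost on the order of $216,000. Actual API pricing, utilization, and hardware costs vary widely, so the ratio is sensitive to every input.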
Nvidia’s ongoing software improvements continue to enhance its GPUs’ inference performance, delivering a 3x speedup on the H100 and thereby reducing costs for customers. This exceptional return on investment is driving demand from major cloud providers, including Amazon, Google, Meta, Microsoft, and Oracle, as they compete to expand AI capacity and attract developers.
While Nvidia is well-known for its GPUs, the company is also making a significant push in datacenter networking with its InfiniBand technology. In Q1, Nvidia reported robust growth in networking, propelled by increased InfiniBand adoption. Huang identified Ethernet as a major opportunity for Nvidia, with the launch of its Spectrum-X platform optimized for AI workloads over Ethernet. “Spectrum-X opens a brand new market to Nvidia networking, enabling Ethernet-only datacenters to support large-scale AI,” he said, projecting it to evolve into a multi-billion dollar product line within a year.
Nvidia’s record Q1 results were driven by remarkable performance in the Data Center and Gaming segments, with overall revenue soaring to $26 billion, an 18% sequential increase and a 262% rise year-over-year. The Data Center business was the primary growth driver, generating $22.6 billion and experiencing a staggering 427% year-over-year increase. CFO Colette Kress noted that compute revenue surged more than five times and networking revenue grew more than three times from the previous year.
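As a quick consistency check, the headline $26 billion figure and the stated growth rates together imply the revenue of the comparison quarters. A minimal sketch (the percentages come from the results; the back-calculated figures are approximations, not reported numbers):

```python
# Back-compute prior-period revenue implied by the stated growth rates.
q1_revenue_b = 26.0       # Q1 revenue in $ billions, as reported

sequential_growth = 0.18  # +18% quarter over quarter
yoy_growth = 2.62         # +262% year over year

implied_prior_quarter = q1_revenue_b / (1 + sequential_growth)
implied_year_ago = q1_revenue_b / (1 + yoy_growth)

print(f"Implied prior quarter: ${implied_prior_quarter:.1f}B")   # ~$22.0B
print(f"Implied year-ago quarter: ${implied_year_ago:.1f}B")     # ~$7.2B
```

Both implied figures line up with the trajectory described on the call: a business that roughly tripled networking and quintupled compute revenue in a single year.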
Gaming revenue reached $2.65 billion, reflecting a seasonal decline of 8% but an increase of 18% year-over-year, while Professional Visualization and Automotive revenues also showed positive year-over-year growth.
For Q2, Nvidia projects revenues of approximately $28 billion, plus or minus 2%, with anticipated sequential growth across all market platforms.
Nvidia stock increased by 5.9% in after-hours trading to $1,005.75 following the announcement of a 10-for-1 stock split.
(Note: The author owns securities of Nvidia Corporation (NVDA). This is not investment advice; consult a professional advisor before making investment decisions.)