Nvidia CEO Explains How Generative AI and Accelerated Computing Will Transform the Future

During his keynote address at the annual Computex event in Taiwan, Jensen Huang, CEO of Nvidia, articulated a compelling vision: generative AI and accelerated computing are set to “redefine the future.” He emphasized the pivotal moment we are experiencing in the computing landscape, stating, “Today, we’re at the cusp of a major shift in computing. Generative AI is reshaping industries and unlocking new avenues for innovation and growth.”

Nvidia has firmly established itself as a leader in artificial intelligence. Its computing hardware, particularly its GPUs, is in high demand as businesses race to harness and scale new generative AI applications and services. Huang noted that AI is driving accelerated computing across the stack, from consumer AI PCs to enterprise data center platforms. He affirmed, “The future of computing is accelerated. With our innovations in AI and accelerated computing, we’re pushing the boundaries of what’s possible and driving the next wave of technological advancement.”

Huang introduced Nvidia’s next GPU architecture, dubbed “Rubin,” following the earlier unveiling of the next-generation Blackwell GPUs. He shared a roadmap that slates Rubin for 2026, with an updated Blackwell arriving in 2025 and a further-enhanced Rubin expected in 2027. He explained that Nvidia has adopted a “one-year rhythm” for hardware updates, letting customers regularly deploy newer, faster hardware as their AI workloads grow.

“Our core philosophy is straightforward: build at the scale of an entire data center, disaggregate it, and sell the parts on a one-year rhythm while pushing everything to the limits of technology,” Huang stated. He said the new GPU designs promise significant cost reductions for businesses running AI applications, since they consume less energy while delivering up to 100 times the performance. Notably, he claimed that the energy needed to run OpenAI’s GPT-4 on Blackwell has been cut by a factor of 350.
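
To make those factors concrete, here is a minimal back-of-the-envelope sketch that simply applies the quoted numbers, up to 100x performance and a 350-fold energy reduction, to an arbitrary baseline workload. The baseline figures below are placeholders chosen for illustration, not Nvidia data.

```python
# Illustrative only: applies the performance and energy factors quoted in
# the keynote to an arbitrary, made-up baseline workload.

baseline_energy_kwh = 1_000.0   # hypothetical energy used by a baseline workload
baseline_time_hours = 100.0     # hypothetical time to complete that workload

performance_factor = 100        # "up to 100x" performance claim
energy_factor = 350             # quoted 350-fold energy reduction for GPT-4

projected_time_hours = baseline_time_hours / performance_factor
projected_energy_kwh = baseline_energy_kwh / energy_factor

print(f"Time:   {baseline_time_hours:.0f} h   -> {projected_time_hours:.1f} h")
print(f"Energy: {baseline_energy_kwh:.0f} kWh -> {projected_energy_kwh:.1f} kWh")
```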

Nvidia's hardware gains outpace Moore’s Law, the observation that the number of transistors on a chip roughly doubles every two years. In just eight years, the company says it has achieved a 1,000-fold increase in AI computing power, from 19 TFLOPS with the Pascal GPUs in 2016 to 20,000 TFLOPS with the latest Blackwell architecture. Huang remarked, “Whenever we elevate computation levels, the cost decreases. Accelerated computing is sustainable computing.”
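
As a quick sanity check on that comparison, the figures quoted above imply a yearly growth rate far beyond a Moore’s-Law doubling every two years. The short Python sketch below works through the arithmetic; the 19 TFLOPS and 20,000 TFLOPS values are the ones cited in the keynote, and everything else is straightforward math.

```python
# Back-of-the-envelope comparison of the quoted AI-compute growth with a
# classical Moore's-Law trajectory (doubling every two years).

pascal_tflops = 19          # Pascal-generation figure cited for 2016
blackwell_tflops = 20_000   # Blackwell figure cited in the keynote
years = 2024 - 2016         # eight-year span

total_gain = blackwell_tflops / pascal_tflops   # ~1,053x overall
annual_gain = total_gain ** (1 / years)         # ~2.4x per year

# Moore's Law: 2x every two years -> 2^(years/2) over the same span
moores_law_gain = 2 ** (years / 2)              # 16x over eight years

print(f"Quoted gain over {years} years: {total_gain:.0f}x "
      f"(~{annual_gain:.2f}x per year)")
print(f"Moore's-Law gain over the same span: {moores_law_gain:.0f}x")
```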

Looking ahead, Huang described what he calls "Physical AI," pointing to rapid advances in robotics. “AI that understands the laws of physics and can work alongside humans is no longer a concept of science fiction,” he said. “Robotics is already here, and it's revolutionizing practices across Taiwan.” Nvidia aims to meet growing demand in robotics with suites of pretrained models and tools for training these applications. Earlier this year, the company announced its entry into humanoid robotics with the GR00T platform, designed to help robots understand natural language and imitate human movements.

In Huang's view, the future of manufacturing will be inherently robotic. “Factories will orchestrate robots, and those robots will be responsible for producing robotic products,” he stated, painting an exciting picture of advanced automation reshaping our industrial landscape. With these advancements, Nvidia reaffirms its commitment to driving innovation and harnessing the transformative potential of AI and robotics in the global market.
