TensorWave Aims to Challenge Nvidia's Dominance in AI Compute with Innovative AMD-Powered Cloud Solutions

Chipmaker Nvidia achieved $30 billion in revenue during the last fiscal quarter, primarily fueled by the AI industry's relentless demand for GPUs. These graphics processing units are crucial for training and running AI models, as they feature thousands of cores that operate in parallel, efficiently solving the linear algebra equations that underpin these models.
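To make the parallelism concrete: the workhorse operation in training and running these models is matrix multiplication. The sketch below uses NumPy on a CPU purely as an illustration (the shapes are invented, not from the article); on a GPU, the many multiply-accumulate operations inside this single call are spread across thousands of cores at once.

```python
import numpy as np

# Illustrative shapes: a batch of 512 token embeddings passing through one
# layer's weight matrix. These numbers are hypothetical, chosen for scale.
activations = np.random.rand(512, 1024)
weights = np.random.rand(1024, 4096)

# This one line hides ~2 billion multiply-accumulates. A GPU executes them
# largely in parallel, which is why GPUs dominate AI workloads.
outputs = activations @ weights
print(outputs.shape)
```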

The appetite for AI continues to soar, and Nvidia’s GPUs have emerged as the hardware of choice among AI developers across the spectrum. However, TensorWave, a startup founded in late 2022, is challenging this trend by offering a cloud service that exclusively utilizes hardware from Nvidia's competitor, AMD, for AI workloads.

“We identified an unhealthy monopoly that was limiting access to computing power and hindering innovation in the AI sector,” stated Darrick Horton, TensorWave’s CEO and co-founder. “Driven by a commitment to democratize AI, we aimed to provide a viable alternative that enhances competition and choice.”

Unconventional Beginnings

Horton’s path to co-founding TensorWave was serendipitously paved by his interest in pickleball. He met co-founders Jeff Tatarchuk and Piotr Tomasik, longtime friends and pickleball doubles partners, during a match. Afterward, the pair invited Horton, a former colleague of Tatarchuk’s, to join them at their favorite Las Vegas hangout.

“Our conversation led us to discuss the monopolistic stranglehold on GPU compute capacity, which was creating supply issues,” Horton recounted. “That realization sparked the creation of TensorWave.”

The trio’s shared history goes beyond pickleball. Tatarchuk previously co-founded the cloud vendor VMAccel with Horton and sold an earlier startup, CRM developer Lets Rolo, to digital identity firm LifeKey. Horton holds degrees in mechanical engineering and physics, worked at Lockheed Martin’s Skunk Works R&D division, and co-founded VaultMiner Technologies, a crypto mining venture and VMAccel’s parent company. Tomasik co-founded Lets Rolo with Tatarchuk and is also a co-founder of the influencer marketing platform Influential, which French firm Publicis acquired for $500 million last July.

A Strategic Location

Based in Las Vegas, TensorWave may seem like an unconventional choice for a cloud infrastructure startup, but Horton believes it positions them uniquely. “Vegas has the potential to evolve into a thriving tech and startup ecosystem,” he asserted.

The claim isn’t far-fetched. Data from Dealroom.co shows that Las Vegas hosts over 600 startups employing more than 11,000 people, which collectively attracted over $4 billion in investment in 2022. Vegas also offers lower energy costs and overhead than many major U.S. cities, a benefit for startups. Horton, Tatarchuk, and Tomasik have close ties to the city’s venture capital community as well.

Tomasik previously served as a general partner at the Las Vegas-based seed fund 1864 Fund and currently works with the nonprofit accelerators StartUp Vegas and Vegas Tech Ventures. Tatarchuk is an angel investor at Fruition Lab, which began as a Christian organization and has since evolved into a notable incubator.

These connections have been instrumental in establishing TensorWave as one of the first cloud providers to offer AMD Instinct MI300X instances for AI workloads. The service features dedicated storage and high-speed interconnects, renting GPU capacity by the hour with a minimum six-month contract.

“In the cloud sector, we believe we have found our niche,” Horton said. “We see ourselves as complementary, providing AI-specific computing power at competitive price-to-performance ratios.”

A Growing Market

The landscape for startups focused on low-cost, on-demand, GPU-powered clouds for AI is thriving. CoreWeave, which initially operated as a cryptocurrency venture, recently secured $1.1 billion in new funding and inked a multi-billion-dollar deal with Microsoft. Lambda Labs raised a special purpose financing vehicle of up to $500 million and seeks an additional $800 million. Voltage Park, supported by crypto billionaire Jed McCaleb, is investing $500 million in GPU-backed data centers, while Together AI secured $106 million in a recent funding round led by Salesforce.

So how does TensorWave plan to carve out its niche? First, by competing on price. Horton highlighted that the MI300X is significantly cheaper than Nvidia’s leading AI GPU, the H100, allowing TensorWave to pass those savings on to customers. While he wouldn’t disclose exact pricing, he said the company aims to undercut comparable H100 plans, which hover around $2.50 per hour, a challenging target.

“Pricing ranges from about $1 per hour to $10 per hour based on workload requirements and GPU configurations,” he noted, adding that TensorWave must keep certain cost details confidential.
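Combined with the six-month minimum contract mentioned earlier, those hourly rates imply a meaningful per-GPU commitment. The back-of-the-envelope calculation below is hypothetical: it assumes roughly 730 hours per month and full-time reservation, neither of which the article states.

```python
HOURS_PER_MONTH = 730   # assumed average (24 * 365 / 12); not from the article
MIN_MONTHS = 6          # TensorWave's stated minimum contract length

# Quoted range endpoints, plus the ~$2.50/hr H100 benchmark for comparison.
for rate in (1.00, 2.50, 10.00):
    commitment = rate * HOURS_PER_MONTH * MIN_MONTHS
    print(f"${rate:.2f}/hr -> ${commitment:,.0f} minimum commitment per GPU")
```

Even at the bottom of the quoted range, a single GPU reserved for the minimum term runs into the thousands of dollars, which helps explain why pricing pressure against the H100 matters so much to customers.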

The second competitive edge lies in performance. Horton referenced benchmarks suggesting that the MI300X outperforms the H100 in specific applications, notably text-generating AI models like Meta’s Llama 2.

His claims appear well-founded, considering the growing interest in the MI300X from notable tech leaders. Meta announced its intention to use MI300X chips for running its Meta AI assistant, while OpenAI, the creator of ChatGPT, plans to integrate MI300X support into its developer tools.

Market Competition

Other players betting on AMD’s AI chips include startups like Lamini and Nscale, as well as larger cloud providers like Azure and Oracle, while Google Cloud and AWS remain skeptical about AMD's competitiveness.

These vendors currently benefit from Nvidia's GPU shortage and delays regarding Nvidia's upcoming Blackwell chip. However, this might soon change as manufacturing of key chip components, particularly memory, increases, allowing Nvidia to expand shipments of the H200, the successor to the H100.

A looming challenge for cloud startups relying on AMD hardware is overcoming the competitive moat Nvidia has built around its AI chips. Nvidia’s development software is widely seen as more mature, more user-friendly, and far more broadly adopted than AMD’s, and AMD CEO Lisa Su has herself acknowledged the adoption challenge.

Looking ahead, maintaining competitive pricing could become increasingly difficult as hyperscalers invest more in custom hardware for AI model training and inference. Google provides TPUs, while Microsoft has recently introduced custom chips like Azure Maia and Azure Cobalt, and AWS offers Trainium, Inferentia, and Graviton chips.

“As developers actively seek alternatives for their AI workloads, especially amidst increasing demands for memory and performance, coupled with ongoing production challenges, we believe AMD will continue to play a pivotal role in democratizing compute capabilities in the AI landscape,” Horton stated.

Early Success

TensorWave began onboarding customers in late spring as part of a preview program. Horton reported that the startup is already generating $3 million in annual recurring revenue and anticipates this figure will soar to $25 million by year’s end, driven by plans to scale capacity to 20,000 MI300Xs.

At an estimated $15,000 per GPU, 20,000 MI300Xs would cost around $300 million; nevertheless, Horton claims that TensorWave’s burn rate remains “well within sustainable levels.” The company previously indicated it would use its GPUs as collateral for a significant debt financing round, a strategy employed by other data center operators such as CoreWeave; Horton confirmed this is still in the works.
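The capital figure in the paragraph above follows directly from the article’s own numbers, which a quick sanity check confirms:

```python
gpus = 20_000            # planned MI300X capacity, per the article
price_per_gpu = 15_000   # estimated cost of one MI300X, per the article

total = gpus * price_per_gpu
print(f"${total:,}")     # $300,000,000
```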

“This underscores our robust financial health,” he elaborated. “We are strategically positioned to navigate potential challenges while delivering critical value.”

When asked about the number of current customers, Horton opted for confidentiality but emphasized TensorWave's publicly announced collaborations with Edgecore Networks and MK1, an AI inferencing startup founded by former Neuralink engineers.

“We’re rapidly scaling our capacity with multiple nodes and continuously increasing our offerings to meet the burgeoning demand from our pipeline,” Horton stated, revealing plans to integrate AMD's next-gen MI325X GPUs, slated for release by Q4 2024, as early as November or December.

Investors are encouraged by TensorWave’s growth trajectory. Last week, Nexus VP announced that it had led a $43 million funding round for the company, joined by Maverick Capital, StartupNV, Translink Capital, and AMD Ventures.

This funding round—the startup's first—values TensorWave at $100 million post-money. “AMD Ventures shares TensorWave’s vision of transforming AI compute infrastructure,” said Mathew Hein, SVP of AMD Ventures. “Their deployment of the AMD Instinct MI300X and ability to offer public instances to AI customers positions them as an early competitor in the AI space, and we are excited to support their growth.”
