CoreWeave Secures $1.1 Billion Funding, Indicating a Surge in Demand for Alternative Cloud Solutions

The demand for alternative cloud solutions has never been higher. A case in point is CoreWeave, a GPU infrastructure provider that started out as a cryptocurrency mining operation. This week, CoreWeave secured $1.1 billion in funding from investors including Coatue, Fidelity, and Altimeter Capital, valuing the company at $19 billion post-money. The latest round brings CoreWeave’s total funding to $5 billion in debt and equity, a remarkable sum for a company still in its early years.

CoreWeave isn't alone in this surge of investment. Lambda Labs, which provides a variety of cloud-hosted GPU instances, recently obtained a “special purpose financing vehicle” worth up to $500 million, following a successful $320 million Series C round earlier this year. Additionally, Voltage Park—a nonprofit backed by cryptocurrency mogul Jed McCaleb—announced an investment of $500 million in GPU-based data centers last October. Moreover, Together AI, which focuses on cloud GPU hosting and generative AI research, secured $106 million in a Salesforce-led round in March.

So, what fuels this excitement and influx of capital into the alternative cloud sector? The answer is simple: generative artificial intelligence.

As the generative AI boom continues, demand for the hardware needed to train and run AI models at scale has skyrocketed. Graphics processing units (GPUs) are particularly well suited to the job: their thousands of cores can perform in parallel the linear algebra operations that training and serving generative models depend on.
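As a rough illustration of that parallelism, the sketch below times the same large matrix multiplication, the core operation in transformer-style models, on a CPU and on a GPU. It assumes a machine with PyTorch installed and a CUDA-capable GPU, neither of which is specified in this article.

```python
# Illustrative only: compare one large matrix multiply on CPU vs. GPU.
# Assumes PyTorch is installed and a CUDA device is present.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b                     # one large matrix multiplication
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```

On typical hardware the GPU finishes the same operation far faster, which is the property that makes GPUs the default choice for generative AI workloads.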

However, the high upfront cost of buying and operating GPUs leads many developers and organizations to rent them in the cloud instead. The major cloud providers, Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, offer an extensive range of GPU instances and specialized hardware optimized for generative AI workloads. Yet for certain models and use cases, alternative cloud providers can be more cost-effective and offer better availability.

For example, renting an Nvidia A100 40GB, a common choice for model training, costs $2.39 per hour on CoreWeave, or roughly $1,200 per month. The same GPU costs $3.40 per hour on Azure (around $2,482 per month) and $3.67 per hour on Google Cloud (about $2,682 per month). Because generative AI workloads typically run on clusters of GPUs, these differences compound quickly.
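A minimal sketch of how those hourly rates compound across a month and a cluster, assuming full-time (730-hour) utilization and the on-demand per-GPU rates quoted above. The 8-GPU cluster size is purely illustrative, and actual bills depend on utilization, committed-use discounts, and instance configuration.

```python
# Back-of-the-envelope cost comparison using the hourly A100 40GB rates
# cited above. Hours per month and cluster size are illustrative assumptions.
HOURS_PER_MONTH = 730   # roughly 24 hours/day averaged over a month
CLUSTER_GPUS = 8        # a small training cluster

hourly_rates = {
    "CoreWeave": 2.39,
    "Azure": 3.40,
    "Google Cloud": 3.67,
}

for provider, rate in hourly_rates.items():
    per_gpu_month = rate * HOURS_PER_MONTH
    cluster_month = per_gpu_month * CLUSTER_GPUS
    print(f"{provider:>12}: ${per_gpu_month:,.0f} per GPU/month, "
          f"${cluster_month:,.0f} for an {CLUSTER_GPUS}-GPU cluster")
```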

According to Sid Nag, VP of Cloud Services and Technologies at Gartner, “Companies like CoreWeave participate in a market we classify as specialty ‘GPU as a service’ cloud providers.” He notes that the high demand for GPUs has created an alternative pathway to these critical resources beyond what the hyperscalers can provide. Even large tech firms are turning to these alternative cloud providers as they encounter capacity constraints.

For instance, last June Microsoft reportedly inked a multibillion-dollar agreement with CoreWeave to ensure that OpenAI, its partner and the maker of ChatGPT, has sufficient computing power for its generative AI models. Nvidia, which supplies most of CoreWeave’s GPUs, views this shift favorably, since it allows the chipmaker to give select cloud providers preferential access to its GPUs.

Lee Sustar, a principal analyst at Forrester, highlights that cloud providers like CoreWeave might find success partly because they can navigate the market without the “infrastructure baggage” that larger providers must manage. He states, “Given the hyperscaler dominance in the public cloud market, which demands significant investments in diverse infrastructure and services yielding minimal revenue, challengers like CoreWeave can thrive by focusing on premium AI services without the overhead of hyperscaler-level investments.”

But is this rapid growth sustainable? Sustar expresses some reservations, suggesting that the future of alternative cloud providers depends on their ability to scale GPU availability and maintain competitive pricing. As major providers like Google, Microsoft, and AWS ramp up their investments in custom hardware for model training, pricing pressure might intensify: Google offers Tensor Processing Units (TPUs); Microsoft recently introduced two custom chips, Azure Maia and Azure Cobalt; and AWS has developed Trainium, Inferentia, and Graviton.

“Hyperscalers will leverage custom silicon to reduce their reliance on Nvidia, while Nvidia will continue to engage with GPU-centric AI clouds like CoreWeave,” Sustar explains.

Moreover, while many generative AI tasks benefit from GPUs, not every workload requires them, particularly workloads that are not time-sensitive. CPUs can handle these tasks, though typically more slowly than GPUs or custom accelerators.

A significant risk for alternative cloud providers is the potential collapse of the generative AI market, which could leave them with an excess of GPUs and insufficient customers. Nevertheless, both Sustar and Nag remain optimistic about the near-term future, anticipating a steady emergence of new cloud providers.

“GPU-oriented cloud startups will create robust competition for incumbents, particularly among customers already using multiple clouds who are equipped to handle management, security, risk, and compliance challenges across different platforms,” Sustar asserts. “These customers are open to trying new AI cloud options as long as they boast credible leadership, solid financial backing, and GPUs with no delays.”
