CentML Secures $27M from Nvidia and Others to Optimize AI Model Performance

Contrary to popular belief, the era of substantial seed rounds is alive and well—especially in the AI sector.

CentML, a startup focused on providing tools that lower the costs and enhance the performance of machine learning model deployment, announced today it has secured $27 million in an extended seed round. This funding round saw participation from Gradient Ventures, TR Ventures, and Nvidia, as well as Misha Bilenko, Vice President of Microsoft Azure AI.

CentML originally closed its seed round in 2022 and extended it in recent months amid heightened interest in its product, bringing its total funding to $30.5 million. The new capital will go toward product development and research, as well as expanding CentML's engineering team and broader workforce, which currently numbers 30 employees across the U.S. and Canada, according to co-founder and CEO Gennady Pekhimenko.

Pekhimenko, who also serves as an associate professor at the University of Toronto, co-founded CentML last year with Akbar Nurlybayev and PhD students Shang Wang and Anand Jayarajan. The founders share a vision of broadening access to compute resources amid the ongoing AI chip supply crunch.

“Machine learning costs, talent shortages, and chip availability are significant hurdles any AI and machine learning company faces, often simultaneously,” Pekhimenko explained in an email interview. “The highest-performing chips are frequently unavailable due to overwhelming demand from both enterprises and startups. This scarcity forces companies to compromise on model size or results in increased inference latencies for live deployments.”

Companies training models, particularly generative AI models like ChatGPT and Stable Diffusion, rely heavily on GPUs, because GPUs can run many computations in parallel, which makes them well suited to training state-of-the-art AI.
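
To make the parallelism point concrete, here is a minimal PyTorch sketch (not CentML's tooling) that times the kind of large matrix multiply that dominates training; the actual speedup depends entirely on the hardware available.

```python
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    """Time one large matrix multiply, the core operation behind model training."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure the inputs are fully materialized
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f}s")
```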

However, the supply of these critical chips is insufficient to meet current demand. Microsoft's recent earnings report warned that the shortage of server hardware needed for AI operations could lead to service disruptions. Meanwhile, Nvidia's top-performing AI GPUs are reportedly sold out until 2024.

As a result, tech giants like OpenAI, Google, AWS, Meta, and Microsoft are considering or actively developing custom chips for model training. Even those efforts have hit snags: Meta has experienced setbacks with some of its experimental hardware, and Google is struggling to keep pace with demand for its in-house AI accelerator, the Tensor Processing Unit (TPU).

According to Gartner, spending on AI-focused chips is projected to reach $53 billion this year and to more than double over the next four years. Against this backdrop, Pekhimenko believes it's the right moment to launch software designed to squeeze more efficiency out of existing hardware.

“The costs associated with training AI and machine learning models continue to escalate,” Pekhimenko noted. “With CentML’s optimization technology, we can cut expenses by up to 80% without sacrificing speed or accuracy.”

It's a bold claim. At its core, CentML's software operates on straightforward principles: the platform identifies bottlenecks in model training and estimates how long a deployment will take and how much it will cost. In addition, CentML offers a compiler that optimizes model training workloads for the specific hardware, such as GPUs, they run on.
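
The article doesn't expose CentML's compiler interface, but the general shape of an ML compiler is familiar from open tooling. As a rough, illustrative analogy (PyTorch's built-in torch.compile, not CentML's API), a compiler traces a model and emits kernels tuned to the hardware it runs on:

```python
import torch
import torch.nn as nn

# A small stand-in network; in practice this would be a full training workload.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

# torch.compile traces the model and generates fused, hardware-specific kernels,
# the same broad idea as a dedicated ML compiler targeting a particular GPU.
compiled = torch.compile(model)

x = torch.randn(32, 1024)
out = compiled(x)  # the first call triggers compilation; later calls reuse it
print(out.shape)   # torch.Size([32, 10])
```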

Pekhimenko claims that CentML's software preserves model integrity and requires minimal effort from engineers to adopt. “For one of our customers, we optimized their Llama 2 model to run three times faster using Nvidia A10 GPU cards,” he added.

CentML isn’t alone in its software-centric approach to model optimization. It faces competition from MosaicML, recently acquired by Databricks for $1.3 billion, and OctoML, which received an $85 million investment in November 2021 for its machine learning acceleration platform.

However, Pekhimenko asserts that CentML’s methodologies do not compromise model accuracy as some of MosaicML's techniques can, and he emphasizes that CentML’s compiler represents a more advanced generation than OctoML’s.

Looking ahead, CentML plans to extend its focus beyond optimizing model training to include inference, which involves running models post-training. Given that GPUs are also critical for inference, Pekhimenko sees this as a promising opportunity for growth.

“The CentML platform is capable of running any model,” Pekhimenko stated. “We produce optimized code for various GPUs, reducing the memory required for model deployment and enabling teams to deploy on smaller, more cost-effective GPUs.”
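
The piece doesn't explain how CentML shrinks deployment memory, but one common technique in the same spirit is lowering parameter precision. A minimal PyTorch sketch, purely illustrative and not CentML's method:

```python
import torch.nn as nn

def param_memory_mb(model: nn.Module) -> float:
    """Total memory consumed by the model's parameters, in megabytes."""
    return sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6

model = nn.Sequential(nn.Linear(4096, 4096), nn.Linear(4096, 4096))
print(f"fp32: {param_memory_mb(model):.1f} MB")

model.half()  # cast weights to 16-bit floats in place
print(f"fp16: {param_memory_mb(model):.1f} MB")  # roughly half the footprint
```

A smaller parameter footprint is what lets a model fit on a cheaper GPU with less memory, which is the outcome Pekhimenko describes.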
