As businesses increasingly embed artificial intelligence in their operations and products, demand for efficient tools and platforms to build, test, and deploy machine learning models is surging. This space, known as machine learning operations (MLOps), has grown crowded, with startups such as InfuseAI, Comet, Arrikto, Arize, Galileo, Tecton, and Diveplane competing alongside offerings from established players like Google Cloud, Azure, and AWS.
Among these contenders, South Korean MLOps platform VESSL AI is making a name for itself by optimizing GPU costs through a hybrid infrastructure that spans both on-premises and cloud environments. The startup recently closed a $12 million Series A round to accelerate development of its infrastructure, which is aimed at companies looking to build custom large language models (LLMs) and specialized AI agents.
VESSL AI already serves 50 enterprise customers, including Hyundai, LIG Nex1 (a leading South Korean aerospace and defense manufacturer), TMAP Mobility (a mobility-as-a-service joint venture between Uber and SK Telecom), and tech startups such as Yanolja, Upstage, ScatterLab, and Wrtn.ai. The company has also forged strategic partnerships with Oracle and Google Cloud in the U.S. and counts more than 2,000 users, according to co-founder and CEO Jaeman Kuss An.
Founded in 2020 by An, Jihwan Jay Chun (CTO), Intae Ryoo (CPO), and Yongseon Sean Lee (tech lead), the team brings experience from Google, the mobile gaming company PUBG, and various AI startups. They set out to solve a problem An had run into while building machine learning models at a previous medical tech company: the overwhelming complexity of working with the sprawl of machine learning tools.
The team realized that a hybrid infrastructure model could streamline this process, making it more efficient and notably more affordable. VESSL AI’s MLOps platform utilizes a multi-cloud approach combined with spot instances to reduce GPU expenses by up to 80%, An explained. This strategy not only mitigates GPU shortages but also simplifies the training, deployment, and operation of AI models, including large-scale LLMs.
“VESSL AI’s multi-cloud strategy harnesses GPUs from multiple cloud service providers, including AWS, Google Cloud, and Lambda,” An stated. “The system automatically selects the most cost-effective and efficient resources, leading to significant savings for our customers.”
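VESSL AI hasn't detailed how this selection works internally, but the core idea, ranking whatever GPU capacity is currently provisionable across providers (including cheaper spot instances) by price, is straightforward to sketch. The snippet below is a minimal, hypothetical illustration in Python; the GpuOffer type, the sample price table, and pick_cheapest are invented for this example and are not VESSL's API.

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str      # illustrative provider names, e.g. "aws", "gcp", "lambda"
    gpu_type: str      # e.g. "A100"
    hourly_usd: float  # current price per GPU-hour
    is_spot: bool      # spot/preemptible capacity is cheaper but can be reclaimed
    available: bool    # whether capacity can actually be provisioned right now

# Hypothetical price snapshot; a real system would poll provider pricing APIs.
OFFERS = [
    GpuOffer("aws",    "A100", 4.10, is_spot=False, available=True),
    GpuOffer("aws",    "A100", 1.50, is_spot=True,  available=True),
    GpuOffer("gcp",    "A100", 3.70, is_spot=False, available=True),
    GpuOffer("gcp",    "A100", 1.20, is_spot=True,  available=False),  # no spot capacity
    GpuOffer("lambda", "A100", 1.80, is_spot=False, available=True),
]

def pick_cheapest(offers, gpu_type, allow_spot=True):
    """Return the lowest-cost offer that is actually provisionable."""
    candidates = [
        o for o in offers
        if o.gpu_type == gpu_type and o.available and (allow_spot or not o.is_spot)
    ]
    if not candidates:
        raise RuntimeError(f"no available {gpu_type} capacity")
    return min(candidates, key=lambda o: o.hourly_usd)

best = pick_cheapest(OFFERS, "A100")
print(f"{best.provider} {'spot' if best.is_spot else 'on-demand'} at ${best.hourly_usd}/hr")
# -> aws spot at $1.5/hr
```

A production scheduler would also have to weigh preemption risk, quotas, and data locality, but the cost-ranking step above is what makes a spot-plus-multi-cloud strategy pay off: in this snapshot the cheapest provisionable option costs roughly a third of the most expensive on-demand one.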
The platform has four core features: VESSL Run, which automates AI model training; VESSL Serve, which handles real-time deployment; VESSL Pipelines, which links model training and data preprocessing into streamlined workflows; and VESSL Cluster, which optimizes GPU resource usage in cluster environments.
The Series A round brings VESSL AI's total funding to $16.8 million, with backers including A Ventures, Ubiquoss Investment, Mirae Asset Securities, Sirius Investment, SJ Investment Partners, Wooshin Venture Investment, and Shinhan Venture Investment. The company has 35 employees in South Korea and an office in San Mateo, California.
VESSL AI's pitch lands at a moment when enterprises increasingly treat MLOps as essential infrastructure for keeping AI-driven applications reliable and performant.