As artificial intelligence becomes more deeply embedded in business operations, the demand for tools that simplify the creation, testing, and deployment of machine learning models is growing rapidly. This burgeoning category, known as machine learning operations (MLOps), is already competitive, with companies like InfuseAI, Comet, and Arize in the mix alongside tech giants such as Google Cloud, AWS, and Microsoft Azure.
Amid this crowded space, South Korean MLOps platform VESSL AI is carving out its niche. The start-up is addressing a critical concern for companies building large language models (LLMs) and AI agents: the high cost of GPU usage. By running workloads on a hybrid infrastructure that combines on-premise and cloud environments, VESSL AI says it can reduce customers' GPU expenses by up to 80%.
VESSL AI recently raised $12 million in a Series A round, which it will use to accelerate infrastructure development. Investors included A Ventures, Ubiquoss Investment, and Mirae Asset Securities, bringing the company's total funding to $16.8 million.
VESSL AI’s platform is already in use by over 50 enterprise customers, including major names such as Hyundai, aerospace manufacturer LIG Nex1, and TMAP Mobility, a joint venture between Uber and South Korea’s SK Telecom. It has also gained traction with tech start-ups like Yanolja, Upstage, ScatterLab, and Wrtn.ai.
According to Jaeman Kuss An, co-founder and CEO of VESSL AI, the hybrid infrastructure approach leverages multi-cloud resources and spot instances, making it easier and cheaper for companies to develop custom AI models. “Our platform uses GPUs from providers like AWS, Google Cloud, and Lambda. The system automatically selects the most cost-effective and efficient resources, cutting customer GPU costs significantly,” An said.
The company’s platform includes key features such as:
- VESSL Run: Automates AI model training.
- VESSL Serve: Supports real-time deployment of AI models.
- VESSL Pipelines: Integrates model training and data preprocessing to streamline workflows.
- VESSL Cluster: Optimizes GPU resource usage in clustered environments.
Addressing a Growing Market Need
Founded in 2020 by Jaeman An and co-founders Jihwan Jay Chun (CTO), Intae Ryoo (CPO), and Yongseon Sean Lee (tech lead), VESSL AI was born out of the founders’ frustrations with the complex and resource-intensive process of building machine learning models at their previous companies. They recognized an opportunity to make the process more efficient while reducing costs, especially in light of global GPU shortages.
With offices in South Korea and the U.S., and over 2,000 users globally, VESSL AI is well-positioned to grow its customer base by helping businesses train and deploy AI models in a cost-efficient manner.
Main Image: KED Global