SambaNova Launches Samba-CoE v0.2, Outperforming Databricks DBRX in Innovation and Performance

AI chip-maker SambaNova Systems has announced a major milestone with its Samba-CoE v0.2 Large Language Model (LLM).

This model processes an impressive 330 tokens per second, outperforming notable competitors like Databricks’ newly launched DBRX, MistralAI’s Mixtral-8x7B, and xAI’s Grok-1.

What sets this achievement apart is the model’s efficiency: it operates at high speed without sacrificing accuracy, requiring only 8 sockets, whereas alternatives need 576 sockets and run at lower bit rates.

In our tests, the LLM generated responses remarkably quickly, producing a comprehensive 425-word answer about the Milky Way at 330.42 tokens per second. A question on quantum computing yielded a similarly fast response, clocking in at 332.56 tokens per second.

Efficiency Advancements

SambaNova's strategy of utilizing fewer sockets while maintaining high bit rates represents a significant breakthrough in computing efficiency. The company is also teasing the upcoming release of Samba-CoE v0.3 in collaboration with LeptonAI, signaling ongoing innovation.

These advancements are grounded in open-source models from Samba-1 and the Sambaverse, using a distinctive approach to ensembling and model merging. This methodology not only supports the current version but also indicates a scalable path for future developments.
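At a high level, a Composition of Experts routes each prompt to one specialist model rather than running every model. The sketch below illustrates that idea only; the expert names, keyword-based scoring rule, and stand-in model functions are hypothetical and not SambaNova’s actual routing method.

```python
# Illustrative sketch of expert routing in a "Composition of Experts" style
# system. The router picks a single expert per prompt, so inference cost
# stays close to one model's cost. Keyword scoring is a placeholder for a
# real learned router; all names here are made up for the example.

def route(prompt, experts):
    """Return the name of the expert whose keyword set best matches the prompt."""
    words = set(prompt.lower().split())
    best_name, best_score = None, -1
    for name, (keywords, _model) in experts.items():
        score = len(words & keywords)  # crude overlap score
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical expert registry: (keyword set, stand-in model callable).
experts = {
    "code":    ({"python", "function", "bug"},   lambda p: "code answer"),
    "science": ({"galaxy", "quantum", "milky"},  lambda p: "science answer"),
}

choice = route("explain the milky way galaxy", experts)
print(choice)  # -> science
```

A production router would be a trained classifier over many experts, but the shape is the same: score experts, dispatch to the winner, and return that single expert’s output.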

Comparisons with other models, including GoogleAI’s Gemma-7B, MistralAI’s Mixtral-8x7B, Meta’s Llama2-70B, Alibaba Group’s Qwen-72B, TIIuae’s Falcon-180B, and BigScience’s BLOOM-176B, highlight Samba-CoE v0.2’s competitive edge in the AI landscape.

This announcement is poised to ignite interest within the AI and machine learning communities, spurring discussions around efficiency, performance, and the future of AI model evolution.

Background on SambaNova

Founded in 2017 in Palo Alto, California by Kunle Olukotun, Rodrigo Liang, and Christopher Ré, SambaNova Systems initially focused on custom AI hardware chips. Its mission has since broadened to encompass a wide array of offerings, including machine learning services and the SambaNova Suite—a comprehensive enterprise AI training, development, and deployment platform launched in early 2023. Earlier this year, the company introduced Samba-1, a 1-trillion-parameter AI model derived from 50 smaller models in a “Composition of Experts” approach.

This transition from a hardware-centric startup to a full-service AI innovator reflects the founders’ commitment to making AI technologies scalable and accessible. SambaNova is establishing itself as a formidable competitor to industry giants like Nvidia, having raised $676 million in Series D funding at a valuation exceeding $5 billion in 2021. Today, it also competes with dedicated AI chip startups such as Groq.
