Vectara Secures $25M Funding for the Launch of Mockingbird LLM, Targeting Enterprise RAG Solutions

Vectara, an early innovator in Retrieval Augmented Generation (RAG) technology, today announced a $25 million Series A funding round, driven by growing demand from enterprise users. The raise brings Vectara's total funding to $53.5 million.

Emerging from stealth mode in October 2022, Vectara first branded its technology as a neural search-as-a-service platform. It has since repositioned the offering as ‘grounded search,’ an approach now more widely known as RAG. In grounded search, a large language model (LLM) generates responses that are anchored in an enterprise's knowledge base, typically stored in a vector-capable database. The Vectara platform integrates the various elements of an efficient RAG pipeline, including its proprietary Boomerang vector embedding engine.
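To make the grounded-search idea concrete, here is a minimal sketch of a RAG flow: retrieve the most relevant documents for a query, then build a prompt that instructs the model to answer only from those numbered sources. The bag-of-words embedding and the `retrieve`/`build_grounded_prompt` helpers are purely illustrative stand-ins, not Vectara's actual pipeline, which uses a learned embedding model (Boomerang).

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: bag-of-words term counts (illustration only)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that grounds the LLM in numbered sources."""
    context = retrieve(query, documents)
    numbered = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(context))
    return (
        "Answer using ONLY the sources below and cite them by number.\n"
        f"{numbered}\nQuestion: {query}"
    )

docs = [
    "Vectara emerged from stealth mode in October 2022.",
    "Mockingbird is an LLM tuned for RAG workflows.",
    "The capital of France is Paris.",
]
prompt = build_grounded_prompt("When did Vectara emerge from stealth mode", docs)
```

In a production system, the prompt would then be sent to the generation model; grounding the answer in retrieved sources is what distinguishes RAG from asking an LLM to answer from its training data alone.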

Alongside the funding announcement, Vectara has introduced its new Mockingbird LLM, specifically designed for RAG applications.

“We are launching Mockingbird, a large language model trained to prioritize factual accuracy and logical reasoning,” said Amr Awadallah, co-founder and CEO of Vectara, in an exclusive interview.

Enterprise RAG: Beyond the Vector Database

As interest in enterprise RAG has surged over the past year, numerous players have entered the market. Technologies from providers like Oracle, PostgreSQL, DataStax, Neo4j, and MongoDB support vector and RAG use cases, intensifying competition. Awadallah argues that Vectara stands apart through several key differentiators: its platform does much more than simply connect a vector database to an LLM.

Vectara has developed an advanced hallucination detection model that enhances accuracy beyond standard RAG grounding. Additionally, the platform offers result explanations and robust security features to safeguard against prompt attacks, capabilities that are crucial for regulated industries.
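The intuition behind hallucination detection can be sketched with a simple grounding check: how much of a generated answer is actually supported by the retrieved sources? The token-overlap heuristic below is purely didactic and my own construction; Vectara's production detector is a trained evaluation model, not a lexical overlap score.

```python
def grounding_score(answer, sources):
    """Fraction of answer tokens that appear in the retrieved sources.
    A low score flags content the sources do not support (a crude proxy
    for hallucination; real detectors use trained classifiers)."""
    source_vocab = {t for s in sources for t in s.lower().split()}
    tokens = answer.lower().split()
    if not tokens:
        return 0.0
    supported = sum(1 for t in tokens if t in source_vocab)
    return supported / len(tokens)

sources = ["Vectara emerged from stealth mode in October 2022."]
grounded = grounding_score("Vectara emerged from stealth in October 2022.", sources)
ungrounded = grounding_score("Vectara was founded on Mars.", sources)
```

A faithful answer scores near 1.0 while an unsupported claim scores much lower; a production detector would additionally handle paraphrase, negation, and entailment, which pure token overlap cannot.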

Another competitive edge for Vectara is its integrated pipeline. Unlike other solutions that require customers to piece together separate components (vector database, retrieval model, and generation model), Vectara provides a comprehensive RAG pipeline with all necessary elements.

“Our differentiation is straightforward: we offer essential features for regulated industries,” Awadallah noted.

Embracing Mockingbird: A Path to Enterprise RAG Agents

With Mockingbird, Vectara aims to carve a niche in the competitive enterprise RAG landscape.

While many RAG solutions rely on general-purpose LLMs like OpenAI’s GPT-4, Mockingbird is fine-tuned specifically for RAG workflows. This targeted optimization significantly lowers the risk of hallucinations and improves citation accuracy.

“It ensures that all references are included correctly,” Awadallah explained. “To achieve strong explainability, you must provide all possible citations within the response, and Mockingbird excels in this area.”

Moreover, Vectara has engineered Mockingbird to produce structured outputs, such as JSON, which are increasingly vital for agent-driven AI workflows.
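The value of structured output becomes clear when the model's response must drive an API call rather than be read by a human. The sketch below uses a hard-coded JSON string where a real pipeline would receive the model's output; the `get_weather` tool and its handler are hypothetical examples, not part of Vectara's API.

```python
import json

def dispatch_tool_call(raw_output):
    """Parse a JSON tool call emitted by the model and route it to a
    (hypothetical) handler. Free-form prose would fail this parse,
    which is why agentic pipelines need structured output."""
    call = json.loads(raw_output)  # raises json.JSONDecodeError on non-JSON text
    handlers = {
        "get_weather": lambda args: f"weather in {args['city']}",
    }
    return handlers[call["tool"]](call["arguments"])

# Stand-in for a structured response from an LLM such as Mockingbird.
model_output = '{"tool": "get_weather", "arguments": {"city": "Paris"}}'
result = dispatch_tool_call(model_output)  # -> "weather in Paris"
```

Because the output is machine-parseable, the agent framework can validate it, route it, and retry on malformed responses, none of which is reliable with unconstrained natural-language output.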

“As RAG pipelines invoke APIs for agentic AI functions, structured output is critical for effective API calls, and this is what we deliver,” Awadallah concluded.
