In the era of AI, organizations seek to enhance critical internal functions with large language models (LLMs). While investments are substantial, achieving tangible ROI from these technologies is challenging. New York-based startup Hebbia, dedicated to streamlining information retrieval, recently announced a $130 million Series B funding round led by Andreessen Horowitz, Index Ventures, Peter Thiel, and Google's venture capital arm.
Hebbia is developing a user-friendly, LLM-native productivity interface that simplifies data-driven decision-making, regardless of data type or volume. The company already works with major players in the financial services sector, including hedge funds and investment banks, and plans to extend its technology to a broader range of enterprises.
“AI is undoubtedly the most important technology of our lives. But technology doesn’t drive revolutions—products do. Hebbia is building the human layer—the product layer—to AI,” said George Sivulka, founder and CEO of Hebbia, in a blog post. Before this round, the company had raised $31 million across several earlier rounds.
What Hebbia Offers
LLM-based chatbots often struggle with complex business queries, either because the relevant documents exceed the model's context window or because the questions themselves are too intricate. These failures can undermine teams' confidence in what language models can deliver.
Founded in 2020, Hebbia addresses this challenge with Matrix, an LLM-powered copilot designed for enterprise environments. Matrix lets knowledge workers ask intricate questions of internal documents, from PDFs and spreadsheets to audio transcripts, using what the company describes as an infinite context window.
When a user submits a query along with the relevant documents, Matrix deconstructs the prompt into manageable tasks for the underlying LLM to execute. This process enables simultaneous analysis of vast amounts of information, yielding structured insights. According to Hebbia, the platform can reason across millions to billions of documents and data types while providing citations for transparency and traceability.
“Designed for knowledge workers, Hebbia allows you to instruct AI agents to perform tasks exactly as you would, handling complexity and large datasets with the flexibility and transparency akin to a spreadsheet or human analyst,” Sivulka explained.
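The workflow described above (decomposing a complex prompt into sub-tasks, answering each against a set of documents, and returning results with citations) can be sketched roughly as follows. This is a hypothetical illustration of the general pattern, not Hebbia's actual implementation; every function, class, and heuristic here is an assumption, and the keyword-matching stand-in would be an LLM call in a real system.

```python
# Hypothetical sketch of a query-decomposition pipeline with citations.
# NOT Hebbia's implementation: all names and logic are illustrative.
from dataclasses import dataclass


@dataclass
class Citation:
    document: str
    excerpt: str


@dataclass
class SubResult:
    answer: str
    citations: list


def decompose(query: str) -> list:
    """Split a compound query into simpler sub-tasks (naive heuristic)."""
    return [part.strip() for part in query.split(" and ") if part.strip()]


def answer_subtask(task: str, documents: dict) -> SubResult:
    """Stand-in for an LLM call: match documents on the task's last keyword."""
    key = task.split()[-1].lower()
    hits = [Citation(name, text[:60]) for name, text in documents.items()
            if key in text.lower()]
    return SubResult(answer=f"{len(hits)} document(s) address '{task}'",
                     citations=hits)


def run_query(query: str, documents: dict) -> list:
    """Fan sub-tasks out over the corpus (sequential here; a real
    system would run them in parallel)."""
    return [answer_subtask(task, documents) for task in decompose(query)]


docs = {
    "10-K.pdf": "Annual revenue grew 15% year over year.",
    "call.txt": "The CFO discussed headcount and hiring plans.",
}
results = run_query("summarize revenue and summarize headcount", docs)
for r in results:
    print(r.answer, [c.document for c in r.citations])
```

Attaching a `Citation` to every sub-answer is what gives a pipeline like this the traceability the article emphasizes: each claim in the merged output can point back to the source document it came from.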
Future Impact
Sivulka originally aimed to simplify workflows for finance professionals who frequently sifted through extensive documents. However, the platform has since gained traction across various sectors. Hebbia now claims over 1,000 use cases in production with notable clients, including Charlesbank, American Industrial Partners, Oak Hill Advisors, Centerview Partners, Fisher Phillips, and the U.S. Air Force.
“Over the last 18 months, we grew revenue 15X, quintupled headcount, accounted for over 2% of OpenAI’s daily volume, and set the stage for customers to transform their working methodologies,” Sivulka reported. It remains unclear whether OpenAI's models are the only LLMs behind Matrix or whether users can choose alternatives.
With the recent funding, Hebbia plans to further enhance its platform, simplifying knowledge retrieval for even more large enterprises.
“I envision a future where AI agents significantly contribute to global GDP, surpassing all human employees. I believe Hebbia will lead us there,” Sivulka remarked, emphasizing that the company aims to create one of the most consequential software products of the next century.
However, it’s vital to acknowledge that Hebbia faces competition. Other companies, such as Glean—a Palo Alto startup that reached unicorn status in 2022 with a ChatGPT-like productivity assistant—are also advancing AI-driven knowledge retrieval. Additionally, firms like Vectara are focused on enabling generative AI experiences based on enterprise data.