An MIT spinoff co-founded by renowned roboticist Daniela Rus is betting that its liquid neural network technology can reshape artificial intelligence. The new company, named Liquid AI, officially emerged from stealth mode today, announcing that it has raised $37.5 million in a two-stage seed round. Investors include prominent venture capital firms such as OSS Capital, PagsGroup, Automattic (the parent company of WordPress), Samsung Next, Bold Capital Partners, and ISAI Cap Venture, along with angel investors like GitHub co-founder Tom Preston-Werner, Shopify co-founder Tobias Lütke, and Red Hat co-founder Bob Young.
The round values Liquid AI at $303 million post-money. The founding team alongside Rus includes Ramin Hasani (CEO), Mathias Lechner (CTO), and Alexander Amini (Chief Scientific Officer). Hasani was previously the principal AI scientist at Vanguard and has held research positions at MIT. Both Lechner and Amini are seasoned MIT researchers who, along with Hasani and Rus, contributed significantly to the development of liquid neural networks.
So, what exactly are liquid neural networks? As outlined by my colleague Brian Heater, these networks represent a new wave in AI technology. A key research paper titled “Liquid Time-Constant Networks,” published in late 2020 by Hasani, Rus, Lechner, Amini, and their team, put liquid neural networks in the spotlight after years of development, originating as a concept back in 2018.
“The idea was first conceived at the Vienna University of Technology in Austria, in Professor Radu Grosu's lab, where I earned my Ph.D. and Mathias Lechner completed his master’s,” Hasani shared in an email. “The concept was refined and expanded at Rus’ lab at MIT CSAIL, where Amini and Rus joined Mathias and me.”
Liquid neural networks are built from “neurons” governed by equations that predict each neuron's behavior over time, as in many modern model architectures. The “liquid” refers to the architecture's flexibility: inspired by the nervous systems of roundworms, these models can be far more compact than traditional AI systems while requiring considerably less compute to run.
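For readers who want the math, the 2020 paper models each neuron's hidden state with an ordinary differential equation whose effective time constant depends on the current input. The display below is a paraphrase of that liquid time-constant formulation rather than an exact reproduction, with x(t) the hidden state, I(t) the input, τ a learned time-constant parameter, θ the remaining weights, f a bounded nonlinearity, and A a learned bias vector:

$$\frac{d\mathbf{x}(t)}{dt} = -\left[\frac{1}{\tau} + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big)\right] \odot \mathbf{x}(t) + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big) \odot \mathbf{A}$$

Because f depends on the input, the bracketed term (the inverse of the effective time constant) shifts as the data stream shifts, which is the property that makes the network “liquid.”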
For comparison, consider GPT-3, the predecessor to OpenAI's GPT-4, which contains around 175 billion parameters and roughly 50,000 neurons. Here, “parameters” are the parts of a model learned from training data that determine its capabilities. A liquid neural network trained for a task such as drone navigation, by contrast, can get by with around 20,000 parameters and fewer than 20 neurons.
The reduced size and complexity mean that training and running liquid neural networks can be less resource-intensive, a crucial factor as the demand for AI computing power soars. For instance, a liquid neural network designed for autonomous driving could theoretically operate on a Raspberry Pi.
Moreover, the compact design and straightforward structure of liquid neural networks enhance interpretability. It's intuitively easier to understand the function of each neuron in a liquid network than to decipher the roles of thousands of neurons in more complex models like GPT-3.
Models with fewer parameters already exist for many applications, but liquid neural networks stand out for their capacity to adapt over time. Whereas conventional models analyze static snapshots of data, liquid neural networks process data sequences continuously and adjust how their neurons interact in real time. That lets them respond quickly to changing conditions, such as shifting weather during a self-driving task.
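To make that concrete, here is a minimal, purely illustrative sketch of how such a continuous-time cell might process a stream sample by sample. It uses a simple Euler discretization of a liquid time-constant-style update with invented toy parameters; it is not Liquid AI's implementation.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.01):
    """One Euler step of a simplified liquid time-constant (LTC) style cell.

    x: hidden state (n,), u: input sample (m,). The nonlinearity f depends on
    the input, so the effective time constant changes as the data stream
    changes; this is the "adapts in real time" property in miniature.
    """
    f = np.tanh(W_in @ u + W_rec @ x + b)   # input- and state-dependent gate
    dx = -(1.0 / tau + f) * x + f * A       # dx/dt = -(1/tau + f)*x + f*A
    return x + dt * dx                      # explicit Euler update

# Toy usage: 8 neurons, 3 input channels, a 100-step random input stream.
rng = np.random.default_rng(0)
n, m = 8, 3
x = np.zeros(n)
W_in = rng.normal(scale=0.5, size=(n, m))
W_rec = rng.normal(scale=0.5, size=(n, n))
b, A = np.zeros(n), np.ones(n)
tau = np.full(n, 0.5)

for u in rng.normal(size=(100, m)):
    x = ltc_step(x, u, W_in, W_rec, b, tau, A)  # state carries context forward
print(x.round(3))
```

The point of the sketch is the loop: the model never sees the whole sequence at once, it updates its state one sample at a time, so its behavior tracks the incoming data as it arrives.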
In practical applications, liquid neural networks have consistently outperformed state-of-the-art algorithms in predicting future trends across diverse datasets, from atmospheric chemistry to traffic patterns. Notably, during recent testing, the Liquid AI team trained a liquid neural network using data from an experienced drone pilot and successfully deployed it across a fleet of quadrotors for various tasks. This model outperformed other navigation systems, successfully locating targets in challenging terrains without prior exposure to those environments.
Applications for liquid neural networks include drone search and rescue, wildlife monitoring, and delivery services, but their potential extends well beyond these uses. According to the Liquid AI team, the architecture can analyze any time-based phenomenon, from electric grid fluctuations and medical readouts to financial transactions and severe weather events. Essentially, wherever sequential data exists, such as video feeds, liquid neural networks can be trained on it.
What does Liquid AI plan to do with this technology moving forward? Ultimately, the company aims to commercialize it. “We compete with foundation model companies developing generative AI tools,” Hasani noted, alluding to OpenAI and its rivals (e.g., Anthropic, Stability AI, Cohere, AI21 Labs). “This funding will enable us to create cutting-edge liquid foundation models that surpass traditional generative models.”
The architecture itself is also likely to improve: in 2022, Rus' lab found ways to scale liquid neural networks well beyond previous computational limits, and more advances are anticipated.
In addition to focusing on model design and training, Liquid AI plans to deliver on-premises and private AI infrastructure for its clients, along with a platform that allows users to develop custom models for their specific needs—adhering to Liquid AI’s terms.
“Ensuring accountability and safety in large AI models is critically important,” Hasani emphasized. “Liquid AI provides capital-efficient, reliable, explainable, and powerful machine learning models for both specialized and generative AI applications.”
Liquid AI currently operates out of both Palo Alto and Boston, with a team of 12 members, which is expected to expand to 20 by early next year.