As the French startup ecosystem thrives with companies like Mistral, Poolside, and Adaptive, Paris-based Bioptimus has emerged from stealth mode, announcing a $35 million seed funding round. Bioptimus aims to build the first universal AI foundation model for biology, integrating generative AI across biological scales, from molecules to cells, tissues, and entire organisms.
Bioptimus brings together Google DeepMind alumni and scientists from Owkin, a leading AI biotech startup and French unicorn. The company will draw on AWS computing resources and Owkin's data generation capabilities, which include access to multimodal patient data from top academic hospitals worldwide. As stated in a press release, "This empowers the creation of computational representations that differentiate us from models reliant solely on public datasets and single data modalities, which fail to capture biology's full diversity."
In an interview, Jean-Philippe Vert, co-founder and CEO of Bioptimus and chief R&D officer of Owkin, emphasized that being an agile and independent company allows Bioptimus to access critical data more swiftly than Google DeepMind. "We can collaborate securely with partners, establishing trust by sharing our AI expertise and providing models for research," he noted. "Such collaboration can be challenging for large tech firms. Bioptimus will also implement some of the strongest sovereignty controls available today."
Rodolphe Jenatton, a former Google DeepMind research scientist, has joined the Bioptimus team and emphasized the company's commitment to open-source and open-science models, akin to Mistral's releases. "Transparency, sharing, and community are fundamental to our mission," he stated.
Currently, many AI models focus on isolated aspects of biology. Vert explained, "For instance, several companies are developing language models for protein sequences, while others are targeting cell image foundation models." However, a comprehensive view of biology remains elusive. "The encouraging news is that AI technology is advancing rapidly, with certain architectures enabling all data to contribute to a unified model," he added. "This is our goal: creating a holistic model that does not yet exist, but one that I believe will be realized soon."
The most significant challenge, according to Vert, is access to data. "Training a large language model on web text is vastly different from what we aim to do. Fortunately, our partnership with Owkin provides us with unparalleled access to data."