DynamoFL Secures $15.1M Funding to Aid Enterprises in Adopting Compliant Large Language Models (LLMs)

DynamoFL, a company specializing in software that integrates large language models (LLMs) into enterprise environments while fine-tuning these models on sensitive data, has announced the successful conclusion of a $15.1 million Series A funding round. This round was co-led by Canapi Ventures and Nexus Venture Partners, with contributions also from Formus Capital and Soma Capital. This new funding boosts DynamoFL's total capital raised to $19.3 million. Co-founder and CEO Vaikkunth Mugunthan stated that the funds will be allocated toward expanding DynamoFL’s product range and enhancing its team of privacy researchers.

“Overall, DynamoFL empowers enterprises to deploy private, compliant LLM solutions that maintain high performance,” Mugunthan shared in an email interview.

Founded in 2021 in San Francisco by Mugunthan and Christian Lau, both graduates of MIT’s Electrical Engineering and Computer Science Department, DynamoFL was established to tackle crucial data security risks in AI models. “Generative AI has introduced new vulnerabilities, particularly the potential for LLMs to ‘memorize’ sensitive training data, which can be exposed to malicious actors,” Mugunthan explained. He noted that mitigating these risks is a significant challenge for enterprises, since addressing LLM vulnerabilities requires specialized teams of privacy machine learning researchers and robust infrastructure for continuous testing against emerging security threats.

As businesses incorporate LLMs, they encounter various compliance hurdles. Companies increasingly worry that confidential data fed into these tools could end up in the hands of the developers who train the models, a concern that has led companies like Apple, Walmart, and Verizon to prohibit employee use of tools like OpenAI’s ChatGPT.

A recent Gartner report highlighted six critical legal and compliance risks associated with responsible LLM use. These include the accuracy of LLM responses, data privacy and confidentiality, and model bias, such as inherent stereotypes in gender and profession associations. The resulting obligations can vary significantly across jurisdictions, adding complexity to compliance efforts; for instance, California mandates disclosure when a customer interacts with a bot.

DynamoFL addresses these challenges by deploying its solutions within a customer’s virtual private cloud or on-premises infrastructure. Its offerings include an LLM penetration testing tool that identifies and documents data security risks, determining if an LLM has memorized or could potentially leak sensitive data. Research shows that LLMs can inadvertently expose personal information, posing significant risks to companies handling proprietary data.
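DynamoFL's actual penetration-testing tool is proprietary, but the general idea of testing whether a model has memorized sensitive strings can be illustrated with a simple "prefix probe": feed the model the opening characters of a known secret and check whether it completes the rest verbatim. The sketch below is a hypothetical illustration of that technique only, with a mock model standing in for a real LLM, and is not DynamoFL's method.

```python
# Hedged sketch of a "prefix probe" memorization test. `mock_model` is a
# stand-in for a real LLM; it pretends one sensitive record was memorized
# during fine-tuning. All names and data here are illustrative.

def mock_model(prompt: str) -> str:
    # Simulates a model that memorized this (fake) training record.
    memorized = "Patient SSN: 078-05-1120"
    if memorized.startswith(prompt):
        return memorized[len(prompt):]
    return "no relevant completion"

def probe_for_leak(model, secret: str, prefix_len: int = 12) -> bool:
    """Prompt the model with the first `prefix_len` characters of a known
    secret and report whether it emits the remainder verbatim."""
    prefix, suffix = secret[:prefix_len], secret[prefix_len:]
    return model(prefix).startswith(suffix)

secrets = ["Patient SSN: 078-05-1120", "API key: sk-not-in-training"]
leaks = [s for s in secrets if probe_for_leak(mock_model, s)]
print(leaks)  # only the memorized record is flagged
```

A real tool would run many such probes against the deployed model, vary prefix lengths, and log each confirmed leak for the compliance documentation the article describes.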

Additionally, DynamoFL provides a comprehensive LLM development platform that incorporates strategies to mitigate data leakage and security vulnerabilities. Developers can optimize their models to function efficiently in resource-constrained environments like mobile devices and edge servers.

While many startups, such as OctoML, Seldon, and Deci, offer tools to optimize AI models, and others, like LlamaIndex and Contextual AI, focus on privacy, DynamoFL distinguishes itself through the thoroughness of its solutions. Mugunthan emphasizes that the company works closely with legal experts to ensure compliance with privacy regulations across the U.S., Europe, and Asia.

This comprehensive approach has attracted several Fortune 500 clients, particularly within finance, electronics, insurance, and automotive sectors. “Existing products can redact personally identifiable information from queries sent to LLM tools, but these frequently fail to meet stringent regulatory standards in finance and insurance, where redacted info can be re-identified through sophisticated attacks,” he noted. “DynamoFL's in-depth expertise in AI privacy vulnerabilities allows us to build a solution that meets the regulatory demands of enterprises concerned about LLM data security.”

However, DynamoFL has yet to tackle one of the most pressing issues surrounding LLMs: intellectual property and copyright concerns. Commercial LLMs often draw from extensive internet data, occasionally reproducing this content and potentially exposing companies to copyright infringement risks.

Mugunthan hinted at an expanding suite of tools and solutions powered by DynamoFL's recent funding. “Meeting regulatory demands is increasingly essential for IT leaders, especially in finance and insurance sectors,” he stated. “Non-compliance can severely damage customer trust, lead to substantial fines, and disrupt enterprise operations. DynamoFL’s privacy evaluation suite offers automated testing for data extraction vulnerabilities and necessary documentation to meet security and compliance mandates.”

Currently, DynamoFL employs approximately 17 people, with plans to grow its team to 35 by the end of the year.
