Distributional Aims to Create Software Solutions for Mitigating AI Risks

Companies are increasingly exploring AI's potential to enhance productivity, yet they remain cautious about the associated risks. A recent Workday survey reveals that enterprises identify data timeliness, reliability, potential biases, and privacy concerns as the main obstacles to AI adoption.

Recognizing a business opportunity, Scott Clark, co-founder of the AI training platform SigOpt (acquired by Intel in 2020), launched a company named Distributional. His vision is to develop "software that makes AI safe, reliable, and secure." The aim is to create a platform that standardizes tests across various AI applications.

“Distributional is building a cutting-edge enterprise platform for AI testing and evaluation,” Clark shared in an email interview. “As AI applications become more powerful, the risk of harm increases. Our platform is designed for AI product teams to proactively identify, understand, and mitigate these risks before they affect users in real-world scenarios.”

Clark's drive to establish Distributional stemmed from the AI-related challenges he faced at Intel after the SigOpt acquisition. Managing a team as Intel’s VP and GM of AI and high-performance computing, he realized that maintaining high-quality AI testing on a consistent schedule was nearly impossible.

“The insights from my diverse experiences highlighted the necessity for effective AI testing and evaluation,” Clark explained. “AI models can exhibit hallucinations, instability, and inaccuracies, leaving teams struggling to identify and address risks through testing. Comprehensive AI testing demands a profound understanding, which is a significant challenge to overcome.”

Distributional's flagship product focuses on identifying and diagnosing potential AI “harm” stemming from large language models such as those powering OpenAI’s ChatGPT. The software aims to semi-automatically determine testing parameters and give organizations a thorough understanding of AI risks in a sandboxed, pre-production environment.

“Many teams choose to overlook model behavior risks, assuming issues are inevitable,” Clark commented. “Some resort to manual testing methods that are resource-heavy and often disorganized, while others depend on passive monitoring after deployment. Our platform addresses these gaps through a robust testing framework that continually analyzes stability and resilience, an interactive dashboard for visualizing test outcomes, and an intelligent test suite to effectively design and prioritize the necessary tests.”
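Clark has not said how Distributional implements any of this (see below), but the general idea behind “distributional” stability testing can be illustrated. The sketch below is purely hypothetical and is not the company's method: it compares the distribution of a simple response metric (here, response length in tokens) between a baseline model version and a candidate, and flags behavioral drift when the two-sample Kolmogorov–Smirnov statistic exceeds a threshold. The metric, threshold, and all function names are assumptions chosen for illustration.

```python
# Hypothetical sketch of distributional stability testing for an LLM app.
# Not Distributional's actual approach (which is undisclosed); the metric,
# threshold, and names here are illustrative assumptions only.
import bisect

def response_lengths(responses):
    """Proxy metric: whitespace token count per response."""
    return [len(r.split()) for r in responses]

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def stability_test(baseline, candidate, threshold=0.2):
    """Flag behavioral drift when the response-length distributions of two
    model versions diverge beyond the threshold."""
    d = ks_statistic(response_lengths(baseline), response_lengths(candidate))
    return {"ks_statistic": d, "drifted": d > threshold}

# Same prompt set answered by two model versions: terse vs. verbose.
baseline = ["The answer is 4.", "Paris is the capital of France."] * 50
candidate = ["Let me walk through this question step by step in detail."] * 100
print(stability_test(baseline, candidate)["drifted"])  # True: lengths diverge
```

In practice, a real platform would presumably track many such metrics (factuality scores, refusal rates, embedding-space shifts) rather than a single length statistic, but the pattern of comparing output distributions across versions rather than checking individual outputs is what the company's name suggests.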

Clark, however, did not disclose specific details about how Distributional's platform operates, citing that it is still in the early stages of co-designing the product with enterprise partners.

With Distributional currently in a pre-revenue and pre-launch phase, how will it compete against established AI testing and evaluation platforms? Competitors such as Kolena, Prolific, Giskard, and Patronus are already well-funded, and tech giants like Google Cloud, AWS, and Azure also provide model evaluation tools.

Clark believes Distributional stands out due to its enterprise focus. "From the beginning, we are building a solution that addresses the data privacy, scalability, and complexity needs of large enterprises across both regulated and unregulated sectors," he said. “The enterprises we are collaborating with have requirements that surpass the functionalities offered by tools that cater mainly to individual developers.”

If all goes as planned, Distributional anticipates generating revenue by next year, following its platform’s general launch and the conversion of several design partners into paying customers. In the meantime, the startup is securing financing from venture capitalists, having recently closed an $11 million seed round led by Andreessen Horowitz’s Martin Casado, with participation from Operator Stack, Point72 Ventures, SV Angel, Two Sigma, and various angel investors.

“We aim to establish a virtuous cycle for our customers,” stated Clark. “With superior testing, teams will gain confidence in integrating AI into their applications. As they deploy more AI solutions, they will witness exponential growth in impact, leading them to tackle more complex and significant challenges, which will in turn necessitate even more rigorous testing.”
