Distributional Secures $19M to Streamline AI Model and App Testing Automation

Distributional, the AI Testing Platform Founded by Intel's Former GM of AI Software, Secures $19 Million in Series A Funding

Distributional, an AI testing platform founded by Scott Clark, former General Manager of AI Software at Intel, has closed a $19 million Series A funding round led by Two Sigma Ventures.

Inspired by the challenges he faced while implementing AI at Intel—and previously at Yelp in the ad-targeting division—Clark created Distributional to address critical AI testing needs.

“As the importance of AI applications rises, so do operational risks,” Clark explained. “AI product teams leverage our platform to proactively and continuously identify, comprehend, and mitigate AI risks before they escalate during production.”

Clark's path to Intel began in 2020, when the chipmaker acquired SigOpt, a model experimentation and management platform he co-founded. After joining Intel, he was appointed VP and GM of the AI and Supercomputing Software Group in 2022. During his tenure there, Clark and his team struggled with AI monitoring and observability.

He emphasized the unpredictable nature of AI: “AI is non-deterministic,” he stated, noting that it generates varying outputs even from the same input data. Given the numerous dependencies within AI models—such as software infrastructure and training data—troubleshooting bugs can be akin to searching for a needle in a haystack.

According to a 2024 RAND Corporation report, over 80% of AI projects ultimately fail, with generative AI presenting particular challenges; Gartner projects that a third of generative AI deployments will be abandoned by 2026. Catching these failures early is the hard part, Clark argues. "It requires writing statistical tests on distributions of many data properties," he added. "AI must be continuously and adaptively tested throughout its life cycle to catch behavioral changes."

To streamline the AI auditing process, Clark built Distributional on techniques he had refined at SigOpt while working with enterprise clients. The platform automatically generates statistical tests tailored to developers' specifications and compiles the results in a central dashboard.
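
Distributional has not published its internals, but the kind of check Clark describes, a statistical test over the distribution of a data property, can be sketched in plain Python. Here a two-sample Kolmogorov-Smirnov statistic compares one property (response length) between a baseline run and a candidate run; the function names, data, and threshold are illustrative, not the product's API.

```python
import bisect

def ks_statistic(baseline, candidate):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs (0.0 = identical samples, 1.0 = disjoint)."""
    a, b = sorted(baseline), sorted(candidate)
    gap = 0.0
    for v in a + b:
        cdf_a = bisect.bisect_right(a, v) / len(a)
        cdf_b = bisect.bisect_right(b, v) / len(b)
        gap = max(gap, abs(cdf_a - cdf_b))
    return gap

def distribution_test(name, baseline, candidate, threshold=0.2):
    """Pass/fail test on the distribution of a single data property."""
    d = ks_statistic(baseline, candidate)
    return {"test": name, "statistic": round(d, 3), "passed": d <= threshold}

# Response lengths (in tokens) from three runs over the same prompts.
v1 = [120, 115, 130, 118, 125, 122, 119, 128]
v2 = [121, 117, 129, 116, 124, 123, 120, 127]   # similar behavior
v3 = [240, 233, 251, 238, 246, 242, 239, 249]   # behavioral shift

print(distribution_test("response_length v1->v2", v1, v2))  # passed: True
print(distribution_test("response_length v1->v3", v1, v3))  # passed: False
```

A real deployment would run many such tests, one per tracked property per application, which is what makes automatic generation and a central dashboard worthwhile.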

From this dashboard, users can collaborate on test “repositories,” prioritize failed tests, and adjust them as needed. Distributional’s environment can be deployed on-premises or via a managed plan and seamlessly integrates with popular alerting and database solutions.

“We offer visibility across the organization regarding what, when, and how AI applications were tested and how these factors have evolved over time,” Clark noted. “Our platform provides a repeatable AI testing process for similar applications through shareable templates, configurations, filters, and tags.”

Even leading AI labs struggle to manage AI risk effectively. Distributional aims to shoulder much of that testing burden, helping companies reach a solid return on their AI investments; that promise sits at the core of Clark's pitch.

“Whether it's instability, inaccuracy, or any of the many other challenges, detecting AI risks can be daunting,” he continued. “If teams miss out on effective AI testing, they risk having their applications fail to launch—or, if deployed, they may behave unpredictably, creating unforeseeable and possibly harmful consequences.”

While Distributional is not the only player in the market for AI reliability analysis tools—competitors include Kolena, Prolific, Giskard, and Patronus, alongside major tech platforms like Google Cloud, AWS, and Azure—Clark believes Distributional stands out.

Distributional plans to differentiate through a more hands-on experience: managing installation, implementation, and integration for clients, and offering AI testing troubleshooting services for a fee.

“Typical monitoring tools often focus on high-level metrics and specific outliers, which can give a limited view of consistency, lacking insights into broader application behavior,” Clark explained. “The objective of Distributional’s testing is to help teams define the intended behavior of any AI application, verify that it performs as expected throughout development and production, detect any behavioral changes, and determine what adjustments are necessary to restore a steady state.”
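
As a rough illustration of that objective (not Distributional's actual schema), intended behavior can be declared as allowed ranges over distribution statistics of each property, with any out-of-range statistic surfacing as a failed check. All names and bounds below are hypothetical.

```python
import math

def percentile(values, q):
    """Nearest-rank percentile of a list of numbers."""
    s = sorted(values)
    rank = max(1, math.ceil(q / 100 * len(s)))
    return s[rank - 1]

# Intended behavior, declared as allowed ranges per property statistic.
behavior_spec = {
    "response_length": {"mean": (100, 160), "p95": (0, 200)},
    "latency_ms":      {"mean": (0, 800),   "p95": (0, 1500)},
}

def check_behavior(spec, observed):
    """Return the list of failed checks; an empty list means steady state."""
    failures = []
    for prop, bounds in spec.items():
        values = observed[prop]
        stats = {"mean": sum(values) / len(values),
                 "p95": percentile(values, 95)}
        for stat, (lo, hi) in bounds.items():
            if not lo <= stats[stat] <= hi:
                failures.append((prop, stat, round(stats[stat], 1)))
    return failures

observed = {
    "response_length": [120, 115, 130, 118, 125, 122, 119, 128],
    "latency_ms": [450, 520, 610, 480, 900, 470, 510, 1600],
}
print(check_behavior(behavior_spec, observed))  # [('latency_ms', 'p95', 1600)]
```

Re-running the same checks on every new release or production window is what turns a one-off evaluation into the continuous, adaptive testing Clark describes.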

With the new Series A funding, Distributional plans to expand its technical team, with an emphasis on user interface and AI research engineering. Clark anticipates the company's workforce will grow to 35 by year's end as it begins its first wave of enterprise deployments.

“We’ve achieved significant funding within just a year of our inception, and even with our expanding team, we’re in a strong position to capitalize on this enormous opportunity in the upcoming years,” Clark concluded.

The Series A round also saw participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures, bringing the total funding raised by the Berkeley, California-based startup to $30 million.
