Intel and Partners Pioneer Development of Open Generative AI Tools for Enterprises

Can generative AI tailored for enterprises, such as tools that autocomplete reports and spreadsheet formulas, achieve interoperability? A coalition of organizations, including Cloudera and Intel, alongside the Linux Foundation, aims to explore this question.

On Tuesday, the Linux Foundation unveiled the Open Platform for Enterprise AI (OPEA), a project designed to advance the development of open, multi-provider, and composable (i.e., modular) generative AI systems. Operated under the Linux Foundation’s LF AI and Data organization, which specializes in AI and data platform initiatives, OPEA aims to facilitate the release of "hardened" and "scalable" generative AI systems that leverage innovative open-source solutions from across the ecosystem. Ibrahim Haddad, executive director of LF AI and Data, stated in a press release, "OPEA will unlock new possibilities in AI by establishing a detailed, composable framework that leads the way in technology stacks. This initiative solidifies our mission to promote open-source innovation and collaboration in the AI and data communities through a neutral governance model."

In addition to Cloudera and Intel, OPEA—part of the Linux Foundation’s Sandbox Projects, which acts as an incubator—boasts notable enterprise members including IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB, and VMware.

What collaborative innovations might emerge? Haddad hints at several initiatives, such as optimized support for AI toolchains and compilers that enable AI workloads to function across diverse hardware components, along with “heterogeneous” pipelines for retrieval-augmented generation (RAG).

RAG is gaining traction in enterprise applications of generative AI, and its appeal is evident. Traditional generative AI models are confined to the data they’ve been trained on. RAG extends a model’s knowledge base by incorporating information beyond its original training data, drawing on external resources like proprietary company data or public databases to enhance responses and task performance.
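To make the RAG flow concrete, here is a minimal, self-contained sketch in Python. The toy document store, the keyword-overlap retriever, and the prompt template are illustrative assumptions for this article, not OPEA components or any particular vendor’s API; a production pipeline would substitute a vector database and an actual model call.

```python
# Minimal RAG sketch (illustrative only; not an OPEA component).
# A real deployment would use an embedding-based vector store and an
# LLM API; a keyword-overlap retriever and a print stand in for both.

DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by cloud services.",
    "The travel-expense policy caps hotel rates at $250 per night.",
    "Gaudi accelerators are validated for the internal chatbot pilot.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(query_terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context so the model can ground its answer."""
    context_block = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"

if __name__ == "__main__":
    question = "What is the hotel rate cap in the travel policy?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)  # In practice, this prompt is sent to a generative model.
```

The key design point, and the one OPEA wants to standardize, is the seam between retrieval and generation: because the model only sees the assembled prompt, the retriever, store, and model can in principle each come from a different vendor.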

Intel elaborated in its press release that enterprises struggle with a DIY approach to RAG due to the absence of definitive standards across components, hindering the selection and implementation of open, interoperable RAG solutions for swift market entry. OPEA aims to tackle these challenges by partnering with industry players to standardize frameworks, architectural blueprints, and reference solutions.

Evaluation will be a critical aspect of OPEA’s mission. Within its GitHub repository, the group has introduced a grading rubric for assessing generative AI systems across four key areas: performance, features, trustworthiness, and “enterprise-grade” readiness. As defined by OPEA, performance encompasses "black-box" benchmarks derived from real-world applications. Features assess a system’s interoperability, deployment options, and user-friendliness. Trustworthiness evaluates a model's robustness and quality, while enterprise readiness focuses on the necessary requirements for seamless system deployment.
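For a rough sense of how such a rubric could be applied, here is a hypothetical scorer. The four domain names come from OPEA’s published rubric as described above, but the 0–4 scale, equal weighting, and all function names below are assumptions invented for illustration, not OPEA’s actual grading code.

```python
# Hypothetical rubric scorer. The four domains are OPEA's; the 0-4
# scale and equal weighting are assumptions made for this example.

RUBRIC_DOMAINS = ("performance", "features", "trustworthiness", "enterprise_readiness")

def grade(scores: dict[str, int]) -> float:
    """Average per-domain scores (each 0-4) into an overall grade."""
    missing = set(RUBRIC_DOMAINS) - set(scores)
    if missing:
        raise ValueError(f"missing domains: {missing}")
    return sum(scores[d] for d in RUBRIC_DOMAINS) / len(RUBRIC_DOMAINS)

print(grade({
    "performance": 3,           # black-box benchmarks from real-world use
    "features": 4,              # interoperability, deployment, ease of use
    "trustworthiness": 2,       # robustness and quality
    "enterprise_readiness": 3,  # requirements for seamless deployment
}))  # -> 3.0
```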

Rachel Roumeliotis, Intel's director of open-source strategy, mentioned that OPEA aims to collaborate with the open-source community to develop tests based on the rubric, as well as provide assessments and custom evaluations of generative AI deployments.

While OPEA’s additional initiatives remain somewhat undefined, Haddad mentioned the prospect of open model development, akin to Meta’s expanding Llama family and Databricks’ DBRX. To that end, Intel has already contributed reference implementations for a generative AI chatbot, a document summarizer, and a code generator optimized for its Xeon 6 and Gaudi 2 hardware.

OPEA members are clearly motivated, and potentially self-interested, in cultivating enterprise generative AI tooling. Cloudera is actively forming partnerships to establish what it calls an “AI ecosystem” in the cloud. Domino offers a suite of applications for building and auditing business-driven generative AI. And VMware is focused on the infrastructure side of enterprise AI, having rolled out new “private AI” compute products last August.

The pressing question remains: will these vendors actually work together to build cross-compatible AI tools through OPEA? The benefit of doing so is clear: customers prefer the flexibility to choose among multiple vendors based on their needs, resources, and budgets. But history warns of the opposite pull, toward vendor lock-in. Let’s hope this collaboration avoids that fate.
