Dell and Meta Team Up to Deliver On-Premises Llama 2 Open Source AI for Enterprise Users

The open-source Llama 2 large language model (LLM) developed by Meta is experiencing a significant boost in enterprise adoption, thanks to Dell Technologies.

Dell has announced that it is integrating Llama 2 support into its Dell Validated Design for Generative AI hardware and its on-premises generative AI offerings. The partnership broadens access to Llama 2 in enterprise environments, marking a shift from cloud reliance toward on-premises deployment.

Originally released by Meta in July, Llama 2 is already supported by the major cloud providers, including Microsoft Azure, Amazon Web Services, and Google Cloud. Dell's approach is distinct, however, in that it brings the open-source LLM directly into enterprise users' own data centers.

Not only is Dell enabling Llama 2 for its clients, but it is also using the model in its own internal applications. Matt Baker, Senior Vice President of AI Strategy at Dell, said that supporting Llama 2 aligns with the company's vision of bringing AI to enterprise data. "The vast majority of data lives on-premises, and this open access model enables direct integration with your data," Baker stated. He also pointed to the scale of the Llama 2 family, whose models range up to 70 billion parameters, as suited to demanding on-premises applications.
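
To make the on-premises idea concrete, here is a minimal sketch of loading a Llama 2 chat model locally with the Hugging Face transformers library. The model ID, precision, and device settings are illustrative assumptions, not part of Dell's announcement, and Dell's validated designs may package deployment quite differently.

```python
# Minimal sketch: running a Llama 2 chat model on local hardware with
# Hugging Face transformers. Model ID and settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smaller sibling of the 70B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on local GPUs
    device_map="auto",          # spread layers across available devices
)

prompt = "Summarize our on-premises data retention policy in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```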

In addition to supporting Llama 2 for customers, Dell is already putting the model to work internally, using techniques such as Retrieval Augmented Generation (RAG) to enhance its knowledge management system.
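
For readers unfamiliar with RAG, the pattern is to retrieve the most relevant internal documents for a query and pass them to the LLM as grounding context. The sketch below is a generic illustration, not Dell's internal system; the embedding model, example documents, and function names are assumptions.

```python
# Generic RAG sketch: retrieve relevant documents, then build a grounded
# prompt for the LLM (for example, the Llama 2 model loaded above).
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Stand-in for an internal knowledge base.
documents = [
    "Support tickets are triaged within 4 business hours.",
    "Validated Design nodes ship with dual 100 GbE networking.",
    "Knowledge-base articles are reviewed quarterly.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Assemble a prompt that asks the LLM to answer from retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How fast are support tickets handled?"))
```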

While Dell aims to generate revenue through its hardware and professional services for generative AI, Baker clarified that the company is not monetizing Llama 2. "We believe Llama 2 is a valuable capability, and our goal is to simplify customer access to it," he added.

Meta's decision to work with Dell is backed by strong adoption numbers: roughly 30 million downloads of Llama models in the past month alone, according to Joe Spisak, Head of Generative AI Open Source at Meta. Llama 2 serves not only as a powerful LLM but also as a core component of a broader generative AI ecosystem that includes the PyTorch machine learning framework.

Spisak noted that Llama 2 adoption spans the AI ecosystem, with cloud providers and hardware partners such as Qualcomm optimizing the model for their platforms. The partnership with Dell is particularly important because it enables Llama 2 deployment in on-premises environments, which is essential for organizations concerned about data privacy.

“While public cloud use is an option, running Llama 2 in environments where data sensitivity is a concern is where this open model excels,” Spisak explained. “Llama 2 strikes the right balance, allowing deployment in diverse settings.”

The collaboration with Dell will also give the Llama development community a deeper understanding of enterprise needs; as more use cases emerge, those insights will help optimize Llama's capabilities. "Working with partners like Dell enhances our platform, ultimately contributing to the development of improved versions such as Llama 3 and Llama 4, fostering a safer and more open ecosystem," Spisak concluded.
