HPE Unveils Innovative Vision for a Next-Generation AI-Native Architecture

Hewlett Packard Enterprise (HPE) is significantly advancing its artificial intelligence (AI) initiatives, unveiling a range of enhancements today at the HPE Discover Barcelona 2023 event.

Among the key updates is an expanded partnership with Nvidia, focusing on both hardware and software to optimize AI for enterprise workloads. HPE's Machine Learning Development Environment (MLDE), first launched in 2022, has been upgraded with new features to facilitate the consumption, customization, and creation of AI models. Furthermore, HPE is introducing MLDE as a managed service on AWS and Google Cloud, while also enhancing its cloud capabilities with new AI-optimized instances in HPE GreenLake and improved file storage performance tailored for AI tasks.

The latest improvements align with HPE's vision for a comprehensive AI-native architecture, optimized from hardware to software. Modern enterprise AI workloads are highly computationally intensive, require data as a primary input, and demand massive scalability for processing.

Evan Sparks, VP/GM of AI Solutions and Supercomputing Cloud at HPE, emphasized the need for a fundamentally different architecture for AI, noting, “The workload is fundamentally different than the classic transaction processing and web services workloads that have dominated computing for the last couple of decades.”

Empowering Enterprises with Generative AI Workflows

The enhancements to HPE MLDE aim to simplify the integration of AI workloads into enterprise operations. Sparks highlighted that the new features will empower customers to embrace generative AI workflows, offering tools for prompt engineering, retrieval-augmented generation (RAG), and fine-tuning pre-trained models. The focus is to bridge the gap between cutting-edge research and practical applications for users.
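
To make the workflow concrete, a retrieval-augmented generation pipeline of the kind described here ties together three steps: retrieve relevant enterprise documents, assemble them into an engineered prompt, and pass that prompt to a pre-trained model. The sketch below is a minimal, self-contained illustration in plain Python; the keyword-overlap retriever, the call_llm stub, and the sample documents are placeholders chosen for clarity and do not represent HPE MLDE's actual tooling.

```python
# Minimal retrieval-augmented generation (RAG) sketch in plain Python.
# Illustrative only: the keyword-overlap retriever and the call_llm stub
# stand in for a real vector store and a real model endpoint.

from collections import Counter

# A toy "knowledge base" of enterprise documents.
DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by subscription services.",
    "The support policy covers firmware updates for five years after purchase.",
    "File storage throughput doubled after the latest controller upgrade.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = Counter(query.lower().split())
    scored = [
        (sum(query_terms[word] for word in doc.lower().split()), doc)
        for doc in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query: str, context: list[str]) -> str:
    """Prompt engineering step: ground the model in retrieved context."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for a hosted or fine-tuned model endpoint."""
    return f"[model response to {len(prompt)} prompt characters]"

if __name__ == "__main__":
    question = "How much did revenue grow in Q3?"
    answer = call_llm(build_prompt(question, retrieve(question, DOCUMENTS)))
    print(answer)
```

In production, the retriever would typically be backed by an embedding model and a vector database, and the model call would hit a hosted or fine-tuned endpoint, but the overall shape of the workflow stays the same.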

Additionally, the HPE Ezmeral Unified Analytics software suite will benefit from improved model training and optimization through deep integration with MLDE. “Our objective is to accelerate time to value for organizations looking to deploy AI solutions,” Sparks stated.

Data-Centric AI for Enterprises

To harness AI effectively, enterprises must leverage their own data for model training and insights. This requires optimal data storage that supports the speed and scale essential for AI.

Enhancements to the HPE GreenLake for File Storage service address these needs, delivering improved performance, density, and throughput. Patrick Osborne, SVP/GM of HPE Storage, stated, “We’re announcing a 1.8x capacity expansion, with plans to support up to 250 petabytes of data starting in Q2.” This significant increase caters to organizations developing large language models, a requirement HPE has noted is increasingly common.

Strengthening the Partnership with Nvidia

HPE is also expanding its collaboration with Nvidia to include new integrated hardware solutions. The partnership, first announced in June to optimize HPE systems for AI inference on Nvidia GPUs, is now broadening to encompass training workloads as well.

Neil MacDonald, EVP and GM of Compute at HPE, explained that most organizations will not create their own foundation models but will instead deploy pre-existing models to transform their business processes. The challenge he identified is building and deploying the infrastructure needed to fine-tune and experiment with those models.
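
In practice, adapting a pre-existing model looks very different from pretraining one from scratch. The sketch below is a hedged illustration of that fine-tuning pattern using the open-source Hugging Face Transformers library; the "gpt2" checkpoint, the toy support-ticket data, and the hyperparameters are assumptions chosen for demonstration and are not tied to HPE MLDE or any specific HPE system.

```python
# Minimal fine-tuning sketch: adapting a pre-trained causal language model
# to domain text instead of training a foundation model from scratch.
# Illustrative only -- "gpt2", the toy dataset, and the hyperparameters are
# placeholders, and nothing here is specific to HPE MLDE or ProLiant systems.
# Requires: pip install torch transformers

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in for whichever pre-trained model is deployed
DOMAIN_TEXTS = [
    "Ticket 1042: storage array reported degraded throughput after a firmware update.",
    "Resolution: rolled controller firmware back to the previous certified release.",
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One tiny pass over the domain corpus; real fine-tuning would batch,
# shuffle, and run for multiple epochs on GPU-equipped infrastructure.
for text in DOMAIN_TEXTS:
    batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    outputs = model(**batch, labels=batch["input_ids"])  # causal LM loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {outputs.loss.item():.4f}")

model.save_pretrained("finetuned-domain-model")
tokenizer.save_pretrained("finetuned-domain-model")
```

The infrastructure challenge MacDonald points to sits around a loop like this: sourcing the GPUs, data pipelines, and orchestration needed to run it at enterprise scale.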

As part of this partnership expansion, HPE is introducing purpose-built systems for AI, including the HPE ProLiant Compute DL380a, which integrates Nvidia L40S GPUs, Nvidia BlueField-3 DPUs, and Nvidia Spectrum-X technology. Both HPE MLDE and Ezmeral Software will see optimizations for Nvidia GPUs, alongside collaboration involving Nvidia AI Enterprise and the NeMo framework.

"Enterprises must evolve to become AI-powered, or they risk becoming obsolete," MacDonald concluded.
