Ampere and Qualcomm Collaborate to Unveil an Innovative Arm-Based AI Server


In a surprising collaboration, Ampere and Qualcomm are joining forces to develop an AI-centric server solution that pairs Ampere's CPUs with Qualcomm's Cloud AI 100 Ultra inferencing chips. The two are not obvious partners: both produce Arm-based chips aimed at data center servers, although Qualcomm's largest market remains mobile. By combining their expertise, the companies aim to improve the efficiency and scalability of AI inferencing workloads.

As Ampere seeks to capitalize on the booming AI market, its core strength remains fast, power-efficient server chips. Building on Arm IP gives its processors some AI-related capabilities, but AI acceleration has never been the company's main focus. To fill that gap, Ampere has teamed up with Qualcomm, with Supermicro integrating the two companies' solutions into servers, as shared by Ampere's chief product officer, Jeff Wittich.

“The goal is to demonstrate exceptional performance with Ampere CPUs handling AI inferencing tasks. However, for larger models, such as those with hundreds of billions of parameters, one size does not fit all,” Wittich explained. “By collaborating with Qualcomm, we pair our highly efficient CPUs for general-purpose tasks with their outstandingly efficient cards, creating a robust server-level solution.”

Wittich emphasized the importance of forming this partnership: “We’ve experienced really fruitful collaboration with Qualcomm. Our aligned interests drive this compelling partnership. They are building exceptionally efficient solutions across various market segments, while we focus on maximizing efficiency within the server CPU domain.”

The collaboration was announced as part of Ampere's annual roadmap update, which also highlights the upcoming 256-core AmpereOne chip, built on a modern 3nm process. Although these new chips are not yet on the market, Wittich noted that they are ready for production and expected to launch later this year.

A standout feature of the new AmpereOne chips is their 12-channel DDR5 RAM, which gives Ampere's data center customers greater memory bandwidth and more flexibility in tuning memory access to their workloads.

Beyond raw performance, the partnership prioritizes lower power consumption and operational costs for data centers, factors that matter greatly for AI inferencing. Ampere frequently benchmarks its chips' power efficiency and performance against Nvidia's A10 GPUs to showcase the value of its technology.

Importantly, Ampere is not phasing out its existing chip lineup for these new products. Wittich reassured that current chips continue to serve numerous applications effectively.

Additionally, Ampere announced a new collaboration with NETINT to create a combined solution that integrates Ampere’s CPUs and NETINT’s video processing chips. This innovative server will support the simultaneous transcoding of 360 live video channels, while utilizing OpenAI’s Whisper model for real-time subtitling of 40 streams.

“We embarked on this journey six years ago, recognizing its potential,” said Ampere CEO Renee James during the announcement. “Traditionally, low power was associated with low performance, but Ampere has shattered that myth. We have pushed the efficiency frontier of computing, delivering outstanding performance beyond the capabilities of legacy CPUs—all while maintaining energy efficiency.”
