Tech Giants Join Forces to Create Industry Coalition for Next-Gen AI Chip Development

Intel, Google, Microsoft, Meta, and other tech giants have joined forces to create the Ultra Accelerator Link (UALink) Promoter Group. This new coalition aims to set standards for connecting AI accelerator chips within data centers.

Announced on Thursday, the UALink Promoter Group, whose members also include AMD (but not Arm), Hewlett Packard Enterprise, Broadcom, and Cisco, is proposing a unified industry standard for linking the AI accelerators found in a growing number of servers. These accelerators span a range of chips, from GPUs to custom-designed silicon, built to speed up the training, fine-tuning, and running of AI models.

According to Forrest Norrod, AMD’s General Manager of Data Center Solutions, “The industry requires an open standard that can be rapidly advanced in a collaborative manner, allowing diverse companies to contribute to the ecosystem.” He emphasized the need for a standard that fosters swift innovation without the constraints imposed by individual companies.

The initial version of the proposed standard, UALink 1.0, aims to connect up to 1,024 AI accelerators, specifically GPUs, within a single computing "pod"—defined as one or more server racks. Built on open standards that include AMD’s Infinity Fabric, UALink 1.0 will enable direct loads and stores between the memory of AI accelerators, substantially improving speed while lowering data transfer latency compared to current interconnect standards, according to the UALink Promoter Group.

The group plans to incorporate as the UALink Consortium in the third quarter to oversee ongoing development of the UALink specification. Companies that join will gain access to UALink 1.0 around the same time, with a higher-bandwidth update, UALink 1.1, slated for the fourth quarter of 2024. Norrod said the first UALink products are expected to launch "in the next couple of years."

Notably absent from the member list is Nvidia, the dominant player in the AI accelerator market, with an estimated 80% to 95% share. Nvidia did not respond to inquiries regarding this article. Its absence is understandable, however: the company sells its own proprietary interconnect technology for linking GPUs within data centers, so it has little incentive to back a specification built on competing technologies.

In Nvidia’s latest fiscal quarter (Q1 2025), its data center sales, including those of AI chips, surged over 400% year-over-year. If this trend continues, Nvidia is poised to overtake Apple as the world’s second-most valuable company later this year.

Simply put, Nvidia is in a position where it does not need to participate in UALink unless it chooses to.

As for Amazon Web Services (AWS), the only major cloud provider not involved in UALink, it appears to be taking a wait-and-see approach while it develops its in-house accelerator hardware. Given AWS's dominant position in the cloud services market, it may also see little advantage in opposing Nvidia, which supplies a significant portion of the GPUs used by its customers.

AWS did not respond to requests for comment.

The primary beneficiaries of UALink—beyond AMD and Intel—are likely Microsoft, Meta, and Google, all of whom have spent billions on Nvidia GPUs to enhance their cloud offerings and support their expanding AI initiatives. Each company is eager to reduce its dependence on a vendor they consider overly dominant in the AI hardware space.

According to a recent Gartner report, the market for AI accelerators used in servers is expected to total $21 billion this year, growing to $33 billion by 2028. Gartner separately projects that AI chip revenue will hit $33.4 billion by 2025.

Google, for example, uses custom chips, including its TPUs and Axion processors, to train and run AI models. Amazon has developed several AI chip families of its own, Microsoft has entered the fray with its Maia and Cobalt chips, and Meta is refining its own line of accelerators.

Moreover, Microsoft, along with its partner OpenAI, is reportedly set to invest at least $100 billion in a supercomputer for AI model training, which will be equipped with future iterations of its Cobalt and Maia chips. These chips will require effective interconnection—potentially through UALink.
