Hugging Face and Google Join Forces to Fast-Track Open AI Development

As enterprises across various sectors strive to realize their AI ambitions, vendors are bundling the resources needed to support these efforts into single platforms. A notable example is the recent strategic partnership between Google and Hugging Face, which gives developers streamlined access to Google Cloud services, expediting the creation of open generative AI applications.

Through this collaboration, teams utilizing open-source models from Hugging Face will have the capability to train and deploy them on Google Cloud. This integration offers comprehensive access to Google Cloud's AI tools, including the specialized Vertex AI, tensor processing units (TPUs), and graphics processing units (GPUs).

Clement Delangue, CEO of Hugging Face, stated, “From the original Transformers paper to T5 and the Vision Transformer, Google has been instrumental in advancing AI and the open science movement. This partnership simplifies how Hugging Face users and Google Cloud customers can utilize the latest open models alongside optimized AI infrastructure and tools, significantly enhancing developers' capacity to create their own AI models.”

What can Hugging Face users anticipate?

Hugging Face has emerged as a central hub for AI, hosting over 500,000 AI models and 250,000 datasets. More than 50,000 organizations depend on this platform for their AI initiatives. Simultaneously, Google Cloud is focused on providing enterprises with AI-centric infrastructure and tools while actively contributing to open AI research.

With this partnership, the hundreds of thousands of Hugging Face users on Google Cloud each month will gain the ability to train, fine-tune, and deploy their models using Vertex AI, the end-to-end MLOps platform designed for building generative AI applications.

Users will access these capabilities via the Hugging Face platform with just a few clicks. They will also have the option to train and deploy models using Google Kubernetes Engine (GKE), allowing for a customizable infrastructure that can scale Hugging Face-specific deep learning containers on GKE.
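Hugging Face's text-generation-inference (TGI) container is one example of the deep learning containers that can run on GKE. As a rough illustration of what such a deployment might look like, here is a minimal Kubernetes manifest sketch; the deployment name, model ID, replica count, and GPU resource request are illustrative assumptions, not details from the announcement:

```yaml
# Illustrative sketch: serving an open model on GKE with Hugging Face's
# text-generation-inference (TGI) container. All names and resource
# values below are assumptions for the example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tgi-server              # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tgi-server
  template:
    metadata:
      labels:
        app: tgi-server
    spec:
      containers:
      - name: tgi
        image: ghcr.io/huggingface/text-generation-inference:latest
        args: ["--model-id", "mistralai/Mistral-7B-Instruct-v0.1"]  # example open model
        ports:
        - containerPort: 80
        resources:
          limits:
            nvidia.com/gpu: 1   # assumes a GPU node pool, e.g. the A3 VMs noted below
```

In practice the integration is meant to spare users from writing such manifests by hand; the sketch only shows the kind of workload the "Hugging Face-specific deep learning containers on GKE" would scale.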

Additionally, developers will be able to leverage Google Cloud's advanced hardware, including TPU v5e, A3 virtual machines (VMs) powered by Nvidia H100 Tensor Core GPUs, and C3 VMs running Intel Sapphire Rapids CPUs.

“Models can be effortlessly deployed for production on Google Cloud with inference endpoints. AI developers will be able to speed up their applications using TPU on Hugging Face spaces. Organizations can efficiently manage usage and billing for their Enterprise Hub subscription via their Google Cloud account,” wrote Jeff Boudier, Head of Product and Growth at Hugging Face, along with Technical Lead Philipp Schmid in a joint blog post.

Not available just yet

While this collaboration has been announced, it’s important to note that the enhanced capabilities, including Vertex AI and GKE deployment options, are currently unavailable. The companies aim to launch these features for Hugging Face Hub users in the first half of 2024.
