Cloudflare is enhancing its platform by letting more developers deploy AI models from Hugging Face. The company has also made its serverless, GPU-powered inference solution, Workers AI, generally available.
Originally announced nearly seven months ago, the Cloudflare-Hugging Face integration simplifies deploying models on Workers AI: with one click, developers can make a model available across Cloudflare's network. Workers AI currently supports fourteen curated Hugging Face models for tasks such as text generation, embeddings, and sentence similarity.
"The recent surge in generative AI is prompting significant investment from companies across various industries," said Cloudflare CEO Matthew Prince. "While demonstrations are straightforward, transitioning AI into production is considerably challenging. We aim to alleviate this by minimizing the cost and complexity of developing AI-powered applications."
"Workers AI stands out as an affordable and accessible option for running inference," he continued. "In partnership with Hugging Face, which shares our vision of democratizing AI, we empower developers to seamlessly select a model and scale their AI applications globally, all in an instant."
Through Hugging Face, developers can choose a supported open-source model, select “Deploy to Cloudflare Workers AI,” and deploy it instantly. Inference then runs on Cloudflare's network close to end users, reducing latency and improving the user experience.
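Once a model is deployed, it can be queried over Workers AI's REST API (`POST /accounts/{account_id}/ai/run/{model}` with a bearer token). The sketch below shows what a text-generation call might look like; the account ID and API token are placeholders, and the model name is illustrative rather than one of the fourteen curated models named in this article.

```python
import json
from urllib import request

API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def run_url(account_id: str, model: str) -> str:
    """Build the Workers AI REST endpoint for a given model."""
    return f"{API_BASE}/{account_id}/ai/run/{model}"

def build_request(account_id: str, api_token: str,
                  model: str, prompt: str) -> request.Request:
    """Prepare an authenticated POST request for text generation."""
    return request.Request(
        run_url(account_id, model),
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder credentials -- running this for real requires a
# Cloudflare account with Workers AI enabled.
req = build_request(
    "YOUR_ACCOUNT_ID",
    "YOUR_API_TOKEN",
    "@cf/meta/llama-2-7b-chat-int8",  # illustrative model name
    "What is serverless inference?",
)
# with request.urlopen(req) as resp:
#     print(json.load(resp)["result"]["response"])
```

Inside a Worker itself, the same call is typically made through the `env.AI` binding rather than the REST API, which avoids handling tokens in application code.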
Hugging Face co-founder and CTO Julien Chaumond remarked, "Providing the most popular open models with a serverless API, backed by a global network of GPUs, is a game-changer for the Hugging Face community."
With Workers AI, developers can leverage GPUs in more than 150 cities worldwide, including Cape Town, Durban, Johannesburg, Lagos, Amman, Buenos Aires, Mexico City, Mumbai, New Delhi, and Seoul. Cloudflare is also adding support for fine-tuned model weights, enabling developers to build and deploy specialized, domain-specific applications.