Google Launches the Gemma 2 Series: Two Lightweight Models at 9B and 27B Parameters

Google has announced that its lightweight Gemma 2 model series will be accessible to researchers and developers via Vertex AI starting next month. Initially announced with a single 27-billion-parameter model, the series has expanded to include a 9-billion-parameter variant, surprising many in the tech community.

Gemma 2 was unveiled at Google I/O in May, succeeding the 2-billion- and 7-billion-parameter models launched earlier this year. The next-generation models are optimized to run on Nvidia's latest GPUs or on a single TPU host in Vertex AI. They are aimed at developers who want to integrate AI into applications and into edge devices such as smartphones, IoT devices, and personal computers.

The new Gemma 2 models reflect advancements in AI technology, enabling smaller, more efficient models that serve a range of user needs. With both 9-billion- and 27-billion-parameter options, Google gives developers the flexibility to target on-device or cloud deployments. Gemma 2's openly released weights also make it straightforward to customize and integrate into diverse projects.

It will be interesting to see how the existing Gemma variants—CodeGemma, RecurrentGemma, and PaliGemma—leverage these new models for enhanced capabilities.

Moreover, Google plans to introduce a 2.6-billion parameter model soon, aimed at “bridging the gap between lightweight accessibility and powerful performance.”

Gemma 2 is currently available through Google AI Studio, with model weights downloadable from Kaggle and Hugging Face. Researchers can access Gemma 2 for free via Kaggle or take advantage of the free tier provided for Colab notebooks.
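
For developers who pull the weights from Hugging Face, a minimal sketch of loading and prompting the model with the `transformers` library might look like the following. The checkpoint ID (`google/gemma-2-9b-it`), the bfloat16 dtype, and the hardware assumptions are illustrative; check the model card on Hugging Face for the exact identifiers and license terms.

```python
# Minimal sketch: loading Gemma 2 weights from Hugging Face with `transformers`.
# Assumes the instruction-tuned 9B checkpoint ID and a GPU with enough memory
# for bfloat16 weights; adjust model_id and dtype to match your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"  # assumed checkpoint ID; see the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision keeps the 9B model on a single large GPU
    device_map="auto",           # let accelerate place layers on the available devices
)

prompt = "Explain what a lightweight language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```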
