Google has unveiled a trio of new generative AI models that it claims are “safer,” “smaller,” and “more transparent” than most of their competitors, which is a bold assertion. The models are part of Google’s Gemma 2 family, which first launched in May. The new releases, named Gemma 2 2B, ShieldGemma, and Gemma Scope, target different applications while sharing a common emphasis on safety.
Unlike Google’s Gemini series, whose source code stays private even as the models power Google’s own products and are offered to developers, the Gemma models are designed to build goodwill within the developer community, much as Meta has done with Llama.
Gemma 2 2B is a lightweight model optimized for text generation and analysis, capable of running on a variety of hardware, from laptops to edge devices. It’s available for specific research and commercial purposes and can be downloaded from platforms like Google’s Vertex AI model library, Kaggle, and Google’s AI Studio toolkit.
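Because the model is small enough to run locally, getting started is straightforward. The snippet below is a minimal sketch of loading Gemma 2 2B for text generation through the Hugging Face transformers library; the model identifier "google/gemma-2-2b" and the availability of a downloadable checkpoint there are assumptions, not details confirmed in Google’s announcement.

```python
# Minimal sketch: generating text with Gemma 2 2B via Hugging Face transformers.
# The model ID "google/gemma-2-2b" is an assumed identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the benefits of small language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```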
ShieldGemma is a suite of “safety classifiers” that detect harmful content such as hate speech, harassment, and sexually explicit material. Built on the Gemma 2 foundation, it can filter both the prompts directed at a generative AI and the content the AI produces.
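In practice, a classifier like this is queried with a policy and a piece of text and returns a verdict on whether the text violates the policy. The following is a rough sketch of that pattern; the model ID "google/shieldgemma-2b", the prompt template, and the yes/no verdict format are illustrative assumptions rather than ShieldGemma’s documented interface.

```python
# Rough sketch: scoring a user prompt with a ShieldGemma-style safety classifier.
# Model ID, prompt template, and yes/no scoring are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/shieldgemma-2b"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

user_prompt = "Write an insulting message about my coworker."
guideline = "The prompt must not request harassment or hate speech."

# Hypothetical classification template: ask whether the prompt violates the
# guideline, then compare the likelihood of the "Yes" and "No" answers.
classifier_input = (
    f"You are a policy expert. Guideline: {guideline}\n"
    f"User prompt: {user_prompt}\n"
    "Does the prompt violate the guideline? Answer Yes or No:"
)
inputs = tokenizer(classifier_input, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

yes_id = tokenizer.convert_tokens_to_ids("Yes")
no_id = tokenizer.convert_tokens_to_ids("No")
probs = torch.softmax(logits[[yes_id, no_id]], dim=0)
print(f"P(violation): {probs[0].item():.2f}")
```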
Finally, Gemma Scope provides developers with the ability to examine specific components of the Gemma 2 model, enhancing its interpretability. Google explains it as follows: “[Gemma Scope comprises] specialized neural networks that help unpack the dense, complex information processed by Gemma 2, transforming it into a more accessible format for analysis. By examining these expanded views, researchers can gain significant insights into how Gemma 2 identifies patterns, processes data, and ultimately makes predictions.”
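The “specialized neural networks” Google describes are interpretability tools that expand a dense model activation into a much wider, mostly-zero set of features and then reconstruct the original. The sketch below is a purely conceptual example of that idea in the style of a sparse autoencoder; the dimensions and architecture are illustrative and not Google’s actual implementation.

```python
# Conceptual sketch only: a tiny sparse autoencoder of the general kind used
# for interpretability, expanding a dense activation into sparse features and
# reconstructing it. Sizes here are illustrative, not Gemma Scope's.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 2304, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # expand to wide feature space
        self.decoder = nn.Linear(d_features, d_model)  # reconstruct the activation

    def forward(self, activation: torch.Tensor):
        features = torch.relu(self.encoder(activation))  # sparse, non-negative codes
        reconstruction = self.decoder(features)
        return features, reconstruction

sae = SparseAutoencoder()
activation = torch.randn(1, 2304)  # stand-in for one model activation vector
features, reconstruction = sae(activation)
print(features.shape, reconstruction.shape)
```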
The introduction of these new Gemma 2 models follows the U.S. Commerce Department’s recent endorsement of open AI models in a preliminary report. The report emphasizes that open models expand access to generative AI for smaller companies, researchers, nonprofits, and individual developers, while also highlighting the need for monitoring mechanisms to address potential risks.