Google Launches Open Source AI Models to Compete with Meta and Mistral

Google has unveiled Gemma, a new family of open-source language models designed to compete with offerings from Meta, Mistral, and other open-source players. The models draw on the same research and technology as Gemini, Google’s most advanced multimodal architecture to date, but they work exclusively with text and code.

The company introduced two versions of its “state of the art” Gemma models: a 2-billion-parameter model and a larger 7-billion-parameter model. Notably, Google asserts that Gemma matches or outperforms competitors such as Meta’s Llama 2 and Mistral in various domains, including dialogue, reasoning, mathematics, and coding.

**Key Features of Gemma Models**

- **2-Billion-Parameter Model**: Optimized for deployment on CPUs, making it suitable for edge applications on hardware such as laptops (a usage sketch follows this list).

- **7-Billion-Parameter Model**: Requires more robust computing resources, such as GPUs and TPUs.
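For readers who want to try the smaller model, here is a minimal sketch of CPU-only inference using the Hugging Face `transformers` library. It assumes the `google/gemma-2b-it` checkpoint (the instruction-tuned 2B variant) and that you have accepted Gemma’s license on Hugging Face and authenticated; it is an illustration, not an official Google example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes access to the gated "google/gemma-2b-it" checkpoint
# (license accepted on Hugging Face, `huggingface-cli login` done).
model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads in fp32 on CPU by default

inputs = tokenizer("Explain what an open model is in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```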

Both models are available via Google Cloud's Vertex AI platform and Google Kubernetes Engine. Google allows "responsible" commercial use of Gemma across organizations of all sizes, contrasting with Meta’s policy, which limits free use of Llama 2 to organizations with fewer than 700 million monthly active users, requiring licensing for larger entities.

Research accessibility is also a priority for Google. Gemma is free to use on Kaggle, there is a free tier for Colab notebooks, and first-time Google Cloud users receive $300 in cloud credits. Additionally, researchers can apply for up to $500,000 in Google Cloud credits to support their projects.

**Expert Insights on Gemma**

Victor Botev, CTO of Iris.ai, commented on the significance of Google's launch: “The introduction of Gemma is a testament to the rapidly advancing capabilities of smaller language models. A model that can run directly on laptops and deliver performance comparable to Llama 2 is a milestone, fundamentally reducing adoption barriers for many organizations.”

Botev emphasized that the practical application of these models matters more than their parameter count. He believes the unique advantages of smaller models can only be fully realized if user interfaces and experiences are tailored to specific domains. By creating specialized workflows, organizations can maximize the effectiveness of these nimble yet sophisticated models.

**Training and Technical Specifications**

The 2-billion-parameter version of Gemma was trained on two trillion tokens of text, while the 7-billion-parameter model was trained on six trillion tokens, primarily in English. Both models use architectures and training procedures similar to Gemini’s, built on the transformer decoder framework.

Google is also making model weights available, including pre-trained and instruction-tuned variants, along with toolchains for inference and supervised fine-tuning across popular frameworks like JAX, PyTorch, and TensorFlow with Keras 3.0. Additionally, Gemma is integrated with platforms such as Hugging Face, MaxText, and NVIDIA NeMo.
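As a concrete illustration of the supervised fine-tuning toolchain, the sketch below uses KerasNLP (on Keras 3.0) to load the 2B model and attach LoRA adapters so that only a small set of parameters is updated. The preset name `gemma_2b_en` follows KerasNLP’s published naming; the one-example dataset is a hypothetical placeholder, and real tuning would need a substantial corpus.

```python
import keras
import keras_nlp

# Hypothetical placeholder data; real fine-tuning needs a proper dataset.
data = ["Instruction: Greet the user.\nResponse: Hello! How can I help?"]

# Load the pretrained 2B Gemma preset and enable low-rank (LoRA) adapters.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)
gemma_lm.preprocessor.sequence_length = 256  # cap input length to save memory

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.AdamW(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(data, epochs=1, batch_size=1)
```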

**A Responsible Approach to AI**

In tandem with Gemma, Google is introducing a Responsible Generative AI Toolkit aimed at fostering the development of safer applications. Researchers behind the project assert that the responsible release of large language models is essential for enhancing safety, ensuring equitable access to advanced technology, and enabling meticulous evaluation of existing techniques.

Regarding privacy, Google employs automated techniques to filter out sensitive data and personal information before training. The company has also implemented safety measures within Gemma to address potential risks. Recognizing the irreversible nature of releasing open-source models, Google urges the community to adopt a more nuanced and collaborative conversation around the risks and benefits, steering away from simplistic “open vs. closed” paradigms.

However, concerns remain. Melissa Ruzzi, director of AI at AppOmni, expressed skepticism that sufficient safeguards can be built to prevent misuse by malicious actors while keeping the models functional, calling this one of the critical open problems for open-source AI.
