On Wednesday, Google introduced its anticipated new flagship AI model, Gemini 2.0 Pro Experimental, alongside a slew of other updates. The release also brings the company’s reasoning-focused model, Gemini 2.0 Flash Thinking, to the Gemini app.
The move comes as the tech world remains fixated on DeepSeek, a Chinese AI startup whose low-cost reasoning models match or even surpass the performance of offerings from established American tech giants. DeepSeek’s models have drawn significant attention for their performance and, in particular, for the aggressive pricing of their API.
While both Google and DeepSeek unveiled AI reasoning models in December, DeepSeek’s R1 drew most of the attention. Now Google appears keen to shift the focus back to its own lineup, hoping to raise the profile of Gemini 2.0 Flash Thinking by making it available in the widely used Gemini app.
Gemini 2.0 Pro succeeds Gemini 1.5 Pro, which Google launched last February, and the company touts it as the most capable model in the Gemini family. Although it was accidentally announced in the Gemini app changelog last week, Wednesday marks its official launch. The experimental version of the model is available on Google’s AI development platforms, Vertex AI and Google AI Studio, and to subscribers of the Gemini Advanced plan through the Gemini app.
Google says its new flagship model excels at coding and at handling complex prompts, claiming it offers better “understanding and reasoning of world knowledge” than any of its previous models. Gemini 2.0 Pro can also call tools such as Google Search and execute code on a user’s behalf.
One of the model’s headline features is its context window of 2 million tokens, which lets it process roughly 1.5 million words in a single pass. To put that into perspective, Gemini 2.0 Pro could analyze all seven books of the Harry Potter series in one go, with roughly 400,000 words to spare.
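The arithmetic behind that comparison can be checked in a few lines. Note the figures below are rough assumptions, not numbers from Google: a common rule of thumb of about 0.75 English words per token, and an estimated 1,084,000 words across the seven Harry Potter books.

```python
# Back-of-the-envelope check on the 2-million-token context window claim.
# Assumptions (approximate, not official figures):
WORDS_PER_TOKEN = 0.75        # rule-of-thumb English words per token
CONTEXT_TOKENS = 2_000_000    # Gemini 2.0 Pro context window
HP_SERIES_WORDS = 1_084_000   # rough total for all seven Harry Potter books

capacity_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
leftover_words = capacity_words - HP_SERIES_WORDS

print(f"Capacity:  ~{capacity_words:,} words")   # ~1,500,000 words
print(f"Left over: ~{leftover_words:,} words")   # ~416,000 words
```

Under those assumptions the window holds about 1.5 million words, leaving roughly 400,000 to spare after the full series, consistent with the comparison above.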
In addition to the launch of Gemini 2.0 Pro, Google is also making its Gemini 2.0 Flash model generally available on Wednesday. Initially unveiled in December, the model is now available to all users within the Gemini app.
Finally, in a bid to compete with DeepSeek’s budget-friendly offerings, Google is debuting a new, more affordable model: Gemini 2.0 Flash-Lite. The company claims it outperforms its predecessor, Gemini 1.5 Flash, at the same cost and speed.
With these releases, Google is positioning Gemini to compete at the front of the AI market against both established players and new disruptors like DeepSeek, in a race where cost and performance have become the central battlegrounds.