Presented by Google for Games
Google made a significant impact at this year's Game Developers Conference (GDC), emphasizing the transformative role of generative AI (Gen AI) in the gaming industry. A three-part presentation series showcased insights from Google leaders in gaming and AI, covering the company's AI development solutions, the influence of AI on in-game experiences, and practical advice for developers looking to embark on their own AI journeys.
Gen AI Enabling “Living Games”
“Games are entering a new era,” declared Jack Buser, Google Cloud's Director for Games, highlighting that “living games” will likely debut within the next three to five years. These games will merge traditional live service models with generative AI capabilities.
“Ultimately, games will respond to players’ implicit or explicit instructions,” he shared. “They will generate content on the fly to meet the specific needs of individual players or small groups. Expect to see exciting developments from my colleagues at Google soon.”
Integrating Gen AI in Game Development
Many gaming studios are already incorporating generative AI into their production pipelines, utilizing tools like Google Cloud’s Vertex AI for game development, localization, and enhancing in-game experiences. Buser noted that some developers are even creating their own large language models (LLMs), citing Google Cloud's collaboration with NCSOFT to develop its VARCO LLM.
Generative AI is also revolutionizing game publishing and distribution, explained Lei Zhang, Director of Play Partnerships at Google.
“We're transitioning from merely distributing games to managing the entire lifecycle for developers and gamers,” Zhang stated. “Generative AI is enhancing game discovery and supporting developers in creating marketing assets for the Play Store. In the future, store descriptions and graphical assets may be generated by AI.”
Simon Tokumine, Director of Product Management at Google AI, added, “Generative AI has the potential to transform every aspect of the gaming business. Our cutting-edge models on our Labs portal are designed to enhance creative workflows.” He referenced collaborations with artists like Lupe Fiasco and Dan Deacon, who use AI to inspire creativity and enhance live performances.
Gemini 1.5 Expanding Game Development Potential
Recently launched in over 180 countries, Gemini 1.5 Pro is Google's mid-sized multimodal model, optimized for text, image, video, audio, and coding tasks. It can process up to 1 million tokens in production (and up to 10 million in research settings).
“The models can maintain context and generate coherent responses,” Tokumine said. “This addresses significant challenges in information retrieval. I'm excited to see the creative possibilities these models will unlock.”
Google Cloud provides developers with a secure managed platform for data, model refinement, and access to a vast library of third-party models, including over 100,000 from a recent partnership with Hugging Face.
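To make the long-context claim concrete, here is a minimal sketch of calling Gemini 1.5 Pro through the Vertex AI Python SDK and handing it an entire lore document in a single request. The project ID, region, file path, and prompt are illustrative placeholders, not details from the talks.

```python
# Minimal sketch: querying Gemini 1.5 Pro on Vertex AI with a large lore document
# in one request. Project ID, region, file path, and prompt are illustrative.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-game-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

# The long context window means the whole document can be passed as-is,
# without chunking or a separate retrieval step.
with open("world_lore.txt", encoding="utf-8") as f:
    lore = f.read()

response = model.generate_content(
    [lore, "List every faction mentioned above and summarize its goals in one sentence."]
)
print(response.text)
```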
Unlocking LLMs for Game Development
Glenn Cameron, Product Marketing Manager at Google, explored the benefits of LLMs in game development, emphasizing their ability to handle complex queries and provide nuanced responses—making them valuable creative assistants.
“They can serve as inspiration engines to help overcome creative blocks, especially during the early stages of game development,” Cameron explained. “From fleshing out quests to developing character stories, their potential as collaborators is transformative.”
Text-to-image models and Google’s DreamBooth technology can visualize characters and environments by generating images from text descriptions, while LLMs can even produce code from specific requirements. With a groundbreaking 1 million-token context window, models like Gemini 1.5 Pro can track lore details and create immersive narrative experiences, enabling NPCs to engage in dynamic, memory-based dialogue.
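A hedged sketch of that memory-based dialogue pattern, again using the Vertex AI SDK: earlier exchanges are replayed as chat history so the NPC's next line stays consistent with what the player has already done. The NPC name, system instruction, and history are invented for illustration.

```python
# Sketch of memory-based NPC dialogue: prior exchanges are supplied as chat
# history so the model's reply stays consistent with them. Content is illustrative.
import vertexai
from vertexai.generative_models import Content, GenerativeModel, Part

vertexai.init(project="my-game-project", location="us-central1")
npc = GenerativeModel(
    "gemini-1.5-pro",
    system_instruction="You are Mira, a terse blacksmith NPC. Stay in character.",
)

# Earlier interactions, e.g. loaded from the game's save data.
history = [
    Content(role="user", parts=[Part.from_text("Can you repair this cracked shield?")]),
    Content(role="model", parts=[Part.from_text("Aye. Come back at dusk, it'll be ready.")]),
]

chat = npc.start_chat(history=history)
reply = chat.send_message("I'm back. Is it done?")
print(reply.text)  # The NPC can now refer back to the repair it promised.
```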
Google's Lightweight Open Model for Development
Google offers two model families: Gemini and the newly released Gemma, which is lighter and more accessible. Gemma is available across major libraries, including Keras, JAX, TensorFlow, PyTorch, and Hugging Face. It comes in two sizes—2 billion and 7 billion parameters—making it suitable for local devices as well as powerful desktop GPUs. Gemma also includes a responsible AI toolkit to help developers ensure safe and enjoyable player experiences.
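As a rough sketch of the Hugging Face route, loading the 2-billion-parameter instruction-tuned checkpoint might look like the snippet below. The google/gemma-2b-it model is gated, so its license must be accepted and an access token configured beforehand; the prompt is illustrative.

```python
# Sketch: loading the 2B instruction-tuned Gemma checkpoint with Hugging Face
# Transformers. The checkpoint is gated; license acceptance and an access token
# are required beforehand. The prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Suggest three side-quest hooks for a frontier mining town."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```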
“With great power comes great responsibility,” Cameron cautioned. “Training on vast datasets may introduce biases and toxicity, which can result in dangerous content. Developers must monitor and manage these risks.”
Starting with Gen AI for Living Games
In the final presentation, Dan Zaratsian and Giovane Moura Jr. demonstrated how generative AI and Google Cloud can revolutionize player interactions. They showcased a multiplayer game built on Google Kubernetes Engine (GKE), designed for scalability and interoperability across AI workloads globally.
Spanner plays a crucial role, storing embeddings for quick lookups and integrating structured game data with global consistency. By maintaining long-term memory, it allows NPCs to recall past interactions and draw on that history for more intelligent, context-aware responses.
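A minimal sketch of that lookup with the google-cloud-spanner Python client: the instance, database, and npc_interactions schema are hypothetical, and the similarity ordering assumes Spanner's vector distance functions are available in the deployed version.

```python
# Sketch: retrieving an NPC's stored memories from Spanner so they can be fed
# back into a model prompt. Instance, database, and table schema are hypothetical;
# COSINE_DISTANCE assumes Spanner's vector search functions are enabled.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("game-instance").database("npc-memory")

query_embedding = [0.12, -0.03, 0.88]  # embedding of the player's current line (illustrative)

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """
        SELECT interaction_text
        FROM npc_interactions
        WHERE npc_id = @npc AND player_id = @player
        ORDER BY COSINE_DISTANCE(embedding, @query) ASC
        LIMIT 5
        """,
        params={"npc": "blacksmith_01", "player": "player_42", "query": query_embedding},
        param_types={
            "npc": spanner.param_types.STRING,
            "player": spanner.param_types.STRING,
            "query": spanner.param_types.Array(spanner.param_types.FLOAT64),
        },
    )
    memories = [row[0] for row in rows]  # most relevant past interactions first
```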
The Future of Generative AI in Game Development
Looking ahead, new tools and services will continue to reshape how developers engage with game creation. The Google Cloud team is developing federated queries to enhance NPC behaviors, enabling complex query interactions across multiple endpoints.
“If you combine these NPCs and LLMs in a chained manner, you can unlock far more potential than traditional single-pass systems,” Zaratsian concluded.
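A minimal sketch of that chained pattern, again with the Vertex AI SDK: the first call plans the NPC's reaction and the second turns that plan into dialogue, rather than producing the line in a single pass. Prompts, names, and project details are illustrative.

```python
# Sketch of chaining model calls: the first pass plans the NPC's reaction, the
# second writes the actual line conditioned on that plan. All text is illustrative.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-game-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

player_line = "I heard the mine collapsed. Was anyone hurt?"

# Pass 1: decide how the NPC should react, not what to say.
plan = model.generate_content(
    f"A gruff blacksmith NPC hears: '{player_line}'. "
    "In one sentence, describe the emotion and intent behind their reply."
).text

# Pass 2: generate the dialogue line, conditioned on the plan from pass 1.
reply = model.generate_content(
    f"Write the blacksmith's in-character reply to '{player_line}'. Guidance: {plan}"
).text
print(reply)
```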