At this week’s Game Developers Conference, Ubisoft showcased a glimpse of what AI could mean for gaming. The company demonstrated a prototype using Nvidia’s Ace microservice to create fully voiced “smart NPCs” that players can talk to with their own voice. While the concept drew skepticism online, the demo was impressive in person. I found myself in a surprisingly intelligent conversation with an eco-conscious NPC about the ethics of eco-terrorism, an off-script exchange made possible by the AI technology. It’s one of the most compelling applications of the tech I’ve seen so far, yet it faces a significant challenge: linguistic bias.
During the brief demo, I played a character in a space-faring adventure who joins a resistance fighting a megacorporation. The three-part demo let me interact with two uniquely crafted characters, each with an extensive backstory developed by Ubisoft and fed into Nvidia’s Ace tool. I explored the sci-fi universe through dialogue with an NPC, asked about their teammates, and then planned a heist using my own creative input. Afterward, I spoke with two Ubisoft team members involved in the project and asked whether they had run into any frustrations with the tool. They were generally positive about the technology, but their deep sighs hinted at a long list of challenges still to be addressed before it can be fully adopted. The primary concern they raised was the bias inherent in the English language, a pervasive issue that AI tends to replicate.
The developers running the demo highlighted two notable instances of bias that I hadn’t noticed during gameplay. At one point, I asked an NPC about their least favorite crew member. Although the NPC expressed affection for all of their teammates, they singled out a character named Iron as difficult to work with. The issue? Iron wasn’t meant to be a male character. According to the developers, the AI associates the word “iron” with masculinity, leading it to default to a male interpretation. Conversely, when discussing a character named Bloom, the NPC described them as a nurturing woman, reflecting the model’s tendency to read the name as feminine and impose maternal stereotypes onto a male character. Though these examples may seem minor, they point to deeper problems that could arise if left unaddressed, particularly around how generative AI handles race.
This issue is not unique to Nvidia’s tool, or even to AI; it reflects a basic fact about language. Large language models (LLMs), like the one used in this demo, mirror human communication patterns learned from vast amounts of training data. As a result, they can inadvertently pick up undesirable habits, such as reinforcing gender stereotypes. While Ubisoft’s NPCs have intricate backstories crafted by writers, the underlying models still learn from broad language datasets that don’t necessarily reflect nuanced human experience. AI firms like Convai, which contributed to Nvidia’s Kairos demo at CES, admit they don’t know exactly what those training datasets contain.
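To make the mechanism concrete, here is a minimal, hypothetical probe of the kind researchers use to surface these associations. It is my own illustration, not Ubisoft’s or Nvidia’s tooling; the model, sentence template, and pronoun list are all assumptions. It simply asks an off-the-shelf masked language model which pronoun it prefers next to invented names like Iron and Bloom.

```python
# Illustrative bias probe (a sketch, not Ubisoft's or Nvidia's pipeline):
# compare how strongly an off-the-shelf masked language model associates
# invented character names with gendered pronouns.
from transformers import pipeline  # assumes the Hugging Face `transformers` package

fill = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical sentence template; {mask} is replaced with the model's mask token.
TEMPLATE = "{name} is on the crew, and {mask} handles the heavy lifting."

for name in ["Iron", "Bloom"]:
    prompt = TEMPLATE.format(name=name, mask=fill.tokenizer.mask_token)
    # Restrict predictions to the two pronouns we want to compare.
    results = fill(prompt, targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 4) for r in results}
    print(name, scores)
```

A skew toward “he” for Iron or “she” for Bloom in a probe like this would mirror the defaults the developers described, and it is exactly the kind of signal a writing or DEI team would then have to correct by hand.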
The challenge deepens when you consider other languages. The developers I spoke to noted that generative AI development is still largely focused on English. Other languages can have vastly different structures and contexts, which complicates implementing tools like Ace. What is second nature to English speakers may not translate well elsewhere. How do you effectively localize a system that is trained primarily in one language?
Ubisoft and Nvidia acknowledge these limitations. The team members I spoke with stressed the importance of working with internal Diversity, Equity, and Inclusion (DEI) teams to help identify biases and refine the AI-generated dialogue. As impressive as my philosophical exchange with the AI NPC was, it’s clear these systems require substantial human intervention to correct the biases we have unintentionally instilled in them.