Researchers Warn AI Chatbot Developers Against Imitating the Deceased

Researchers at the University of Cambridge are warning companies that develop AI chatbots simulating conversations with deceased loved ones, urging them to prioritize user sensitivity and ethical design to avoid causing distress. These so-called "deadbots" or "griefbots" let users converse in natural language with AI representations of departed relatives, much like general-purpose chatbots such as ChatGPT but tailored to this poignant purpose.

While this niche within the expanding AI chatbot industry offers intriguing possibilities, the researchers caution that developers risk exploiting vulnerable consumers if they do not design these systems thoughtfully. The findings, recently published in the journal *Philosophy & Technology*, highlight the emotional toll on users, an effect the authors liken to being haunted: individuals may feel overwhelmed or drained by their interactions with these digital apparitions.

“Rapid advancements in generative AI technology have made it possible for almost anyone with internet access to conjure a version of a deceased loved one,” remarked Katarzyna Nowaczyk-Basińska, a co-author of the study and a researcher at the university’s Leverhulme Centre for the Future of Intelligence. She emphasized the importance of preserving the dignity of the deceased, arguing that the commercial motives behind digital afterlife services must never override respect and ethical considerations.

Among the notable companies in the deadbot space is HereAfter AI, which lets users record personal stories and memories for family members to access later. Using generative AI, the mobile app draws on this archived content to answer relatives’ questions, creating a semblance of conversation. Another example is StoryFile, which builds interactive chatbots from extensive video recordings of individuals; the company captured the legacy of actor William Shatner this way in 2021, though it has since filed for bankruptcy.

Earlier still, Project December used OpenAI technology to simulate conversations with deceased loved ones before moving to develop its own proprietary technology.

The Cambridge researchers advocate for responsible design practices among deadbot developers, suggesting measures such as age restrictions and clear transparency alerts to remind users they are engaging with AI rather than a living human being. These alerts could be akin to content warnings, emphasizing the nature of the interaction to mitigate emotional confusion.

Additionally, they propose implementing user-friendly opt-out features that empower individuals to deactivate their deadbot engagements at any time. The researchers express concern that such technologies could lead to exploitative practices, with companies potentially bombarding family members with unwanted notifications—an unsettling experience akin to “being digitally stalked by the dead.”

“People might form deep emotional attachments to these simulations, making them especially susceptible to manipulation,” noted co-author Dr. Tomasz Hollanek. He advocates for the development of respectful protocols for the eventual retirement of deadbots, suggesting the creation of digital funerals or other ceremonial practices tailored to different cultural contexts.

As AI becomes ever more woven into the fabric of daily life, protecting users’ emotional well-being while honoring the memories of loved ones will only grow more critical. The Cambridge researchers’ call is a reminder of the ethical responsibilities ahead as artificial intelligence intersects with human grief.
