OpenAI Launches Limited Access to ChatGPT’s Advanced Voice Mode for Mobile Users

OpenAI has initiated the alpha rollout of its Advanced Voice Mode for a select group of ChatGPT Plus users, enhancing natural conversation with the AI chatbot on the ChatGPT mobile app for iOS and Android.

On its X account, OpenAI announced that this feature is initially available to "a small group of ChatGPT Plus users," with plans to gradually expand access to all Plus subscribers by fall 2024.

ChatGPT Plus, the $20/month subscription tier, sits alongside OpenAI's Free, Team, and Enterprise plans and gives subscribers access to the company's advanced large language model (LLM) chatbot.

It's unclear how OpenAI selected the first users for the Advanced Voice Mode; however, those chosen will receive an email and an in-app notification with instructions. Interested users should stay tuned for updates in their ChatGPT mobile app.

First showcased at OpenAI’s Spring Update event in May 2024, Advanced Voice Mode enables real-time conversations with four AI-generated voices. The chatbot aims for natural interactions, handling interruptions and conveying emotional nuance in its speech.

OpenAI highlighted several real-world applications for this feature, including tutoring assistance, fashion advice, and support for the visually impaired when paired with its Vision capabilities.

Although originally slated for release in late June, the rollout faced delays following a controversy involving actor Scarlett Johansson, who alleged that OpenAI had tried to mimic her voice. In response, OpenAI removed the AI voice “Sky” from its library.

On X, the official ChatGPT App account confirmed the rollout: "The long-awaited Advanced Voice Mode [is] now beginning to roll out!"

Mira Murati, OpenAI’s Chief Technology Officer, expressed her excitement for the new feature, stating, “Richer and more natural conversations make the technology less rigid — we’ve found it more collaborative and helpful, and we think you will as well.”

In its official announcement, OpenAI emphasized its commitment to safety and quality. "Since we first demoed Advanced Voice Mode, we’ve been reinforcing the safety of voice conversations to prepare this technology for millions of users," the company stated, noting extensive testing of voice capabilities with over 100 external red teamers across 45 languages. To uphold privacy, the model will only utilize the four preset voices and will block any out-of-scope outputs. Additionally, protective measures are in place against requests for violent or copyrighted content.

This news arrives as concerns about AI's potential for fraud and impersonation regain attention. Currently, OpenAI’s Voice Mode does not support new voice generation or cloning, but it could still pose risks for those unaware of its AI nature.

In a separate incident, Elon Musk faced backlash for sharing a voice clone of U.S. Democratic presidential candidate Kamala Harris in a critical video, highlighting ongoing issues surrounding voice cloning technology.

Since its Spring Update, OpenAI has released several papers addressing safety and AI model alignment while facing scrutiny over its focus on product releases over safety concerns. The cautious rollout of Advanced Voice Mode aims to counter these criticisms and reassure users and regulators of OpenAI's commitment to safety alongside innovation.

The introduction of Advanced Voice Mode further distinguishes OpenAI from competitors like Meta and Anthropic, intensifying competition in the emotive AI voice technology space.
