OpenAI has taken a significant step toward making ChatGPT more conversational by rolling out its Advanced Voice feature to Plus and Team subscribers this week. The feature, powered by the GPT-4o model, lets users speak directly to the chatbot instead of typing prompts, much like holding a real-life conversation with an AI assistant.
Advanced Voice was first announced at OpenAI’s Spring Update event and opened to a select group of ChatGPT Plus subscribers for beta testing in July. Now all paying subscribers have access to it.
Alongside the widespread release of Advanced Voice, OpenAI has introduced five new voices for the chatbot – Arbor, Maple, Sol, Spruce, and Vale. These join the existing four voices (Breeze, Juniper, Cove, and Ember), offering a wider range of tones and personalities for users to choose from. Both Standard and Advanced Voice modes support these new voices.
While video and screen sharing are not currently supported with Advanced Voice, OpenAI has confirmed that these capabilities will be added at a later date.
To further enhance the experience, OpenAI is adding two features to Advanced Voice: memory and custom instructions. Previously, Advanced Voice could draw only on the current chat session; with memory, it can recall details from earlier conversations, making interactions more seamless and sparing users from repeating themselves.
Custom instructions are designed to provide users with more control over how the model generates responses. For example, you can specify that all code-related responses be presented in Python. This allows for a more personalized and tailored interaction.
Plus and Team subscribers will receive an in-app notification when Advanced Voice becomes available on their accounts. Note, however, that the feature is not yet available in the EU, the U.K., Switzerland, Iceland, Norway, or Liechtenstein.
OpenAI’s announcement comes on the heels of Google’s recent release of Gemini Live to all users, including those on the free tier. Together, these rollouts mark a notable step toward making conversational AI accessible to everyone, and continued development of these technologies should bring even more capable, user-friendly features in the future.