OpenAI has announced a delay in launching new voice and emotion-reading features for its ChatGPT chatbot, citing the need for more safety testing. Originally set to reach some paying subscribers in late June, the features will now roll out in the fall.
In an update, the company said: "We're sharing an update on the advanced Voice Mode we demoed during our Spring Update, which we remain very excited about. We had planned to start rolling this out in alpha to a small group of ChatGPT Plus users in late June, but need one more month to reach our bar to launch...."
The postponement follows a demonstration last month that garnered user excitement and sparked controversy, including a potential lawsuit from actress Scarlett Johansson, who claimed her voice was mimicked for an AI persona.
OpenAI's demo showcased the chatbot's ability to speak in synthetic voices and respond to users' tones and expressions, with one voice resembling the AI assistant Johansson voiced in the film 'Her.' CEO Sam Altman denied using Johansson's voice, clarifying that a different actor had been recorded for training. The company says it aims to ensure the new features meet high safety and reliability standards before release.
The delay highlights ongoing challenges in the AI industry. Companies like Google and Microsoft have faced similar setbacks, dealing with errors and controversial outputs from their AI tools.
OpenAI emphasised the complexity of designing chatbots that interpret and mimic emotions, capabilities that can introduce new risks and potential for misuse. Competition in the AI industry is also intensifying as companies race to meet the demands of an increasingly exacting market. Even so, OpenAI appears committed to releasing these advanced features thoughtfully and safely.