OpenAI Is Bringing a More “Natural” Sounding Voice to ChatGPT
The beta arrives two months after Scarlett Johansson accused the company of mimicking her voice.
OpenAI is trying to make the experience of using ChatGPT a bit more personal, starting with a more “natural”-sounding voice than the current robotic default.
The AI bot currently offers a Voice Mode, where users can talk and ask questions through their device’s microphone and the bot, in response, will vocalize its answers. The company has now rolled out a beta of its Advanced Voice Mode, which “features more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues.”
OpenAI had reportedly been working on improving ChatGPT’s voice for several weeks and was said to have approached Scarlett Johansson for the voice acting role. Though the actress turned down the company’s offer, she accused OpenAI of mimicking her voice anyway after hearing ChatGPT’s new voice demo.
“When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” Johansson stated. “Mr. Altman even insinuated that the similarity was intentional, tweeting a single word ‘her’ – a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.”
After delaying the release of Advanced Voice Mode by a month, OpenAI demoed the feature at a recent event. Available exclusively to a select test group of ChatGPT Plus subscribers, the mode comes with four preset voices and is designed to pick up on nuances such as sarcasm or jokes.
Advanced Voice Mode also delivers responses more quickly, and it includes “filters” that prevent the bot from generating music or other sounds that may be copyrighted.