At an event last year, OpenAI chief Sam Altman said his favorite science fiction movie was Her, the 2013 film in which a man falls in love with a virtual assistant. Now, that same movie is causing the latest headache for Altman and his executive team.
In September—the same month Altman revealed his top film—OpenAI released a set of voices to complement its ChatGPT large language model. One of those voices, a sophisticated-sounding female voice named Sky, has garnered media attention in the last week for sounding similar to the very AI bot at the center of Her, voiced by actress Scarlett Johansson.
“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI said in a blog post on Sunday.
Despite this clarification, the company is disabling Sky’s voice, it said in a post on X late Sunday night, with no mention of when it will return. Since then, dozens of ChatGPT users have lost access to the voice option, according to posts on OpenAI’s developer forum.
The move appears to be a direct response to recent public attention—including a joke on the latest Saturday Night Live episode—since the voice has been available for months. Altman, whether intentionally or not, contributed to the speculation that Sky is inspired by Johansson by posting the single-word film title, “her,” on X last week, shortly after demonstrating OpenAI’s new GPT-4o model that comes with updated voice capabilities.
OpenAI is already receiving pushback from users on X and its own developer forum regarding the takedown of Sky.
“I believe it is not justified to remove a voice simply because it remotely resembles another person,” Ben Parry, an AI researcher not affiliated with OpenAI, wrote on the company forum. “Such actions can set a dangerous precedent where individuals start demanding the removal of other voices based on subjective similarities.”
Others have pointed out that if the voice came from a separate voice actress, as OpenAI said, it is unclear why the company must take it down. The decision shines a light on how the company at the forefront of AI development handles public criticism regarding creative industries.
Voice actors have performed in films, audiobooks, and video games for decades, and narrating for chatbots is just the latest way they can sell their voices. AI is also one of the biggest threats to their industry because the technology can cheaply read audiobooks or scripts in place of human actors.
“We support the creative community and worked closely with the voice acting industry to ensure we took the right steps to cast ChatGPT’s voices,” OpenAI said in the Sunday blog post.
As the company places bigger bets on voice communication with future models, it is likely to clash further with the creative community, whether its voices sound like Johansson or not.
With that, here’s the latest tech news.
Rachyl Jones
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
NEWSWORTHY
Patient No. 2. Elon Musk’s brain implant company Neuralink got U.S. Food and Drug Administration approval to put its chip in a second patient, the Wall Street Journal reported. The FDA also signed off on proposed fixes to the implant, following a complication in the first patient in which the chip’s threads came loose.
No synergy. Three months after Meta and South Korea’s LG Electronics agreed to work together on extended reality devices, including virtual reality headsets, LG has ended the partnership, citing a lack of synergy, KED Global reported. China’s Tencent has emerged as a potential new partner for Meta.
Cable tapping. U.S. officials warned Google, Meta, and other telecommunications providers about the threat of Chinese maintenance companies tapping the undersea cables that carry internet traffic across the world, the Wall Street Journal reported.
IN OUR FEED
“this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.”
—OpenAI CEO Sam Altman posted on X in response to reports that OpenAI’s off-boarding documents prohibit employees who leave the company from ever criticizing OpenAI or risk losing the equity they earned while employed.
IN CASE YOU MISSED IT
Amazon CEO Andy Jassy: An ‘embarrassing’ amount of your success in your 20s depends on your attitude, by Orianna Rosa Royle
Satya Nadella has made Microsoft 10 times more valuable in his decade as CEO. Can he stay ahead in the AI age?, by Jeremy Kahn
TikTok-owner ByteDance takes the lead in the race to have China’s most popular ChatGPT-like app, by Bloomberg
Elon Musk travels to Bali to launch Starlink in Indonesia, his first trip after years of wooing from the Southeast Asian country, by Lionel Lim
AI isn’t coming for your job—at least not yet, by Jeremy Kahn
BEFORE YOU GO
Happy face, sad face. Artificial intelligence models that try to read human expressions to make determinations about a user’s emotion may have issues with accuracy, because expressions are not universal, according to a Wall Street Journal essay.
In a job interview, a candidate may furrow their brows while thinking of an answer to a question, and the AI model may suggest they became angry. This calculation could signal to interviewers that the applicant is quick to anger—an incorrect assumption that could hurt the interviewee’s chance at getting the job. No matter how big a training dataset is or how advanced the algorithm, AI engineers cannot rely on stereotypical facial expressions (like smile = happy, and frown = sad) to determine real-life emotions, the Journal said.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.
Original Article Published at Fortune.com