Unauthorized Use of Scarlett Johansson’s Voice in ChatGPT Sparks Controversy
OpenAI’s use of a voice resembling actress Scarlett Johansson’s in its ChatGPT model has sparked a heated debate in the tech industry. Johansson expressed anger and concern over the voice, called Sky, which she claims bears a striking resemblance to her own.
OpenAI’s CEO, Sam Altman, denied that the voice was intentionally modeled on Johansson’s, but a tweet he posted on the day of the launch seemed to hint at a connection to the movie “Her,” in which Johansson voiced an AI companion.
This incident raises important questions about the ethics of using someone’s voice without their consent, especially as AI technology makes it easier to clone voices. Legal precedents such as Bette Midler’s successful lawsuit against Ford Motor Company, which had used a sound-alike singer in its advertisements, highlight the importance of protecting vocal likeness and identity.
Furthermore, the gendered nature of AI voices, with feminized personas often assigned to digital assistants, reflects broader issues of gender stereotyping in technology. Scholars have criticized the industry for perpetuating submissive tropes through these choices.
Johansson’s call for transparency and for legislation protecting vocal likeness underscores the need for greater awareness and regulation of AI technology. The case is a reminder of the ongoing debates over voice, gender, and privacy in the digital age.