Scarlett Johansson accused OpenAI of imitating her voice without consent for "Sky", one of the voices in its ChatGPT voice assistant. OpenAI insists the voice came from a different actor, but the controversy has sparked legal and ethical concerns, damaging OpenAI's reputation and highlighting broader questions about consent and likeness rights in the AI industry.
Truecaller has upgraded its AI Assistant to allow users to create a digital clone of their voice for handling calls. This feature, developed in collaboration with Microsoft, uses Azure AI Speech technology and aims to enhance personalization while maintaining transparency. The rollout is starting in select countries and raises concerns about the potential misuse of voice cloning technology.
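For readers curious what building on Azure AI Speech looks like in practice, below is a minimal sketch of the service's stock text-to-speech flow in Python (`pip install azure-cognitiveservices-speech`). It shows only generic synthesis; Truecaller's cloned "personal voice" layers a speaker profile on top of this flow, and the key, region, and greeting text here are placeholder assumptions.

```python
# Minimal sketch of Azure AI Speech text-to-speech (the service Truecaller builds on).
# Stock neural voice only -- the personal-voice cloning flow adds a speaker profile
# on top of this; the key, region, and text below are placeholder values.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # a stock neural voice

# audio_config=None keeps the synthesized audio in memory instead of playing it.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = synthesizer.speak_text_async("Hi, you've reached my assistant.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print(f"Synthesized {len(result.audio_data)} bytes of audio")
```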
OpenAI has introduced Voice Engine, a tool capable of replicating a person's voice from a 15-second audio sample, but has decided against releasing it to the public due to concerns about potential misuse, particularly in the context of upcoming elections. The company is engaging with various stakeholders to ensure responsible deployment of the technology and is considering measures to prevent the creation of voices too similar to those of prominent figures. Given how prominent a concern AI misuse has become in election contexts, OpenAI is taking a cautious approach and plans to decide whether to deploy the technology at scale based on testing and public debate.
OpenAI has developed Voice Engine, a text-to-speech AI model that can create synthetic voices from a 15-second segment of recorded audio, but the company has decided not to release the technology widely due to its ethical implications. While the technology has potential benefits, such as providing reading assistance and supporting non-verbal individuals, it also raises concerns about potential misuse, including phone scams and security risks. OpenAI is working with select partners and has implemented rules to mitigate misuse, while urging both societal adaptation and responsible deployment of synthetic voices.
OpenAI has unveiled a voice-cloning tool called "Voice Engine" but plans to keep it tightly controlled until safeguards are in place to prevent audio fakes. The tool can duplicate someone's speech based on a 15-second audio sample, raising concerns about potential misuse, especially in an election year. OpenAI is working with partners to ensure explicit and informed consent is obtained before duplicating voices and to implement safety measures, including watermarking and proactive monitoring, to trace the origin of any audio generated by Voice Engine.
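OpenAI has not described how Voice Engine's watermark actually works, so the sketch below is only a generic illustration of the underlying idea: embed a key-derived, near-inaudible pseudorandom pattern in the audio, then correlate against the same pattern to test whether a clip carries the mark. The function names, the spread-spectrum scheme, and the strength value are all illustrative assumptions, not OpenAI's method.

```python
# Generic spread-spectrum audio watermark sketch -- illustrative only, not
# OpenAI's actual scheme. Assumes mono float samples in a NumPy array.
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a key-derived +/-1 pattern at low amplitude (exaggerated here for the demo)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * np.max(np.abs(audio)) * pattern

def detect_watermark(audio: np.ndarray, key: int) -> float:
    """Correlate with the key's pattern; marked audio scores well above unmarked audio."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.dot(audio, pattern) / audio.size)

rng = np.random.default_rng(0)
clean = rng.normal(size=48_000)            # one second of stand-in "audio" at 48 kHz
marked = embed_watermark(clean, key=1234)
print(detect_watermark(marked, key=1234))  # clearly positive: mark present
print(detect_watermark(clean, key=1234))   # near zero: no mark
```

A production watermark would also have to survive compression, resampling, and re-recording, which is where most of the engineering difficulty lies.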
OpenAI has unveiled its new Voice Engine technology that can clone a person's voice with just 15 seconds of recording, but has decided not to release it publicly due to safety concerns related to potential misuse, especially in an election year. The company plans to preview the technology with early testers who have agreed not to impersonate a person without their consent and to disclose that the voices are AI-generated. This move comes amid ongoing investigations into AI-generated robocalls mimicking public figures. Despite the decision to hold back the release, OpenAI's trademark application suggests its intention to enter the speech recognition and digital voice assistant market, potentially competing with products like Amazon's Alexa.
OpenAI has developed Voice Engine, a text-to-speech AI model that can create synthetic voices from a 15-second segment of recorded audio, but the company has decided to hold back its wide release due to concerns about potential misuse. While the technology has potential benefits such as providing reading assistance and supporting non-verbal individuals, the ability to clone voices raises ethical and security concerns, including potential misuse in phone scams and security breaches. Even with the wide release delayed, voice-cloning technology continues to prompt discussion about AI ethics and societal resilience.
OpenAI has unveiled Voice Engine, a voice cloning tool that can generate synthetic copies of voices from 15-second voice samples, but it's not yet available to the public. The company is prioritizing responsible deployment and is taking steps to prevent misuse, including watermarking recordings and limiting initial access to a small group of developers. The tool has potential applications in healthcare, accessibility, and storytelling, but concerns about its impact on voice actors and the potential for misuse, such as deepfakes, remain.
Some video game actors are allowing AI to clone their voices for characters in games, though the practice raises concerns about potential replacement and ethical use. While some actors are open to the opportunity if fairly compensated, others worry about exploitation and misuse. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) is negotiating terms with major studios to ensure ethical use of AI-generated voices and has already reached an agreement with Replica Studios. Studios see AI voices as a way to scale up game franchises and reduce physically straining work, but concerns about ethics and consent remain.
The FCC has declared the use of AI-generated voices in robocalls illegal, citing concerns over scams and voter deception. The ruling, which follows instances of AI-generated voices being used to discourage voting and extort money from families, allows for fines and lawsuits against robocallers who use AI voice cloning. It deems calls made with AI-generated voices "artificial" under the Telephone Consumer Protection Act, the 1991 federal law aimed at curbing junk calls, enabling the FCC to take action against violators and giving prosecutors additional tools at a time when rapidly advancing technology is raising fears of abuse.
The FCC is pushing to make AI-powered robocalls explicitly illegal, warning of an increase in scams that use voice-cloning technology to deceive people. The agency aims to classify the use of AI voice cloning in robocall scams as illegal under existing law, with support from lawmakers and state attorneys general. Efforts are also underway to close loopholes and expand robocall rules to cover text messages and the use of AI, with bipartisan support for protecting consumers from the potential harm of AI-driven illegal calls.
The FCC plans to vote on declaring the use of AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act, aiming to prevent misinformation and confusion caused by imitating voices of celebrities, political figures, and family members. The proposed ruling would hold AI-generated voice calls to the same standards as traditional robocalls, potentially aiding states in cracking down on scams and protecting consumers from fraudulent calls.
Researchers believe that the recent deepfake robocall impersonating President Joe Biden in New Hampshire was likely created using technology from AI startup ElevenLabs, which recently achieved unicorn status with an $80 million funding round. Despite the company's policy recommending permission before cloning voices, it acknowledges that permissionless cloning can be acceptable for non-commercial purposes like political speech. Security company Pindrop's analysis points to ElevenLabs' technology, a conclusion supported by an independent analysis from UC Berkeley. The incident highlights the potential for malicious use of AI voice cloning and the need for effective safeguards as the 2024 election season approaches.
ElevenLabs, a voice-cloning startup, has raised $80 million in a Series B round, reaching unicorn status with a valuation of over $1 billion. The company plans to use the funds for product development, expanding its infrastructure and team, AI research, and enhanced safety measures. Despite its success, ElevenLabs has faced criticism over misuse of its tools, including their use to generate hateful messages, and over the technology's impact on the voice-acting industry. The company is also working on a marketplace for voices, aiming to reconcile AI advances with established industry practices while compensating voice creators.
Scammers are increasingly using artificial intelligence (AI) to create more realistic scams, infiltrating dating apps to establish fake relationships and trick victims into sending money. Bots are used to create accounts at scale, while AI lets scammers hold convincingly authentic conversations with victims. Phishing has also become more persuasive, with AI-generated messages exhibiting perfect grammar. Unethical sellers are employing AI to generate realistic reviews and create fake product listings on e-commerce platforms. Scammers have also been using deepfakes of consumers' voices and are expected to leverage AI for influence campaigns during the 2024 election year. The cost of computing is currently a deterrent, but it is expected to fall, likely bringing an increase in AI-generated scams.