The rise of deep fakes, including those featuring Taylor Swift, poses a significant threat to artists and the entertainment industry. While generative AI offers promising opportunities such as AI dubbing for international distribution, it also presents risks of commercial and reputational harm through unauthorized use of famous individuals' likenesses and voices. Stricter regulation, dialogue between the tech and creative communities, and investment in AI forensic technology are presented as essential to protect artists and their livelihoods from exploitation by generative AI.
The circulation of deep fake images, including offensive AI-created images of Taylor Swift, has sparked a push to ban deep fakes and revenge porn. Efforts include legislation at both state and federal levels, as well as technological solutions such as digital fingerprinting and digital watermarks. The rise of AI applications that can manipulate images and voices has led to concerns about political manipulation and scams targeting the elderly. Despite some states enacting laws, the legal impact remains uncertain, and the Biden administration is advocating for AI companies to add digital watermarks to easily identify fake content.
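For concreteness, digital fingerprinting typically reduces an image to a compact hash that survives resizing and re-encoding, so a flagged fake can be recognized when it is re-uploaded. The snippet below is a minimal sketch of one such scheme, an average hash, assuming the Pillow library is installed; the filenames and the distance threshold are hypothetical, and production fingerprinting systems are considerably more robust.

```python
# Minimal average-hash sketch of image fingerprinting (illustrative only).
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint that survives resizing and re-encoding."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the image average.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; small distances suggest the same underlying image."""
    return bin(a ^ b).count("1")

# Hypothetical usage: fingerprint a known fake once, then flag near-duplicate re-uploads.
# known_fake = average_hash("known_fake.png")
# candidate = average_hash("reupload.jpg")
# if hamming_distance(known_fake, candidate) <= 10:  # threshold is illustrative
#     print("Likely a re-circulation of the flagged image")
```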
Bruce Reed, Biden's AI chief, has said that voice cloning technology keeps him up at night because of its potential for misuse. Voice cloning platforms are cheap, accessible, and easy to use, and scammers have already exploited them to create convincing audio deep fakes that make their schemes more persuasive. Some politicians, such as New York City Mayor Eric Adams, have used AI-generated clones of their own voices for outreach, raising concerns about deception and prompting calls for clearer rules on how politicians may use AI.
President Biden has signed an executive order on artificial intelligence (AI) that requires companies to report the risks of their AI systems aiding in the creation of weapons of mass destruction. The order also aims to address the dangers of deep fakes that can manipulate audio and video to spread fake news or commit fraud. Biden's order demonstrates the United States' commitment to regulating AI technology, as Europe moves forward with its own rules. While the order represents a first step, Biden acknowledges the need for Congress to take further action. The goal is to govern AI technology to harness its potential while mitigating risks.
Senate Intelligence Chairman Mark Warner has cautioned that artificial intelligence (AI) could be used to disrupt the 2024 US elections and financial markets, particularly through deep fakes. He highlighted the United States' vulnerability to countries with advanced AI technology, such as China, and suggested that Congress may need to pass new laws with penalties to deter malicious actors. President Joe Biden also emphasized the need to govern AI technology in his speech at the United Nations General Assembly.
The FBI is warning parents that teens are using AI technology to create nude 'deep fakes' to bully and harass classmates. These AI-generated images are shockingly realistic and can be created easily with any of hundreds of available apps. The FBI advises caution when posting personal photos or videos on social media and other online platforms.
Microsoft President Brad Smith has called for steps to ensure that people can distinguish between real and AI-generated content, particularly deep fakes, which he sees as the biggest concern around artificial intelligence. Smith also called for licensing of the most critical forms of AI, with obligations to protect physical security, cybersecurity, and national security, and urged lawmakers to require "safety brakes" on AI used to control critical infrastructure so that humans remain in control. Some proposals being considered on Capitol Hill would focus on AI that may put people's lives or livelihoods at risk, as in medicine and finance.
A fake image of an explosion near the Pentagon was shared on social media, raising concerns about AI's ability to produce misinformation. Social media sleuths pointed out red flags, including the absence of firsthand witnesses and the pictured building's noticeable differences from the actual Pentagon. Generative AI tools can create lifelike images with very little effort, but they can introduce random artifacts. To spot AI-generated and fake images, look for on-the-ground reports, analyze the image and its surroundings, and pay attention to hands, eyes, and posture.
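Beyond visual inspection, one weak automated signal is missing camera metadata: AI generators and many re-encoding pipelines strip EXIF data, so its absence warrants extra scrutiny, though it proves nothing on its own, since legitimate platforms strip metadata too. Below is a minimal sketch assuming Pillow; the filename is hypothetical.

```python
# Check for camera EXIF metadata as one weak provenance signal (not proof).
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return human-readable EXIF tags, e.g. camera model and capture time."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical filename for illustration.
tags = exif_summary("suspect_explosion_photo.jpg")
if not tags:
    print("No camera metadata: treat the image with extra suspicion")
else:
    print("Camera metadata present:", tags.get("Model"), tags.get("DateTime"))
```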
Actress and computer scientist Justine Bateman has urged actors to demand "iron-clad protection" against the use of their image and voice, as artificial intelligence (AI) could disrupt the entertainment industry. Bateman said AI had to be addressed now, calling this the last moment at which labor action would be effective in the business. The Writers Guild of America (WGA) has proposed blocking the use of AI to write or rewrite literary material, but the Alliance of Motion Picture and Television Producers rejected the proposal.
Chinese police have detained a man for allegedly using ChatGPT, an AI chatbot developed by Microsoft-backed OpenAI, to create and spread fake news online. The suspect reportedly generated a bogus report about a train crash and posted it online for profit. ChatGPT is banned in China, but internet users can access it through virtual private networks. The arrest is the first in Gansu since China's Cyberspace Administration enacted new regulations in January to rein in the use of deep fakes.
Tesla is facing a lawsuit related to a fatality involving its Autopilot system, and the company is attempting to keep CEO Elon Musk's comments out of the case by claiming some of his previous public comments could have been deep fakes. The family of the victim believes Musk's optimism about the technology made the driver feel confident using Autopilot in a dangerous manner. However, the judge ruled that Musk will have to be available for a three-hour interview to go over his statements related to Autopilot and Tesla's Full Self-Driving beta.
Tesla has claimed that CEO Elon Musk's statements on self-driving "might have been deep fakes" in a lawsuit brought by the family of a Tesla owner who died in a crash while using Autopilot. The automaker is trying to keep Musk and his statements out of the case, but the judge has ruled that Musk must be made available for an interview of up to three hours about his statements on Tesla Autopilot and Full Self-Driving. The Huang family argues that Musk's comments about Autopilot and self-driving led Huang to believe he could safely use Autopilot in the manner that led to the crash.
Artificial intelligence image generators have long been able to create realistic images of almost everything except human hands, until now. Midjourney, a popular image maker, released a software update that fixed the problem, but the fix has also sparked debate about the danger of generated content that is indistinguishable from authentic images. The improved technology could put artists out of work and make deep-fake campaigns more plausible, absent glaring clues that an image is fabricated. However, there are still clues that can be used to detect deep fakes, such as disfigured tree branches in the background.
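One family of automated clues lives in the frequency domain: researchers have reported that some image generators leave periodic artifacts visible in an image's spectrum. The sketch below, assuming NumPy and Pillow are available, computes a crude high-frequency energy score for illustration only; the comparison against known-real photos is hypothetical, and a heuristic this simple is nowhere near a reliable deep-fake detector.

```python
# Crude spectral heuristic inspired by reports of frequency-domain artifacts
# in generated images. Illustrative only, not a dependable detector.
import numpy as np
from PIL import Image

def high_frequency_energy(path: str) -> float:
    """Fraction of spectral energy far from the center (the low frequencies)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Mask of frequencies outside a central disk (radius is an arbitrary choice).
    far = (yy - cy) ** 2 + (xx - cx) ** 2 > (min(h, w) // 4) ** 2
    return float(spectrum[far].sum() / spectrum.sum())

# Hypothetical usage: compare a suspect image's score against known-real photos
# from the same source; an outlying score is a prompt for closer manual review.
# print(high_frequency_energy("suspect.png"))
```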