Former Meta employee Ferras Hamad has sued the company, alleging he was unlawfully fired for investigating claims of censorship against Palestinian creators and activists. Hamad, a Palestinian-American software engineer, claims his termination was due to his national origin and religion, as well as his work on censorship issues, which was part of his job. Meta denies the allegations, stating Hamad was dismissed for violating data access policies. The lawsuit highlights broader concerns about Meta's handling of content related to the Israel-Hamas conflict.
A former Meta engineer, Ferras Hamad, has sued the company for wrongful termination and discrimination, alleging he was fired for addressing bugs that suppressed Palestinian Instagram posts. Hamad claims Meta showed bias against Palestinians, including deleting internal communications about Gaza and investigating the use of the Palestinian flag emoji. Meta has faced similar accusations from human rights groups and employees regarding its content moderation practices related to the Israel-Palestine conflict.
OpenAI may face legal trouble for creating a ChatGPT voice resembling Scarlett Johansson's, potentially violating her right of publicity. Although OpenAI maintains the similarity was unintentional, its CEO's public comments and widespread recognition of the likeness could weaken its position. Johansson has hired legal counsel to address the issue, and OpenAI has temporarily pulled the voice.
OpenAI faces potential legal trouble as Scarlett Johansson considers suing the company for using a virtual assistant voice in ChatGPT that closely resembles hers, despite her previous refusal to grant permission. Legal experts believe Johansson has a strong case for appropriation of likeness, drawing parallels to past successful lawsuits by other celebrities.
The Motion Picture Association plans to work with Congress to introduce site-blocking legislation in the US, which would allow rights holders to request that ISPs block websites distributing stolen content. The MPA argues that site-blocking would not harm legitimate businesses and cites examples of successful implementation in other countries. Critics have raised concerns about potential impacts on free speech, but the MPA is seeking support from theater owners to advance the initiative.
A new bill introduced in the US Congress, the Generative AI Copyright Disclosure Act, aims to compel AI companies to disclose their use of copyrighted material in training their generative AI models. The bill, introduced by California Democratic congressman Adam Schiff, would require companies to submit copyrighted works in their training datasets to the Register of Copyrights before releasing new AI systems, or face financial penalties. This move comes amid increasing scrutiny and legal action against AI companies, such as OpenAI, over their alleged use of copyrighted works. The bill has garnered support from entertainment industry organizations and unions, reflecting concerns about the potential threat of AI to artists' rights.
A federal judge's dismissal of Elon Musk's lawsuit against the Center for Countering Digital Hate is seen as a win for free speech and research accountability on Twitter, which Musk now owns. The decision could embolden other research groups and Musk critics facing legal threats, as it underscores the protection of constitutionally guaranteed free speech rights. Musk's efforts to stifle criticism through lawsuits and steep data access fees for researchers have raised concerns about transparency and accountability on the platform.
OpenAI responds to Elon Musk's lawsuit, calling his claims "frivolous" and "incoherent," and accusing him of seeking to claim the company's success for himself. The company refutes Musk's accusations of breaching the founding agreement, presenting emails suggesting Musk's desire for a for-profit structure and his subsequent departure. OpenAI seeks to dismiss the lawsuit swiftly, expressing concerns about Musk's potential access to proprietary records and technology. Musk's lawyers have not yet responded to the filing.
Three authors, Brian Keene, Abdi Nazemian, and Stewart O'Nan, are accusing Nvidia of using their books for AI training without permission, alleging that the company trained its NeMo platform on a massive dataset containing their work. They have filed a class-action suit that, if certified, would cover anyone in the US whose work was involved in NeMo's training, similar to other author lawsuits against OpenAI and Meta.
Nvidia is being sued by three authors who claim the company used their copyrighted books without permission to train its NeMo AI platform; the dataset containing their works was reportedly taken down in October over copyright infringement. The authors are seeking unspecified damages on behalf of people in the United States whose copyrighted works helped train NeMo's large language models in the last three years. This lawsuit adds Nvidia to a growing body of litigation by writers over generative AI, which creates new content based on inputs such as text, images, and sounds.
Elon Musk has filed a lawsuit against OpenAI and its co-founders, alleging breach of contract and fiduciary duty, claiming that the organization has shifted from its original mission of developing artificial general intelligence (AGI) for the benefit of humanity to a for-profit entity largely controlled by Microsoft. Legal experts question the merit of the case due to the absence of a formal written agreement. Musk's lawsuit may be aimed at shedding light on OpenAI's operations and the details of its GPT-4 AI model, but it remains to be seen whether the case will have a strong legal foundation.
Elon Musk has sued OpenAI and its CEO, Sam Altman, alleging that their partnership with Microsoft violates the company's mission by prioritizing profit over open-source technology for the benefit of humanity. Musk's case appears shaky due to the lack of a written contract and the non-profit status of OpenAI, but the lawsuit could still impact OpenAI's operations and reputation. If successful, it could set a concerning precedent for non-profits, while Musk's own AI company adds another layer to the conflict.
The Intercept, Raw Story, and AlterNet have filed lawsuits against OpenAI, alleging that the company violated their copyright protections by training ChatGPT on their articles after stripping copyright management information, such as author and title. The lawsuits, filed in federal court in Manhattan, follow The New York Times' similar lawsuit against OpenAI and Microsoft. The media outlets argue that OpenAI's actions violate the Digital Millennium Copyright Act and seek compensation for their journalistic work. While some news outlets have partnered with AI companies, others, like Raw Story and AlterNet, have opted for legal action to protect their copyrights.
OpenAI has asked a federal judge to dismiss parts of The New York Times' copyright lawsuit, alleging that the newspaper "hacked" its chatbot ChatGPT and other AI systems to produce misleading evidence. The Times sued OpenAI and Microsoft in December, accusing them of using millions of its articles without permission to train chatbots. OpenAI contends that the Times manipulated its systems through deceptive prompts and violated its terms of use, while also asserting that AI training qualifies as fair use under copyright law. The lawsuit raises questions about the use of copyrighted material in AI training and its potential impact on the industry.
OpenAI has filed a motion seeking to dismiss parts of The New York Times's lawsuit, arguing that its chatbot, ChatGPT, is not a substitute for a Times subscription and that people do not use it for that purpose. The Times had accused OpenAI of using millions of its articles to train AI technologies, claiming that chatbots now compete with the news outlet as a source of reliable information. OpenAI's motion aims to narrow the focus of the lawsuit by dismissing four claims from The Times's complaint, including acts of reproduction that occurred more than three years ago and the violation of the Digital Millennium Copyright Act.