The article discusses the death of 15-year-old Zackery Nazario while subway surfing, a dangerous activity documented and shared on social media platforms like Instagram and TikTok. His mother blames social media companies for not doing enough to remove such content, which she argues encourages risky behavior. She has filed a lawsuit against these platforms, highlighting the influence of online content on youth safety and raising questions about platform responsibility and regulation.
Reddit and YouTube have been ordered by a New York state judge to face lawsuits seeking to hold them responsible for enabling a white supremacist who killed 10 Black people in a 2022 grocery store shooting in Buffalo, New York. The 25 plaintiffs, including store employees and customers, allege that the platforms were designed to addict and radicalize users, culminating in the racially motivated attack. Reddit and YouTube argued they were not liable under Section 230 of the Communications Decency Act, but the judge allowed the plaintiffs to pursue negligence-based claims, citing the mental distress suffered by witnesses. The lawsuits were filed by the gun control advocacy group Everytown Law and seek civil damages.
The Supreme Court is set to hear arguments on whether government officials can urge social media companies to remove disinformation and misinformation that pose national security threats. The case, Murthy v. Missouri, involves allegations that federal officials coerced or significantly encouraged social media companies to remove certain content, in violation of the First Amendment. The dispute has already had a chilling effect on information sharing between the government and social media companies, hampering national security efforts to combat foreign influence campaigns. The outcome could have significant implications for addressing disinformation and protecting national security.
Deepfake videos and images falsely depicting Taylor Swift supporting Trump and engaging in election denialism have been circulating on various social media platforms, including X, Instagram, Facebook, YouTube, and TikTok. Despite some posts being labeled as manipulated media, many others have not been, raising concerns about the platforms' ability to control the spread of malicious inauthentic media. The manipulated media appears to originate from a pro-Trump X account with over 1 million followers, and the issue highlights the ongoing struggle of social media platforms to effectively moderate disinformation, including AI-generated content.
Tech CEOs from X, TikTok, Discord, Meta, and Snap are expected to endorse legislation and preview policy decisions at a Senate Judiciary Committee hearing on child safety issues. The CEOs will offer rare policy commitments and regulatory endorsements, including support for the SHIELD Act and other child safety legislation. The hearing aims to address concerns about the impact of social media platforms on children and create a pathway to pass legislation this year.
Meta Platforms' Oversight Board has stated that the company made a mistake in removing two videos depicting hostages and injured individuals in the Israel-Hamas conflict, emphasizing that the videos were crucial for understanding the human suffering caused by the war. The board, which reviews content decisions on Meta's Facebook and Instagram, examined these cases on an expedited basis, marking the first time it has done so. While Meta restored the videos with a warning screen after the board selected them for review, the board disagreed with the decision to exclude the videos from being recommended to users and urged Meta to respond more promptly to changing circumstances on the ground.
Pakistan experienced a nationwide internet disruption ahead of a virtual meeting organized by Imran Khan's party PTI. Users reported difficulties accessing social media platforms such as YouTube, Facebook, Instagram, and X. The internet issues coincided with an online rally, leading to complaints of slow internet services. Pakistan has previously been ranked third in the world for imposing internet restrictions. The Pakistan Telecommunication Authority (PTA) has not yet issued a statement regarding the disruption.
The Indian government has issued notices to social media platforms X (formerly known as Twitter), YouTube, and Telegram, demanding the removal of any child sexual abuse material from their platforms. Failure to comply could result in the companies losing their protection from legal liability. The government emphasized the importance of prompt and permanent removal of such content and stated that consequences under Indian law would follow if the platforms do not act swiftly. The notices also called for measures like content moderation algorithms and reporting mechanisms to prevent the dissemination of child sexual abuse material in the future.
MrBeast, the popular YouTuber, has called for social media platforms to address the growing issue of AI deepfakes after fraudsters posted a fake video of him on TikTok, attempting to scam his followers. This comes shortly after actor Tom Hanks and CNBC host Gayle King also spoke out against deepfake images of themselves being used without permission. The use of AI deepfakes raises concerns about the potential for celebrities and influencers to be misrepresented and exploited, as well as the impact on industries such as entertainment and writing.
Elon Musk's social media platform X, formerly known as Twitter, has disabled a feature that allowed users to report misinformation about elections, according to research organization Reset.Tech Australia. The removal of the "politics" category from the reporting feature raises concerns about the spread of false claims ahead of major elections in the United States and Australia. This move comes amid growing pressure on social media platforms to combat electoral misinformation. The change may limit intervention and review processes for content that violates X's own policy on electoral misinformation. Reset.Tech Australia has expressed concern about the loss of the ability to report serious misinformation, particularly in the lead-up to Australia's upcoming referendum.
A European Commissioner has warned Elon Musk that the European Union (EU) will be closely monitoring his social media platform, X (formerly Twitter), after it was found to have the highest ratio of disinformation posts among large social media platforms. X has left the EU's voluntary code of practice, but the EU has made clear that its new disinformation rules will still apply to the platform, with a ban as a potential consequence of non-compliance. The EU's report analyzed posts that would be considered illegal under the Digital Services Act, with X and Facebook found to be the worst offenders. The EU is particularly concerned about Russian interference in upcoming European elections.
Russell Brand thanked his supporters in a video addressing the recent sexual assault allegations against him, while also suggesting a media conspiracy and criticizing the UK government and social media platforms for censoring his content. Brand warned his followers about the Trusted News Initiative, claiming it aims to target and shut down independent media organizations. He urged his supporters to follow him on Rumble, the only platform where he can still monetize his videos. The allegations against Brand include rape and sexual assault, which he denies, stating that all interactions were consensual. The BBC has launched an investigation into claims that Brand exposed himself to a woman in their Los Angeles office building.
The 5th U.S. Circuit Court of Appeals in New Orleans has scaled back a lower court's order that restricted the Biden administration's communication with social media platforms regarding controversial content. The appeals court ruled that the White House, Surgeon General, CDC, and FBI cannot "coerce" platforms to remove posts they dislike. However, the court removed broader language from the order that blocked government agencies from contacting platforms to request content takedowns. The administration has 10 days to seek a Supreme Court review, and the ruling came in response to a lawsuit accusing the administration of using threats to suppress conservative viewpoints.
Elon Musk, owner of X (formerly Twitter), plans to remove headlines and other text from news articles shared on the platform, displaying only the lead image. This change aims to reduce the height of tweets and curb clickbait. Users will need to manually add their own text alongside the links they share. The move could have implications for publishers relying on social media for traffic and advertisers. The change is currently being tested internally, and it is unclear when it will be rolled out to the public.
Instagram has mysteriously banned several prominent Pokémon content creators without giving them any explanation. While some creators were able to regain their accounts by verifying their information, others had their bans made permanent after appealing the decision. The reason for the bans remains unknown, though it has been speculated that certain hashtags or links featured in their posts may have contributed. The affected creators are family-friendly and have previously collaborated with The Pokémon Company (TPC), suggesting that the issue lies with Instagram rather than TPC.