The White House dismissed most members of Puerto Rico's fiscal oversight board, escalating tensions over the island's debt management and restructuring efforts amid criticism of the board's costs and decisions.
Meta is updating how it handles manipulated media across Facebook, Instagram, and Threads in response to feedback from the Oversight Board. The changes include labels that add context to AI-generated videos, audio, and photos, and broaden coverage to media manipulated to show a person doing something they did not do. The company plans to begin labeling AI-generated content in May 2024 and, in July, will stop removing content solely under its manipulated video policy. These decisions are informed by extensive public opinion surveys, consultations with global experts, and feedback from civil society organizations and academics.
The Oversight Board, which reviews Meta's content moderation, is urging the company to relax its blanket ban on the Arabic word "shaheed," arguing that the ban has led to widespread censorship of Arabic-speaking and Muslim communities. The board recommends that Meta remove content containing "shaheed" only when it is tied to clear signs of violence or otherwise breaks the platform's rules, rather than applying a blunt prohibition that disproportionately restricts freedom of expression. Jewish and Israeli groups, however, warn that the change could increase antisemitic content on Meta's platforms and open the door to more hate speech and violent threats.
Meta's Oversight Board has urged the company to end its blanket ban on the Arabic word "shaheed," or "martyr," after finding that the approach was "overbroad" and suppressed the speech of millions of users. The board recommended that posts containing "shaheed" be removed only if they are linked to clear signs of violence or separately break other Meta rules. The ruling follows years of criticism of Meta's handling of content involving the Middle East; the board found that the ban failed to account for the word's various meanings and resulted in the removal of non-violent content. Meta has committed to reviewing the board's feedback and responding within 60 days.
Meta's Oversight Board has criticized Facebook's policies for allowing a fake video of President Joe Biden, manipulated to falsely depict him inappropriately touching his granddaughter, to remain on the platform. The board called for a revision of the "incoherent" policies, urging Meta to cover all forms of manipulated media, including audio fakes, and to specify the harms it aims to prevent. While some suggest removing fake content outright, others advocate labeling significantly altered content, as Meta faces pressure to update its policies ahead of upcoming elections.
The Oversight Board, an external advisory group for Meta, has called for a rewrite of the company's rules against faked videos, criticizing the current policy as "incoherent." The decision follows the board's review of a doctored video of President Biden, which found Meta's manipulated media policy inadequate for addressing potential harms. The board urged Meta to change its rules urgently, including adding labels to altered videos and covering manipulated audio, especially in the lead-up to global elections. Meta is reviewing the board's guidance and will issue a public response within 60 days.
Meta's Oversight Board criticized the company's "incoherent" and "confusing" policies on manipulated media after an altered video of President Biden spread on Facebook, calling for clearer guidelines and expanded policies to address potential harms, including voter suppression. The video, which showed Biden appearing to touch his granddaughter inappropriately, was not removed by Meta despite criticism. The board recommended extending the manipulated media policy to cover altered audio and videos that show people doing things they did not do, and suggested attaching a label to manipulated media, rather than removing it, when it does not violate other rules.
Meta's Oversight Board has urged the company to update its manipulated media policy after a misleadingly edited video of President Joe Biden was allowed to stay on Facebook. The board criticized the current policy as "incoherent" and called for new rules that cover audio and video content regardless of how it was created. It recommended applying labels to posts with manipulated media instead of removing them, citing concerns about viral election misinformation. Meta is reviewing the guidance and plans to respond publicly within 60 days, and may adjust its policy as new AI tools evolve.
The Oversight Board, which reviews Facebook's content moderation, has called for Meta to label fake posts instead of removing them, particularly ahead of a busy election year. The board criticized Meta's manipulated media policy as "incoherent" and urged a wider scope to address fake content. The recommendation comes after Meta refused to remove a fake video of US President Joe Biden, stating that it did not violate its policy. The board emphasized the need for more labeling of fake material and warned that users may be left without information when content is demoted or removed. Experts also highlighted the importance of addressing "cheap fakes" as well as AI-generated content, while ensuring that the policy remains dynamic and adaptable.
Meta's Oversight Board has ruled that a manipulated video of President Biden, edited to make it appear as if he was inappropriately touching his granddaughter, can stay on Facebook because it does not violate the company's policies. However, the board criticized Meta's manipulated media policy as "incoherent and confusing" and recommended an overhaul, including labeling manipulated media and focusing on the prevention of specific harms such as incitement to violence and misleading information. The decision comes amid concerns about the impact of social media on elections, with the board urging Meta to develop a framework for evaluating false and misleading election claims.
Meta, the parent company of Facebook, has been urged by its quasi-independent Oversight Board to reverse decisions removing posts related to the Israel-Hamas war. The board also disagreed with Meta's choice to bar the posts from being recommended on Facebook and Instagram even though they aimed to raise awareness, and it criticized Meta's use of automated tools, which increased the likelihood of removing valuable posts that shed light on the conflict and potential human rights violations. While Meta eventually reinstated the posts with warning screens, it is not obligated to follow the board's recommendations.
Meta's automated content moderation tools wrongly removed two videos related to the Israel-Hamas conflict, according to the Meta Oversight Board. The board overturned Meta's decisions and urged the company to respect users' freedom of expression and their ability to communicate during the crisis. Meta had implemented temporary measures to address potentially dangerous content during the conflict, but the board faulted the company for mistakenly removing non-violating content, and said that limiting the circulation of the videos does not align with Meta's responsibility to respect freedom of expression.
Meta's Oversight Board has overturned the company's decision to remove two videos related to the Israel-Hamas war from its platforms. The board ruled that the posts informed the world about human suffering on both sides and emphasized the importance of freedom of expression and access to information. The videos have been reinstated with a warning screen. This decision highlights the challenges social media companies face in handling content related to conflicts. Meanwhile, the European Union has launched a probe into Elon Musk-owned X (formerly Twitter) to assess whether it complies with rules on countering illegal content and disinformation.
Meta's Oversight Board has criticized the company's automated moderation tools for unfairly removing two videos depicting the Israel-Hamas war from Facebook and Instagram. The board found that the posts should have remained live, citing the high cost of the removals to freedom of expression and access to information; because the removals were carried out by automated tools without human review, non-violating content was taken down in error. The board also criticized Meta for demoting the posts despite acknowledging their intent to raise awareness. The case highlights the risks of overmoderation and the challenges platforms face in content moderation.
Meta Platforms' Oversight Board has stated that the company made a mistake in removing two videos depicting hostages and injured individuals in the Israel-Hamas conflict, emphasizing that the videos were crucial for understanding the human suffering caused by the war. The board, which reviews content decisions on Meta's Facebook and Instagram, examined these cases on an expedited basis, the first time it has done so. While Meta restored the videos with a warning screen after the board selected them for review, the board disagreed with the decision to restrict the videos from being recommended to users and urged Meta to respond more promptly to changing circumstances on the ground.