Meta, formerly known as Facebook, is under scrutiny for alleged censorship of, and bias against, posts about the Israel-Palestine conflict. The conflict has killed over 20,000 Palestinians, including children, according to the Gaza Ministry of Health.
Human Rights Watch has released a 51-page report accusing Meta of systematically censoring content that supports Palestine and of neglecting the humanitarian crisis in Gaza. Deborah Brown of Human Rights Watch condemned Meta's censorship, stating that it exacerbates the suffering Palestinians already face. The report reveals that Human Rights Watch reviewed 1,050 cases of online censorship by Meta across 60 countries, identifying over 100 instances of pro-Palestinian content being censored. Meta is also accused of 'shadow banning': reducing the visibility of content without informing users.
The company's content moderation measures include account suspensions, removal of targeted content, and feature restrictions on Instagram and Facebook Live. Meta's content moderation relies heavily on AI tools, and the report also points to government influence over moderation decisions. Meta's Dangerous Organizations and Individuals policy curtails legitimate speech about the conflict by labelling certain groups 'terrorist organizations'.
Why does it matter?
The report raises concerns about social media's role in disseminating unbiased information and questions Meta's commitment to free expression. Users have reported difficulties appealing account suspensions and content removals. US Senator Elizabeth Warren has called on Meta CEO Mark Zuckerberg to provide information about the censorship allegations, reflecting growing concern over social media's impact on global politics and human rights. The controversy surrounding Meta's alleged censorship sheds light on the broader influence of social media platforms on the flow of information, particularly in conflict zones.