Meta Platforms' Oversight Board is examining how the company handled two AI-generated sexually explicit images of female celebrities that circulated on Facebook and Instagram. The board, which operates independently but is funded by Meta, aims to evaluate Meta's policies and enforcement practices for AI-generated pornographic content. To avoid compounding the harm, the board withheld the names of the celebrities depicted in the images.
Advances in AI technology have fueled a rise in fabricated content online, particularly explicit images and videos depicting women and girls. This surge in "deepfakes" has made it significantly harder for social media platforms to police harmful content. Earlier this year, Elon Musk's social media platform X struggled to contain the spread of fake explicit images of Taylor Swift, prompting it to temporarily restrict related searches.
The Oversight Board highlighted two specific cases: one involving an AI-generated nude image resembling an Indian public figure shared on Instagram, and another depicting a nude woman resembling an American public figure posted in a Facebook group for AI creations. Meta initially removed the latter image for violating its bullying and harassment policy but left the former up until the board selected it for review.
In response to the board's scrutiny, Meta acknowledged the cases and committed to implementing the board's decisions. The prevalence of AI-generated explicit content underscores the need for clearer policies and stricter enforcement by tech companies as deepfakes continue to spread online.