TikTok is introducing a new tool to combat misinformation by helping creators label AI-generated content. The tool helps creators comply with TikTok's existing policy, which requires realistic manipulated content to be labelled as fake or altered.
The platform prohibits deepfakes that mislead users about real-world events, particularly those involving private figures and young people, while allowing altered images of public figures for specific purposes such as art and education. TikTok is also testing an 'AI-generated' label for content created or edited by AI and will explicitly include 'AI' in the names and labels of in-app effects that use AI.
This move responds to concerns about the spread of misinformation in the AI era, with the European Union urging online platforms to add labels to AI-generated text, photos, and other content.
Why does it matter?
TikTok is moving towards responsible content management by introducing tools for labelling AI-generated content and implementing stricter policies. This aligns with the path taken by other major tech companies, such as Google, which recently imposed disclosure requirements for AI-generated content in political ads. Additionally, Meta is developing labels that enable creators to flag images produced using its AI technology.