YouTube has introduced new rules requiring creators to disclose when they have used generative AI in their videos, with the aim of keeping viewers informed about AI-generated content. The change comes as AI makes it increasingly easy to create lifelike videos portraying fictional situations. These updates are set to roll out starting next year.
Failing to disclose AI use could result in content removal or other penalties. YouTube points to its existing rules against deceptive content and requires that AI-generated videos comply with its guidelines on violence and hate speech. The prominence of AI disclosure will vary: while disclosures may mainly appear in video descriptions, YouTube plans to apply more visible labelling for sensitive subjects such as politics and health.
The ease of generating diverse content with AI raises concerns about the rapid spread of misinformation. Instances of journalists using ChatGPT have already drawn significant criticism from affected individuals and the wider public. As AI grows better at replicating reality, the problems posed by deepfakes and other AI-produced visual and audio content become more serious. By setting guidelines for AI use, YouTube encourages responsible and ethical practices among content creators, particularly where sensitive subjects or individuals' likenesses are concerned.