Google is implementing new rules requiring political ads on its platforms to disclose when images and audio are generated using AI. This change, set to take effect in November, responds to the increased use of AI tools for creating synthetic content. The goal is to curb the spread of disinformation during election campaigns, particularly ahead of the next US presidential election.
Google's policies already prohibit manipulating digital media for deceptive political purposes. Going further, this update will require election-related ads to disclose prominently whether they contain synthetic content featuring real or realistic-looking individuals or events. Labels such as 'this image does not depict real events' or 'this video content was synthetically generated' will serve as flags.
Any digitally altered content in election ads must be clearly and conspicuously disclosed, including synthetic imagery or audio showing individuals doing things they did not do or events that did not happen. Google said it aims to combat the misuse of AI for creating fake content and will continue investing in technology to detect and remove it.
Why does it matter?
AI experts emphasise that, although fake imagery has existed for some time, the rapid advancement of the technology and its potential for misuse are cause for alarm. Cases such as the fake video circulated on social media in which Ukrainian President Volodymyr Zelenskyy appeared to urge Ukrainian troops to surrender reinforce this concern; it was potentially the first weaponised deployment of a deepfake in an armed conflict. In this context, Google's policy change acknowledges the grave consequences synthetic media can have for society's stability. Given that Meta Platforms is also working on labels to identify AI-generated content, Google's move could prompt other platforms to adopt similar measures, fostering a more standardised and responsible landscape for online political advertising.