Amid growing concerns about the harm AI can cause through fake ads and other misleading content, Google is taking a step toward damage control. In an update to its political ads policy, the search giant will require advertisers to clearly disclose when their ads contain synthetic content created by AI. The policy specifically targets political ads that "inauthentically depict real or realistic-looking people or events."
Due to go into effect in mid-November, the policy states that the disclosure about the use of AI must be clear and conspicuous, placed where users are likely to notice it. The requirement applies to political ads with images, video, or audio served by Google on its own platforms (such as YouTube) and on third-party websites that are part of the company's display network.
But it excludes ads where the content is merely edited: an image or video that's been resized, cropped, color-corrected, or error-corrected, or that's had its background removed, as long as the edits don't falsify realistic scenes of real people or events.
Google cited a couple of examples that would require disclosure. One would be a political ad with AI-generated content that makes it seem as if a person said or did something they never said or did. Another would be an AI-generated ad that alters footage of a real event or depicts realistic scenes that never actually took place.
Political ads have a long history of bending or breaking the truth in order to make the other side look bad. But the technological age has exacerbated the problem through the influence of social media and AI.
Using artificial intelligence, a political campaign or other party can easily create a realistic but fake image, video, or audio clip depicting a candidate saying or doing something they didn't say or do. Voters who come to the table with clear biases may swallow the lie as truth without bothering to verify it.
In response to concerns about political ads, Google has taken other steps in the past. In 2018, the company started requiring advertisers of political ads to verify their identity and include an in-ad disclosure showing who paid for the ad.
Political ads in the US and other countries are included in Google transparency reports to help people learn who bought a specific ad, how much they spent, and how many times the ad was viewed. In 2019, the company expanded that transparency to include ads about state-level candidates, political parties, and ballot initiatives.
Google's policies have also prohibited deepfakes and other phony content designed to deceive people on matters related to politics and social issues. The company uses both automated systems and human reviewers to find and remove ads that violate its policies. In 2022, Google removed 5.2 billion ads that ran afoul of those policies and blocked 2.6 million election ads that failed to complete the verification process.
"For years we've provided additional levels of transparency for election ads, including 'paid for by' disclosures and a publicly available ads library that provides people with more information about the election ads they see on our platforms," a Google spokesperson said in a statement sent to .
"Given the growing prevalence of tools that produce synthetic content, we're expanding our policies a step further to require advertisers to disclose when their election ads include material that's been digitally altered or generated," the spokesperson added. "It'll help further support responsible political advertising and provide voters with the information they need to make informed decisions."