Should AI-generated political advertising be disclosed?
Results (23):
Quotes (21)
- Electoral Commission (United Kingdom), UK independent elections regulator, agrees and says: We do not regulate the content of campaign material. However, we encourage all campaigners to carry out their role of influencing voters in a responsible and transparent manner. Some campaigners may use generative artificial intelligence (AI) to create campaign material. We expect anyone using AI-generated campaign material to use it in a way that does not mislead voters, and to label it clearly so that voters know how it has been created. (2024) (source, unverified)
- Brennan Center for Justice, law and policy institute, strongly agrees and says: On September 19, 2024, the Brennan Center submitted a public comment in response to the Federal Communications Commission’s request for comment in the matter of “Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements,” MB Docket No. 24–211. The Brennan Center urged the Commission to require on-air and written disclosures of AI-generated and other deceptive synthetic content in radio and television political advertisements. The comment emphasizes that as deepfakes become more prevalent, implementing more focused and inclusive regulations of deceptive content in political advertisements will promote transparency and strengthen trust in the electoral process. While the comment commends the Commission’s proposal, it proposes several critical revisions. First, the rules should be refined to apply specifically to advertisements where artificial intelligence (AI) has substantially modified content in a manner that could mislead a reasonable viewer. Additionally, the scope should be expanded to encompass all deceptive synthetic content, including “cheapfakes” produced using simpler tools. (2024) (source, unverified)
- Public Citizen, consumer advocacy organization, strongly agrees and says: Campaigns are already running A.I.-generated ads that look and sound like actual candidates and events, but in fact are entirely fabricated. These ads look and sound so real that it is becoming exceedingly difficult to discern fact from fiction. When A.I.-generated content makes a candidate say or do things they never did – for the explicit purpose of damaging that targeted candidate’s reputation – these ads are known as “deepfakes.” The practice of disseminating deepfakes in political communications on social media or mainstream television and radio outlets is currently legal in federal elections and most states. These ads are not even subject to a disclaimer requirement noting that the content never happened in real life. Presumptively, Goodman contends, an adequate disclosure of who is issuing a campaign communication is sufficient to defeat a claim of fraudulent misrepresentation. However, Goodman notes, an otherwise adequate disclosure can be countermanded when the misrepresentation in the text itself defeats the disclosure and perpetuates confusion about the actual speaker. In the case of deceptive deepfakes, a disclosure of who is distributing the fraudulently misrepresented content will not cure the confusion about the actual speaker. If Candidate Jones places on their social media feed a deepfake video of Candidate Smith saying that the sun revolves around the earth, the disclosure that Jones is distributing the content does not cure the deception over identity. By contrast, a disclosure that the deepfake video is a deepfake would constitute an adequate disclosure, precisely because it would cure the confusion over identity. (2023) (source, unverified)
- Meta, social media and AI company, strongly agrees and says: We also introduced a new global policy that became effective in January 2024 to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI, in certain cases. Under the terms of the policy, advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to: • depict a real person as saying or doing something they did not say or do; or • depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or • depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event. We will add information on the ad when an advertiser discloses in the advertising tool that the content is digitally created or altered under this policy. This information will also appear in the Ad Library. If we determine that an advertiser does not disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser. (2024) (source, unverified)
- Institute for Free Speech, First Amendment advocacy nonprofit, strongly disagrees and says: While there are many problems with this proposal, a few stand above the others and are fatal to this effort. First, the Commission lacks statutory authority to require disclosure or disclaimers on political advertisements that utilize “artificial intelligence.” Congress knows how to require disclaimers on political advertisements. Indeed, it has done so repeatedly, explicitly requiring certain political communications to include “paid for by” disclaimers and “stand by your ad” statements. It has adopted no analogous requirement for AI-generated content. In the absence of Congressional authorization, the FCC cannot rely on its general authority to adopt rules as “necessary” to carry out the provision of the Federal Communications Act of 1934, as amended (the “Act”). This is not a free-floating grant of authority. It requires a tie to another provision of the Act. And there is no provision concerning AI-generated content. There is no general authority for the FCC to police the truth or falsity of political advertisements, nor could there be. This is particularly true for candidate broadcast advertising, where the Act itself explicitly denies broadcasters the ability—let alone the duty—to remove ads for any reason, including the truth or falsity of the communication. Second, even if the Commission could regulate AI-generated content in political advertisements—and it cannot—key definitions are overly broad and impermissibly vague. Proponents of the Rule point to the risk of so-called “deepfakes,” made possible by advances in machine-learning capabilities. But the definition of AI-generated content proposed by the Commission is untethered from these technological advances. Instead, it refers to content “generated using computational technology.” This would seem to encompass any content created in whole or in part using computers, which is to say, the vast majority of advertising content. Assuming the Commission does not intend for the Proposed Rule to sweep so broadly, people of ordinary intelligence must necessarily guess at what might be covered. This guesswork is inherently chilling and is not permitted under the First Amendment. Reasonable people may disagree about what, if anything, should be done about AI-generated content in political advertisements. But those decisions are for Congress to consider in the first instance. Not unelected officials in an agency whose core mission is not regulating political speech. This Commission can and should reject this Rule and allow the proper democratic processes to play out. (2024) (source, unverified)
- Cisco Aguilar, Nevada Secretary of State, strongly agrees and says: It's artificial intelligence. Are you using a computer to alter the image, to generate an image? We just want voters to know that if they receive campaign material, they see campaign material, they hear campaign material, and it incorporates AI, that they know that it's AI. [...] It's going to be on the person generating the ad. They're going to have to disclose that, but if they choose not to follow the law, there's some accountability behind it. There's everything from fines, to personal liability, to criminal standards, and that's a conversation we want to make sure we're having, and that's why we look to other states as well. (2025) (source, unverified)
- Jennifer Huddleston, senior fellow at the Cato Institute, disagrees and says: It should be emphasized that not all uses of AI in election advertisements should be presumed to be manipulative or fraudulent. In fact, even when it comes to election advertising, there are beneficial and non-manipulative uses of technologies like AI. For example, AI could be used to translate an existing ad in English to the native language of a group of voters that might not otherwise be reached or add subtitles to reach communities of individuals with disabilities. It could also be used to lower the costs of production and post-production, such as removing a disruption in a shot. Even these examples are more direct interactions that may be more visible than the countless examples of AI that may be used in spell-checking a script or using an algorithm in a search engine to conduct research or promote an ad. These actions are not manipulative or deceptive nor do they give rise to concerns about mis or disinformation. However, under many definitions, some or all of these actions would result in labeling requirements that an advertisement used AI. Given the broad use of AI, such a “warning label” could become meaningless as it applies to both benign and manipulative uses. Existing law does not get tossed out the window just by the appearance of new technologies, and actions by bad actors must be addressed in existing FEC rules. New technologies should not change the underlying rules of the road. (2023) (source, unverified)
- Michael Bennet, U.S. Senator from Colorado, strongly agrees and says: I write with concerns about your current identification and disclosure policies for content generated by artificial intelligence (AI). Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly. This is especially true for political communication. Fabricated images can derail stock markets, suppress voter turnout, and shake Americans’ confidence in the authenticity of campaign material. (2023) (source, unverified)
- Mike Lee, U.S. Senator from Utah, agrees and says: The FCC's proposal to impose new regulations on political speech involving AI, just months before one of the most consequential elections in our history, represents a clear overstep of their regulatory authority. While I support transparency in the use of AI in campaign ads, I strongly oppose the idea of a Democrat-run federal agency single-handedly changing the rules of political engagement under the guise of regulation. (2024) (source, unverified)
- Mark Spreitzer, Wisconsin state senator, agrees and says: “The use of AI to create a political ad is not inherently good or bad. Generative AI could be used to create a clever animation to illustrate a candidate’s views, or it could be used to create a realistic-looking video clip that makes it look like their opponent said something they never did.” “[...] This bill will leave it up to voters to determine whether what they are seeing or hearing is ‘fair,’ but it will give voters the information to know that what they are seeing or hearing may not be ‘real.’” (2024) (source, unverified)
- Foundation for Individual Rights and Expression, civil liberties nonprofit, strongly disagrees and says: Requiring disclosures will discourage innovative and empowering uses of artificial intelligence, chilling campaigns and grassroots organizations from employing technological advances to their benefit. (2024) (source, unverified)
- Amy Klobuchar, U.S. senator from Minnesota, strongly agrees and says: We also need disclaimers [...] so that voters will know if the political ads they see are made using this technology. (2024) (source, unverified)
- Thierry Breton, EU internal market commissioner, agrees and says: Political advertising will be labelled as such, and individuals will be provided with additional information, including on the use of artificial intelligence [...] in political advertising. (2024) (source, unverified)
- Google, search and advertising company, agrees and says: Verified election advertisers [...] must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events. (2025) (source, unverified)
- Bradley A. Smith, former FEC chair and law professor, disagrees and says: First, the Commission lacks statutory authority to require disclosure or disclaimers on political advertisements that utilize “artificial intelligence.” (2024) (source, unverified)
- Lisa Murkowski, U.S. senator from Alaska, strongly agrees and says: Our bill only requires a disclaimer when political ads use AI in a significant way – something [...] we can all agree we’d like to know. (2024) (source, unverified)
- Nick Clegg, Meta president of global affairs, agrees and says: Starting in the new year, advertisers will [...] disclose when they use AI [...] to create or alter a political or social issue ad. (2023) (source, unverified)
- Ellen L. Weintraub, FEC commissioner and 2025 chair, agrees and says: The public would benefit from greater transparency as to when AI-generated content is being used in political advertisements. (2024) (source, unverified)
- Yvette D. Clarke, U.S. Representative from New York, strongly agrees and says: I look forward to [...] pass commonsense disclosure laws for AI-generated content in political ads. (2023) (source, unverified)
- American Association of Political Consultants, political consultants trade association, strongly disagrees and says: For these reasons, the AAPC strongly opposes the FCC’s proposed rule on the disclosure of AI-generated content in political advertisements. (2024) (source, unverified)
- Jessica Rosenworcel, chairwoman of the @FCC, strongly agrees and says: The use of AI-generated content in political ads also creates a potential for providing deceptive information to voters, in particular, the use of 'deep fakes' – altered images, videos, or audio recordings that depict people doing or saying things that [they] did not actually do or say, or events that did not actually occur. As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used. Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue. (source, unverified)
Related topics: ai, digital-democracy, disinformation, ai-regulation
Related questions: Should we all participate in shaping the future of AI and the post-artificial general intelligence era? | Should we use electronic voting machines? | Should TikTok be banned?