Public Citizen
Consumer advocacy organization
Should AI-generated political advertising be disclosed?
Public Citizen strongly agrees and says:
Campaigns are already running A.I.-generated ads that look and sound like actual candidates and events, but are in fact entirely fabricated. These ads look and sound so real that it is becoming exceedingly difficult to discern fact from fiction. When A.I.-generated content makes a candidate say or do things they never did – for the explicit purpose of damaging that targeted candidate's reputation – these ads are known as "deepfakes." The practice of disseminating deepfakes in political communications on social media or mainstream television and radio outlets is currently legal in federal elections and in most states. These ads are not even subject to a disclaimer requirement noting that the content never happened in real life.

Presumptively, Goodman contends, an adequate disclosure of who is issuing a campaign communication is sufficient to defeat a claim of fraudulent misrepresentation. However, Goodman notes, an otherwise adequate disclosure can be countermanded when the misrepresentation in the text itself defeats the disclosure and perpetuates confusion about the actual speaker. In the case of deceptive deepfakes, a disclosure of who is distributing the fraudulently misrepresented content will not cure the confusion about the actual speaker. If Candidate Jones places on their social media feed a deepfake video of Candidate Smith saying that the sun revolves around the earth, the disclosure that Jones is distributing the content does not cure the deception over identity. By contrast, a disclosure that the deepfake video is a deepfake would constitute an adequate disclosure, precisely because it would cure the confusion over identity. (2023)