Jennifer Huddleston
Senior fellow, Cato Institute
Should AI-generated political advertising be disclosed?
Jennifer Huddleston disagrees and says:
It should be emphasized that not all uses of AI in election advertisements should be presumed to be manipulative or fraudulent. In fact, even when it comes to election advertising, there are beneficial and non-manipulative uses of technologies like AI. For example, AI could be used to translate an existing ad in English into the native language of a group of voters that might not otherwise be reached, or to add subtitles to reach communities of individuals with disabilities. It could also be used to lower the costs of production and post-production, such as removing a disruption in a shot. Even these examples are more direct interactions that may be more visible than the countless examples of AI that may be used in spell-checking a script or using an algorithm in a search engine to conduct research or promote an ad. These actions are not manipulative or deceptive, nor do they give rise to concerns about mis- or disinformation. However, under many definitions, some or all of these actions would result in labeling requirements that an advertisement used AI. Given the broad use of AI, such a "warning label" could become meaningless as it applies to both benign and manipulative uses. Existing law does not get tossed out the window just by the appearance of new technologies, and actions by bad actors must be addressed under existing FEC rules. New technologies should not change the underlying rules of the road. (2023)