This is not a test: FIRE opposes FCC's plan to regulate AI in political ads

Right now, many state and federal officials are looking for ways to regulate speech created with artificial intelligence tools. The Federal Communications Commission, the agency charged with regulating TV and radio broadcasters, is no exception. The FCC appears to be gearing up for a regulatory power grab, looking for opportunities to plant its flag in a burgeoning new field.
In August, the FCC issued a Notice of Proposed Rulemaking that, if finalized, would require TV and radio broadcasters to issue a disclaimer every time they air a political ad with AI-generated content. In other words, if a broadcaster becomes aware that an ad contains AI-generated content, it must let the audience know that the ad "contains information generated in whole or in part by artificial intelligence."
This proposal has significant problems, as FIRE explained in a formal comment submitted to the FCC.
First, the FCC lacks jurisdiction over the content of political ads and simply doesn't have the legal authority to regulate the "transparency of AI-generated content," let alone to compel speech through mandated disclosures. Second, the proposal does not pass constitutional muster. And third, the proposal will not address voter confusion and may suppress beneficial uses of AI for election-related speech.
The FCC says it wants to stem "confusion and distrust among the voting public," mentioning the use of deepfakes to deceive potential voters into thinking a political candidate said or did something they didn't say or do. On its surface, the FCC's plan appears noble, particularly because deepfakes could convincingly convey false statements purporting to be fact to manipulate voters.
In reality, while some uses of AI may well constitute conduct that falls outside First Amendment protection and implicates laws prohibiting defamation, false light, and the like, the FCC's definition of AI-generated content is so broad that it would cover the use of AI for even innocuous or beneficial purposes.

For example, AI can be used as an editing tool to upscale audio and video, improving their digital quality to achieve a professional look. Since that use, like other uses such as image and audio editing, would also trigger the disclaimer, alerting viewers and listeners does little to inform them whether AI was used in a deceptive manner. Instead, viewers or listeners may well come to believe that every ad containing AI is deceptive, even if the ad is factually accurate.
These tools can also cut the production costs of creating political ads, making advertising more accessible for candidates who otherwise lack the economic means to do so. AI, therefore, could be used to further democratize political campaigns and our elections by creating new entry points. The same is true for candidates with disabilities who may rely on AI to communicate directly with potential voters.
Testifying before Congress earlier this year, FIRE President and CEO Greg Lukianoff warned that the government must proceed cautiously and respect First Amendment principles when considering regulating artificial intelligence. We extended a similar warning here to the FCC, calling on it to withdraw the proposed regulation altogether.