With the 2024 election cycle in full swing in the United States, on Wednesday, May 22, 2024, FCC Chairwoman Jessica Rosenworcel asked her fellow Commissioners to approve a Notice of Proposed Rulemaking (NPRM) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content.  This action reflects a growing concern among federal and state officials about the role that deceptive AI-generated content could play in elections. At the same time, a statement issued the following day by Republican Commissioner Brendan Carr makes clear that there is disagreement about the appropriateness of FCC intervention on this topic.

According to the FCC’s press release (the NPRM itself is not yet public), the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content.  Broadcasters would also have to disclose the ad’s use of AI in its online “public inspection file,” which is the online repository for information about political ads and many other broadcaster activities.  The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning they would not encompass online political advertisements.  Among the issues on which the item would seek comment is how to define “AI-generated content” for this purpose.  How this term is defined may ultimately determine whether the final rules apply to all AI-generated content or only to deceptive AI-generated content.

Akin to the FCC’s sponsorship identification rules, the proposal would be focused on disclosure—it does not propose to prohibit the use of AI-generated content in political advertisements, an action that would present significant First Amendment concerns.  As described by Chairwoman Rosenworcel, the proposed rules would make it clear that “consumers have a right to know when AI tools are being used in the political ads they see.”

In response to the Chairwoman’s announcement, Commissioner Carr issued a pointed statement unambiguously opposing the proposed NPRM and characterizing it as “part and parcel of a broader effort to control political speech.”  Among other points, he expressed the view that FCC action around AI in political advertising would exceed the agency’s authority and result in confusion given that it would not (and could not) apply to unregulated streaming services and other online media.  The other Republican member of the FCC, Commissioner Nathan Simington, has not yet commented on the Chairwoman’s proposal.

Regardless of the outcome of the new proposal, heading into this year’s election and future cycles, television and radio broadcasters face myriad challenges when it comes to deceptive AI-generated content and political “deepfakes” (where a candidate’s or other individual’s voice and image are manipulated to suggest they have said or done something they have not).  At the most practical level, as AI technology advances, it is becoming ever more challenging to spot AI-manipulated or AI-generated content.  Political advertisers making use of AI-generated deepfakes may not wish to disclose that fact, making such content that much harder to identify.

Moreover, as recently discussed by our colleagues, while a number of bills have been introduced in Congress to regulate deepfakes, none has been enacted into law.  In the vacuum of federal action, states have taken the lead on this issue, with at least 39 having enacted, or currently considering, laws that would regulate the use of deepfakes in political advertising.  This patchwork of state laws means that requirements vary from state to state, only increasing the regulatory burden for broadcasters.  That these state requirements could be in tension with obligations that may arise under federal law—such as the requirement that broadcasters not edit (or “censor”) candidate ads—further complicates the situation.

In this fast-evolving political and regulatory landscape, it is critical that broadcasters and advertisers remain mindful of potential risks and obligations related to AI-generated content in political advertisements.  While the NPRM itself is not yet public and will not lead to final rules until later this year, at the earliest, it may offer useful guidance for broadcasters navigating the challenges posed by political deepfakes in the meantime.  The debate around it also suggests that a bipartisan solution to addressing the use of AI-generated content in political advertisements is unlikely to emerge in the near future.

Updated May 23, 2024 to include a subsequently released statement from Commissioner Brendan Carr.