With the 2024 election cycle in full swing in the United States, on Wednesday, May 22, 2024, FCC Chairwoman Jessica Rosenworcel asked her fellow Commissioners to approve a Notice of Proposed Rulemaking (NPRM) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content.  This action reflects a growing concern among federal and state officials about the role that deceptive AI-generated content could play in elections. At the same time, a statement issued the following day by Republican Commissioner Brendan Carr makes clear that there is disagreement about the appropriateness of FCC intervention on this topic.

According to the FCC’s press release (the NPRM itself is not yet public), the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content.  Broadcasters would also have to disclose the ad’s use of AI in its online “public inspection file,” which is the online repository for information about political ads and many other broadcaster activities.  The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning they would not encompass online political advertisements.  Among the issues on which the item would seek comment is the question of how to define “AI-generated content” for this purpose.  How this term is defined may ultimately determine whether the final rules apply to all AI-generated content or only to deceptive AI-generated content. 

Akin to the FCC’s sponsorship identification rules, the proposal would be focused on disclosure—it does not propose to prohibit the use of AI-generated content in political advertisements, an action that would present significant First Amendment concerns.  As described by Chairwoman Rosenworcel, the proposed rules would make it clear that “consumers have a right to know when AI tools are being used in the political ads they see.”

In response to the Chairwoman’s announcement, Commissioner Carr issued a pointed statement unambiguously opposing the proposed NPRM and characterizing it as “part and parcel of a broader effort to control political speech.”  Among other points, he expressed the view that FCC action around AI in political advertising would exceed the agency’s authority and result in confusion given that it would not (and could not) apply to unregulated streaming services and other online media.  The other Republican member of the FCC, Commissioner Nathan Simington, has not yet commented on the Chairwoman’s proposal.

Regardless of the outcome of the new proposal, heading into this year’s election and future cycles, television and radio broadcasters face a myriad of challenges when it comes to deceptive AI-generated content and political “deepfakes” (where a candidate’s or other individual’s voice and image are manipulated to suggest they have said or done something they have not).  At the most practical level, as AI technology advances it is becoming ever more challenging to spot AI-manipulated or AI-generated content.  Political advertisers making use of AI-generated deepfakes may not wish to disclose such facts, making the challenge of identifying such content that much harder.  

Moreover, as recently discussed by our colleagues, while a number of bills have been introduced in Congress to regulate deepfakes, none has been enacted into law.  In the vacuum of federal action, states have taken the lead on this issue, with at least 39 having enacted, or currently considering, laws that would regulate the use of deepfakes in political advertising.  This patchwork of state laws means that requirements vary from state to state, only increasing the regulatory burden for broadcasters.  That these state requirements could be in tension with obligations that may arise under federal law—such as a requirement that broadcasters not edit (or “censor”) candidate ads—further complicates the situation.

In this fast-evolving political and regulatory landscape, it is critical that broadcasters and advertisers remain mindful of potential risks and obligations related to AI-generated content in political advertisements.  While the NPRM itself is not yet public and will not lead to final rules until later this year, at the earliest, it may offer useful guidance for broadcasters navigating the challenges posed by political deepfakes in the meantime. And the debate around it also suggests that a bipartisan solution to addressing the use of AI-generated content in political advertisements is unlikely to emerge in the near future.

Updated May 23, 2024 to include a subsequently released statement from Commissioner Brendan Carr.

The Democratic and Republican National Party Conventions are a premier forum for businesses and trade groups to elevate their priorities to candidates, elected officials, and staff. However, thanks to a complex regulatory regime, participation in convention events can invite scrutiny and legal trouble. The Republican Convention is scheduled to take place in Milwaukee from July 15 to 18, and the Democratic Convention will be held in Chicago from August 19 to 22. As convention planning kicks into full swing, this advisory summarizes key contribution, gift, and ethics rules to consider when sponsoring convention events and interacting with both federal and state officials, employees, and political organizations at or around the conventions.

Georgia Governor Brian Kemp has vetoed Georgia Senate Bill 368, which would have created a requirement in state law for certain “agents of foreign principals” to register and report certain lobbying and political activities in Georgia.  This is the first of a recent wave of proposed state-level “baby FARA” bills, designed to mirror the federal Foreign Agents Registration Act, to make it to a governor’s desk, and also the first to be vetoed.  In the Governor’s brief veto message, he wrote that “Senate Bill 368 would prohibit foreign nationals from making political contributions, which is already prohibited by federal law, and impose additional state-level registration requirements on agents of foreign principals, some of which were unintended by the bill’s sponsor.”  He indicated that the bill’s own sponsor had requested that he veto it.

The Georgia bill, like other proposed state-level baby FARA laws, could have had broad (and likely unintended) consequences not just for foreign companies but also for U.S. subsidiaries of foreign companies, as well as nonprofits, academic institutions, religious institutions, and others.  Unlike the federal FARA statute, the bill did not include the major exemptions intended to carve out at least some entities from the obligation to register. Covington is continuing to track the growing wave of proposed baby FARA bills, including whether the bills in other states meet the same fate as the ill-fated Georgia bill.

Over the past several weeks, legislatures in Arizona, California, Georgia, Oklahoma, and Tennessee have introduced bills that mirror the federal Foreign Agents Registration Act (“FARA”). There has been a trend in the states to enact so-called “baby FARA” laws that apply to foreign-influenced political activity in the states, although until now those laws generally focused narrowly on the regulation of political contributions. 

In this client alert, we summarize these recently introduced bills.

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity.  As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers increasingly are taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness present significant challenges for regulators.  As deepfakes increasingly become indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders all have called for action to mitigate the threats deepfakes can pose for elections.  

Several federal regulators have taken steps to explore the regulation of AI-generated content within their existing jurisdiction.  On February 8, the Federal Communications Commission issued a declaratory ruling confirming that the Telephone Consumer Protection Act restricts the use of “current AI technologies that generate human voices,” an interpretation endorsed by 26 state attorneys general. 

Last year, the Federal Election Commission (FEC) took a step toward clarifying whether AI-generated deepfakes might violate the Federal Election Campaign Act’s prohibition on deceptive campaign practices by requesting comment on whether to initiate a rulemaking on the subject.  After previously deadlocking on a petition from Public Citizen to open such a rulemaking, the FEC voted unanimously in August 2023 to accept public comment on whether to initiate rulemaking procedures, though the agency has not yet taken further action.

Members of Congress also have introduced several bills that would regulate deepfakes, though these efforts have moved slowly in committee.  Many lawmakers remain determined to make progress on the issue, as senators from both parties expressed in an April Judiciary Subcommittee hearing. In March, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the bipartisan AI Transparency in Elections Act of 2024 to require clear and conspicuous disclosures in certain political communications that were created or materially altered by artificial intelligence.  Representatives Anna Eshoo (D-CA) and Neal Dunn (R-FL)—members of the House Bipartisan Task Force on Artificial Intelligence—introduced a more generally applicable deepfake disclosure bill that would also address the potential impact of the technology on elections.

Several states already have enacted prohibitions or disclosure requirements on certain forms of manipulated media related to elections, including Minnesota, Texas, and California.  These laws generally prohibit the knowing dissemination of deepfakes within one to three months of an election, and each requires intent to influence the election or the depicted candidate’s reputation.

Even with AI risks top-of-mind for policymakers at all levels, with just seven months until the 2024 general election, a full agenda in Congress, and state legislative sessions coming to a close, the prospects of major reforms in time for this cycle remain uncertain.

Federal circuit courts are split on a core question of corruption law: whether state and local officials, and agents of organizations that contract with or receive benefits from the federal government, may lawfully accept gratuities.

It is generally a federal crime for state and local officials to act in their official capacities in exchange for things of value when they “corruptly” solicit or agree to accept such benefits.  This is quid pro quo bribery, prohibited under 18 U.S.C. § 666.  Federal courts lack consensus, however, on whether § 666 also criminalizes scenarios in which an official or agent of a federal program recipient acts without expectation of a thing of value, but later receives a “gratuity” rewarding his or her conduct.

The Supreme Court will review the issue this term in Snyder v. United States.  The case concerns an Indiana mayor who was convicted under § 666 for accepting a $13,000 payment from a truck company that had recently won a sizeable contract with the city.

Although the merits of this case involve money given to an elected official, the statute also applies to agents of organizations that receive, in any one-year period, more than $10,000 in federal benefits, whether in the form of a contract award, grant, loan, appropriation, or other structure.  This includes a sizeable number of companies, institutions of higher education, and nonprofit organizations.  As a result, if the Supreme Court holds that § 666 criminalizes gratuities and broadly interprets the statutory standard for “corruptly” accepting things of value, covered entities may have limited capacity to receive gifts, even those unrelated to the principal’s use of federal funds.

The Supreme Court’s decision could also have significant implications for individuals, companies, and organizations that offer items or services of value to the covered officials and entities, since any gift can be scrutinized as a possible inducement or reward for exercising official powers.

This area of law spans civil and criminal provisions at the federal and state levels, and parties engaging on related matters should consider consulting with counsel.  Covington will continue to monitor developments in this space, and the firm is well positioned to assist companies and individuals navigating this area of the law.

On December 1, 2023, three top U.S. government officials responsible for enforcing the Foreign Agents Registration Act (“FARA”) gave remarks at the American Conference Institute’s 5th National Forum on FARA. In their remarks, each of the speakers – Deputy Assistant Attorney General Eun Young Choi, the Acting Chief of the Counterintelligence and Export Control Section Jennifer Gellie, and the FARA Unit Chief Evan Turgeon – reiterated and reinforced the Department’s commitment to enforcing the statute aggressively. The officials also previewed potentially substantial regulatory changes that the Department will propose in its forthcoming notice of proposed rulemaking (“NPRM”), along with highlighting the Department’s enforcement and legislative priorities.

In this alert, we summarize and examine these developments, each of which could have significant implications for international companies, sovereign wealth funds, and others, along with the political, legal, and public relations consultants who advise them. 

Today, Congress announced the final version of the National Defense Authorization Act (“NDAA”) for Fiscal Year 2024.  The NDAA is an annual bill that contains important provisions related to the Department of Defense and international security, among other things.  An earlier version of the bill contained two key provisions related to the Foreign Agents Registration Act (“FARA”): The Lobbying Disclosure Improvement Act and Disclosing Foreign Influence in Lobbying Act, both of which had passed in the Senate earlier this year. The final NDAA bill released today, however, does not contain these provisions.  It is not clear why these provisions were removed.  Press reports indicate that the bill’s managers were stripping provisions over which there were disagreements between the chambers in an effort to get the annual bill passed before the holidays.  The lack of a House-passed companion provision therefore could have been fatal to the Senate’s FARA-related provisions.

More substantively, although there is bipartisan support for the regulation of foreign agents, legislators appear to be divided regarding the best approach for reform.  Senator Bob Menendez (D – N.J.), who was indicted on federal bribery charges earlier this year, has reportedly objected to reform of laws regulating foreign lobbying and has blocked similar legislation in the past.  On the other hand, Senator Grassley (R – Iowa) and others have engaged with the Department of Justice to develop comprehensive reform bills. At the recent American Conference Institute’s 5th National Forum on FARA, Department of Justice officials signaled that the Department continues to seek legislative reform to FARA.  Accordingly, Congress may take up more comprehensive legislation that addresses the Department’s legislative priorities at a later date.  Covington will continue to monitor and report on FARA legislation.

This week, the Department of Justice (DOJ) released a new memorandum from Deputy Attorney General Lisa Monaco updating its policies and procedures for criminal investigations involving Members of Congress and congressional staff.  

DOJ emphasized that investigations reaching Congress are important and “sensitive matters,” and explained that the additional guidance would address the “unique challenges” specific to such investigations.  For example, the guidance highlights constitutional protections and privileges afforded to Members of Congress, such as the Speech or Debate Clause of Article I of the Constitution, which provides immunity in the performance of legislative acts.  Furthermore, although not expressly mentioned in the memo, DOJ is also likely to consider the risk that certain investigative activity could chill the exercise of First Amendment freedoms by voters and constituents in their speech and petition rights.  

The guidelines set out in the memo are intended to strike a balance between protecting these essential rights and privileges and ensuring that investigations can continue to proceed consistent with DOJ’s overarching goal of ensuring the public’s “confidence . . . that important prosecutorial decisions will be made rationally and objectively on the merits of each case.” 

Most significantly, the memorandum outlines important changes in the mechanics of how DOJ conducts investigations relating to Members of Congress and their staff, and adds new requirements that will affirmatively require local U.S. Attorney’s Offices to consult with Main Justice in all these matters.  While DOJ’s Public Integrity Section (PIN) has long played a role in criminal investigations involving Congress, the memo adds “additional consultation and approval requirements” formalizing the exact situations in which prosecutors must consult with or receive approval from PIN.  These new requirements reflect the Department’s view that “[a]dditional supervision and coordination” by PIN is warranted. 

These additional requirements even reach investigations seeking information “associated with” Members or staff members but held by third parties (including electronically stored information).  For this reason, the guidance serves as an important reminder of the general principle that even individuals and entities who are mere witnesses—rather than investigative “target[s]” or “subject[s]”—may be swept up in a sensitive DOJ inquiry. 

The guidance describes two categories in which PIN must be involved: (1) those where prosecutors must consult PIN, and (2) those where PIN must approve certain prosecutorial decisions.  

Investigative scenarios that require consultation with PIN include:

  • Prosecutors open a case targeting a Member of Congress (or where the Member is a subject).
  • Prosecutors issue a subpoena to a Member of Congress, congressional office, or a congressional staffer (if related to their work).
  • Prosecutors seek to initiate surveillance of accounts or devices related to a congressional staffer (if not related to their work).
  • Prosecutors seek to interview a Member of Congress or congressional staffer (unless they are the victim of a crime, see below).
  • Prosecutors bring charges in a matter where a Member of Congress is a subject or target for activities unrelated to their official role or campaign activities.
  • Prosecutors resolve charges against a congressional staffer in a matter in which a Member of Congress is not a subject or target.

Prosecutorial decisions requiring approval by PIN include:

  • Prosecutors issue a subpoena to a third party seeking records belonging to a Member of Congress, congressional office, or a congressional staffer (if related to their work).
  • Prosecutors issue a subpoena or seek court orders asking a third party for data belonging to a Member of Congress, congressional office, or congressional staffer (if related to their work).
  • Prosecutors seek to initiate surveillance of accounts or devices related to a Member of Congress, congressional office, or congressional staffer (if related to their work).
  • Prosecutors direct a source or cooperating witness to have contact with a Member of Congress or congressional staffer.
  • Prosecutors apply for a warrant seeking information or property belonging to a Member of Congress, congressional office, or congressional staffer, or covering any place where “[l]egislative [m]aterials [a]re [l]ikely [t]o [b]e [f]ound.”
  • Prosecutors apply for Title III surveillance where a Member of Congress or congressional staffer’s communications may be intercepted or monitor oral communications of a Member or staffer with consent.
  • Prosecutors bring charges in a matter where a Member of Congress is a subject or target for activities related to their official role or campaign activities.
  • Prosecutors resolve charges in an investigation where a Member of Congress is a subject or target.

Notably, when a Member of Congress or their staff is the victim of a crime, prosecutors are not required to involve PIN, although they are encouraged to confer with PIN regarding communications with the Member or staffer.  This additional guidance does not replace the general requirement that DOJ consult with their internal Office of Legislative Affairs (OLA) before contacting Congress, congressional committees, and congressional staffers.  

DOJ’s new guidance raises three crucial takeaways for future DOJ investigations that involve Members of Congress, their offices, or their staff members.  First, to the extent that it was uncertain before what role PIN would play in these matters, it is now clear that PIN will be heavily involved, in either a consultative or supervisory role.  Put another way, any person or entity involved in a criminal investigation relating to Congress should expect to deal with prosecutors from PIN.  Second, the guidance may signal that DOJ anticipates additional investigations reaching the Hill.  Finally, third parties (such as technology and communications companies) that store data belonging to Members and their staffers should be aware of these new procedural requirements and seek legal counsel when necessary to help navigate the process. 

Companies, organizations, and individuals that receive an inquiry related to an investigation involving Congress should ensure that they understand this new guidance as well as other applicable laws, regulations, and procedures.  If you have any questions concerning the material discussed in this client alert, please contact the members of our Election & Political Law, White Collar, and Congressional Investigations practices.  

The Federal Election Commission (FEC) officially dipped its toes into the ongoing national debate around artificial intelligence (AI) regulation, publishing a Federal Register notice seeking comment on a petition submitted by Public Citizen to initiate a rulemaking to clarify that the Federal Election Campaign Act (FECA) prohibits deceptive AI-generated campaign advertisements.  The Commission unanimously approved publication of the petition at its August 10 meeting.

The public is being asked to comment on whether the FEC should initiate a formal rulemaking to specify that using false AI-generated content, sometimes called “deepfakes,” in campaign ads would violate FECA’s prohibition on fraudulent misrepresentation of campaign authority (52 U.S.C. § 30124).  Currently, there are no AI-specific FEC regulations or guidance governing campaign ads or fundraising.

The decision to seek comment on the petition follows the FEC’s June deadlock on an earlier petition from Public Citizen. FEC Republicans, led by Commissioner Allen Dickerson, argued that the FEC has no authority to address AI-generated or “deepfake” campaign ads under FECA, and should not make rules without further guidance from Congress.  

However, the decision to seek public comment does not mean that the Commission will ultimately issue a proposed rulemaking, much less adopt new AI-specific rules.  The Commission remains divided on whether it has the statutory authority to address AI issues at all.  In voting to advance the petition in June, Democratic FEC Chair Dara Lindenbaum indicated she was “skeptical” that the FEC has existing authority to regulate AI, but supported publishing the petition in the hope of receiving helpful comments on the issue.  At the August 10 meeting, Commissioner Dickerson, despite voting to publish the petition, reiterated his view that this remains an issue for Congress and noted “serious First Amendment concerns lurking in the background of this effort.”

Partisan divisions in Congress also mean an expansion of the Commission’s authority to encompass AI-generated ads is unlikely to become law anytime soon.  In May, Rep. Yvette Clarke (D-NY) introduced the REAL Political Advertisements Act to give the FEC authority to regulate the use of AI in campaign ads.  Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO) have also introduced a Senate companion bill.  No Republican Members of Congress have yet cosponsored either bill, nor did any congressional Republicans join 27 of their House and Senate colleagues on a July letter to the FEC urging it to move forward with the rulemaking petition.

The FEC will accept public comments on whether to initiate a rulemaking process until October 16, 2023.