This week, the Senate Committee on Health, Education, Labor, and Pensions (HELP) will vote on whether to pursue civil enforcement and criminal contempt of Congress charges against Steward Health Care CEO Dr. Ralph de la Torre.  If the vote succeeds, as it likely will, Dr. de la Torre will be only the second corporate executive subject to a subpoena enforcement action in the history of the Senate.

The bipartisan enforcement action, announced by Committee Chairman Sen. Bernie Sanders (I-Vt.) and Ranking Member Sen. Bill Cassidy, M.D. (R-La.), followed a hearing last week for which Dr. de la Torre was subpoenaed to testify but failed to appear.

The use of an empty chair at a hearing to symbolize noncompliance with congressional requests has increased in recent years, but it is nonetheless a rare event on Capitol Hill.  Dr. de la Torre, remarkably, has been represented by an empty chair twice in less than six months.  In March 2024, Sen. Edward Markey (D-Mass.), chair of the Senate HELP Subcommittee on Primary Health and Retirement Security, launched an inquiry into financial mismanagement at Steward Health Care.  Senator Markey twice requested that Dr. de la Torre testify at a Subcommittee hearing on April 3, 2024.  Dr. de la Torre declined to appear, earning his first empty chair of the year.

In July 2024, Committee Chairman Sanders and Ranking Member Cassidy led a bipartisan vote to authorize a full committee investigation into the bankruptcy of Steward Health Care and to subpoena the company’s CEO, Dr. de la Torre, to testify at the September 12, 2024, hearing.  The Committee approved both the investigation and the subpoena, marking the Committee’s first subpoena since 1981.  Dr. de la Torre’s counsel objected to the subpoena, but the Committee rejected the objection and proceeded with the hearing as scheduled.  According to Chairman Sanders and Ranking Member Cassidy, the Committee will pursue both civil enforcement and criminal contempt of Congress charges against Dr. de la Torre.

Since the 1800s, the Supreme Court has recognized Congress’s inherent authority to use the power of the courts to enforce congressional subpoenas.  The Senate additionally enjoys statutory power to seek civil enforcement of a subpoena by instructing Senate Legal Counsel to bring a civil suit in the District Court for the District of Columbia to require compliance.  To date, the Senate has sought to exercise its civil enforcement authority only six times and only once against a corporate executive, the CEO of an online forum accused of sex trafficking.

After the Committee authorizes the resolutions, the measures will advance to the full Senate for a vote.  If the Senate votes in favor of enforcement and the case proceeds to court, this enforcement action could carry significant implications for future corporate executives facing requests for testimony backed by the threat or suggestion of a subpoena.

The Securities and Exchange Commission (SEC) this week issued a cease-and-desist order that demonstrates the SEC pay-to-play rule’s expansiveness and the SEC’s readiness to enforce it to the letter, even when it is virtually impossible that a political contribution could have influenced a government entity’s investment decision.

In this alert, we summarize the SEC order and its implications.

With Election Day fast approaching, corporations face increasing pressure from both internal and external forces to make legal decisions about political activities. This can be a fraught area of law, with little-understood, highly technical regulatory issues that vary significantly across jurisdictions. Corporate counsel should be mindful of common—and sometimes complicated—political law traps. In this alert, we provide a high-level primer on key issues for corporations to monitor, and best practices for corporations to pursue, in an election year.

With the 2024 election cycle in full swing in the United States, on Wednesday, May 22, 2024, FCC Chairwoman Jessica Rosenworcel asked her fellow Commissioners to approve a Notice of Proposed Rulemaking (NPRM) seeking comment on a proposal to require a disclosure when political ads on radio and television contain AI-generated content.  This action reflects a growing concern among federal and state officials about the role that deceptive AI-generated content could play in elections. At the same time, a statement issued today by Republican Commissioner Brendan Carr makes clear that there is disagreement about the appropriateness of FCC intervention on this topic.

According to the FCC’s press release (the NPRM itself is not yet public), the proposal would require an on-air disclosure when a political ad—whether from a candidate or an issue advertiser—contains AI-generated content.  Broadcasters would also have to disclose the ad’s use of AI in their online “public inspection file,” which is the online repository for information about political ads and many other broadcaster activities.  The requirements would apply only to those entities currently subject to the FCC’s political advertising rules, meaning they would not encompass online political advertisements.  Among the issues on which the item would seek comment is how to define “AI-generated content” for this purpose.  How this term is defined may ultimately determine whether the final rules apply to all AI-generated content or only to deceptive AI-generated content.

Akin to the FCC’s sponsorship identification rules, the proposal would be focused on disclosure—it does not propose to prohibit the use of AI-generated content in political advertisements, an action that would present significant First Amendment concerns.  As described by Chairwoman Rosenworcel, the proposed rules would make it clear that “consumers have a right to know when AI tools are being used in the political ads they see.”

In response to the Chairwoman’s announcement, Commissioner Carr issued a pointed statement unambiguously opposing the proposed NPRM and characterizing it as “part and parcel of a broader effort to control political speech.”  Among other points, he expressed the view that FCC action around AI in political advertising would exceed the agency’s authority and result in confusion given that it would not (and could not) apply to unregulated streaming services and other online media.  The other Republican member of the FCC, Commissioner Nathan Simington, has not yet commented on the Chairwoman’s proposal.

Regardless of the outcome of the new proposal, heading into this year’s election and future cycles, television and radio broadcasters face myriad challenges when it comes to deceptive AI-generated content and political “deepfakes” (where a candidate’s or other individual’s voice and image are manipulated to suggest they have said or done something they have not).  At the most practical level, as AI technology advances, it is becoming ever more challenging to spot AI-manipulated or AI-generated content.  Political advertisers making use of AI-generated deepfakes may not wish to disclose that fact, making the challenge of identifying such content that much harder.

Moreover, as recently discussed by our colleagues, while a number of bills have been introduced in Congress to regulate deepfakes, none have been enacted into law.  In the vacuum of federal action, states have taken the lead on this issue, with at least 39 states having enacted, or actively considering, laws that would regulate the use of deepfakes in political advertising.  This patchwork of state laws can mean that requirements vary from state to state, only increasing the regulatory burden for broadcasters.  That these state requirements could be in tension with obligations that may arise under federal law—such as a requirement that broadcasters not edit (or “censor”) candidate ads—further complicates the situation.

In this fast-evolving political and regulatory landscape, it is critical that broadcasters and advertisers remain mindful of potential risks and obligations related to AI-generated content in political advertisements.  While the NPRM itself is not yet public and will not lead to final rules until later this year at the earliest, it may offer useful guidance for broadcasters navigating the challenges posed by political deepfakes in the meantime. And the debate around it also suggests that a bipartisan solution for addressing the use of AI-generated content in political advertisements is unlikely to emerge in the near future.

Updated May 23, 2024 to include a subsequently released statement from Commissioner Brendan Carr.

The Democratic and Republican National Party Conventions are a premier forum for businesses and trade groups to elevate their priorities to candidates, elected officials, and staff. However, thanks to a complex regulatory regime, participation in convention events can invite scrutiny and legal trouble. The Republican Convention is scheduled to take place in Milwaukee from July 15 to 18, and the Democratic Convention will be held in Chicago from August 19 to 22. As convention planning kicks into full swing, this advisory summarizes key contribution, gift, and ethics rules to consider when sponsoring convention events and interacting with both federal and state officials, employees, and political organizations at or around the conventions.

Georgia Governor Brian Kemp has vetoed Georgia Senate Bill 368, which would have required certain “agents of foreign principals” to register and report certain lobbying and political activities in Georgia.  Of the recent wave of proposed state-level baby FARA bills, designed to mirror the federal Foreign Agents Registration Act, this is the first to reach a governor’s desk, and also the first to be vetoed.  In the Governor’s brief veto message, he wrote that “Senate Bill 368 would prohibit foreign nationals from making political contributions, which is already prohibited by federal law, and impose additional state-level registration requirements on agents of foreign principals, some of which were unintended by the bill’s sponsor.”  He indicated that the bill’s own sponsor had requested that he veto it.

The Georgia bill, like other proposed state-level baby FARA laws, could have had broad consequences (likely broader than intended) not just for foreign companies but also for U.S. subsidiaries of foreign companies, nonprofits, academic institutions, religious institutions, and others. Unlike the federal FARA statute, the bill did not include the major exemptions intended to carve out at least some of these entities from the obligation to register. Covington is continuing to track the growing wave of proposed baby FARA bills, including whether bills in other states will meet the same fate as the Georgia bill.

Over the past several weeks, legislatures in Arizona, California, Georgia, Oklahoma, and Tennessee have introduced bills that mirror the federal Foreign Agents Registration Act (“FARA”). There has been a trend in the states to enact so-called “baby FARA” laws that apply to foreign-influenced political activity in the states, although until now those laws have generally focused narrowly on the regulation of political contributions.

In this client alert, we summarize these recently introduced bills.

A New Orleans magician recently made headlines for using artificial intelligence (AI) to emulate President Biden’s voice without his consent in a misleading robocall to New Hampshire voters. This was not a magic trick, but rather a demonstration of the risks AI-generated “deepfakes” pose to election integrity.  As rapidly evolving AI capabilities collide with the ongoing 2024 elections, federal and state policymakers increasingly are taking steps to protect the public from the threat of deceptive AI-generated political content.

Media generated by AI to imitate an individual’s voice or likeness present significant challenges for regulators.  As deepfakes become increasingly indistinguishable from authentic content, members of Congress, federal regulatory agencies, and third-party stakeholders all have called for action to mitigate the threats deepfakes can pose to elections.

Several federal regulators have taken steps to explore the regulation of AI-generated content within their existing jurisdiction.  On February 8, the Federal Communications Commission issued a declaratory ruling confirming that the Telephone Consumer Protection Act restricts the use of “current AI technologies that generate human voices,” an interpretation endorsed by 26 state attorneys general. 

Last year, the Federal Election Commission (FEC) took a step toward clarifying whether AI-generated deepfakes might violate the Federal Election Campaign Act’s prohibition on deceptive campaign practices by requesting comment on whether to initiate a rulemaking on the subject.  After previously deadlocking on a petition from Public Citizen to open such a rulemaking, the FEC voted unanimously in August 2023 to accept public comment on whether to initiate rulemaking procedures, though the agency has not yet taken further action.

Members of Congress also have introduced several bills to regulate deepfakes, though these efforts have moved slowly in committee.  Many lawmakers remain determined to make progress on the issue, as senators from both parties expressed at an April Judiciary Subcommittee hearing. In March, Senators Amy Klobuchar (D-MN) and Lisa Murkowski (R-AK) introduced the bipartisan AI Transparency in Elections Act of 2024, which would require clear and conspicuous disclosures in certain political communications created or materially altered by artificial intelligence.  Representatives Anna Eshoo (D-CA) and Neal Dunn (R-FL)—members of the House Bipartisan Task Force on Artificial Intelligence—introduced a more generally applicable deepfake disclosure bill that would also address the potential impact of the technology on our elections.

Several states, including Minnesota, Texas, and California, already have enacted prohibitions or disclosure requirements addressing certain forms of manipulated media related to elections.  These laws generally prohibit the knowing dissemination of deepfakes within one to three months of an election, and each requires intent to influence the election or to harm the depicted candidate’s reputation.

Even with AI risks top-of-mind for policymakers at all levels, the prospects of major reforms in time for this cycle remain uncertain, given just seven months until the 2024 general election, a full agenda in Congress, and state legislative sessions coming to a close.

Federal circuit courts are split on a core question of corruption law: whether state and local officials, and agents of organizations that contract with or receive benefits from the federal government, may lawfully accept gratuities.

It is generally a federal crime for state and local officials to act in their official capacities in exchange for things of value, provided they solicit or agree to accept such benefits “corruptly.”  This is quid pro quo bribery, prohibited under 18 U.S.C. § 666.  Federal courts lack consensus, however, on whether § 666 also criminalizes scenarios wherein an official or agent of a federal program recipient acts without expectation of a thing of value, but later receives a “gratuity” to reward his or her conduct.

The Supreme Court will review the issue this term in Snyder v. United States.  The case concerns an Indiana mayor who was convicted under § 666 for accepting a $13,000 payment from a truck company that had recently won a sizeable contract with the city.

Although the merits of this case involve money given to an elected official, the statute also applies to agents of organizations that receive, in any one-year period, more than $10,000 in federal benefits, whether in the form of a contract award, grant, loan, appropriation, or other structure.  This includes a sizeable number of companies, institutions of higher education, and nonprofit organizations.  As a result, if the Supreme Court holds that § 666 criminalizes gratuities and broadly interprets the statutory standard for “corruptly” accepting things of value, covered entities may have limited capacity to receive gifts, even those unrelated to the principal’s use of federal funds.

The Supreme Court’s decision could also have significant implications for individuals, companies, and organizations that offer items or services of value to the covered officials and entities, since any gift can be scrutinized as a possible inducement or reward for exercising official powers.

This area of law spans civil and criminal provisions at the federal and state levels, and parties engaging on related matters should consider consulting with counsel.  Covington will continue to monitor developments in this space, and the firm is well positioned to assist companies and individuals navigating this area of the law.

On December 1, 2023, three top U.S. government officials responsible for enforcing the Foreign Agents Registration Act (“FARA”) gave remarks at the American Conference Institute’s 5th National Forum on FARA. In their remarks, each of the speakers – Deputy Assistant Attorney General Eun Young Choi, Acting Chief of the Counterintelligence and Export Control Section Jennifer Gellie, and FARA Unit Chief Evan Turgeon – reiterated and reinforced the Department’s commitment to enforcing the statute aggressively. The officials also previewed potentially substantial regulatory changes that the Department will propose in its forthcoming notice of proposed rulemaking (“NPRM”) and highlighted the Department’s enforcement and legislative priorities.

In this alert, we summarize and examine these developments, each of which could have significant implications for international companies, sovereign wealth funds, and others, along with the political, legal, and public relations consultants who advise them.