
Movement Toward Legislation Protecting Against Deepfakes

Introduction

Deepfakes are media, primarily videos, that have been manufactured or doctored using advances in artificial intelligence. They can be difficult or impossible to distinguish from authentic recordings, with clear and serious implications for our trust in media as the technology continues to progress. Deepfakes have risen sharply in popularity since a viral video posted in August showed Bill Hader's face morphing into Tom Cruise's as he did impressions on a talk show. They have also begun to demonstrate their destructive power. For example, a deepfake video surfaced in May in which Speaker Nancy Pelosi appeared to be drunk or impaired. Hany Farid, professor of computer science at the University of California, Berkeley, has said this is just the tip of the iceberg of media manipulation. While convincingly altering video has long been a difficult but possible task for humans, deepfake videos can now be made by letting A.I. do the hard work, completely fabricating what we assume to be a recording of reality.

Recent Legislative Action in California Against Deepfakes

In early October, California Governor Gavin Newsom signed into law two pieces of legislation targeting deepfakes. The first bill, AB 602, gives victims of pornographic deepfakes the right to sue the creators of the media. Importantly, the vast majority of deepfakes on the internet are pornographic, often intended to harass, intimidate, or sexualize women, including celebrities. In fact, a startup called Deeptrace found that 96 percent of deepfakes in open circulation on the web were pornographic. To address this concern, Section 1708.86, added by AB 602, specifically covers media of a performance actually given by the individual that has been altered to depict sexually explicit material.

The second bill, AB 730, was introduced ahead of the 2020 election to address political concerns over deepfakes. California Assemblymember Marc Berman said in a statement after the signing of the bill, “In the context of elections, the ability to attribute speech or conduct to a candidate that is false – that never happened – makes deepfake technology a powerful and dangerous new tool in the arsenal of those who want to wage misinformation campaigns to confuse voters.” AB 730 makes it illegal to manufacture or distribute “materially deceptive audio or visual media” of politicians within 60 days of an election. According to Section 20010, the bill is designed to protect against deepfakes spread with “the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” To avoid breaching the First Amendment, the bill does not apply to satire or parody, to news media, or to media published with a disclaimer acknowledging that it is not an accurate representation.

What does this mean for candidates in California? As is also outlined in Section 20010, the bill authorizes any “candidate for elective office whose voice or likeness appears in audio or visual media distributed in violation of [AB 730] to seek injunctive or other equitable relief,” and to “bring an action for general or special damages against the person, committee, or other entity that distributed the media.”

Effectiveness of California Legislation

While the response to AB 602 has been largely positive, critics of AB 730 have been quick to voice concerns about its effectiveness with regard to future elections. In fact, the American Civil Liberties Union of California urged Governor Newsom to veto the bill. In a letter to the governor, the organization’s legislative director Kevin Baker wrote, “Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos; it will only result in voter confusion, malicious litigation, and repression of free speech.”

Additionally, the bill does little to address the nature of most deepfakes, which are often posted online anonymously and spread extremely quickly. The bill may, in theory, provide the authority to hold creators of political deepfakes accountable, but doing so would require a lengthy process of tracking down creators and pursuing legal action. The bill does not target the creation of misinformation in the first place, and there are no mechanisms in place to quickly catch deepfakes or to correctly inform the public once the damage is done. For example, the video depicting Nancy Pelosi quickly gathered millions of views before being debunked, appearing across Facebook, YouTube, Twitter, and numerous news outlets and message boards. The deepfake was even shared on Twitter by Rudy Giuliani, President Trump’s personal attorney.

Beginning of a National Movement Against Deepfakes

California’s legislation follows a national trend as legislators begin to recognize the need to protect against deepfakes. Representative Yvette Clarke of New York introduced the DEEPFAKES Accountability Act in the House on June 12 as one of the first major pieces of legislation drafted to address the damaging misinformation spread by deepfakes. H.R. 3230, the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act, would require anyone who creates a piece of manipulated media with intent to distribute it to clearly disclose that the media is altered, through embedded digital watermarks and text statements. Importantly, the bill would establish criminal penalties for the creation of deepfakes “with the intent to humiliate or otherwise harass the person falsely exhibited…[or] with the intent to cause violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding, including an election.”

Though the DEEPFAKES Accountability Act has not seen further action since its referral to the House Subcommittee on Crime, Terrorism, and Homeland Security, another deepfake bill introduced in the Senate in July passed that chamber on October 24. Introduced by Senator Rob Portman of Ohio, the bipartisan Deepfake Report Act of 2019 now awaits consideration in the House. S. 2065 would require the Department of Homeland Security to produce an annual report on the use of digital content forgery, including an assessment of the impact of deepfakes on national security and on individuals. Significantly, the bill would assess altered media produced by both domestic entities and foreign governments to better understand their roles in spreading misinformation and political manipulation.

Senator Gary Peters, a sponsor of the bill and the top Democrat on the Senate Homeland Security and Governmental Affairs Committee, stressed the importance of legislation addressing deepfakes: “with each passing day, deepfakes become easier to create and distribute, opening the door for bad actors to sow discord and mislead thousands with just the click of a button…we must ensure Americans are aware of the risks this new technology poses, and are empowered to recognize misinformation.” Reflecting California’s action on a national scale, legislators are moving to maintain the integrity of our democracy against the rising threat of deepfakes.

 

Noah Charlick is a sophomore from Canton, Ohio, studying Public Policy and Global Health.

 
