Finding Truth In The Age Of AI

In January, thousands of New Hampshire voters received a call from Kathy Sullivan, former chair of the New Hampshire Democratic Party, carrying a message from Joe Biden urging them to skip the primary and save their vote for November, when it would count. Except the call came from neither Sullivan nor President Biden. The recorded message was fake, generated by artificial intelligence.

Such a deceptive message is made possible by deepfake technology, which uses AI to synthesize and manipulate faces and voices into content that looks and sounds real. The tools are easily accessible, giving most computer users the power to create and post convincing fake media. While deepfakes can be entertaining, their creators often aim to mislead, blurring the line between real and fabricated content online. Compounding the problem, no current technology can automatically and reliably detect deepfakes, making it difficult for media platforms to crack down on fake posts.

Politicians are frequent targets of deepfakes and are concerned about the damage they could do to their reputations. The fake message attributed to Joe Biden was intended to confuse New Hampshire voters and interfere in the battleground state’s primary election. After the incident, Biden campaign manager Julie Chávez Rodriguez said that “spreading disinformation to suppress voting and deliberately undermine free and fair elections will not stand, and fighting back against any attempt to undermine our democracy will continue to be a top priority for this campaign.” Deepfakes have also been used to put controversial statements in politicians’ mouths. Last year, a deepfake portrayed a Democratic candidate for Chicago mayor as indifferent to police shootings and eager to re-fund the police. The fear is that as deepfakes make disinformation more frequent and harder to distinguish from legitimate sources, people will become increasingly misinformed about the news and the politicians they vote for.

This issue is not confined to the U.S. In the days before Slovakia’s September parliamentary election, an AI-generated audio recording circulated online in which a leading candidate appeared to discuss rigging the vote. The Slovakian example shows how deepfakes can be strategically timed to confuse a wide audience right before pivotal votes, leaving too little time to verify and debunk the content as AI-generated.

Celebrities are also particularly vulnerable to false portrayal in deepfakes. Taylor Swift was a victim earlier this year, when sexually explicit deepfake images of her spread across the internet. After Swift’s fans brought attention to the issue, media platforms moved quickly to take the images down; other women online fear similar violations but may not benefit from such a rapid response. Deepfakes of several other celebrities, including Tom Hanks and MrBeast, have appeared in scam advertisements depicting them endorsing cookware, dental plans, and iPhone giveaways. The ads feature convincing AI-generated audio and video of these figures promoting products with which they have no affiliation. While many platforms prohibit content that dishonestly misrepresents people for profit, they often identify such content only after it is posted, when the damage has already been done.

Lawmakers are concerned about the threats deepfakes pose, and several states are taking action to prevent future disinformation. A recently signed Wisconsin bill subjects politically affiliated groups to a $1,000 fine if they fail to disclose the use of AI in their media. The Florida governor is set to sign a bill that would make distributing deepfake content without proper disclaimers punishable by up to a year in prison. Arizona’s legislature has taken one of the most aggressive approaches, targeting deepfakes with two proposed bills: one makes it a criminal misdemeanor (or a felony for repeat offenses) to omit disclaimers on AI-generated material in the 90 days before an election, and the other allows victims of deepfake impersonation to sue the perpetrator. Certain states are also writing laws that specifically target pornographic AI material, criminalizing its distribution and giving victims the power to sue.

Technology companies are also looking to play an active role in addressing the deepfake problem. Efforts include improving AI-detection technology, alerting people when their likeness or content is used in AI-generated media, and watermarking all AI-generated media. These solutions would require a joint effort from social media platforms and AI companies to meaningfully curb deepfakes.

The upcoming U.S. presidential election will be the first in which AI is likely to have a meaningful impact. With comprehensive, uniform legislation to protect citizens and reputations from deepfakes still in the works, it falls to voters and media consumers, for now, to determine what is real and what is fake.

Marina Varriano is from Westchester, NY, studying Public Policy and Music. 
