How AI is changing the American election campaign

A few weeks ago, tech billionaire Elon Musk shared an image of Kamala Harris on his platform X in which she wears a red uniform with gold epaulettes and a hammer and sickle on her cap. Musk commented that the Democratic presidential candidate wanted to rule as a "communist dictator" from day one, and asked: "Can you believe she wears this outfit?!"

The image is not real; it was created with the help of artificial intelligence (AI). Musk, however, did not make that clear. Instead, the Trump supporter gave the impression that Harris actually wears communist symbols.

Images like this are a phenomenon of the first American election campaign in which artificial intelligence is widely accessible, and in which computer-generated content carrying false information circulates while American politicians have yet to find an answer to how to protect against it.

“The number of deepfakes has increased significantly”

At the beginning of the year, experts warned of a flood of disinformation threatening Americans by November 5. Three weeks before the election, the worst has not come to pass. But there is no reason to sound a general all-clear.

“The number of deepfakes has increased significantly,” says Olaf Groth, an artificial intelligence expert at the University of California, Berkeley. Three times as many computer-generated but deceptively realistic images, videos and audio files posing a threat to the election have been registered so far this year. That makes it all the more important, Groth says, to label such content in the future, with watermarks or machine-readable markers.

Large tech companies such as Meta, X and Google signed an agreement in the spring to combat election interference through AI. The accord is largely symbolic, however, and is concerned not with removing deepfakes but with detecting and labeling such posts. As Axios reported on Thursday, Google will pause all election advertising after the November 5 election to curb false reports.

The danger of deepfakes

In January, Americans got a vivid demonstration of what a deepfake can look like. Shortly before the New Hampshire primary, a man sent an audio recording to thousands of voters in which President Joe Biden supposedly told them not to vote in the primary but to save their vote for November. The call was irrelevant to the outcome because Biden was not on the ballot anyway due to internal party disputes.

But Groth sees a fundamental danger: the average voter has neither the time nor the inclination to verify the authenticity of such a message. That applies to domestic and foreign influence alike, and it matters all the more in a close race.

The initiator of the robocall in New Hampshire, a Democratic political consultant, faces several years in prison for attempting to suppress votes and for impersonating a candidate. The FCC also fined him $6 million for using AI to place illegal calls spreading misinformation.

However, despite bipartisan initiatives in Congress, there are currently no comprehensive laws at the federal level that regulate the use of deepfakes and AI content in election campaigns.

After a closed-door meeting with experts last November, Democratic Senate Majority Leader Chuck Schumer warned of the “serious risks” to democracy and the integrity of elections, saying that time was running out and it was important to act quickly. Since then, however, several bills have failed against Republican resistance in the Senate; Republicans warn that such regulations could stifle technical innovation and freedom of expression. A few weeks ago, Schumer said only vaguely that the issue would be addressed “beyond the 2024 election.”

There is a lack of a unified response to AI interventions

A promised report with recommendations for action on the topic is still pending in the House of Representatives. A bill introduced by lawmakers from both parties in September would authorize the Federal Election Commission (FEC) to regulate the use of AI in elections, along with other misrepresentations. So far the agency, whose primary responsibility is campaign finance, has maintained that it is not its place to ban these methods in election campaigns; rather, it is waiting for instructions from Congress. Until then, the FEC will pursue all “fraudulent misrepresentations,” regardless of the medium.

Three weeks before the election, the United States has no unified response to AI interference in the campaign. Laws targeting AI-driven disinformation exist only in individual states. On its website, the consumer protection organization Public Citizen lists 18 of the fifty states as regulating the use of AI in election campaigns, primarily with regard to false information and election manipulation.

In California, for example, the use of AI in election advertising must be disclosed. In addition, distributing election communications with “misleading content,” meaning deepfakes, is prohibited 120 days before and sixty days after an election. Since May, it has been a criminal offense in Alabama to knowingly spread deepfakes during an election campaign if they are not labeled as such and are intended to influence the election. A first violation can be punished with up to one year in prison; repeat offenses are felonies carrying prison sentences of up to five years.

Trump in prison and other fakes

Controlling artificial intelligence in election campaigns also matters because Americans are already highly skeptical of the political system. Rumors fueled by Republicans that there will “again” be election fraud are sometimes accompanied by sophisticated AI content containing outright false information.

In addition to Harris as a communist, pictures of Trump in prison are circulating in this election campaign. The Republican presidential candidate himself shared a fake photo of Taylor Swift in which the pop star supposedly announced her support for him. And Republicans released a video showing an apocalyptic America under Harris. None of this content is real.

But even if such posts can quickly be exposed as the work of an AI, that does not necessarily make them less dangerous, observers warn. Trump, for example, used AI images to spread the lie that migrants were eating residents’ pets in a city in Ohio. The images showed him running from shirtless Black men with two kittens in his arms, or aboard a private jet with dozens of cats and ducks. These supposedly humorous depictions serve racist narratives. In reality, Trump’s repeated false claims led to death threats and school closures in Springfield, Ohio. According to her campaign team, Democrat Harris uses AI only for data analysis and IT issues.

Another increasingly common phenomenon of artificial intelligence is the “liar’s dividend”: liars and fraudsters benefit from the fact that it is becoming ever harder to distinguish real from fake content. “Liars have a double advantage,” says Olaf Groth. Not only can they use AI to spread false information, they can also dismiss genuine content as fake. “So public confusion increases even further.” This is one reason why, when it comes to the challenges posed by artificial intelligence, we are only at the very beginning.
