Union of Concerned Scientists

The Warning Signs Are Flashing: AI Is a Threat to Democracy in 2024

As we head toward the November 2024 election, the rise of generative artificial intelligence has real potential to disrupt the vote through massive disinformation and voter disenfranchisement. We see the danger in front of us, yet we have no meaningful laws to protect democracy. Instead, we seem to be relying on the companies that make AI to police themselves. Raise your hand if you think that approach worked well with social media during the last election or the COVID-19 pandemic.

AI will move disinformation at the speed of light

Disinformation is a powerful weapon because all of us are vulnerable to believing, sharing, and perpetuating it, especially when it reinforces a strongly held belief, value, or assumption. Bad actors often lead coordinated disinformation campaigns simply to sow doubt and mistrust in our government and elections. As Nobel Laureate Maria Ressa notes: “Without facts, you can’t have truth. Without truth, you can’t have trust. Without trust, we have no shared reality, no democracy, and it becomes impossible to deal with our world’s existential problems: climate, coronavirus, the battle for truth.”

As generative artificial intelligence tools become more accessible, experts are terrified of the potential impact of a tsunami of deepfakes and misleading messages. Voters are also worried: nearly two-thirds (63%) agree that regulating AI should be a top priority for lawmakers in 2024. The White House issued an executive order in 2023—a laudable first step—but it lacks enforcement. The Federal Election Commission and Congress are both pursuing policies to limit the use of deepfakes in campaign ads, but there are currently no federal laws or regulations as the 2024 primaries swing into high gear. Some states, such as Michigan, have begun to pass policies against distributing deceptive material. Deepfakes might feature presidential candidates, or they might take the form of highly localized disinformation that uses impersonations of election officials and spoofed election websites to keep voters away from the polls. In fact, New Hampshire’s attorney general’s office is investigating after likely AI-generated robocalls impersonating President Biden were sent in advance of the state’s primary to deter voters from going to the polls.

New AI tools allow cheap, rapid production and distribution of low-level disinformation at a massive scale, such as AI-powered accounts on social media that join your local community interest groups and appear to be normal users at first, but then learn to amplify political propaganda. Masquerading as a real person, an AI account might tell you to vote at a fake address or falsely post about long lines at your local precinct. When replicated by the millions, these so-called AI ‘persona bots’ can have real influence on elections. The use of AI by hostile intelligence agencies to spread misinformation to influence voters and undermine confidence in election systems presents such a serious threat that the FBI and NSA have teamed up to confront it.

A few big companies with poor track records control AI

As demonstrated in past elections, social media and tech companies have a poor record of policing themselves regarding the use of their platforms for political disinformation. The results have included erosion of trust in elections and a wave of new anti-voter laws. History has also shown that disinformation on social media has disproportionately targeted voters of color, as it did during the 2020 presidential election.

As AI grows, just a few companies dominate the technology, further concentrating power in the hands of a few unaccountable industry giants. Based on past corporate behavior and an understanding of the risks AI poses to people, it’s not surprising that a new poll by Data for Progress found that a “whopping 85% of voters agree that companies developing AI tools should be required to demonstrate their products are free of harm before they are made available to the public.”

The tech industry has acknowledged the threat of AI-generated disinformation to elections. OpenAI, the company behind the popular ChatGPT platform, recently announced steps it is taking to provide transparency about AI-generated content and to combat the use of deepfakes and chatbots that impersonate candidates. But relying on companies to mitigate the harms of AI to the 2024 elections is not enough. Policymakers need to move swiftly to address these real-world threats with new laws and regulations.

Protections that put people before technology

Government should regulate the big technology companies monopolizing AI. In past elections, big social media companies were not held accountable, and the consequences were severe. Facebook’s onetime motto was ‘move fast and break things’. They did, and they nearly broke democracy, from spreading Russian disinformation during the 2016 election to helping enable the January 6 insurrection. To prevent similar outcomes from AI tech giants, policymakers should, at a minimum, strengthen and enforce antitrust laws, pass privacy laws that keep pace with fast-moving technology, and require independent oversight to avoid a single point of failure.

Laws at all levels of government should require that the public be informed whenever they see or hear AI-generated content. Deepfakes and synthetic materials should be clearly labeled as AI-generated representations that are not real. Laws should require disclosure on deepfakes of any person, not just politicians or candidates, to ensure that others involved in electoral systems, such as election administrators, are protected. Laws should explicitly prevent and mitigate any attempts to purposely mislead voters about where, when, and how to vote or to sow doubt about the legitimacy of the electoral process.

While the President’s executive order took some initial steps toward safeguarding against discriminatory outputs, we need regulation that protects Black, Brown, and Indigenous people, who are impacted by the racially biased data that fuels AI. Regulation should leverage under-enforced civil rights laws to protect those most at risk. We need to ensure that you and I have a seat at the policymaking table. Sadly, public engagement has not been a regular feature of how the government has dealt with emerging technology in recent years, and the consequences have often been borne by the most vulnerable. Policy and regulation must reflect underrepresented voices and diverse perspectives, ensuring those groups have decision-making power over how AI impacts their lives and communities. No one would ever accuse me of being a Luddite. But when it comes to democracy, I choose people over robots.
