
New Executive Order on Artificial Intelligence Falls Short on Science and Democracy

While the executive order marks an important first step, the government needs to do more to address the threat AI poses to governmental science and our democracy.

This week, the White House released a robust executive order on the use of artificial intelligence (AI). The sweeping order includes plans to create guidance to label and watermark AI-generated content, and to require safety testing of new AI technologies under the Defense Production Act. It also asks agencies to set standards to make sure AI technologies do not exacerbate discriminatory practices, such as in housing and mortgage applications.

The executive order marks a much-needed first step toward regulating the development and use of AI, and the impacts it will have on our everyday lives. But it falls short of providing rules and processes for the use of AI in science and democracy, both critical and pressing issues, especially considering that generative AI technologies will likely be used to spread disinformation in the upcoming 2024 election cycle. Workers and families, as well as the practice of science and democracy, stand to be affected by AI. That widespread reach makes it important for the government to develop and adopt strong policies on the issue.

Here’s my initial sense of where significant work remains to be done.

When is it appropriate to use AI in decisionmaking?

Federal agencies have been using AI technology in government work for years. An executive order issued during the Trump administration (EO 13960) required federal agencies to report all known non-sensitive and non-classified uses of AI. While the resulting inventory does exist, compliance with EO 13960 has been mixed, and federal agencies have varied in the methodologies they use to report AI uses, leaving some cases unreported. For example, the Small Business Administration has not reported using AI to vet loan applications, responding that it was focused only on internal, not external, uses of AI. This narrow interpretation is disappointing given that a significant goal of EO 13960 was to establish principles for the use of AI throughout federal agency decisionmaking.

Certain agencies, such as the Department of Defense and the Department of Homeland Security, have developed principles and guidelines for agency use of AI. But many agencies have not adopted such policies. This lack of regulatory structure means that federal agencies can use AI in policy and regulatory development in ways that are currently unchecked, including some that might exacerbate discrimination or bias and diminish public trust. Discrimination and bias are already present in AI tools. For example, AI image generators such as Stable Diffusion and DALL-E have been shown to return images of young people with light complexions in response to the prompt "attractive people."

The new executive order released by the Biden administration does not explicitly call for agencies to create general guidelines or principles for the use of AI in their policy and regulatory development. More specifically, it does not establish guidelines or principles for the use of AI-derived science in decisionmaking. There is a need for government-wide coordination on the use of AI in decisionmaking, with common guidelines and principles that federal agencies can build on through policies specific to their missions and work. Federal agencies need to address some basic questions: When is it appropriate to use AI-generated data and information in a decisionmaking process? Will agencies make transparent that a decision has been informed by AI-generated information? How will AI-generated information be weighed against information provided by scientists and impacted communities? Federal agencies also could begin studying the benefits and risks of using AI in their work. The Food and Drug Administration (FDA), for example, published a discussion paper identifying the benefits and risks of using AI in the development of drug and biological products. Hopefully this paper will inform FDA's responsible use of AI in the future.

AI-generated disinformation threatens our democracy

Disinformation isn’t a new problem, and it certainly isn’t a new threat to democracy. But the Internet and social media have significantly increased the rate and scale at which false information can spread and undermine public trust. The 2024 presidential election will be significant for the future of democracy, particularly in light of the disinformation promulgated about the validity of the results of the last presidential election. Finding solutions is urgent because, left unchecked, the growing use of AI in upcoming elections will likely have increasingly dire consequences for public trust in democratic processes.

New generative AI technologies make it incredibly easy to create “deep fakes”: fabricated images or audio that convincingly mimic a real person. Using just three seconds of your voice, generative AI technologies can now create an authentic-sounding audio message that could be sent to one of your family members saying you’re in distress or in need of money. The same technology could be used to create a voice memo from the US President telling you that your voting location has changed on election day. A deep fake image of President Trump hugging Dr. Anthony Fauci has already been used by the Ron DeSantis campaign.

The Biden administration’s executive order tackles the use of AI to deceive and defraud people. But, unfortunately, it leaves the issues of democracy and elections off the table. We all should be concerned and push for our government to do more to protect our democracy from AI risks; otherwise, we may find our voices greatly diminished in elections and other critical decisions that affect our health, safety, and economy. Furthermore, the executive order’s proposed solution is to provide guidance on watermarking AI-generated content. Labeling AI-generated content is a novel approach, but it’s unlikely to keep disinformation from running rampant. It’s possible to manipulate a watermark, and not all companies will collaborate to label AI-generated content. Ultimately, even if disinformation is accompanied by a label, the science shows that people often ignore facts, favoring information that aligns closely with their own personal beliefs. While labeling could be a useful strategy to combat AI-generated disinformation, we hope that the White House works to include other solutions as well, especially as they pertain to elections. Specifically, we hope that the Federal Election Commission and the Election Assistance Commission will be required to study the risks of AI to elections and develop effective solutions to combat AI-generated disinformation about them.
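To see why watermarks are fragile, consider a toy sketch of one approach researchers have prototyped: a statistical “green list” watermark for text, in which a generator subtly favors a pseudorandom subset of words and a detector checks whether that subset is overrepresented. This sketch is purely illustrative; the function names and the roughly 50/50 vocabulary split are our assumptions, not any company’s actual scheme or anything mandated by the executive order.

    # Illustrative sketch only: a toy "green list" text watermark detector,
    # loosely inspired by published research prototypes. All names and
    # parameters here are hypothetical assumptions.
    import hashlib

    def is_green(prev_word: str, word: str) -> bool:
        # Pseudorandomly assign each word to a "green list" keyed on the
        # preceding word, so roughly half of all word pairs count as green.
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> float:
        # Detector: a watermarked generator that favors green words should
        # score well above 0.5; ordinary or paraphrased text scores near 0.5.
        words = text.lower().split()
        if len(words) < 2:
            return 0.0
        hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
        return hits / (len(words) - 1)

Because the detector hashes adjacent word pairs, swapping even a few words for synonyms changes those pairs and pushes the score back toward chance. That is the practical sense in which a determined actor can strip the label while keeping the message intact.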

AI can exacerbate discrimination and bias

UCS applauds the administration for thinking through how AI technology might exacerbate discrimination and bias when people apply for a loan or a job. Large language models form predictions from patterns in huge troves of existing data, which is why social media algorithms have become so effective at targeting content to specific users, for example. Existing datasets in the United States are the products of a long history of systemic racism and bias, particularly when it comes to Black, Brown, and Indigenous people. The executive order charges the National Institute of Standards and Technology (NIST) with setting standards for “red team testing” to ensure the safety of new AI tools, including safeguarding against discriminatory outputs. Will these standards be robust enough to protect underserved communities from some of the worst potential impacts of AI?

What is lacking from the administration’s executive order on AI is any guidance or direction to help ensure that those who will be most impacted by AI technologies are prioritized and heard, both in the use of AI and in government decisions about AI. Efforts by Congress have already been criticized for prioritizing tech giants over the real people who will bear the brunt of potential impacts. It is important for the White House to continue working to improve public participation as AI policy development progresses, particularly for underserved communities.

Preventing AI from undermining scientific integrity

AI technologies could be used by bad actors to manipulate or politicize the scientific information used in government decisions. This would, of course, result in weak policies that are not fully protective of public health and safety. We were disappointed not to see any mention of the linkages between AI and scientific integrity, and of the need to make sure that evidence-based decisions are not politicized. Remember when former President Trump doctored the path of a hurricane with a Sharpie marker? While it was clear in that case that a marker was used, would the same be true for an AI-generated image? What about AI-generated data that informs a critical decision about air pollution? It’s clear that AI will pose new challenges for maintaining scientific integrity across the federal government, yet the executive order recommends no actions to help federal agencies grapple with this daunting new challenge. We hope that ongoing or future efforts will task the National Science and Technology Council’s Subcommittee on Scientific Integrity, along with other departments and agencies, to tackle this issue.

More government action is needed

Around the world, countries are scrambling to understand what AI means for the future of our society. AI technology could be hugely beneficial for the future of medicine, for helping to resolve the climate crisis, or for predicting extreme weather events. But the risks of AI technology are very real, too, and could make many people’s lives worse. While signing the executive order on AI, President Biden spoke of his concern for the mental health of children and teens across the nation who are targeted by AI designed to keep them addicted to social media. Potential short- and long-term impacts on people’s lives such as these are very real. The new executive order marks a first step for the US government in developing an approach to regulating how AI will affect us. But there’s much more work to be done. UCS is watching closely, focusing foremost on potential harms to the public, the conduct of science, and our democracy. While we don’t have all the answers, our focus will remain squarely on the public interest. And we aren’t planning to back down.

Originally published in Union of Concerned Scientists.
