AI, Free Speech, and the Future of Democracy
Navigating Truth in the Age of Synthetic Media
It’s a tale as old as democracy, says associate professor of journalism Jeremy Littau: political candidates lying about the lives, accomplishments or plans of opponents to discredit them or to make themselves look like the better choice. It may seem surprising, but courts have long ruled that this type of false political speech is protected under the Constitution’s First Amendment right to free speech.
Now, thanks to easily accessible artificial intelligence, or AI, political candidates and their supporters can take lying much further than speaking or writing mistruths. They can try to sway public opinion by creating highly believable fake images, video and audio that purport to show people and events that never existed.
That was the case earlier this year when supporters of Donald Trump generated fake photos of their candidate with his arms around a group of Black women at a party in an effort to show that Black voters support him. The images spread quickly and widely on social media.
And while lying with AI might seem even more egregious and unacceptable, it has strong First Amendment protection too, according to Littau and his research partner Daxton R. “Chip” Stewart, a journalism professor at Texas Christian University. “The courts say there’s no difference,” says Littau. “You have a right to lie with AI.”
In their paper “The Right to Lie with AI? First Amendment challenges for state efforts to curb false political speech using deepfakes and synthetic media,” Littau and Stewart provide a primer on AI and deepfake technology and explore the viability of laws created to limit or ban AI-generated false political speech. The paper was presented in August at the 2024 annual conference of AEJMC, the Association for Education in Journalism and Mass Communication, where it received the Top Faculty Paper Award in the Law & Policy Division.
AI's Role in Modern Political Speech
Littau has long had an interest in digital media and emerging technology. His work sits at the intersection of social media, community, social action and political engagement. He was one of the first in the field of journalism to write about the impact of social media such as Twitter when it gained a foothold around 2009, the year Littau began teaching at Lehigh. He now publishes The Unraveling, a well-regarded Substack newsletter on the subject of AI.
Littau says AI was the natural next step in his research journey.
“We are figuring out how to talk to our discipline about what AI is and how it functions and what it is used for,” he says.
The research is timely as “2024 marks the first major U.S. election in the era of widespread, accessible artificial intelligence,” say the researchers. As a result, there is much worry about “malicious use of these tools to influence voters.”
Littau says AI-generated political content has been widespread in the past year, not so much from the campaigns themselves, but from activists, content creators and foreign governments such as China and Russia. He expects it to ramp up from all corners as races heat up in the fall.
The researchers begin their paper by defining AI and deepfakes, two terms that are often used interchangeably but do not mean the same thing. Littau says it was important to define the terms.
“One of the best comments I received is that this descriptive section will be used in a lot of papers going forward. When people want to do future research, they have a primer on how to describe this kind of technology.”
“AI,” the researchers explain, “attempts to mimic the processes found in human intelligence by calculating the mathematical probabilities for a desired response based on human data and choices. AI systems have training data to learn about language structures, concepts and ideas and they use that data to help them predict correct decisions. A programmer sets the goal but the AI determines the way to accomplish the task based on how it has been trained.”
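The paper itself contains no code, but the prediction process the researchers describe can be illustrated with a toy sketch in Python: a model that counts which word follows which in a small “training” text, then predicts the most probable next word. The corpus and word counts below are invented purely for illustration; real systems do the same thing at vastly larger scale.

```python
from collections import Counter, defaultdict

# Toy illustration of probability-based prediction: count which word
# follows which in the training data, then predict the most likely
# next word. The tiny corpus here is invented for illustration.
training_text = (
    "the candidate gave a speech . the candidate shook hands . "
    "the crowd cheered the candidate ."
)

# Tally how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most probable next word seen after `word` in training."""
    counts = follow_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "candidate" -- its most frequent follower
```

The model never “knows” anything; it only reproduces the statistical patterns of its training data, which is the point the researchers’ definition makes.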
Littau and Stewart describe two types of AI outputs driving the conversation about the technology's use in mass media. Outputs that are “processes” occur when AI operates in the background to manage a task without specific human oversight. Search results are the primary form of AI process outputs. Outputs that are “products,” often referred to as generative AI, create something original such as images or text.
Deepfakes, on the other hand, are more comparable to photoshopping. No original material is generated; instead, elements of existing images are swapped. For example, the researchers say, someone could make it look like Joe Biden is a bank robber by taking an image of a person robbing a bank and substituting Joe Biden’s head for the robber’s.
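At its crudest, that substitution is simple compositing, as this hedged Python sketch suggests. The file names and coordinates are placeholders, and real deepfakes use neural networks to match pose, lighting and expression rather than a blunt paste.

```python
from PIL import Image

# Crude illustration of the head-swap idea: paste a region from one
# photo onto another. File names and coordinates are hypothetical
# placeholders; actual deepfake tools blend faces far more seamlessly.
scene = Image.open("bank_robbery_photo.jpg")  # hypothetical source photo
face = Image.open("substitute_face.png")      # hypothetical face image

face = face.resize((80, 80))   # scale the face to fit the target figure
scene.paste(face, (210, 40))   # overwrite the original head region
scene.save("composite.jpg")
```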
The rest of the paper looks at court decisions regarding political speech. Littau says political speech is the most sacred form of speech protected by the First Amendment. Protection for false political speech is stronger than protection related to false speech against private individuals, who have remedies such as suing for slander.
Counterspeech: An Imperfect Solution
For more than a century, federal and state governments have passed and attempted to enforce laws targeting false political speech. The issue reached a definitive conclusion in 2012, when the U.S. Supreme Court struck down the federal Stolen Valor Act, a law that criminalized lying about having earned military honors. In that case, United States v. Alvarez, a California man was charged after falsely claiming he had been awarded the Congressional Medal of Honor. The court said that because the falsehood did not cause any “legally cognizable harm,” such as interfering with the administration of justice or causing financial or property harm, it was protected by the First Amendment.
The Alvarez case has become the standard for challenges to false political speech, and that has extended to AI-generated lies.
“Alvarez looms large, cementing the idea that you have the right to lie in political discourse. For the courts to go against the standards they have been using, they are going to have to define what makes AI different beyond mere technicalities,” says Littau.
But Alvarez has not stopped states from passing or proposing new laws to regulate false political speech using AI. The researchers say more than 80 bills have been introduced in state legislatures since 2019, requiring things such as mandatory labeling of AI content and limits on how close to an election deceptive AI material can be distributed. Some laws include outright bans. The laws impose civil liability, criminal penalties and injunctive relief. But the researchers conclude that these laws, although mostly untested, are unlikely to pass First Amendment muster.
The courts have decided that the chilling effects on political speech caused by regulations or bans are worse than the risks to free and fair elections caused by false speech. There is concern that regulation could set the stage for “politicized enforcement.”
“The kneejerk reaction is we’re surprised you can lie about it. And we should regulate it,” says Littau. “Then you have to stop and think what kind of world we would end up with.”
Another concern about regulation is the practicality of enforcement, given how simple it is to create falsehoods with AI and how easily they spread.
“How are you going to prosecute 30 million people lying? What about the people who share it?” Littau says.
So, is there a remedy for the aggrieved when AI lies are used to influence elections?
Courts have ruled that “it is up to citizens and not the government to be the monitor of falseness in the political arena.” In other words, people need to call out the lies. “The debunking of such deceptions by citizens, campaigns and news media provides the counterspeech that U.S. courts have identified time and again as the most effective and least speech-restrictive remedy for false political speech,” say the authors.
But Littau says there are many practical problems with the argument that counterspeech is the answer.
One problem is that while national media may have the resources and reach to call out AI fakes in national elections, it’s a different story in small towns where local journalism has been decimated. When Trump recently claimed that photos of a crowd greeting Kamala Harris at a Michigan airport were AI-generated fakes, there was plenty of evidence to prove otherwise and the claim was quickly debunked. But in a small community, there may be no one to push back.
Another problem is the lack of a single trusted news source. The level of trust in long-established media has waned. And new media outlets have proliferated, many spreading fake news themselves.
Then there’s the speed at which AI-generated fakes can spread. “It gets out and spreads like wildfire and the truth can never catch up,” says Littau.
Technological and Market-Based Solutions
Littau says, however, that he is encouraged by evidence that the public is skeptical about AI. “The public is actually pretty good at spotting something that doesn’t smell right. But as technology gets better, it will be harder to tell what’s real and what’s a fake.”
Littau says he and Stewart hope their research will help inform public policy. “One of our hopes was that we would move the ball around public discourse regarding these tools,” he says.
They suggest that “a better path to managing deepfakes and AI-generated false political speech – both in legal terms and in practicality – may be rooted in technology and the free market instead.” For example, social media platforms can do more content moderation to control the spread of falsehoods.
Littau cites the case earlier this year of a network of bots distributing fake pornographic AI images of Taylor Swift. The images were spreading so fast that the platform X temporarily blocked all searches for her name.
Another technical solution is watermarking – to “encode the knowledge of an AI image’s provenance at the point of creation,” say the authors. Google was one of seven major U.S. AI companies that announced in 2023 that it would voluntarily work to watermark images produced by its tools.
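A simplified sketch of the idea, in Python: record, at the moment an image is created, metadata identifying it as AI-generated. Production systems such as Google’s SynthID embed the signal directly into the pixels so the label survives cropping and re-encoding; plain metadata like this can be stripped, which is why robust watermarking remains an active area of work. The model name below is a hypothetical placeholder.

```python
from PIL import Image, PngImagePlugin

# Simplified provenance watermark: attach "this is AI-generated"
# metadata at the point of creation. Real systems embed the signal
# in the pixels themselves so it cannot simply be stripped out.
image = Image.new("RGB", (512, 512))  # stand-in for a generated image

provenance = PngImagePlugin.PngInfo()
provenance.add_text("ai_generated", "true")
provenance.add_text("generator", "example-model-v1")  # hypothetical name

image.save("generated_with_provenance.png", pnginfo=provenance)

# Anyone inspecting the file can recover the label:
print(Image.open("generated_with_provenance.png").text)
```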
It’s a fast-moving and volatile topic, and Littau says the paper creates a foundation for future research on the subject of AI. There are lots of questions to explore, and he is working on them.
“What does it mean to be human in a world of AI?” he says. “What happens to the life we are living when we are increasingly relying on technology to manage our lives for us, when we are outsourcing our brains and our thinking?”