News Warner

AI-generated political videos are more about memes and money than persuading and deceiving

  • AI-generated political videos are becoming increasingly common in American politics, often used to create memes and stir emotions rather than deceive or persuade.
  • The spread of AI-generated videos is largely driven by social media algorithms that prioritize emotionally charged content, making such videos more likely to go viral.
  • Emotionally appealing AI-generated videos can interfere with people’s ability to detect false information, as strong emotions can cloud judgment and make it harder to distinguish between true and fake news.
  • Campaigns and politicians are now using AI-created videos to demonstrate their allegiances and show their political identities, blurring the lines between entertainment and persuasion in online politics.
  • Social media platforms regularly fail to label AI-generated content properly, a significant concern because it makes it harder for users to discern what is real and what is not.

Politicians are posting AI-generated videos of themselves and their opponents. Screenshots by The Conversation

Zohran Mamdani as a creepy trick-or-treater, Gavin Newsom body-slamming Donald Trump and Hakeem Jeffries in a sombrero. This is not the setup to an elaborate joke. Instead, these are all examples of recent AI-generated political videos. New easy-to-use tools – and acceptance of those tools by politicians – means that these fake videos are quickly becoming commonplace in American politics.

Perhaps the most interesting thing about many of the videos is how clearly fake they are. Rather than trying to deceive the viewer into thinking a depicted event actually happened, the videos serve a different purpose. President Trump didn’t post a video of himself wearing a crown in a fighter jet dumping feces on a group of protesters because he wanted people to believe that the flight actually happened. He likely did it to express his feelings about the protest and to create an in-joke with his followers.

Fears about the political implications of AI-generated videos have been around since the term deepfakes was coined in 2017. Steady improvements in the technology mean that the growing difficulty of distinguishing real from fake could become a significant threat. But today’s use of AI imagery is largely about making memes and making money – in other words, typical social media content.

Getting a rise out of people

Internet platforms use algorithms designed to keep people engaged, and that typically means promoting content that stirs emotions. AI-generated political videos often provoke an emotional response – amusement or outrage.

People are more likely to share information when it is emotionally arousing. For example, people are more likely to pass along urban legends that elicit feelings of disgust, and news articles that are emotionally charged are more likely to make the New York Times list of most emailed articles. Similar patterns occur online, where emotional content is much more likely to go viral than nonemotional content.

In addition, strong emotions can interfere with people’s ability to detect false information. People are worse at distinguishing between true and false political news headlines when they are experiencing stronger emotions – for instance, enthusiasm, excitement or fear. Thus, emotionally appealing AI-generated videos are both more likely to spread and more likely to impair people’s ability to judge whether they are real or fake.

Online politics

Creating and sharing AI videos is also a powerful way for people to demonstrate their allegiances and show their political identities. “I am a Trump supporter, so I post AI videos of ICE detainees crying to own the libs” or “I am a Democrat and so I share Governor Newsom’s AI-video of JD Vance talking about couches to show that I’m in on the joke.”

What’s new in recent months is that campaigns and politicians themselves are using AI-created videos, not just their supporters. An analysis from The New York Times showed that Trump commonly uses AI imagery to “attack enemies and rouse supporters.”

These new tools also allow for active participation in the political process. Rather than simply watching politicians and voting, citizens can play an active role in shaping the conversation between elections.

Information and technology researcher Kate Starbird has written about similar dynamics in the ways that everyday Americans found “evidence” for voter fraud in the 2020 election. Politicians told the public that voter fraud was going to occur, and then when voters saw things that they did not understand when voting, such as the use of Sharpie pens to mark ballots, they interpreted that action as evidence of voter fraud. Politicians then circulated that evidence online to support the false narrative.

New AI tools make this cycle of participatory disinformation even simpler. Instead of reinterpreting actual events as evidence for a false claim, people can easily generate that evidence themselves.

AI video at volume

AI video creation tools make it incredibly easy for people to churn out hundreds of videos, post them online and simply see what content becomes popular and goes viral. In fact, that’s exactly what seems to have happened with recent AI-generated videos of raids by Immigration and Customs Enforcement. According to an investigation by 404 Media, Facebook user “USA Journey 897” used to post a variety of real videos of police activity as well as absurd AI videos of people carrying whales and riding tigers.

However, after the release of a new version of OpenAI’s Sora video generator on Sept. 30, 2025, the account switched entirely to posting multiple fake videos of deportations every day. Most of the videos accumulated hundreds of thousands of views, and one fake video of a Walmart employee being detained had over 4 million views.

Typically these accounts are hosted overseas and exist to earn money through creator incentive programs. These incentives create an environment where social media no longer informs people about the world, but instead serves as a fun-house mirror, presenting back to us the world that we want to see – or at least the version of the world that will capture our attention and outrage.

AI-generated political ads are stretching ethical boundaries.

Flowing into the internet

It’s not always easy for people to detect which videos are real and which are AI-generated. A recent audit by the publication Indicator found that platforms regularly fail to properly label AI content. Researchers posted over 500 AI-generated images and videos across Instagram, LinkedIn, Pinterest, TikTok and YouTube. Less than one-third were properly labeled as AI-generated, and even posts created with the platforms’ own AI tools often went unlabeled.

For years, the great fear concerning political deepfakes was that they were going to fool people into believing something happened that didn’t. They still might, but at the moment, AI-generated political videos are a mix of entertainment and memes, legitimate attempts at persuasion, and ways of capturing attention for money.

In other words, they are now just like the rest of the internet. Most of what we see and share is meant to entertain, some is meant to inform and persuade, and a great deal exists solely to monetize our attention.

The Conversation

Lisa Fazio does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


Q. What is the primary purpose of AI-generated political videos?
A. The primary purpose of AI-generated political videos is not to deceive or persuade, but rather to create an emotional response and make memes.

Q. How do internet platforms use algorithms to promote content that stirs emotions?
A. Internet platforms use algorithms designed to keep people engaged, which typically means promoting content that stirs emotions, such as amusement or outrage.

Q. Why are emotionally appealing AI-generated videos more likely to spread online?
A. Emotionally arousing content is more likely to be shared; for example, people are more likely to pass along urban legends that elicit disgust, and emotionally charged news articles are more likely to be widely emailed, so AI-generated videos that provoke amusement or outrage spread more readily.

Q. How do strong emotions affect people’s ability to detect false information?
A. Strong emotions can interfere with people’s ability to detect false information, making it harder for them to distinguish between true and false political news headlines.

Q. What is the role of AI-generated videos in online politics?
A. AI-generated videos are a powerful way for people to demonstrate their allegiances and show their political identities, as well as create a sense of community among supporters.

Q. How do new AI tools make it easier for politicians to spread disinformation?
A. New AI tools allow for active participation in the political process, making it simpler for politicians and their supporters to generate fabricated “evidence” themselves and circulate it online to support a false narrative.

Q. What is the impact of AI video creation tools on the spread of misinformation?
A. AI video creation tools make it incredibly easy for people to churn out hundreds of videos, post them online, and see what content becomes popular and goes viral, contributing to the spread of misinformation.

Q. How do social media platforms fail to properly label AI-generated content?
A. Social media platforms regularly fail to properly label AI-generated content, with less than one-third of posts being labeled as AI-generated, even when generated by the platform’s own AI tools.

Q. What is the current state of AI-generated political videos in American politics?
A. AI-generated political videos are now a mix of entertainment and memes, legitimate attempts at persuasion, and ways of capturing attention for money, making them just like the rest of the internet.