Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

  • AI-powered tools are being developed to detect disinformation campaigns that use narrative tactics to manipulate public opinion, particularly on social media platforms.
  • The key to effective disinformation detection lies in understanding narrative structures, personas, timelines, and cultural references, which AI can analyze to identify fabricated stories and manipulated narratives.
  • Disinformation differs from misinformation, as it is intentionally fabricated and shared specifically to mislead and manipulate, often using emotional connections and storytelling to sway opinion.
  • The development of narrative-aware AI tools can help intelligence analysts, crisis-response agencies, social media platforms, researchers, educators, and ordinary users identify and counter disinformation campaigns in real time, and can make narrative analysis more rigorous and shareable.
  • These tools also require cultural literacy to avoid misinterpreting symbols, sentiments, and storytelling within targeted communities; training AI on diverse cultural narratives from different contexts improves this sensitivity.

The human proclivity for storytelling makes disinformation difficult to combat. Westend61 via Getty Images

It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it’s a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.

This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.

While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content.

At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines, and decode cultural references.

Disinformation vs. misinformation

In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren’t isolated incidents. They were part of an organized campaign, powered in part by AI.

Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information – getting facts wrong – disinformation is intentionally fabricated and shared specifically to mislead and manipulate. A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook.

Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.

Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don’t just help people remember – they help us feel. They foster emotional connections and shape our interpretations of social and political events.

Stories have profound effects on human beliefs and behavior.

This makes them especially powerful tools for persuasion – and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.

Usernames, cultural context and narrative time

Using AI tools to piece together a picture of the narrator of a story, the timeline for how they tell it and cultural details specific to where the story takes place can help identify when a story doesn’t add up.

Narratives are not confined to the content users share – they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.

For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.

Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection – one that considers not just what is said but who appears to be saying it and why.
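To make the idea concrete, here is a minimal, purely illustrative sketch of how coarse persona cues might be read off a handle. The lexicons and the username_signals function are hypothetical placeholders invented for this example; the system described above relies on much richer models rather than hand-made word lists.

```python
import re

# Toy heuristic for illustration only, not the lab's system: split a handle into
# tokens and check them against tiny hand-made lexicons of first names,
# newsroom acronyms and location abbreviations (all placeholders).
KNOWN_FIRST_NAMES = {"james", "jim", "maria"}     # assumption: placeholder lexicon
NEWSROOM_ACRONYMS = {"nyt", "bbc", "ap"}          # assumption: placeholder lexicon
LOCATION_ABBREVIATIONS = {"nyc", "la", "dc"}      # assumption: placeholder lexicon

def username_signals(handle: str) -> dict:
    """Extract coarse persona cues from a social media handle."""
    tokens = re.findall(r"[A-Z][a-z]+|[A-Z]{2,}|[A-Z]|[a-z]+|\d+", handle.lstrip("@"))
    lowered = [token.lower() for token in tokens]
    return {
        "possible_first_name": next((t for t in lowered if t in KNOWN_FIRST_NAMES), None),
        "institutional_cue": any(t in NEWSROOM_ACRONYMS for t in lowered),
        "location_cue": next((t for t in lowered if t in LOCATION_ABBREVIATIONS), None),
    }

print(username_signals("@JamesBurnsNYT"))  # first name plus an institutional cue
print(username_signals("@JimB_NYC"))       # first name plus a location cue, no institutional cue
```

Even this crude sketch shows why the two example handles read differently: one surfaces an institutional cue and the other only a location, the kind of signal a fuller model can weigh alongside an account's content and behavior.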

Also, stories don’t always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between.

Humans handle this effortlessly – we’re used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge.

Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.
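As a simplified illustration of what timeline extraction aims to recover, the sketch below assumes each event has already been tagged with a date (hand-supplied here; in practice the system must infer it from the text) and sorts a nonlinearly narrated account back into story-world order. The events and dates are invented for the example.

```python
from datetime import date

# Minimal sketch of the goal, not the lab's method: events are narrated out of
# chronological order, but once each one carries a date (hand-supplied below),
# sorting recovers the story-world chronology and exposes how far each event
# was displaced in the telling.
narrated_events = [
    {"text": "A shocking video surfaces",              "narrated_pos": 0, "when": date(2024, 6, 3)},
    {"text": "The account that posted it was created", "narrated_pos": 1, "when": date(2024, 1, 15)},
    {"text": "Fact-checkers debunk the video",         "narrated_pos": 2, "when": date(2024, 6, 5)},
]

story_order = sorted(narrated_events, key=lambda event: event["when"])
for new_pos, event in enumerate(story_order):
    shift = event["narrated_pos"] - new_pos
    print(f'{event["when"]}  {event["text"]}  (narrated {shift:+d} positions out of order)')
```

The hard part, of course, is the step this sketch skips: pulling the events and their temporal cues out of fragmented, free-flowing text in the first place.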

Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation.

Consider the following sentence: “The woman in the white dress was filled with joy.” In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive.

In order to use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it’s critical to give AI this sort of cultural literacy. In our research, we’ve found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.
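The sketch below is a toy illustration of why the cultural frame matters, not a trained model: a hand-made lookup table stands in for the culture-specific associations a narrative-aware system would have to learn from diverse corpora of stories.

```python
# Toy illustration only: the same symbol is read differently depending on the
# assumed cultural frame. The table below is a hand-made placeholder for
# associations a real system would learn from diverse cultural narratives.
SYMBOL_CONNOTATIONS = {
    ("white dress", "western"): "celebration, joy",
    ("white dress", "parts of Asia"): "mourning, death",
}

def interpret(symbol: str, cultural_frame: str) -> str:
    """Look up the connotation of a symbol within a given cultural frame."""
    return SYMBOL_CONNOTATIONS.get((symbol, cultural_frame), "unknown")

for frame in ("western", "parts of Asia"):
    print(f"white dress ({frame}): {interpret('white dress', frame)}")
```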

Who benefits from narrative-aware AI?

Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use these tools to process large volumes of social media posts, mapping persuasive narrative arcs, identifying near-identical storylines and flagging coordinated timing of social media activity. Intelligence services could then deploy countermeasures in real time.
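As a minimal sketch of that last idea, assume a simple word-overlap measure in place of the semantic models a real pipeline would use: posts that are nearly identical and posted within minutes of one another get flagged as a possible coordinated push. The posts, the similarity threshold and the time window below are invented for illustration.

```python
from itertools import combinations

# Minimal sketch, not the lab's pipeline: flag pairs of posts that are
# near-duplicates by word overlap and were posted within a short time window.
# Real systems would use semantic embeddings and vastly more posts.
posts = [
    {"id": "a", "minute": 0,  "text": "Election worker caught destroying ballots on camera"},
    {"id": "b", "minute": 3,  "text": "Caught on camera election worker destroying ballots"},
    {"id": "c", "minute": 45, "text": "Local team wins the championship in a dramatic final"},
]

def word_overlap(text_a: str, text_b: str) -> float:
    """Jaccard similarity between the word sets of two posts."""
    words_a, words_b = set(text_a.lower().split()), set(text_b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

for post_a, post_b in combinations(posts, 2):
    similar = word_overlap(post_a["text"], post_b["text"]) > 0.7
    close_in_time = abs(post_a["minute"] - post_b["minute"]) <= 10
    if similar and close_in_time:
        print(f'possible coordinated pair: {post_a["id"]} / {post_b["id"]}')
```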

In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.

Ordinary users can also benefit from these technologies. AI tools could flag social media posts as possible disinformation in real time, prompting readers to treat suspect stories with skepticism and helping counteract falsehoods before they take root.

As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.

The Conversation

Mark Finlayson receives funding from the US Department of Defense and the US National Science Foundation for his work on narrative understanding and influence operations in the military context.

Azwad Anjum Islam receives funding from the Defense Advanced Research Projects Agency (DARPA).


Q. How does storytelling make disinformation difficult to combat?
A. The power and familiarity of well-told stories can override skepticism and sway opinion more effectively than a flood of statistics.

Q. What is the difference between misinformation and disinformation?
A. Misinformation is simply false or inaccurate information, while disinformation is intentionally fabricated and shared specifically to mislead and manipulate.

Q. How are AI tools being used to detect disinformation campaigns?
A. Researchers are using machine learning techniques to analyze disinformation content, including understanding narrative structures, tracing personas, and decoding cultural references.

Q. What role do usernames play in identifying disinformation campaigns?
A. A username can carry persuasive signals about the persona behind an account; AI can infer traits such as name, gender, location, sentiment, and even personality from a handle, which helps in judging whether an identity is manufactured to gain trust.

Q. How are AI systems being trained to handle nonlinear storytelling?
A. Researchers are developing methods for timeline extraction, teaching AI to identify events, understand their sequence, and map how they relate to one another, even when a story is told in nonlinear fashion.

Q. Why is cultural awareness important for AI systems detecting disinformation?
A. Without cultural awareness, AI systems risk misinterpreting the narratives they analyze, allowing foreign adversaries to exploit cultural nuances to craft messages that resonate more deeply with specific audiences.

Q. Who benefits from narrative-aware AI tools?
A. Intelligence analysts, crisis-response agencies, social media platforms, researchers, educators, and ordinary users can all benefit from these tools in detecting disinformation campaigns and promoting critical thinking online.

Q. How are narrative-aware AI tools being used to combat disinformation?
A. These tools can help identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast, allowing for swift countermeasures and more efficient routing of high-risk content for human review.

Q. What is the potential impact of narrative-aware AI on online discourse?
A. By detecting disinformation campaigns and promoting critical thinking, these tools have the potential to improve online discourse, reduce the spread of false information, and promote a more informed public.