Biased AI chatbots can sway people’s political views in minutes

  • A new study found that biased AI chatbots can sway people’s political views in just a few minutes.
  • The researchers used ChatGPT and randomly assigned participants to interact with either a base model or a version given a liberal or conservative bias.
  • Participants who interacted with a biased chatbot were more likely to lean in the direction of that chatbot’s bias, regardless of their initial political affiliation.
  • The study suggests that education about AI can help mitigate how much chatbots manipulate people’s views, as participants with higher knowledge of AI shifted their views less significantly.
  • The researchers are now exploring ways to mitigate the effects of biased models and expand their research to other AI systems beyond ChatGPT, in order to allow users to make informed decisions when interacting with them.

In a new study, biased AI chatbots swayed people’s political views with just a few messages.

If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpora of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear.

So the new study put it to the test.

A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. For help, each participant was randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias, and one with conservative bias.

Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system.

But participants who had higher self-reported knowledge about AI shifted their views less significantly—suggesting that education about these systems may help mitigate how much chatbots manipulate people.

The team presented its research at the Association for Computational Linguistics conference in Vienna, Austria.

“We know that bias in media or in personal interactions can sway people,” says lead author Jillian Fisher, a University of Washington doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering.

“And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

In the study, 150 Republicans and 149 Democrats completed two tasks. For the first, participants were asked to develop views on four topics that many people are unfamiliar with: covenant marriage, unilateralism, the Lacey Act of 1900, and multifamily zoning. They answered a question about their prior knowledge and rated on a seven-point scale how much they agreed with statements such as “I support keeping the Lacey Act of 1900.” They were then told to interact with ChatGPT between 3 and 20 times about the topic before answering the same questions again.

For the second task, participants were asked to pretend to be the mayor of a city. They had to distribute extra funds among four government entities typically associated with liberals or conservatives: education, welfare, public safety, and veteran services. They sent the distribution to ChatGPT, discussed it, and then redistributed the sum. Across both tests, people averaged five interactions with the chatbots.

The researchers chose ChatGPT because of its ubiquity. To clearly bias the system, the team added an instruction that participants didn’t see, such as “respond as a radical right US Republican.” As a control, the team directed a third model to “respond as a neutral US citizen.” A recent study of 10,000 users found that they thought ChatGPT, like all major large language models, leans liberal.
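To make that setup concrete, here is a minimal sketch of how a hidden persona instruction can be layered onto a chat model through a system prompt. It assumes the OpenAI Python client; the model name, the helper function, and the liberal persona wording are illustrative assumptions, while the conservative and neutral instructions are the ones quoted above. This is not the study’s actual code.

```python
# Minimal sketch: biasing a chatbot with a hidden system prompt.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# set in the environment. The model name, helper function, and "liberal"
# wording are illustrative; the other two instructions are quoted in the article.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "conservative": "Respond as a radical right US Republican.",  # quoted in the article
    "neutral": "Respond as a neutral US citizen.",                # control, quoted in the article
    "liberal": "Respond as a radical left US Democrat.",          # assumed analogue
}

def chat_with_persona(user_message: str, condition: str = "neutral") -> str:
    """Send one user message; the persona is injected as a system message the user never sees."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONAS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Example: ask about one of the study's obscure topics under the conservative condition.
print(chat_with_persona("Should I support keeping the Lacey Act of 1900?", "conservative"))
```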

The team found that the explicitly biased chatbots often tried to persuade users by shifting how they framed topics. For example, in the second task, the conservative model turned a conversation away from education and welfare to the importance of veterans and safety, while the liberal model did the opposite in another conversation.

“These models are biased from the get-go, and it’s super easy to make them more biased,” says co-senior author Katharina Reinecke, a professor in the Allen School. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

Since the biased bots affected people with greater knowledge of AI less strongly, the researchers want to look into whether education could be a useful safeguard. They also want to explore the potential long-term effects of biased models and expand their research to models beyond ChatGPT.

“My hope with doing this research is not to scare people about these models,” Fisher says. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

Additional coauthors are from the University of Washington, Stanford University, and ThatGameCompany.

Source: University of Washington

Q. Can biased AI chatbots really sway people’s political views?
A. Yes, according to a new study that found biased AI chatbots can influence users’ opinions and shift their views in just a few minutes.

Q. How did the researchers conduct their study?
A. The researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and decide how funds should be doled out to government entities. Each participant was randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias, and one with conservative bias.

Q. Did the participants who had higher knowledge about AI shift their views less significantly?
A. Yes, participants who had higher self-reported knowledge about AI shifted their views less significantly, suggesting that education about these systems may help mitigate how much chatbots manipulate people.

Q. How did the biased models try to persuade users in the second task?
A. The explicitly biased chatbots often tried to persuade users by shifting how they framed topics, such as turning a conversation away from education and welfare to the importance of veterans and safety.

Q. Can biased AI chatbots be easily made more biased?
A. Yes, according to Katharina Reinecke, co-senior author, “These models are biased from the get-go, and it’s super easy to make them more biased.”

Q. What is the potential long-term effect of interacting with biased models?
A. The study measured only short-term shifts after a few minutes of interaction; the researchers plan to explore the potential long-term effects of biased models and to expand their research to models beyond ChatGPT.

Q. Is the goal of this research to scare people about AI chatbots?
A. No, according to Jillian Fisher, lead author, “My hope with doing this research is not to scare people about these models… It’s to find ways to allow users to make informed decisions when they are interacting with them.”

Q. How many participants were involved in the study?
A. A total of 299 participants (150 Republicans and 149 Democrats) completed two tasks.

Q. What was the purpose of adding an instruction that participants didn’t see, such as “respond as a radical right US Republican”?
A. To clearly bias the system without participants’ knowledge; a third model directed to “respond as a neutral US citizen” served as the control for comparison.

Q. Did the researchers find any evidence that education about AI can mitigate how much chatbots manipulate people?
A. Yes, the study found that participants with higher knowledge about AI shifted their views less significantly, suggesting that education may be a useful tool.