
People fear AI taking jobs more than AI threatening humanity


  • A new study by the University of Zurich found that people are more concerned about immediate risks posed by AI than theoretical future threats to humanity.
  • Respondents showed a clear distinction between abstract scenarios and specific tangible problems, prioritizing the latter over hypothetical dangers.
  • The study revealed that while there is a broad consensus on AI’s associated risks, there are differences in how those risks are understood and prioritized.
  • Participants were more worried about present problems such as job losses due to AI and systematic bias in AI decisions than potential future catastrophes.
  • The researchers argue that public discourse should not be either-or, but should address both the immediate and the potential future challenges posed by AI concurrently.


Most people generally are more concerned about the immediate risks of artificial intelligence than they are about a theoretical future in which AI threatens humanity, researchers report.

A new study by the University of Zurich (UZH) reveals that respondents draw clear distinctions between abstract scenarios and specific tangible problems, and that they take the latter especially seriously.

There is a broad consensus that artificial intelligence is associated with risks, but there are differences in how those risks are understood and prioritized.

One widespread perception emphasizes theoretical long-term risks such as that of AI potentially threatening the survival of humanity.

Another common viewpoint focuses on immediate concerns such as how AI systems amplify social prejudices or contribute to disinformation.

Some fear that emphasizing dramatic “existential risks” may distract attention from the more urgent actual present problems that AI is already causing today.

To examine those views, a team of political scientists conducted three large-scale online experiments involving more than 10,000 participants in the USA and the UK. Some subjects were shown a variety of headlines that portrayed AI as a catastrophic risk; others read about present threats such as discrimination or misinformation; a third group read about potential benefits of AI. The objective was to examine whether warnings about a far-off AI-caused catastrophe diminish alertness to actual present problems.

“Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,” says Professor Fabrizio Gilardi from the political science department at UZH.

Even when texts about existential threats amplified fears of such scenarios, respondents remained far more concerned about present problems, such as systematic bias in AI decisions and job losses due to AI.

The study, however, also shows that people are capable of distinguishing between theoretical dangers and specific tangible problems and take both seriously.

The study thus fills a significant gap in knowledge. In public discussion, fears are often voiced that focusing on sensational future scenarios distracts attention from pressing present problems. The study is the first-ever to deliver systematic data showing that awareness of actual present threats persists even when people are confronted with apocalyptic warnings.

“Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems,” coauthor Emma Hoes says.

Gilardi adds that the public discourse shouldn’t be “either-or.”

“A concurrent understanding and appreciation of both the immediate and potential future challenges is needed.”

The research appears in the Proceedings of the National Academy of Sciences.

Source: University of Zurich

The post People fear AI taking jobs more than AI threatening humanity appeared first on Futurity.


Q. What is the main finding of the study conducted by researchers at the University of Zurich?
A. The study found that respondents are more concerned about immediate risks posed by AI than potential future catastrophes.

Q. Why did the researchers conduct this study?
A. To examine whether warnings about a catastrophe far off in the future caused by AI diminish alertness to actual present problems.

Q. What were the three large-scale online experiments conducted as part of the study?
A. The experiments involved showing participants different types of headlines: some portrayed AI as a catastrophic risk, others described present threats such as discrimination or misinformation, and others described potential benefits of AI.

Q. Did the study find that people become less concerned about present problems when they are confronted with apocalyptic warnings?
A. No, the study found that even when texts about existential threats amplified fears of such scenarios, there was still much more concern about present problems.

Q. What were some of the specific concerns raised by participants in the study?
A. Participants expressed concerns about systematic bias in AI decisions and job losses due to AI.

Q. According to Professor Fabrizio Gilardi, how should the public discourse on AI risks be framed?
A. The public discourse should not be either-or; it should involve a concurrent understanding and appreciation of both immediate and potential future challenges.

Q. What was the objective of the study?
A. To examine whether warnings about a catastrophe far off in the future caused by AI diminish alertness to actual present problems.

Q. How many participants were involved in the online experiments?
A. The study involved more than 10,000 participants in the USA and the UK.