People Worry More About Today’s AI Harms Than Future Catastrophes: Study


By Nivash Jeevanandam
According to a new AI risk perception study, people are far more focused on real-time challenges like bias and misinformation than hypothetical threats of artificial intelligence-driven extinction.

Artificial Intelligence has swiftly become one of the most transformative—and polarizing—technologies of our time. As policymakers, researchers, and industry leaders debate the implications of AI’s rapid evolution, a new study from the University of Zurich (UZH) offers fresh insights into how the general public perceives the risks posed by AI.

The results highlight a clear distinction in public awareness: while apocalyptic narratives may make headlines, it is the present-day challenges that weigh more heavily on people’s minds.

A Clearer Picture of Public AI Risk Priorities 

Conducted by a team of political scientists, the study, published in the journal Proceedings of the National Academy of Sciences, surveyed over 10,000 participants across the US and the UK, presenting them with a series of AI-related news headlines. These headlines varied in tone and focus—some emphasizing long-term existential threats, others discussing immediate issues such as algorithmic bias, job displacement, and AI-fueled disinformation. A third set highlighted AI’s potential benefits.

The researchers’ goal was to understand whether dramatic portrayals of distant AI catastrophes would overshadow public attention to real-world, ongoing issues.

Their conclusion: they don’t.

“Our findings show that the respondents are much more worried about present risks posed by AI than about potential future catastrophes,” says Professor Fabrizio Gilardi, from the Department of Political Science at UZH.

While warnings of existential dangers may raise awareness about AI’s long-term risks, they do not diminish concern for the immediate consequences. Respondents consistently ranked current and tangible threats—such as discriminatory algorithms or the manipulation of public opinion—as more pressing than speculative doomsday scenarios.


Moving Beyond the “Either-Or” Narrative

This study offers timely clarification in an ongoing debate within the AI community. Some experts, including prominent voices in AI ethics and policy, have cautioned that an overemphasis on existential risk, while important, can detract from addressing the very real, measurable harms AI systems are already causing. These include biased decision-making in areas like hiring and criminal justice, economic disruptions due to automation, and the viral spread of misinformation.

However, the UZH findings suggest that the public is capable of holding both types of concern simultaneously. Exposure to headlines about future threats does not erase awareness of present dangers. Rather, it emphasizes the need for a nuanced, multifaceted conversation around AI.

“Public discourse shouldn’t be ‘either-or.’ A concurrent understanding and appreciation of both the immediate and potential future challenges is needed,” says Gilardi.

Co-author Emma Hoes adds, “Our study shows that the discussion about long-term risks is not automatically occurring at the expense of alertness to present problems.”

Implications for Policy and Public Dialogue

The implications of this study are significant for AI risk perception, governance and regulation. As governments around the world work to draft ethical guidelines and legal frameworks for AI, public opinion plays a crucial role. Understanding that people are more responsive to practical, everyday risks can help shape policies that are both grounded in scientific insight and aligned with societal priorities.

Moreover, the research underscores the importance of engaging the public in informed dialogue, rather than limiting discussion to polarizing extremes. By acknowledging both the urgent and the speculative, we can build a more holistic approach to AI development—one that addresses the challenges of today while safeguarding the possibilities of tomorrow.



Nivash Jeevanandam (PhD) is a former Senior Researcher and Author at the IndiaAI Portal, the National Artificial Intelligence Portal of India, at Nasscom.
