By Larkin, Otten and Árvai – Published in the Journal of Risk Research

Will people take advice from AI to make important decisions?

Artificial intelligence (AI) helps people make decisions in many different contexts, from recommendations for movies and music to more consequential decisions, such as medical treatment.

But do people really trust AI?

Research published recently in the Journal of Risk Research looked at how people respond to risk-management advice from AI and from human experts. The research, led by Erb student Connor Larkin, former Erb postdoctoral fellow Caitlin Drummond Otten and former Erb Faculty Director Joseph Árvai, found that people generally prefer to receive medical and financial risk-management advice from people rather than from AI.

AI has been deployed in many realms, and as it continues to advance, its potential is immense. But if people don’t trust it, its potential will be limited.

Previous literature has shown that algorithmic or actuarial decision-making—using pre-set rules to analyze information and produce judgments—is more statistically accurate than individual “expert” judgments, Larkin and his team wrote. But research on people’s willingness to accept advice from AI has not kept up with the pace of AI’s advancement.

So the researchers set out to examine how people react to advice from AI and advice from human experts. They conducted two studies that involved people making decisions about their health care and finances—two domains where AI is commonly used. In these situations, AI was defined as an advanced computer system that could quickly analyze large amounts of data and make recommendations or decisions without human input or supervision.

In the first study, the researchers found:

  • Participants indicated a strong preference for advice from human experts over AI. This preference appeared to be stronger in the health care context than in the finance context.
  • Participants were more likely to follow a recommendation from a human expert and placed more confidence in a human expert in the health care context than in the finance context.
  • In both contexts, participants were equally likely to follow a recommendation from AI.

The second study gave participants a hypothetical medical or financial decision that was risky and uncertain and asked them to make an initial judgment: take immediate action or wait and see for a year. Next, the participants received advice from either AI or a human expert, and the study measured the degree to which they updated their judgments after receiving that advice.

In the health care context, participants were asked to imagine that they had been diagnosed with a cancerous tumor that could either metastasize and become fatal or remain static and benign. They could either have immediate surgery or wait and see for one year. In the finance context, participants were asked to imagine that they owned an investment portfolio dominated by oil and gas companies and were informed that renewable-energy companies might soon outperform those holdings. They could either immediately rebalance their portfolios toward renewable-energy companies or wait and see for one year. This study found:

  • In both scenarios, participants updated their judgments more toward a human expert’s advice than they did toward AI advice. This was true regardless of whether the advice was to take immediate action or to wait.
  • Participants updated their judgments more in response to the human expert in the health care scenario than they did in the finance scenario.
  • In both scenarios, they updated their judgments similarly in response to AI.

Some of these differences may reflect differing levels of trust in human doctors and financial advisors. In the second study, participants reported more trust in medical professionals than in financial professionals.

Overall, across both studies, the researchers found that people preferred advice from human experts to advice from AI. And they found a stronger preference for human experts in health care than in finance.

The researchers noted that, because participants reacted differently to AI in these two contexts, people’s preferences for human experts over AI may depend on the context. The researchers suggested that future research should take into account the factors that influence how much human expertise is trusted in different domains—along with the factors that influence trust in or aversion to algorithms.

“AI can perform tasks that are typically thought of as being within the exclusive domain of human experts—such as medical diagnoses and treatment recommendations, and personal financial planning—with high accuracy and low costs, . . . potentially leading to substantial gains in welfare,” the researchers wrote. “As AI continues to shape everyday life and decision-making, however, our research suggests it may have significant barriers to overcome before its advice is seen as trustworthy as that from a human expert.”

Because AI is already in use in health care, finance and other consequential domains, insight into how much people trust it will be important as the technology continues to advance.

This post is adapted from research published in the Journal of Risk Research: “Paging Dr. JARVIS! Will people accept advice from artificial intelligence for consequential risk management decisions?” (August 2021).

Photo by Tara Winstead from Pexels