Study Finds Most People Trust Doctors More Than AI but See Its Potential for Cancer Diagnosis

Nationally representative surveys measure public attitudes toward AI in healthcare 

Embargoed for release until December 8, 2025

For media inquiries regarding the study, please contact Natalie Judd or Emma Scott.

Washington, D.C., December 8, 2025 – New research on public attitudes toward AI indicates that most people are reluctant to let ChatGPT and other AI tools diagnose their health conditions but see promise in technologies that use AI to help diagnose cancer. These and other results of two nationally representative surveys will be presented at the annual meeting of the Society for Risk Analysis, Dec. 7–10 in Washington, D.C.

Led by behavioral scientist Dr. Michael Sobolev of the Schaeffer Institute for Public Policy & Government Service at the University of Southern California, and psychologist Dr. Patrycja Sleboda, assistant professor at Baruch College, City University of New York, the study focuses on public perspectives—specifically trust, understanding, potential, excitement, and fear of AI—in the context of cancer diagnosis, one of AI’s most commonly used and impactful applications in medicine. It also examines how these public attitudes vary by demographics, such as age, gender, and education.

The study used data from two nationally representative surveys to assess how personal use of AI tools like ChatGPT and general trust in medical AI relate to the acceptance of an AI-based diagnostic tool for cervical cancer. 

Key findings: 

  • Most people still trust doctors more than AI. Only about 1 in 6 people (17%) said they trust AI as much as a human expert to diagnose health problems.
  • People who have tried AI tools (like ChatGPT) feel more positive about AI’s application in medicine. Those who had used AI in their personal lives said they understood it better and were more excited about, and more trusting of, its use in healthcare. (55.1% of respondents had heard of ChatGPT but not used it, while 20.9% had both heard of and used it.)
  • People see promise, not danger. When participants learned about an AI tool that helps find early signs of cancer, most thought it had great potential and were more excited than afraid. 

“Our research shows that even a little exposure to AI—just hearing about it or trying it out—can make people more comfortable and trusting of the technology. We know from research that familiarity plays a big role in how people accept new technologies, not just AI,” says Sleboda. 

In the first survey, participants reported whether they had heard of or used AI technologies and responded to questions about their general trust in AI for health diagnoses. 

In the second survey, participants were introduced to a scenario based on a real development: a research team has built an AI system that analyzes digital images of the cervix to detect precancerous changes (a technology called automated visual evaluation). Participants then rated five elements of acceptance for this diagnostic AI tool on a scale from 1 to 5: understanding, trust, excitement, fear, and potential.

An analysis of the results showed that potential was rated highest when judging the diagnostic AI tool, followed by excitement, trust, understanding, and fear. Identifying as male and having a college degree were associated with greater trust, excitement, and perceived potential regarding the use of AI in healthcare; these participants also expressed less fear of AI overall.

“We were surprised by the gap between what people said about AI in general and how they felt about a real example,” says Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles with the goal of advancing human-centered innovation. “Our results show that learning about specific, real-world examples can help build trust between people and AI in medicine.”

### 

EDITOR’S NOTE:

This research will be presented on December 8 at 8:30 EST at the Society for Risk Analysis (SRA) Annual Conference at the Downtown Westin Hotel in Washington, D.C. The SRA Annual Conference welcomes press attendance; please contact Emma Scott at emma@bigvoicecomm.com to register.

About the Society for Risk Analysis

The Society for Risk Analysis (SRA) is a multidisciplinary, global organization dedicated to advancing the science and practice of risk analysis. Founded in 1980, SRA brings together researchers, practitioners, and policymakers from diverse fields, including engineering, public health, environmental science, economics, and decision theory. The Society fosters collaboration on risk assessment, management, and communication to inform decision-making and protect public well-being. SRA supports a wide range of scholarly activities, publications, and conferences. Learn more at sra.org.

Media Contact: 
Emma Scott 
Media Relations Specialist 
emma@bigvoicecomm.com
(740) 632-0965