CogSci 2025

August 01, 2025

San Francisco, United States


Keywords: qualitative analysis, human-computer interaction, psychology, causal reasoning, survey

The present study examined how participants (N = 298) assessed the causality, blameworthiness, foreseeability, and counterfactuality of an AI or human therapist, across three levels of displayed empathy, in comparison to the therapist's supervisor and a recommending clinician. Participants judged the human therapist as more causal and blameworthy than their supervisor when medium or low empathy was displayed, whereas no difference emerged between judgments of the AI therapist and its supervisor at any empathy level. Participants also did not differentiate causality or blameworthiness between the AI and human therapists, regardless of empathy level. However, they perceived the human therapist as having foreseen the outcome more than the AI therapist at the medium and low empathy levels. Qualitative analysis revealed that, when making judgments, participants considered the directness of the causes to the outcome, counterfactual reasoning, and the inherent limitations of AI.

