keywords:
case-based reasoning
Bayesian modeling
human-computer interaction
psychology
Large Language Models like ChatGPT are becoming everyday writing partners in the workplace. This study asked: how does simply knowing an email was “edited by ChatGPT” affect its persuasiveness and the perceived credibility of the sender? We collected data from 308 professionals using experimental vignettes that simulated realistic workplace emails. Some emails were described as entirely human-written, while others were labeled as AI-edited, with variations in the sender's reliability (who is sending the message) and strength of the argument (how well the content is constructed). A Bayesian Model of Argumentation provided normative predictions for how reliability and argument quality should influence persuasion. We found that when an email was labeled as “edited by ChatGPT,” receivers saw it as less persuasive overall. However, AI-mediation did not diminish the relative influence of source reliability and argument quality. In other words, while the AI-edited label reduced overall persuasiveness, it didn’t change how recipients inherently evaluated credibility. They still adjusted their beliefs primarily based on who sent the message and how strong the argument was. To our knowledge, this is the first study to apply a Bayesian framework to understanding how people process AI-mediated communication.
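To give a sense of the kind of normative prediction a Bayesian Model of Argumentation makes, the sketch below shows a minimal, illustrative parameterization: the sender is assumed reliable with probability `reliability`, and a reliable sender's supporting message favors the claim with likelihood `strength`, while an unreliable sender is uninformative. The parameter names and the exact likelihood structure are assumptions for illustration, not the specific model used in the study.

```python
# Illustrative sketch of Bayesian belief updating with a partially reliable
# source, in the spirit of Bayesian argumentation frameworks. The specific
# parameterization (reliability r, argument strength a) is an assumption
# made for this example, not the study's exact model.

def posterior(prior: float, reliability: float, strength: float) -> float:
    """Belief in a claim after a supporting message.

    If the sender is reliable (prob. `reliability`), the message favors the
    claim with likelihood `strength`; if unreliable, the message is
    uninformative (likelihood 0.5 under either hypothesis).
    """
    p_msg_if_true = reliability * strength + (1 - reliability) * 0.5
    p_msg_if_false = reliability * (1 - strength) + (1 - reliability) * 0.5
    num = p_msg_if_true * prior
    return num / (num + p_msg_if_false * (1 - prior))

# The same strong argument shifts belief more when the sender is reliable:
belief_high = posterior(prior=0.5, reliability=0.9, strength=0.8)
belief_low = posterior(prior=0.5, reliability=0.2, strength=0.8)
print(belief_high, belief_low)
```

Under this toy model, both source reliability and argument strength independently modulate how far the recipient's belief should move, which is the normative pattern the study compared against participants' actual responses.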