CogSci 2025

August 01, 2025

San Francisco, United States


keywords: language understanding, computational modeling, computer science, pragmatics

Humans naturally interpret numbers non-literally, effortlessly combining context, world knowledge, and speaker intent. We investigate whether large language models (LLMs) interpret numbers similarly, focusing on hyperbole and pragmatic halo effects. Through systematic comparison with human data and with computational models of pragmatic reasoning, we find that LLMs diverge from human interpretation in striking ways. By decomposing pragmatic reasoning into testable components grounded in the Rational Speech Act (RSA) framework, we pinpoint where LLM processing diverges from human cognition: not in prior knowledge, but in reasoning with it. This insight leads to a targeted solution: chain-of-thought prompting inspired by an RSA model makes LLMs' interpretations more human-like. Our work demonstrates how computational cognitive models can both diagnose AI-human differences and guide the development of more human-like language understanding capabilities.
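The RSA framework mentioned in the abstract models a pragmatic listener who reasons about a speaker, who in turn reasons about a literal listener. A minimal sketch of how this recursion produces a pragmatic halo effect (round numbers interpreted imprecisely, sharp numbers exactly) is below; the state space, priors, tolerance, and cost values are illustrative assumptions, not the paper's actual parameters.

```python
import math

# Illustrative setup: prices the world might hold vs. prices a speaker
# might say. 1000 is a "round" number, 1001 a "sharp" one.
states = [999, 1000, 1001]
utterances = [1000, 1001]
prior = {s: 1 / 3 for s in states}  # assumed uniform world knowledge


def meaning(u, s):
    # Assumption: round numbers may be used imprecisely (within +/-1);
    # sharp numbers are taken literally.
    return abs(u - s) <= 1 if u % 10 == 0 else u == s


def cost(u):
    # Assumption: sharp numbers are costlier to produce than round ones.
    return 0.0 if u % 10 == 0 else 1.0


def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}


def L0(u):
    # Literal listener: prior restricted to states where u is true.
    return normalize({s: meaning(u, s) * prior[s] for s in states})


def S1(s, alpha=2.0):
    # Pragmatic speaker: soft-max trade-off of informativity vs. cost.
    return normalize({
        u: math.exp(alpha * (math.log(L0(u).get(s, 0.0) or 1e-9) - cost(u)))
        for u in utterances
    })


def L1(u):
    # Pragmatic listener: Bayesian inversion of the speaker model.
    return normalize({s: S1(s)[u] * prior[s] for s in states})
```

Running `L1(1000)` spreads belief across 999, 1000, and 1001 (the halo), while `L1(1001)` concentrates on exactly 1001, since a speaker who paid the cost of a sharp number must have meant it precisely.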

