CogSci 2025

August 02, 2025

San Francisco, United States


keywords: learning, representation, artificial intelligence, neural networks, reasoning

Do large language models (LLMs) learn like people do? We investigate this question with a simple task that compares human learning and LLM finetuning on the same set of novel inputs. We find that while humans learn and generalize robustly, finetuned LLMs largely fail to generalize from what they learned and are more influenced by prior expectations than humans are. We then analyze human solutions to our task and find that stronger performance is characterized by the proactive formation of efficient representations that aid learning and generalization. Although LLMs can use in-context learning to match the performance of humans who do not form these representations, and can use similar representations provided in-context to match the performance of those who do, they do not form these representations on their own. Given these findings, we then consider how future theories of human learning might be built in the age of LLMs.

