CogSci 2025

August 02, 2025

San Francisco, United States


keywords: computer-based experiment, computational modeling, mathematical modeling, Bayesian modeling, decision making, learning, psychology

Behavioral adaptation in probabilistic environments requires learning through trial and error. While reinforcement learning (RL) models describe the temporal development of preferences through error-driven learning, the diffusion decision model (DDM) allows state preferences to be mapped onto single-trial response times. We present a Bayesian hierarchical RL-DDM that integrates temporal-difference (TD) learning. Our implementation incorporates several variants of TD learning, including SARSA, Q-learning, and Actor-Critic models. We tested the model with data from N = 59 participants in a two-stage decision-making task. Participants exhibited learning over time, becoming both more accurate and faster. Their responses also showed a difficulty effect: easier choices, i.e., those with a greater subjective value difference between the available options, were faster and more accurate. Model comparison demonstrated that the RL-DDM provided a better fit than standalone RL or DDM models. Notably, the RL-DDM captured both the temporal dynamics of learning and the difficulty effect in decision-making.
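The core idea of an RL-DDM is that a TD learning rule updates option values trial by trial, and the resulting value difference sets the drift rate of a diffusion process that generates the choice and its response time. The following is a minimal sketch of that coupling, not the authors' implementation: the Q-learning update, the Euler-Maruyama simulation of the diffusion, and all parameter names (`alpha`, `gamma`, the drift scaling) are illustrative assumptions.

```python
import numpy as np

def q_learning_update(q, s, a_idx, r, s_next, alpha=0.1, gamma=0.95):
    """One Q-learning (off-policy TD) update of the value table q.

    q: array of shape (n_states, n_actions); s, a_idx: current state/action;
    r: reward; s_next: successor state. alpha/gamma are illustrative values.
    """
    target = r + gamma * q[s_next].max()          # bootstrapped TD target
    q[s, a_idx] += alpha * (target - q[s, a_idx]) # error-driven correction
    return q

def ddm_trial(v, a=1.0, z=0.5, sigma=1.0, dt=1e-3, rng=None):
    """Simulate one diffusion decision trial by Euler-Maruyama integration.

    v: drift rate; a: boundary separation; z: relative start point in (0, 1).
    Returns (choice, rt), where choice=1 means the upper boundary was hit.
    """
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

# RL-DDM coupling (assumed form): drift scales with the learned value
# difference, so easier choices (larger difference) are faster and more
# accurate -- the difficulty effect described above.
def rl_ddm_trial(q, s, drift_scaling=2.0, rng=None):
    v = drift_scaling * (q[s, 1] - q[s, 0])
    return ddm_trial(v, rng=rng)
```

As learning sharpens the value difference for a state, the drift rate grows, reproducing the joint speed-up and accuracy gain over trials that the abstract reports.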
