A multidimensional distributional map of future reward in dopamine neurons

Bibliographic Details
Published in: Nature (London), Vol. 642, no. 8068, pp. 691-699
Main Authors: Sousa, Margarida; Bujalski, Pawel; Cruz, Bruno F.; Louie, Kenway; McNamee, Daniel C.; Paton, Joseph J.
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 19.06.2025
ISSN: 0028-0836, 1476-4687
DOI: 10.1038/s41586-025-09089-6


More Information
Summary: Midbrain dopamine neurons (DANs) signal reward-prediction errors that teach recipient circuits about expected rewards [1]. However, DANs are thought to provide a substrate for temporal difference (TD) reinforcement learning (RL), an algorithm that learns the mean of temporally discounted expected future rewards, discarding useful information about experienced distributions of reward amounts and delays [2]. Here we present time–magnitude RL (TMRL), a multidimensional variant of distributional RL that learns the joint distribution of future rewards over time and magnitude. We also uncover signatures of TMRL-like computations in the activity of optogenetically identified DANs in mice during behaviour. Specifically, we show that there is significant diversity in both temporal discounting and tuning for the reward magnitude across DANs. These features allow the computation of a two-dimensional, probabilistic map of future rewards from just 450 ms of the DAN population response to a reward-predictive cue. Furthermore, reward-time predictions derived from this code correlate with anticipatory behaviour, suggesting that similar information is used to guide decisions about when to act. Finally, by simulating behaviour in a foraging environment, we highlight the benefits of a joint probability distribution of reward over time and magnitude in the face of dynamic reward landscapes and internal states. These findings show that rich probabilistic reward information is learnt and communicated to DANs, and suggest a simple, local-in-time extension of TD algorithms that explains how such information might be acquired and computed.

An algorithm called time–magnitude reinforcement learning (TMRL) extends distributional reinforcement learning to take account of reward time and magnitude, and behavioural and neurophysiological experiments in mice suggest that midbrain dopamine neurons use TMRL-like computations.
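The summary describes a population code in which individual value-learning units differ in temporal discounting and in their tuning to reward magnitude. As a rough illustration only, the sketch below implements a generic distributional TD-style learner with per-unit discount factors and asymmetric (expectile-like) learning rates over magnitude; the unit grid, reward statistics, and update rule are illustrative assumptions, not the paper's actual TMRL algorithm or data.

import numpy as np

rng = np.random.default_rng(0)

# Population of value units on a grid: each unit has its own temporal discount
# factor (gamma) and its own asymmetry parameter (tau) governing how strongly it
# weights positive versus negative prediction errors (magnitude tuning).
gammas = np.linspace(0.7, 0.99, 8)           # per-unit temporal discount factors
taus = np.linspace(0.1, 0.9, 8)              # per-unit optimism/asymmetry levels
G, T = np.meshgrid(gammas, taus, indexing="ij")
V = np.zeros_like(G)                         # cue-evoked value prediction per unit

alpha = 0.05                                 # base learning rate
n_trials = 5000

for _ in range(n_trials):
    # One cue followed by a reward with a random delay (in time steps) and magnitude.
    delay = rng.integers(1, 6)               # 1-5 steps after the cue
    magnitude = rng.choice([1.0, 4.0])       # small or large reward

    # Monte-Carlo-style target: each unit's discounted return for this trial,
    # seen from the moment of the cue (depends on that unit's own gamma).
    target = (G ** delay) * magnitude

    # Asymmetric (expectile-like) update: high-tau units weight positive errors
    # more heavily, so they converge to more optimistic statistics of the return.
    delta = target - V
    lr = alpha * np.where(delta > 0.0, T, 1.0 - T)
    V += lr * delta

# Rows of V vary with discounting (sensitivity to reward delay), columns with
# asymmetry (sensitivity to reward magnitude), forming a 2D grid of predictions.
print(np.round(V, 2))

Reading out the learned values across the (gamma, tau) grid gives, in principle, a code from which a joint distribution over reward time and magnitude could be decoded, in the spirit of the two-dimensional probabilistic map described in the summary; the decoding step itself is not shown here.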