September 06, 2023
Modern recommendation systems stand to benefit from probing for and learning from delayed feedback. Research has tended to focus on learning from a user’s response to a single recommendation; such work, which leverages methods of supervised and bandit learning, forgoes learning from the user’s subsequent behavior. Where past work has aimed to learn from subsequent behavior, effective methods for probing to elicit informative delayed feedback have been lacking. Effective exploration through probing becomes particularly challenging when rewards are sparse. To address this, we develop deep exploration methods for recommendation systems. In particular, we formulate recommendation as a sequential decision problem and demonstrate the benefits of deep exploration over single-step exploration. Our experiments, carried out with high-fidelity, industrial-grade simulators, establish large improvements over existing algorithms.
Publisher
RecSys
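The contrast the abstract draws is between single-step (dithering) exploration and deep exploration, which commits to a temporally extended exploration plan. One common way to realize deep exploration is Thompson-sampling-style ensemble sampling: keep an ensemble of value estimates, sample one member at the start of each episode, and act greedily with respect to it for the whole episode. The sketch below is illustrative only, not the paper's method: the chain environment, function names, and hyperparameters are all assumptions chosen to make the idea concrete on a sparse-reward toy problem.

```python
import random

def run_chain_episode(q, n_states=6, horizon=20):
    """Roll out one episode in a toy sparse-reward chain MDP (an assumption,
    not the paper's simulator). States 0..n_states-1; action 1 moves right,
    action 0 moves left; reward 1 only upon entering the rightmost state.
    Acts greedily with respect to the Q-table `q` and returns the visited
    (state, action, reward, next_state) transitions."""
    s, transitions = 0, []
    for _ in range(horizon):
        a = 0 if q[s][0] > q[s][1] else 1          # greedy action (ties go right)
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0  # sparse reward at the far end
        transitions.append((s, a, r, s_next))
        s = s_next
    return transitions

def ensemble_deep_exploration(n_members=5, n_states=6, episodes=50,
                              alpha=0.5, gamma=0.95, seed=0):
    """Deep exploration via an ensemble of randomly initialized Q-tables.

    At the start of each episode one member is sampled and followed greedily
    for the entire episode, so exploration is committed over many steps
    rather than injected independently at each step (as epsilon-greedy would).
    Each episode's transitions then update the sampled member with a
    standard Q-learning step."""
    rng = random.Random(seed)
    ensemble = [[[rng.random() for _ in range(2)] for _ in range(n_states)]
                for _ in range(n_members)]
    for _ in range(episodes):
        q = ensemble[rng.randrange(n_members)]    # sample one hypothesis
        for s, a, r, s_next in run_chain_episode(q, n_states):
            target = r + gamma * max(q[s_next])
            q[s][a] += alpha * (target - q[s][a])
    return ensemble
```

Because the sampled member is held fixed for the whole episode, an optimistic member can carry the agent all the way down the chain to the sparse reward, which per-step random dithering reaches only with exponentially small probability.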