September 06, 2023
Modern recommendation systems stand to benefit from probing for and learning from delayed feedback. Research has tended to focus on learning from a user's response to a single recommendation; such work, which leverages methods of supervised and bandit learning, forgoes learning from the user's subsequent behavior. Where past work has aimed to learn from subsequent behavior, effective methods for probing to elicit informative delayed feedback have been lacking. Exploration through probing for delayed feedback becomes particularly challenging when rewards are sparse. To address this, we develop deep exploration methods for recommendation systems. In particular, we formulate recommendation as a sequential decision problem and demonstrate the benefits of deep exploration over single-step exploration. Our experiments, carried out with high-fidelity industrial-grade simulators, establish large improvements over existing algorithms.
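The abstract does not specify the paper's implementation, but the contrast it draws between deep exploration and single-step exploration can be illustrated with a common deep-exploration scheme: an ensemble of value estimates in the spirit of bootstrapped DQN. The sketch below is a toy, hypothetical illustration (class and method names are our own, not the paper's); the key idea is that one ensemble member is sampled per episode and followed greedily, yielding temporally consistent exploration rather than independent per-step randomization such as epsilon-greedy.

```python
import random


class EnsembleDeepExplorer:
    """Toy sketch of deep exploration via an ensemble of action-value
    estimates. Illustrative only; not the paper's actual method."""

    def __init__(self, n_actions, n_members=5, seed=0):
        self.rng = random.Random(seed)
        self.n_actions = n_actions
        # Each ensemble member holds its own (randomly initialized)
        # action-value estimates, representing one plausible hypothesis.
        self.members = [
            [self.rng.gauss(0.0, 1.0) for _ in range(n_actions)]
            for _ in range(n_members)
        ]
        self.active = 0  # member used for the current episode

    def begin_episode(self):
        # Sample one member and commit to it for the whole episode:
        # this temporal consistency is what makes exploration "deep",
        # in contrast to per-step schemes like epsilon-greedy.
        self.active = self.rng.randrange(len(self.members))

    def act(self, _state=None):
        # Act greedily with respect to the sampled member's estimates.
        q = self.members[self.active]
        return max(range(self.n_actions), key=lambda a: q[a])

    def update(self, action, reward, lr=0.1):
        # Each member trains on a Bernoulli bootstrap of the observed
        # reward, preserving diversity across the ensemble.
        for q in self.members:
            if self.rng.random() < 0.5:
                q[action] += lr * (reward - q[action])
```

A single-step explorer would instead randomize independently at every recommendation; committing to one hypothesis for an episode lets the agent probe for delayed feedback several steps ahead.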
Publisher
RecSys