Live Session
Teatro Petruzzelli
Paper
17 Oct
 
12:30
CEST
Session 15: Off-policy Learning
Main Track

Effective Off-Policy Evaluation and Learning in Contextual Combinatorial Bandits

View on ACM Digital Library

Tatsuhiro Shimizu (Independent Researcher), Koichi Tanaka (Keio University), Ren Kishimoto (Tokyo Institute of Technology), Haruka Kiyohara (Cornell University), Masahiro Nomura (CyberAgent, Inc.) and Yuta Saito (Cornell University)

View Paper PDF
View Poster
Abstract

We explore off-policy evaluation and learning in contextual combinatorial bandits (CCB), where a policy selects a subset in the action space. For example, it might choose a set of furniture pieces (a bed and a drawer) from available items (bed, drawer, chair, etc.) for interior design sales. This setting is widespread in fields such as recommender systems and healthcare, yet OPE/L of CCB remains unexplored in the relevant literature. Standard OPE methods typically employ regression and importance sampling in the action subset space. However, they often face significant challenges due to high bias or variance, exacerbated by the exponential growth in the number of available subsets. To address these challenges, we introduce the concept of a factored action space, which allows us to decompose each subset into binary indicators. These indicators signify whether each action is included in the selected subset. This formulation allows us to distinguish between the "main effect" derived from the main actions and the "residual effect" originating from the supplemental actions, facilitating more effective OPE. Specifically, our estimator, called OPCB, leverages an importance sampling-based approach to unbiasedly estimate the main effect, while employing a regression-based approach to deal with the residual effect with low variance. OPCB achieves substantial variance reduction compared to conventional importance sampling methods and bias reduction relative to regression methods under certain conditions, as illustrated in our theoretical analysis. Experiments on both synthetic and real-world datasets demonstrate OPCB's superior performance over the typical methods, particularly when navigating the complexities of a large action subset space.
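The structure described in the abstract, importance sampling restricted to the main actions plus a regression term for the residual effect, can be sketched in a few lines of NumPy. This is an illustrative simplification on synthetic data, not the paper's implementation: the subset encoding, the choice of main actions, and the placeholder propensities (`pi_0_main`, `pi_main`) and regression model (`q_hat`) are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_actions = 1000, 5  # 2**5 = 32 possible action subsets

# Factored action space: each chosen subset is a binary indicator vector.
subsets = rng.integers(0, 2, size=(n, n_actions))

# Synthetic logged rewards (linear in the indicators, plus noise).
rewards = subsets @ np.array([1.0, 0.5, 0.2, 0.1, 0.05]) + rng.normal(0, 0.1, n)

# Placeholder propensities of the observed main-action pattern under the
# logging policy (pi_0_main) and the target policy (pi_main); in practice
# these come from the policies themselves.
pi_0_main = np.full(n, 0.25)
pi_main = np.full(n, 0.25)

# Placeholder regression model q_hat for the residual effect (a constant
# here; in practice fit on the logged data), and its expectation under
# the target policy.
q_hat = np.full(n, rewards.mean())
q_hat_pi = q_hat

# Importance weights are computed on the main actions only, so the weight
# space grows with the number of main-action patterns, not with all 2**5
# subsets; the regression term absorbs the residual effect.
w = pi_main / pi_0_main
v_hat = np.mean(w * (rewards - q_hat) + q_hat_pi)
print(v_hat)
```

Because the weights depend only on the main-action block, their variance stays bounded even as the full subset space grows exponentially, which is the intuition behind the variance reduction claimed in the abstract.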

Join the Conversation

Head to Slido and select the paper's assigned session to join the live discussion.

Conference Agenda

View Full Agenda →