Live Session
Session 9: Sequential Recommendation 2
Main Track
Repeated Padding for Sequential Recommendation
Yizhou Dang (Northeastern University), Yuting Liu (Northeastern University), Enneng Yang (Northeastern University), Guibing Guo (Northeastern University), Linying Jiang (Northeastern University), Xingwei Wang (Northeastern University) and Jianzhe Zhao (Northeastern University)
Abstract
Sequential recommendation aims to provide users with personalized suggestions based on their historical interactions. When training sequential models, padding is a widely adopted technique for two main reasons: 1) the vast majority of models can only handle fixed-length sequences; 2) batch-based training requires that the sequences in each batch have the same length. The special value 0 is usually used as the padding content; it carries no actual information and is ignored in the model's calculations. This common-sense padding strategy leads us to a question that has never been explored in the recommendation field: Can we utilize this idle input space by padding it with other content to further improve model performance and training efficiency? In this paper, we propose a simple yet effective padding method called Repeated Padding (RepPad). Specifically, we use the original interaction sequence as the padding content and fill it into the padding positions during model training. This operation can be performed a finite number of times or repeated until the input sequence's length reaches the maximum limit. Our RepPad can be considered a sequence-level data augmentation strategy. Unlike most existing works, our method contains no trainable parameters or hyperparameters and is a plug-and-play data augmentation operation. Extensive experiments on various categories of sequential models and five real-world datasets demonstrate the effectiveness and efficiency of our approach. The average recommendation performance improvement is up to 60.3% on GRU4Rec and 24.3% on SASRec. We also provide an in-depth analysis and explanation of what makes RepPad effective from multiple perspectives. The source code will be released to ensure the complete reproducibility of our experiments.
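To make the idea concrete, the sketch below illustrates repeated padding as described in the abstract: the user's own interaction sequence is copied into the positions that would otherwise be filled with the padding value 0. It is a minimal illustration only; the function name `rep_pad`, the `max_repeats` parameter, the left-padding convention, and the fallback to 0 for leftover positions are assumptions, not the authors' released implementation.

```python
def rep_pad(seq, max_len, max_repeats=None):
    """Fill padding positions with repeated copies of the sequence itself.

    seq: list of item IDs (most recent interaction last), shorter than max_len.
    max_repeats: optional cap on how many extra copies are prepended;
    if None, copies are added until no whole copy fits.
    """
    if len(seq) >= max_len:
        # Sequence already fills the window; keep the most recent items.
        return seq[-max_len:]
    padded = list(seq)
    repeats = 0
    # Prepend full copies of the original sequence while a whole copy still fits.
    while len(padded) + len(seq) <= max_len and (max_repeats is None or repeats < max_repeats):
        padded = list(seq) + padded
        repeats += 1
    # Any leftover positions fall back to the conventional padding value 0.
    return [0] * (max_len - len(padded)) + padded


print(rep_pad([3, 7, 2], max_len=10))
# -> [0, 3, 7, 2, 3, 7, 2, 3, 7, 2]
```

Because the transformation touches only the input sequences, it can be applied as a plug-and-play preprocessing step during training without modifying the backbone model.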