Late Breaking Results
User knowledge prompt for sequential recommendation
Yuuki Tachioka (Denso IT Laboratory)
Abstract
Large language model (LLM)-based recommendation systems are effective for sequential recommendation because general knowledge of popular items is included in LLM training data. To add item knowledge, knowledge graph-based prompt tuning (knowledge prompting) has been proposed and has achieved state-of-the-art (SOTA) performance. However, personalized recommendation requires user knowledge, which the SOTA method does not fully consider because user knowledge is not included in item knowledge graphs. We therefore propose a user knowledge prompt, which converts a user knowledge graph into a prompt using a relationship template. The proposed method can take user traits and user preferences into account and associate relevant items for a collaborative-filtering-like effect. Experiments on three types of datasets (movie, music, and book) show significant and consistent improvements from the proposed user knowledge prompt.
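To make the idea of verbalizing a user knowledge graph concrete, the following is a minimal sketch, assuming a triple-based user knowledge graph and illustrative relation templates; the graph schema, template wording, and function names are assumptions for illustration, not the authors' implementation.

```python
# Sketch: converting user-knowledge-graph triples into a prompt
# via relation templates (illustrative; not the paper's code).

# A toy user knowledge graph as (subject, relation, object) triples.
user_kg = [
    ("user_42", "age_group", "30s"),
    ("user_42", "likes_genre", "science fiction"),
    ("user_42", "watched", "Blade Runner"),
    ("user_42", "watched", "Arrival"),
]

# Relation templates map each relation to a natural-language sentence.
relation_templates = {
    "age_group": "The user is in their {obj}.",
    "likes_genre": "The user likes {obj}.",
    "watched": "The user has watched {obj}.",
}

def build_user_knowledge_prompt(triples, templates):
    """Verbalize user-KG triples into a prompt string using templates."""
    sentences = []
    for _subj, rel, obj in triples:
        template = templates.get(rel)
        if template is None:
            continue  # skip relations without a template
        sentences.append(template.format(obj=obj))
    return " ".join(sentences)

if __name__ == "__main__":
    knowledge_prompt = build_user_knowledge_prompt(user_kg, relation_templates)
    # In this sketch, the resulting text would be prepended to the
    # sequential-recommendation prompt given to the LLM.
    print(knowledge_prompt)
```

Under these assumptions, the generated sentences encode user traits and preferences in natural language, so the LLM can condition its next-item prediction on them alongside the item knowledge prompt.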