Knowledge-Enhanced Multi-Behaviour Contrastive Learning for Effective Recommendation
Zeyuan Meng (University of Glasgow), Zixuan Yi (University of Glasgow) and Iadh Ounis (University of Glasgow)
Abstract
Real-world recommendation scenarios usually need to handle diverse user-item interaction behaviours, including page views, adding items to carts, and purchases. The interactions that precede the actual target behaviour (e.g. purchasing an item) help to capture the user's preferences from different angles, and are used as auxiliary information (e.g. page views) to enrich the system's knowledge about the users' preferences, thereby helping to enhance recommendation for the target behaviour. Despite efforts in modelling the users' multi-behaviour interaction information, existing multi-behaviour recommenders still face two challenges: (1) Data sparsity across the multiple user behaviours limits the recommendation performance, particularly for the target behaviour, which typically exhibits fewer interactions than the auxiliary behaviours. (2) Noisy auxiliary behaviours, where part of the auxiliary interaction information may be irrelevant to recommendation. In this case, directly applying contrastive learning between the target behaviour and the auxiliary behaviours amplifies the noise in the auxiliary behaviours, thereby negatively impacting the real semantics that can be derived from the target behaviour. To address these two challenges, we propose a new model called Knowledge-Enhanced Multi-behaviour Contrastive Learning for Recommendation (KEMCL). In particular, to address the problem of sparse user multi-behaviour interaction information, we leverage a tailored knowledge graph (KG) to enrich the semantic representations of items, and generate supervision signals through self-supervised learning so as to enhance recommendation. In addition, we develop two contrastive learning (CL) methods, inter-CL and intra-CL, to alleviate the problem of noisy auxiliary interactions. Extensive experiments on three public recommendation datasets show that our proposed KEMCL model significantly outperforms existing state-of-the-art (SOTA) methods. In particular, KEMCL outperforms the best baseline, namely KMCLR, by 5.42% on the large Tmall dataset.
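For readers unfamiliar with the contrastive objectives the abstract refers to, the following is a minimal, hypothetical sketch of an InfoNCE-style loss that aligns a user's target-behaviour view with an auxiliary-behaviour view. The function name, tensor shapes, batch size, and temperature are illustrative assumptions; this is not the paper's actual inter-/intra-CL formulation in KEMCL.

```python
import torch
import torch.nn.functional as F

def behaviour_contrastive_loss(z_target, z_aux, temperature=0.2):
    """InfoNCE-style contrastive loss between two behaviour views.

    z_target: (batch, dim) user embeddings learned from the target behaviour
    z_aux:    (batch, dim) user embeddings learned from an auxiliary behaviour
    The same user's two views form the positive pair; the other users in the
    batch act as in-batch negatives.
    """
    z_target = F.normalize(z_target, dim=-1)
    z_aux = F.normalize(z_aux, dim=-1)
    # (batch, batch) cosine-similarity matrix between the two views
    logits = z_target @ z_aux.t() / temperature
    labels = torch.arange(z_target.size(0), device=z_target.device)
    return F.cross_entropy(logits, labels)

# Usage sketch with random embeddings standing in for learned representations:
z_purchase = torch.randn(256, 64)   # e.g. view from purchase interactions
z_pageview = torch.randn(256, 64)   # e.g. view from page-view interactions
loss = behaviour_contrastive_loss(z_purchase, z_pageview)
```

In a multi-behaviour setting, a loss of this kind is typically summed over the auxiliary behaviours and combined with the main recommendation objective; how KEMCL weights, denoises, and knowledge-enhances these terms is specified in the paper itself.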