Live Session
Session 13: Robust RecSys 1
Main Track
Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System
Kaike Zhang (Institute of Computing Technology, CAS), Qi Cao (Institute of Computing Technology, CAS), Yunfan Wu (Institute of Computing Technology, CAS), Fei Sun (Institute of Computing Technology, CAS), Huawei Shen (Institute of Computing Technology, CAS) and Xueqi Cheng (Institute of Computing Technology, CAS)
Abstract
Recommender systems play a pivotal role in mitigating information overload across diverse fields. Nonetheless, the inherent openness of these systems introduces vulnerabilities, allowing attackers to insert fake users to skew the exposure of certain items, known as poisoning attacks. Adversarial training has emerged as a notable defense mechanism against such poisoning attacks within recommender systems. Traditional adversarial training methods apply perturbations of the same scale to all users' embeddings to maintain robustness against worst-case attacks. Yet, in reality, attacks often affect only the subset of users who are actually vulnerable to them. Such indiscriminate perturbations make it difficult to balance effective protection for vulnerable users against degradation of recommendation quality for those who are not. To address this issue, our research delves into understanding user vulnerability. Since poisoning attacks pollute the training data, we observe that the more closely a recommender system fits a user's training data, the more likely that user is to incorporate attack information, indicating greater vulnerability. Leveraging these insights, we introduce the Vulnerability-aware Adversarial Training (VAT) method, designed to counteract poisoning attacks in recommender systems. VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which they are fitted by the system. Guided by this estimate, VAT applies user-specific perturbations to embeddings, thereby not only reducing the success rate of attacks but also preserving, and potentially enhancing, the quality of recommendations. Comprehensive experiments confirm VAT's superior defensive capabilities against various attacks and across multiple recommendation models.
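The abstract describes VAT at a conceptual level: estimate each user's vulnerability from how well the system fits that user's training data, then scale the adversarial perturbation applied to each user's embedding accordingly. The sketch below illustrates one way this could look in PyTorch, under stated assumptions; the `vulnerability_weights` function (inverse of per-user training loss), the FGSM-style perturbation direction, and the `user_embedding`/`item_embedding` lookups are illustrative choices, not the authors' published implementation.

```python
import torch
import torch.nn.functional as F

def vulnerability_weights(per_user_loss, eps_base=0.1):
    """Hypothetical vulnerability estimate: users the model fits well
    (low training loss) are assumed more vulnerable, so they receive
    larger perturbation budgets. The paper's actual function may differ."""
    inv = 1.0 / (per_user_loss.detach() + 1e-8)  # low loss -> large weight
    return eps_base * inv / inv.max()            # per-user magnitude in (0, eps_base]

def vat_step(model, users, items, labels, optimizer, eps_base=0.1):
    """One VAT-style training step (sketch, not the authors' code):
    1) compute per-example loss, 2) derive per-user perturbation scales,
    3) apply FGSM-like perturbations to user embeddings,
    4) update the model on the perturbed objective."""
    optimizer.zero_grad()
    # Assumes the model exposes nn.Embedding tables for users and items.
    user_emb = model.user_embedding(users)
    item_emb = model.item_embedding(items)
    user_emb.retain_grad()  # non-leaf tensor: keep its gradient

    scores = (user_emb * item_emb).sum(-1)  # dot-product recommender
    per_example_loss = F.binary_cross_entropy_with_logits(
        scores, labels, reduction="none")
    per_example_loss.mean().backward(retain_graph=True)

    # Per-user perturbation scale from how well each user is fitted
    # (here each batch example stands in for one user).
    eps = vulnerability_weights(per_example_loss, eps_base).unsqueeze(-1)

    # FGSM-style perturbation in the loss-increasing direction.
    grad = user_emb.grad
    delta = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)

    # Train only on the adversarial loss (some schemes also keep the clean loss).
    optimizer.zero_grad()
    adv_scores = ((user_emb + delta.detach()) * item_emb).sum(-1)
    adv_loss = F.binary_cross_entropy_with_logits(adv_scores, labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

The key contrast with uniform adversarial training is the `eps` tensor: instead of a single global perturbation radius, each user's embedding is perturbed in proportion to an estimated vulnerability, so well-fitted (vulnerable) users get stronger protection while others keep their recommendation quality largely intact.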