Live Session
Session 2: Bias and Fairness 1
Main Track
FairCRS: Towards User-oriented Fairness in Conversational Recommendation Systems
Qin Liu (Jinan University), Xuan Feng (Jinan University), Tianlong Gu (Jinan University) and Xiaoli Liu (Jinan University)
Abstract
Conversational Recommendation Systems (CRSs) enable recommender systems to explicitly acquire user preferences during multi-turn interactions, providing more accurate and personalized recommendations. However, data imbalance in CRSs, caused by users' inconsistent interaction histories, may lead to disparate treatment of disadvantaged user groups. In this paper, we investigate discrimination issues in CRSs from the user's perspective, which we call user-oriented fairness. To reveal the unfairness experienced by different user groups in CRSs, we conduct extensive empirical analyses. To mitigate this unfairness, we propose FairCRS, a model-agnostic user-oriented fairness framework. In particular, we develop a user-embedding reconstruction mechanism that enriches user embeddings by incorporating more interaction information, and design a user-oriented fairness strategy that reduces differences in recommendation quality among user groups while alleviating unfairness. Extensive experimental results on English and Chinese datasets show that FairCRS outperforms state-of-the-art CRSs in terms of both overall recommendation performance and user fairness.
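The abstract does not give implementation details of the fairness strategy. As a rough, hypothetical illustration only, the PyTorch sketch below shows one common way a group-level fairness term can be attached to a per-sample recommendation loss: per-group average losses are computed and the gap between the best- and worst-served groups is penalized. The function name, the grouping scheme, and the penalty form (a max-min gap weighted by `lambda_fair`) are assumptions for illustration, not the actual FairCRS objective.

```python
import torch


def fairness_regularized_loss(per_sample_loss: torch.Tensor,
                              group_ids: torch.Tensor,
                              lambda_fair: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch: base recommendation loss plus a penalty on the
    spread of average loss across user groups."""
    group_losses = torch.stack([
        per_sample_loss[group_ids == g].mean()   # mean loss of one user group
        for g in torch.unique(group_ids)
    ])
    # Penalize the gap between the worst- and best-served groups.
    fairness_penalty = group_losses.max() - group_losses.min()
    return per_sample_loss.mean() + lambda_fair * fairness_penalty


# Toy usage with dummy per-user losses and three user groups.
losses = torch.rand(8)
groups = torch.tensor([0, 0, 1, 1, 1, 2, 2, 2])
total_loss = fairness_regularized_loss(losses, groups)
```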