Live Session
Teatro Petruzzelli
Paper
17 Oct, 14:30 CEST
Session 16: Large Language Models 2
Main Track

Towards Empathetic Conversational Recommender Systems

View on ACM Digital Library

Xiaoyu Zhang (Shandong University), Ruobing Xie (Tencent), Yougang Lyu (Shandong University), Xin Xin (Shandong University), Pengjie Ren (Shandong University), Mingfei Liang (Tencent), Bo Zhang (Tencent), Zhanhui Kang (Tencent), Maarten de Rijke (University of Amsterdam) and Zhaochun Ren (Leiden University)

View Paper PDF
View Poster
Abstract

Conversational recommender systems (CRSs) are able to elicit user preferences through multi-turn dialogues. They typically incorporate external knowledge and pre-trained language models to capture the dialogue context. Most CRS approaches, trained on benchmark datasets, assume that the standard items and responses in these benchmarks are optimal. However, they overlook that users may express negative emotions toward the standard items and may not feel emotionally engaged by the standard responses. This issue leads to a tendency to replicate the logic of the recommenders in the dataset instead of aligning with user needs. To remedy this misalignment, we introduce empathy within a CRS. By empathy we refer to a system’s ability to capture and express emotions. We propose an empathetic conversational recommender (ECR) framework. ECR contains two main modules: emotion-aware item recommendation and emotion-aligned response generation. Specifically, we employ user emotions to refine user preference modeling for accurate recommendations. To generate human-like emotional responses, ECR applies retrieval-augmented prompts to fine-tune a pre-trained language model, aligning it with emotions and mitigating hallucination. To address the challenge of insufficient supervision labels, we enlarge our empathetic data using emotion labels annotated by large language models and emotional reviews collected from external resources. We propose novel evaluation metrics to capture user satisfaction in real-world CRS scenarios. Our experiments on the ReDial dataset validate the efficacy of our framework in enhancing recommendation accuracy and improving user satisfaction.
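To make the two modules in the abstract concrete, the sketch below illustrates, in simplified form, (a) weighting a user's mentioned entities by the emotions attached to them before scoring candidate items, and (b) assembling a retrieval-augmented, emotion-conditioned prompt for response generation. This is not the authors' released implementation: the function names, the emotion label set, and the saliency-weighting scheme are assumptions for illustration only.

```python
import numpy as np

# Hypothetical emotion label set; the paper's actual labels may differ.
EMOTIONS = ["like", "curious", "happy", "grateful", "negative"]

def softmax(x):
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max())
    return e / e.sum()

def emotion_aware_recommend(entity_embs, mention_emotion_probs, emotion_saliency,
                            item_embs, top_k=3):
    """Score candidate items from a user vector in which each mentioned entity
    is weighted by the emotional signal the user attached to it."""
    # (n_mentions,) scalar weight per mention, derived from its emotion distribution.
    mention_weights = softmax(mention_emotion_probs @ emotion_saliency)
    # (d,) emotion-weighted user preference vector.
    user_vec = mention_weights @ entity_embs
    scores = item_embs @ user_vec  # (n_items,)
    return np.argsort(-scores)[:top_k], scores

def build_emotion_prompt(dialogue, item_title, retrieved_reviews, target_emotion):
    """Assemble a retrieval-augmented prompt that conditions the response
    generator on retrieved reviews and a target emotion."""
    review_block = "\n".join(f"- {r}" for r in retrieved_reviews)
    return (
        f"Dialogue so far:\n{dialogue}\n\n"
        f"Recommended item: {item_title}\n"
        f"Retrieved reviews:\n{review_block}\n\n"
        f"Write an emotionally engaging recommendation with a {target_emotion} tone."
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    entity_embs = rng.normal(size=(2, 8))               # two entities mentioned by the user
    mention_emotion_probs = np.array([[0.6, 0.2, 0.1, 0.05, 0.05],   # mostly "like"
                                      [0.05, 0.05, 0.1, 0.1, 0.7]])  # mostly "negative"
    emotion_saliency = np.array([1.0, 0.6, 0.8, 0.7, -1.0])          # negative emotion downweights
    item_embs = rng.normal(size=(5, 8))
    top_items, _ = emotion_aware_recommend(entity_embs, mention_emotion_probs,
                                           emotion_saliency, item_embs)
    print("top items:", top_items)
    print(build_emotion_prompt("User: I loved Inception.", "Interstellar",
                               ["Visually stunning and emotional."], "happy"))
```

In this toy setup, a mention the user spoke about negatively contributes less to the preference vector, while the prompt builder grounds the generated reply in retrieved review text, mirroring the abstract's goals of emotion-refined preference modeling and hallucination-mitigated, emotion-aligned generation.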

Join the Conversation

Head to Slido and select the paper's assigned session to join the live discussion.

Conference Agenda

View Full Agenda →