Live Session
Teatro Petruzzelli
Paper
15 Oct, 10:15 CEST
Session 1: Large Language Models 1
Reproducibility

Large Language Models as Evaluators for Recommendation Explanations

Xiaoyu Zhang (Tsinghua University), Yishan Li (Tsinghua University), Jiayin Wang (Tsinghua University), Bowen Sun (Tsinghua University), Weizhi Ma (Tsinghua University), Peijie Sun (Hefei University of Technology, School of Computer and Information) and Min Zhang (Tsinghua University)

Abstract

The explainability of recommender systems has attracted significant attention in academia and industry. Many efforts have been made toward explainable recommendation, yet evaluating the quality of the explanations remains a challenging and unresolved issue. In recent years, leveraging LLMs as evaluators has emerged as a promising avenue in Natural Language Processing tasks (e.g., sentiment classification, information extraction), as they exhibit strong capabilities in instruction following and common-sense reasoning. However, evaluating recommendation explanation texts differs from these tasks, as the criteria are tied to human perception and are usually subjective. In this paper, we investigate whether LLMs can serve as evaluators of recommendation explanations. To answer this question, we utilize real user feedback on explanations collected in previous work and additionally collect third-party annotations and LLM evaluations. We design and apply a 3-level meta-evaluation strategy to measure the correlation between evaluator labels and the ground truth provided by users. Our experiments reveal that LLMs such as GPT-4 can provide comparable evaluations with appropriate prompts and settings. We also provide further insights into combining human labels with the LLM evaluation process and into utilizing ensembles of multiple heterogeneous LLM evaluators to enhance the accuracy and stability of evaluations. Our study verifies that utilizing LLMs as evaluators can be an accurate, reproducible, and cost-effective solution for evaluating recommendation explanation texts. Our code is available at https://anonymous.4open.science/r/LLMasAnnotator-0043.
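
The meta-evaluation described in the abstract boils down to correlating evaluator scores with user-provided ground truth, optionally after ensembling several heterogeneous LLM evaluators. Below is a minimal sketch of that idea only; it is not the authors' released code, and the evaluator names, the example scores, and the simple mean ensemble are illustrative assumptions.

from statistics import mean
from scipy.stats import pearsonr, spearmanr, kendalltau

# Ground-truth quality ratings given by real users for five explanations (illustrative).
user_scores = [5, 3, 4, 2, 1]

# Scores produced by two hypothetical LLM evaluators for the same explanations.
llm_scores = {
    "gpt-4": [5, 3, 5, 2, 2],
    "llama-3-70b": [4, 3, 4, 1, 1],
}

# Ensemble of heterogeneous evaluators: average their scores per explanation.
ensemble = [mean(scores) for scores in zip(*llm_scores.values())]

# Meta-evaluation: how closely each evaluator (and the ensemble) tracks user judgments.
for name, scores in {**llm_scores, "ensemble": ensemble}.items():
    r, _ = pearsonr(user_scores, scores)
    rho, _ = spearmanr(user_scores, scores)
    tau, _ = kendalltau(user_scores, scores)
    print(f"{name:12s} Pearson={r:.2f}  Spearman={rho:.2f}  Kendall={tau:.2f}")

Higher correlation with the user ratings indicates a more faithful evaluator; in this toy setup the mean ensemble illustrates how combining evaluators can smooth out the idiosyncrasies of any single model.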
