Doctoral Symposium
CEERS: Counterfactual Evaluations of Explanations in Recommender Systems
Mikhail Baklanov (Tel Aviv University)
Abstract
The growing emphasis on explainability in ethical AI, driven by regulations such as the GDPR, underscores the need for robust explanations of Recommender Systems (RS). Reproducible, quantifiable evaluation metrics are key to the development and research progress of such explanation methods. Traditional human-centered evaluations are subjective, costly, difficult to reproduce, and fail to capture the counterfactual nuances of AI explanations. There is therefore a pressing need for objective and scalable metrics that accurately measure the correctness of explanation methods for recommender systems. Inspired by similar approaches in computer vision, this research proposes a counterfactual approach to evaluating explanation accuracy in RS. While counterfactual evaluation methods are well established in other domains, they remain underexplored in RS. Our goal is to introduce quantifiable metrics that objectively assess the correctness of local explanations, improving the reliability and scalability of evaluation across diverse recommenders, explanation algorithms, and datasets. Ultimately, we aim to provide a comprehensive mechanism that combines model fidelity with explanation correctness, advancing transparency and trustworthiness in AI-driven recommender systems.
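To make the general idea of a counterfactual check concrete, the sketch below shows one simple form such an evaluation could take: remove the history items an explainer attributes a recommendation to, re-score the recommended item, and measure the resulting change. All names and interfaces here (`recommender.score`, the data structures) are hypothetical placeholders for illustration, not the metrics proposed in this research.

```python
from typing import Iterable, Set


def counterfactual_score_drop(
    recommender,                 # hypothetical model exposing score(history, item)
    history: Set[str],           # the user's interaction history (item ids)
    target_item: str,            # the recommended item being explained
    explanation: Iterable[str],  # history items the explainer points to
) -> float:
    """Drop in the target item's score after removing the explanation items.

    A larger drop suggests the explanation identifies items the
    recommendation actually depends on; a near-zero drop suggests the
    explanation is not counterfactually faithful.
    """
    original = recommender.score(history, target_item)
    reduced_history = history - set(explanation)
    counterfactual = recommender.score(reduced_history, target_item)
    return original - counterfactual
```

Averaging such a drop over many user-item-explanation triples, and comparing it against random item removals of the same size, is one way a reproducible, model-agnostic correctness signal could be obtained without human judges.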