Tutorial: Conducting User Experiments in Recommender Systems
View on ACM Digital Library

Speakers
Bart Knijnenburg and Edward Malthouse
Abstract
Traditionally, the field of recommender systems has evaluated the fruits of its labor using metrics of algorithmic accuracy and precision. In recent years, however, researchers have come to realize that the goal of a recommender system extends well beyond accurate predictions; its primary real-world purpose is to provide personalized help in discovering relevant content or items. This realization has caused prominent recommender systems researchers to call for a broadening of the scope of research beyond algorithms and beyond accuracy- or precision-based evaluation.

Despite these calls, surprisingly little recommender systems research focuses on preference elicitation, the presentation of recommendations, or other aspects of the human-recommender interaction. Similarly, very few researchers evaluate their recommenders in online user experiments with subjective and experience-based metrics.

While our papers, book chapters, and past tutorials on the user experience of recommender systems have been instrumental in raising awareness of these topics, we believe that a lack of more in-depth training in user-centric design and evaluation methods remains an important reason for the relative scarcity of user-centric recommender systems research. We therefore believe that a tutorial on the user experience of recommender systems is both timely and important. It will provide practical training in conducting user experiments and in the statistical analysis of their results, thereby helping researchers and practitioners improve the user experience of the recommender systems they develop. In the long term, this will trigger more scientific user-centric work that can grow our knowledge of how specific recommender system aspects influence the user experience.