Live Session
Teatro Petruzzelli
Paper
15 Oct, 10:15 CEST
Session 1: Large Language Models 1
Main Track

SeCor: Aligning Semantic and Collaborative Representations by Large Language Models for Next-Point-of-Interest Recommendations


Shirui Wang (Tongji University), Bohan Xie (Tongji University), Ling Ding (Tongji University), Xiaoying Gao (Tongji University), Jianting Chen (Tongji University) and Yang Xiang (Tongji University)

Abstract

The widespread adoption of location-based applications has created a growing demand for point-of-interest (POI) recommendation, which aims to predict a user’s next POI based on their historical check-in data and current location. However, existing methods often struggle to capture the intricate relationships within check-in data, largely because they represent temporal and spatial information poorly and underutilize rich semantic features. While large language models (LLMs) offer the semantic comprehension needed to address these limitations, they suffer from hallucination and cannot incorporate global collaborative information. To address these issues, we propose SeCor, a novel method that treats POI recommendation as a multi-modal task and integrates semantic and collaborative representations into an efficient hybrid encoding. SeCor first employs a basic collaborative filtering model to mine interaction features. These embeddings, treated as one modality, are fed into the LLM and aligned with the semantic representation, yielding efficient hybrid embeddings. To mitigate hallucination, SeCor makes recommendations from the hybrid embeddings rather than from the LLM’s output text. Extensive experiments on three public real-world datasets show that SeCor outperforms all baselines, achieving improved recommendation performance by effectively integrating collaborative and semantic information through LLMs.
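
The sketch below illustrates the hybrid-encoding idea described in the abstract; it is not the authors' implementation. Projecting collaborative-filtering embeddings into the language-model space, fusing them with semantic token embeddings, and scoring candidate POIs from the fused representation (rather than from generated text) are taken from the abstract, while all module names, dimensions, and the small Transformer encoder standing in for the LLM backbone are illustrative assumptions.

# Minimal sketch (assumptions noted above), PyTorch.
import torch
import torch.nn as nn


class HybridPOIRecommender(nn.Module):
    def __init__(self, num_users, num_pois, cf_dim=64, llm_dim=256):
        super().__init__()
        # Basic CF embeddings standing in for a pretrained collaborative model.
        self.user_cf = nn.Embedding(num_users, cf_dim)
        self.poi_cf = nn.Embedding(num_pois, cf_dim)
        # Project CF features into the LLM hidden space so they can be
        # consumed as an extra "modality" alongside semantic tokens.
        self.cf_to_llm = nn.Linear(cf_dim, llm_dim)
        # Placeholder for the LLM backbone (a real system would reuse a
        # pretrained language model here).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=llm_dim, nhead=4, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Candidate POIs are scored against the pooled hybrid embedding.
        self.poi_out = nn.Embedding(num_pois, llm_dim)

    def forward(self, user_ids, checkin_poi_ids, semantic_tokens):
        # semantic_tokens: (batch, seq_len, llm_dim) -- embeddings of the
        # textual description of the check-in history, as produced by the
        # LLM's own embedding layer.
        user_vec = self.cf_to_llm(self.user_cf(user_ids)).unsqueeze(1)
        poi_vecs = self.cf_to_llm(self.poi_cf(checkin_poi_ids))
        # Concatenate CF "tokens" with semantic tokens and encode jointly,
        # aligning the two modalities into one hybrid representation.
        hybrid_seq = torch.cat([user_vec, poi_vecs, semantic_tokens], dim=1)
        hybrid = self.backbone(hybrid_seq).mean(dim=1)  # pooled hybrid embedding
        # Recommend from embeddings, not generated text, to sidestep hallucination.
        return hybrid @ self.poi_out.weight.T  # (batch, num_pois) scores


if __name__ == "__main__":
    model = HybridPOIRecommender(num_users=100, num_pois=500)
    scores = model(
        user_ids=torch.tensor([3]),
        checkin_poi_ids=torch.tensor([[10, 42, 7]]),
        semantic_tokens=torch.randn(1, 12, 256),
    )
    print(scores.topk(5).indices)  # top-5 next-POI candidates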

