TLRec: A Transfer Learning Framework to Enhance Large Language Models for Sequential Recommendation Tasks
Jiaye Lin (Tsinghua University), Shuang Peng (Zhejiang Lab), Zhong Zhang (Tencent AI Lab) and Peilin Zhao (Tencent AI Lab)
Abstract
Recently, Large Language Models (LLMs) have garnered significant attention in recommendation systems, improving recommendation performance through in-context learning or parameter-efficient fine-tuning. However, cross-domain generalization, i.e., training a model in one scenario (the source domain) but running inference in another (the target domain), remains underexplored. In this paper, we present TLRec, a transfer learning framework aimed at enhancing LLMs for sequential recommendation tasks. TLRec operates purely on text inputs to mitigate the challenge of limited transferability across diverse domains, offering promising advantages over traditional recommendation models that depend heavily on unique identifiers (IDs) such as user IDs and item IDs. Moreover, we leverage source-domain data to further enhance LLMs' performance in the target domain. We first employ powerful closed-source LLMs (e.g., GPT-4) and chain-of-thought techniques to construct instruction-tuning data from a third-party scenario (the source domain). We then apply curriculum learning to fine-tune LLMs for effective knowledge injection and perform recommendation in the target domain. Experimental results demonstrate that TLRec achieves superior performance under zero-shot and few-shot settings. Additionally, comparative analyses involving LLMs of different types and parameter sizes validate the applicability of our method.
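To make the pipeline the abstract sketches concrete, the following is a minimal Python sketch of its two central ideas: instruction-tuning examples rendered purely as text from interaction histories (no user or item IDs), and a simple easy-to-hard ordering of those examples for curriculum learning. Everything here is an illustrative assumption, including the `Interaction`, `build_instruction_example`, and `curriculum_order` names, the prompt wording, and the history-length difficulty heuristic; the paper's actual data construction (GPT-4 with chain-of-thought) and curriculum criterion may differ.

```python
# Hypothetical sketch of text-only instruction-tuning data construction
# plus a simple curriculum ordering; not the authors' implementation.

from dataclasses import dataclass
from typing import List


@dataclass
class Interaction:
    """One user's interaction history, expressed purely as item titles."""
    history: List[str]   # titles of items the user engaged with, in order
    target: str          # title of the next item (the label)


def build_instruction_example(sample: Interaction) -> dict:
    """Render a sample as a text-only instruction/response pair, so it
    can transfer across domains that share no ID space."""
    history_text = "; ".join(sample.history)
    return {
        "instruction": (
            "Given the items a user has interacted with, in order, "
            "recommend the item they are most likely to engage with next."
        ),
        "input": f"Interaction history: {history_text}",
        "output": sample.target,
    }


def curriculum_order(samples: List[Interaction]) -> List[Interaction]:
    """Order samples easy-to-hard for curriculum learning, using history
    length as an assumed difficulty proxy (shorter = easier)."""
    return sorted(samples, key=lambda s: len(s.history))


if __name__ == "__main__":
    # Toy source-domain data; the resulting dicts would feed a standard
    # instruction-tuning loop for the target-domain recommender.
    source_domain = [
        Interaction(["The Matrix", "Inception", "Interstellar"], "Tenet"),
        Interaction(["Toy Story"], "Finding Nemo"),
    ]
    for sample in curriculum_order(source_domain):
        print(build_instruction_example(sample))
```

Because the examples carry only natural-language item descriptions, the same fine-tuned model can, in principle, be queried in a target domain whose items it has never seen by ID, which is the transferability argument the abstract makes.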