Better Generalization with Semantic IDs: A Case Study in Ranking for Recommendations
Anima Singh (Google DeepMind), Trung Vu (Google), Nikhil Mehta (Google DeepMind), Raghunandan Keshavan (Google), Maheswaran Sathiamoorthy (Google DeepMind), Yilin Zheng (Google), Lichan Hong (Google DeepMind), Lukasz Heldt (Google), Li Wei (Google), Devansh Tandon (Google), Ed Chi (Google DeepMind) and Xinyang Yi (Google DeepMind)
Abstract
Randomly-hashed item IDs are used ubiquitously in recommendation models. However, the learned representations of random IDs do not generalize across similar items, making it hard to learn unseen and long-tail items, especially when the item corpus is large, power-law distributed, and evolving dynamically. In this paper, we first show that simply replacing ID features with content-based embeddings can cause a drop in quality due to reduced memorization capability. To strike a good balance between memorization and generalization, we propose to use Semantic IDs -- a compact discrete item representation learned from frozen content embeddings using RQ-VAE that captures the hierarchy of concepts in items -- as a replacement for random item IDs. Similar to content embeddings, the compactness of Semantic IDs makes them difficult to adapt directly in recommendation models. We propose several methods of adapting Semantic IDs in industry-scale ranking models by hashing sub-pieces of the Semantic-ID sequences. In particular, we find that the SentencePiece model commonly used in LLM tokenization outperforms manually crafted pieces such as bigrams. To this end, we evaluate our approaches in a real-world ranking model for YouTube recommendations. Our experiments demonstrate that Semantic IDs can replace the direct use of video IDs, improving generalization on new and long-tail item slices without sacrificing overall model quality, while significantly reducing the model size.
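To make the Semantic ID construction concrete, here is a minimal sketch of residual quantization, the mechanism at the core of RQ-VAE: each level quantizes the residual left over from the previous level, so earlier codewords capture coarser concepts. The codebook count, codebook size, and embedding dimension below are illustrative assumptions, not the paper's configuration, and the codebooks are random rather than learned with an encoder/decoder as in RQ-VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LEVELS = 4       # length of the Semantic ID (one codeword per level); assumed
CODEBOOK_SIZE = 256  # codewords per level; assumed
DIM = 64             # content-embedding dimension; assumed

# One codebook per level; random here, learned jointly in RQ-VAE.
codebooks = rng.normal(size=(NUM_LEVELS, CODEBOOK_SIZE, DIM))

def semantic_id(content_embedding: np.ndarray) -> list[int]:
    """Quantize a frozen content embedding into a sequence of codeword indices."""
    residual = content_embedding
    ids = []
    for level in range(NUM_LEVELS):
        # Pick the codeword nearest to the current residual.
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        idx = int(np.argmin(dists))
        ids.append(idx)
        # The next level quantizes what this level failed to capture.
        residual = residual - codebooks[level][idx]
    return ids

item_embedding = rng.normal(size=DIM)  # stand-in for a frozen content embedding
print(semantic_id(item_embedding))     # e.g. a length-4 ID such as [137, 42, 201, 9]
```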
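The "sub-piece" adaptation can likewise be sketched: each piece of the Semantic-ID sequence is hashed into its own embedding table, and the resulting embeddings are combined into the item feature consumed by the ranking model. The sketch below uses fixed positional bigrams as the pieces; the table size, embedding width, and hashing scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

TABLE_SIZE = 10_000  # rows per hashed embedding table; assumed
EMB_DIM = 16         # embedding width per piece; assumed

def bigram_feature(ids: list[int], tables: np.ndarray) -> np.ndarray:
    """Embed each (position, bigram) piece of a Semantic ID and concatenate."""
    pieces = []
    for pos in range(len(ids) - 1):
        # Include the position so the same bigram at different levels
        # maps to different table rows.
        row = hash((pos, ids[pos], ids[pos + 1])) % TABLE_SIZE
        pieces.append(tables[pos, row])
    return np.concatenate(pieces)

num_bigrams = 3  # a length-4 Semantic ID yields 3 consecutive bigrams
tables = rng.normal(size=(num_bigrams, TABLE_SIZE, EMB_DIM))
print(bigram_feature([137, 42, 201, 9], tables).shape)  # (48,)
```

In the paper's best-performing variant, such manually crafted bigrams are replaced by pieces learned with a SentencePiece model over the corpus of Semantic IDs, so frequently occurring sub-sequences get dedicated pieces while rare items fall back to shorter, more shareable ones.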