The field of recommender systems is rapidly evolving with the rise of large generative models. Leveraging scaling laws and flexible content generation, these models offer enhanced performance and new capabilities, enabling more accurate and expressive recommendations. In this tutorial, we provide a comprehensive overview of recent advancements in developing large generative recommendation models, focusing on two key paradigms: (1) adapting pre-trained large generative models, such as large language models (LLMs), for recommendation tasks, and (2) promising paths toward developing large generative recommendation models from scratch, such as autoregressive models (mainly based on semantic IDs) and diffusion models. We conclude with an in-depth discussion of the challenges, open questions, and potential future directions in developing large generative recommendation models that could shape the next-generation recommender systems.
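To make the semantic-ID paradigm concrete, here is a minimal, self-contained sketch. All names, the toy catalog, and the hard-coded ID tuples are invented for illustration: in real systems the semantic IDs come from a learned quantizer (e.g., an RQ-VAE over item embeddings), and the sequence model is a Transformer decoder rather than the bigram counter used here as a stand-in.

```python
from collections import Counter, defaultdict

# Hypothetical catalog: each item is represented by a short tuple of discrete
# "semantic ID" codes. In practice these codes are produced by a learned
# quantizer; here they are hand-assigned toy values.
ITEM_TO_SID = {
    "shoe_red":   (1, 4),
    "shoe_blue":  (1, 5),
    "shirt_red":  (2, 4),
    "shirt_blue": (2, 5),
}
SID_TO_ITEM = {v: k for k, v in ITEM_TO_SID.items()}

def flatten(history):
    """Turn an item-level interaction history into one flat code sequence."""
    seq = []
    for item in history:
        seq.extend(ITEM_TO_SID[item])
    return seq

def train_bigram(histories):
    """Count code-to-code transitions (a toy stand-in for training a
    Transformer decoder on flattened semantic-ID sequences)."""
    counts = defaultdict(Counter)
    for h in histories:
        seq = flatten(h)
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def recommend(counts, history, sid_len=2):
    """Autoregressively (greedily) decode the next item's semantic-ID tuple,
    then look the tuple up in the catalog."""
    code = flatten(history)[-1]
    out = []
    for _ in range(sid_len):
        if not counts[code]:
            return None  # no observed continuation for this code
        code = counts[code].most_common(1)[0][0]
        out.append(code)
    return SID_TO_ITEM.get(tuple(out))

# Toy usage: fit on a few made-up interaction histories, then decode.
histories = [
    ["shoe_red", "shoe_blue", "shirt_blue"],
    ["shoe_red", "shoe_blue", "shirt_blue"],
    ["shirt_red", "shirt_blue"],
]
counts = train_bigram(histories)
print(recommend(counts, ["shoe_red", "shoe_blue"]))  # -> shirt_blue
```

Because semantically similar items share code prefixes, the model can generalize across items that co-occur rarely; this prefix-sharing is one of the main motivations for semantic IDs over arbitrary atomic item IDs.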
Time (AEST) | Session | Presenter |
---|---|---|
9:00 - 9:10 | Part 1: Background and Introduction | Tat-Seng Chua |
9:10 - 10:10 | Part 2: LLM-based Generative Recommendation | Leheng Sheng |
10:10 - 10:30 | Part 3.1: Introduction of Semantic IDs | Yupeng Hou |
10:30 - 11:00 | Coffee Break & QA Session | |
11:00 - 11:40 | Part 3.2: SemID-based Generative Recommendation | Yupeng Hou |
11:40 - 12:10 | Part 4: Diffusion-based Generative Recommendation | Jiancan Wu (presenting on behalf of Zhengyi Yang) |
12:10 - 12:30 | Part 5: Open Challenges and Beyond | Yupeng Hou |
We invite you to join our Discord community to connect with other researchers and practitioners interested in generative recommendation models. Share your ideas, ask questions, and stay updated on the latest developments in the field.
@inproceedings{www25-gen-rec-tutorial,
  author    = {Hou, Yupeng and Zhang, An and Sheng, Leheng and Yang, Zhengyi and Wang, Xiang and Chua, Tat-Seng and McAuley, Julian},
  title     = {Generative Recommendation Models: Progress and Directions},
  year      = {2025},
  booktitle = {Companion Proceedings of the ACM Web Conference 2025},
}