Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115697
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.contributor | Department of Management and Marketing | en_US
dc.creator | Qu, H | en_US
dc.creator | Fan, W | en_US
dc.creator | Zhao, Z | en_US
dc.creator | Li, Q | en_US
dc.date.accessioned | 2025-10-23T03:46:25Z | -
dc.date.available | 2025-10-23T03:46:25Z | -
dc.identifier.issn | 1041-4347 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/115697 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication H. Qu, W. Fan, Z. Zhao and Q. Li, "TokenRec: Learning to Tokenize ID for LLM-Based Generative Recommendations," in IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 6216-6231, Oct. 2025 is available at https://doi.org/10.1109/TKDE.2025.3599265. | en_US
dc.subject | Collaborative filtering | en_US
dc.subject | ID tokenization | en_US
dc.subject | Large language models | en_US
dc.subject | Recommender systems | en_US
dc.subject | Vector quantization | en_US
dc.title | TokenRec: learning to tokenize ID for LLM-based generative recommendations | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 6216 | en_US
dc.identifier.epage | 6231 | en_US
dc.identifier.volume | 37 | en_US
dc.identifier.issue | 10 | en_US
dc.identifier.doi | 10.1109/TKDE.2025.3599265 | en_US
dcterms.abstract | There is a growing interest in utilizing large language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and reasoning capabilities. In this scenario, tokenizing users and items becomes essential for ensuring seamless alignment of LLMs with recommendations. While studies have made progress in representing users and items using textual contents or latent representations, challenges remain in capturing high-order collaborative knowledge into discrete tokens compatible with LLMs and generalizing to unseen users/items. To address these challenges, we propose a novel framework called TokenRec, which introduces an effective ID tokenization strategy and an efficient retrieval paradigm for LLM-based recommendations. Our tokenization strategy involves quantizing the masked user/item representations learned from collaborative filtering into discrete tokens, thus achieving smooth incorporation of high-order collaborative knowledge and generalizable tokenization of users and items for LLM-based RecSys. Meanwhile, our generative retrieval paradigm is designed to efficiently recommend top-K items for users, eliminating the need for the time-consuming auto-regressive decoding and beam search processes used by LLMs, thus significantly reducing inference time. Comprehensive experiments validate the effectiveness of the proposed methods, demonstrating that TokenRec outperforms competitive benchmarks, including both traditional recommender systems and emerging LLM-based recommender systems. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on knowledge and data engineering, Oct. 2025, v. 37, no. 10, p. 6216-6231 | en_US
dcterms.isPartOf | IEEE transactions on knowledge and data engineering | en_US
dcterms.issued | 2025-10 | -
dc.identifier.scopus | 2-s2.0-105013749945 | -
dc.identifier.eissn | 1558-2191 | en_US
dc.description.validate | 202510 bcc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.SubFormID | G000259/2025-09 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | The research described in this paper was supported in part by the National Natural Science Foundation of China under Grant 62102335; in part by General Research Funds from the Hong Kong Research Grants Council under Grants PolyU 15207322, 15200023, 15206024, and 15224524; and in part by internal research funds of The Hong Kong Polytechnic University under Grants P0042693, P0048625, and P0051361. This work was also supported in part by computational resources provided by The Centre for Large AI Models (CLAIM) of The Hong Kong Polytechnic University. Recommended for acceptance by X. Yu. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
dc.relation.rdata | https://github.com/Quhaoh233/TokenRec | en_US
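The abstract above describes quantizing user/item representations learned from collaborative filtering into discrete tokens. As a toy illustration of the general vector-quantization idea only (not the paper's actual tokenizer; the codebook and embedding values below are made up), a continuous embedding can be mapped to a discrete token ID by nearest-neighbor lookup in a codebook:

```python
# Toy sketch of vector quantization: map a continuous user/item embedding
# to a discrete token ID via nearest-neighbor search over a codebook.
# All values are illustrative; TokenRec's actual method is more involved.

def quantize(embedding, codebook):
    """Return the index (token ID) of the codebook vector nearest to embedding."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(codebook[i], embedding))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
user_embedding = [0.9, 0.1]  # e.g., learned by a collaborative-filtering model
print(quantize(user_embedding, codebook))  # → 1 (nearest to [1.0, 0.0])
```

The resulting integer indices are what make embeddings compatible with an LLM's discrete vocabulary; the implementation released at the dc.relation.rdata URL above is the authoritative reference.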
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Qu_TokenRec_Learning_Tokenize.pdf | Pre-Published version | 6.48 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.