Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115927
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.contributor | Department of Data Science and Artificial Intelligence | -
dc.creator | Wu, J | -
dc.creator | Liu, Q | -
dc.creator | Hu, H | -
dc.creator | Fan, W | -
dc.creator | Liu, S | -
dc.creator | Li, Q | -
dc.creator | Wu, XM | -
dc.creator | Tang, K | -
dc.date.accessioned | 2025-11-18T06:48:02Z | -
dc.date.available | 2025-11-18T06:48:02Z | -
dc.identifier.isbn | 979-8-4007-1331-6 | -
dc.identifier.uri | http://hdl.handle.net/10397/115927 | -
dc.description | WWW '25: The ACM Web Conference 2025, Sydney NSW, Australia, 28 April 2025 - 2 May 2025 | en_US
dc.language.iso | en | en_US
dc.publisher | The Association for Computing Machinery | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode). WWW Companion ’25, April 28-May 2, 2025, Sydney, NSW, Australia | en_US
dc.rights | © 2025 Copyright held by the owner/author(s). | en_US
dc.rights | The following publication Wu, J., Liu, Q., Hu, H., Fan, W., Liu, S., Li, Q., Wu, X.-M., & Tang, K. (2025). Leveraging ChatGPT to Empower Training-free Dataset Condensation for Content-based Recommendation. Companion Proceedings of the ACM on Web Conference 2025, Sydney NSW, Australia (pp. 1402-1406) is available at https://doi.org/10.1145/3701716.3715555. | en_US
dc.subject | Dataset condensation | en_US
dc.subject | Large language model | en_US
dc.subject | Recommender system | en_US
dc.title | Leveraging ChatGPT to empower training-free dataset condensation for content-based recommendation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1402 | -
dc.identifier.epage | 1406 | -
dc.identifier.doi | 10.1145/3701716.3715555 | -
dcterms.abstract | Modern Content-Based Recommendation (CBR) techniques utilize item content to deliver personalized services, effectively mitigating information overload. However, these methods often require resource-intensive training on large datasets. To address this issue, we explore dataset condensation for textual CBR in this paper. Dataset condensation aims to synthesize a compact yet informative dataset, enabling models to achieve performance comparable to those trained on full datasets. Applying existing approaches to CBR presents two key challenges: (1) the difficulty of synthesizing discrete texts and (2) the inability to preserve user-item preference information. To overcome these limitations, we propose TF-DCon, an efficient dataset condensation method for CBR. TF-DCon employs a prompt-evolution module to guide ChatGPT in condensing discrete texts and integrates a clustering-based module to condense user preferences effectively. Extensive experiments conducted on three real-world datasets demonstrate TF-DCon's effectiveness. Notably, we are able to approximate up to 97% of the original performance while reducing the dataset size by 95% (i.e., on the MIND dataset). We have released our code and data for other researchers to reproduce our results. | -
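The abstract's clustering-based condensation of user preferences can be illustrated with a minimal sketch. This is not the authors' implementation (the paper and released code define TF-DCon); it only assumes that each user is summarized by a preference vector, that similar users are grouped by k-means, and that each cluster centroid stands in as one condensed user profile. All names and the toy data below are hypothetical:

```python
def kmeans(points, k, iters=20):
    """Plain k-means; returns (centroids, cluster assignment per point)."""
    centroids = [points[i] for i in range(k)]  # simple first-k initialization
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return centroids, assign

# Toy user-preference vectors (e.g., an averaged item embedding per user).
users = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
condensed, groups = kmeans(users, k=2)
# Each centroid stands in for its whole cluster as one "condensed" user profile,
# shrinking the number of user records from len(users) to k.
print(sorted(groups))  # prints [0, 0, 1, 1]: two users collapse into each profile
```

The condensation ratio here is controlled by k: fewer clusters means a smaller synthesized dataset, at the cost of coarser preference information.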
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In WWW Companion ’25: Companion Proceedings of the ACM Web Conference 2025, p. 1402-1406. New York, NY: The Association for Computing Machinery, 2025 | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105009231367 | -
dc.relation.ispartofbook | WWW Companion ’25: Companion Proceedings of the ACM Web Conference 2025 | -
dc.relation.conference | International World Wide Web Conference [WWW] | -
dc.publisher.place | New York, NY | en_US
dc.description.validate | 202511 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Jiahao Wu, Shengcai Liu, and Ke Tang would like to acknowledge the support provided by the National Key Research and Development Program of China under Grant No. 2022YFA1004102, and in part by the Guangdong Major Project of Basic and Applied Basic Research. Wenqi Fan and Qing Li would like to acknowledge the support provided by the Hong Kong Research Grants Council through the Research Impact Fund (project no. R1015-23). | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
3701716.3715555.pdf | | 1.7 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.