Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115927
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.contributor | Department of Data Science and Artificial Intelligence | - |
| dc.creator | Wu, J | - |
| dc.creator | Liu, Q | - |
| dc.creator | Hu, H | - |
| dc.creator | Fan, W | - |
| dc.creator | Liu, S | - |
| dc.creator | Li, Q | - |
| dc.creator | Wu, XM | - |
| dc.creator | Tang, K | - |
| dc.date.accessioned | 2025-11-18T06:48:02Z | - |
| dc.date.available | 2025-11-18T06:48:02Z | - |
| dc.identifier.isbn | 979-8-4007-1331-6 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115927 | - |
| dc.description | WWW '25: The ACM Web Conference 2025, Sydney NSW Australia, 28 April 2025 - 2 May 2025 | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | The Association for Computing Machinery | en_US |
| dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode). WWW Companion ’25, April 28-May 2, 2025, Sydney, NSW, Australia | en_US |
| dc.rights | ©2025 Copyright held by the owner/author(s). | en_US |
| dc.rights | The following publication Wu, J., Liu, Q., Hu, H., Fan, W., Liu, S., Li, Q., Wu, X.-M., & Tang, K. (2025). Leveraging ChatGPT to Empower Training-free Dataset Condensation for Content-based Recommendation Companion Proceedings of the ACM on Web Conference 2025, Sydney NSW, Australia (pp. 1402-1406) is available at https://doi.org/10.1145/3701716.3715555. | en_US |
| dc.subject | Dataset condensation | en_US |
| dc.subject | Large language model | en_US |
| dc.subject | Recommender system | en_US |
| dc.title | Leveraging ChatGPT to empower training-free dataset condensation for content-based recommendation | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.spage | 1402 | - |
| dc.identifier.epage | 1406 | - |
| dc.identifier.doi | 10.1145/3701716.3715555 | - |
| dcterms.abstract | Modern Content-Based Recommendation (CBR) techniques utilize item content to deliver personalized services, effectively mitigating information overload. However, these methods often require resource-intensive training on large datasets. To address this issue, we explore dataset condensation for textual CBR in this paper. Dataset condensation aims to synthesize a compact yet informative dataset, enabling models to achieve performance comparable to those trained on full datasets. Applying existing approaches to CBR presents two key challenges: (1) the difficulty of synthesizing discrete texts and (2) the inability to preserve user-item preference information. To overcome these limitations, we propose TF-DCon, an efficient dataset condensation method for CBR. TF-DCon employs a prompt-evolution module to guide ChatGPT in condensing discrete texts and integrates a clustering-based module to condense user preferences effectively. Extensive experiments conducted on three real-world datasets demonstrate TF-DCon's effectiveness. Notably, we are able to approximate up to 97% of the original performance while reducing the dataset size by 95% (e.g., on the MIND dataset). We have released our code and data for other researchers to reproduce our results. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | In WWW Companion ’25: Companion Proceedings of the ACM on Web Conference 2025, p. 1402-1406. New York, NY: The Association for Computing Machinery, 2025 | - |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-105009231367 | - |
| dc.relation.ispartofbook | WWW Companion ’25: Companion Proceedings of the ACM on Web Conference 2025 | - |
| dc.relation.conference | International World Wide Web Conference [WWW] | - |
| dc.publisher.place | New York, NY | en_US |
| dc.description.validate | 202511 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Jiahao Wu, Shengcai Liu, and Ke Tang would like to acknowledge the support provided by the National Key Research and Development Program of China under Grant No. 2022YFA1004102, and in part by Guangdong Major Project of Basic and Applied Basic Research. Wenqi Fan and Qing Li would like to acknowledge the support provided by Hong Kong Research Grants Council through Research Impact Fund (project no. R1015-23). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Conference Paper | |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 3701716.3715555.pdf | | 1.7 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.