Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114289
DC Field: Value (Language)
dc.contributor: Department of Data Science and Artificial Intelligence (en_US)
dc.creator: Yang, P (en_US)
dc.creator: Wang, Q (en_US)
dc.creator: Huang, Z (en_US)
dc.creator: Liu, T (en_US)
dc.creator: Zhang, C (en_US)
dc.creator: Han, B (en_US)
dc.date.accessioned: 2025-07-22T01:34:18Z
dc.date.available: 2025-07-22T01:34:18Z
dc.identifier.uri: http://hdl.handle.net/10397/114289
dc.language.iso: en (en_US)
dc.rights: Copyright 2025 by the author(s). (en_US)
dc.rights: Posted with permission of the author. (en_US)
dc.rights: The following publication Yang, P., Wang, Q., Huang, Z., Liu, T., Zhang, C., & Han, B. (2025). Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning. In Forty-second International Conference on Machine Learning 2025 is available at https://icml.cc/virtual/2025/poster/44163. (en_US)
dc.title: Exploring criteria of loss reweighting to enhance LLM unlearning (en_US)
dc.type: Other Conference Contributions (en_US)
dcterms.abstract: Loss reweighting has shown significant benefits for machine unlearning with large language models (LLMs). However, its exact functionality remains unclear and the optimal strategy is still an open question, which impedes the understanding and improvement of existing methodologies. In this paper, we identify two distinct goals of loss reweighting, namely Saturation and Importance: the former indicates that insufficiently optimized data should be emphasized, while the latter stresses critical data that are most influential for loss minimization. To study their usefulness, we design specific reweighting strategies for each goal and evaluate their respective effects on unlearning. We conduct extensive empirical analyses on well-established benchmarks and summarize some important observations as follows: (i) Saturation enhances efficacy more than importance-based reweighting, and their combination can yield additional improvements. (ii) Saturation typically allocates lower weights to data with lower likelihoods, whereas importance-based reweighting does the opposite. (iii) The efficacy of unlearning is also largely influenced by the smoothness and granularity of the weight distributions. Based on these findings, we propose SatImp, a simple reweighting method that combines the advantages of both saturation and importance. Empirical results on extensive datasets validate the efficacy of our method, potentially bridging existing research gaps and indicating directions for future research. Our code is available at https://github.com/tmlr-group/SatImp. (en_US)
dcterms.accessRights: open access (en_US)
dcterms.bibliographicCitation: In ICML 2025: Forty-Second International Conference on Machine Learning, Vancouver Convention Center, July 13th - 19th, 2025 [Poster], https://icml.cc/virtual/2025/poster/44163 (en_US)
dcterms.issued: 2025
dc.relation.conference: International Conference on Machine Learning [ICML] (en_US)
dc.description.validate: 202507 bcch (en_US)
dc.description.oa: Other Version (en_US)
dc.identifier.FolderNumber: a3947
dc.identifier.SubFormID: 51801
dc.description.fundingSource: RGC (en_US)
dc.description.fundingSource: Others (en_US)
dc.description.fundingText: NSFC General Program No. 62376235 (en_US)
dc.description.fundingText: Guangdong Basic and Applied Basic Research Foundation Nos. 2022A1515011652 and 2024A1515012399 (en_US)
dc.description.fundingText: HKBU Faculty Niche Research Areas No. RC-FNRA-IG/22-23/SCI/04 (en_US)
dc.description.fundingText: HKBU CSD Departmental Incentive Scheme (en_US)
dc.description.fundingText: Australian Research Council projects FT220100318, DP220102121, LP220100527, LP220200949, and IC190100031 (en_US)
dc.description.oaCategory: Copyright retained by author (en_US)
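
To illustrate the two reweighting criteria named in the abstract above, the following is a minimal, hypothetical sketch, not the authors' implementation (that is released at https://github.com/tmlr-group/SatImp). It assumes per-sample negative log-likelihoods (NLL) on the forget set as the unlearning loss and softmax-normalized weights whose temperature controls the smoothness of the weight distribution; the names, alpha, and temperature parameters are illustrative choices.

```python
# Hypothetical sketch of saturation- and importance-based loss reweighting for
# LLM unlearning. Not the SatImp reference implementation; assumptions are noted
# in the comments below.

import torch


def saturation_weights(nll: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Emphasize insufficiently optimized samples.

    A forget-set sample with low NLL (high likelihood) is still well memorized,
    i.e. not yet unlearned, so it receives a larger weight via softmax(-nll / T).
    """
    return torch.softmax(-nll / temperature, dim=0)


def importance_weights(nll: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Emphasize samples assumed most influential for loss minimization.

    As a crude proxy for influence, weight samples by their current loss
    magnitude: softmax(nll / T) up-weights high-NLL (low-likelihood) samples.
    """
    return torch.softmax(nll / temperature, dim=0)


def combined_weights(nll: torch.Tensor,
                     alpha: float = 0.5,
                     temperature: float = 1.0) -> torch.Tensor:
    """Illustrative combination of the two criteria (a stand-in, not SatImp itself)."""
    w = alpha * saturation_weights(nll, temperature) \
        + (1.0 - alpha) * importance_weights(nll, temperature)
    return w / w.sum()


if __name__ == "__main__":
    # Per-sample NLL on a small forget-set batch (toy numbers).
    nll = torch.tensor([0.2, 1.5, 3.0, 0.8])
    weights = combined_weights(nll, alpha=0.5, temperature=1.0)
    weighted_loss = (weights * nll).sum()  # plug into the chosen unlearning objective
    print(weights, weighted_loss)
```

Consistent with observation (ii) in the abstract, the saturation weights here are lower for low-likelihood (high-NLL) samples, while the importance weights behave in the opposite way; alpha and the softmax temperature are illustrative knobs for the combination and the smoothness of the weight distribution discussed in the paper.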
Appears in Collections: Conference Paper
Files in This Item:
File: ICML25.916_Exploring_Criteria_of_Loss.pdf (3.09 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Other Version