Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114289
Title: Exploring criteria of loss reweighting to enhance LLM unlearning
Authors: Yang, P
Wang, Q
Huang, Z
Liu, T
Zhang, C 
Han, B
Issue Date: 2025
Source: In ICML 2025: Forty-Second International Conference on Machine Learning, Vancouver Convention Center, July 13th-19th, 2025 [Poster], https://icml.cc/virtual/2025/poster/44163
Abstract: Loss reweighting has shown significant benefits for machine unlearning with large language models (LLMs). However, the exact functionalities of these reweighting schemes remain unclear and the optimal strategy is an open question, impeding the understanding and improvement of existing methodologies. In this paper, we identify two distinct goals of loss reweighting, namely, Saturation and Importance: the former indicates that insufficiently optimized data should be emphasized, while the latter stresses critical data that are most influential for loss minimization. To study their usefulness, we design specific reweighting strategies for each goal and evaluate their respective effects on unlearning. We conduct extensive empirical analyses on well-established benchmarks and summarize some important observations as follows: (i) Saturation enhances efficacy more than importance-based reweighting, and their combination can yield additional improvements. (ii) Saturation typically allocates lower weights to data with lower likelihoods, whereas importance-based reweighting does the opposite. (iii) The efficacy of unlearning is also largely influenced by the smoothness and granularity of the weight distributions. Based on these findings, we propose SatImp, a simple reweighting method that combines the advantages of both saturation and importance. Empirical results on extensive datasets validate the efficacy of our method, potentially bridging existing research gaps and indicating directions for future research. Our code is available at https://github.com/tmlr-group/SatImp.
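To make the two criteria concrete, here is a minimal PyTorch-style sketch of how saturation- and importance-based token weights might be combined into a gradient-ascent unlearning loss. It is only an illustrative reading of observation (ii) above: the function names, the product-of-powers weighting form, and the hyperparameters alpha and beta are assumptions made for exposition, not the paper's actual SatImp formulation (see the linked repository for that).

    import torch

    def satimp_weights(token_logprobs: torch.Tensor,
                       alpha: float = 1.0,
                       beta: float = 1.0) -> torch.Tensor:
        # Per observation (ii): saturation gives lower weight to tokens with
        # lower likelihood (confidently predicted tokens are still
        # under-unlearned), while importance does the opposite. The product
        # form and the alpha/beta exponents below are assumptions.
        p = token_logprobs.exp().clamp(1e-8, 1.0)   # token likelihoods in (0, 1]
        sat = p ** alpha                            # grows with likelihood (saturation)
        imp = (1.0 - p) ** beta                     # shrinks with likelihood (importance)
        w = sat * imp
        return w / w.sum().clamp_min(1e-8)          # normalize to sum to 1

    def weighted_unlearning_loss(token_logprobs: torch.Tensor) -> torch.Tensor:
        # Gradient-ascent-style unlearning: minimizing this weighted
        # log-likelihood pushes the probability of forget-set tokens down.
        w = satimp_weights(token_logprobs).detach() # weights act as fixed coefficients
        return (w * token_logprobs).sum()

    # Toy usage on dummy per-token probabilities of a forget-set sequence.
    probs = torch.tensor([0.9, 0.5, 0.1, 0.7], requires_grad=True)
    loss = weighted_unlearning_loss(torch.log(probs))
    loss.backward()                                 # in practice, gradients flow to the model

One reason a combined form like p**alpha * (1 - p)**beta is plausible: it peaks at intermediate likelihoods, so neither already-forgotten tokens (low p) nor a few dominant tokens monopolize the update, which is consistent with observation (iii) that the smoothness of the weight distribution matters for unlearning efficacy.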
Rights: Copyright 2025 by the author(s).
Posted with permission of the author.
The following publication Yang, P., Wang, Q., Huang, Z., Liu, T., Zhang, C., & Han, B. (2025). Exploring Criteria of Loss Reweighting to Enhance LLM Unlearning. In Forty-second International Conference on Machine Learning 2025 is available at https://icml.cc/virtual/2025/poster/44163.
Appears in Collections: Conference Paper

Files in This Item:
File: ICML25.916_Exploring_Criteria_of_Loss.pdf
Size: 3.09 MB
Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Other Version


