Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110087
Title: Machine unlearning of pre-trained large language models
Authors: Yao, J
Chien, E
Du, M 
Niu, X
Wang, T
Cheng, Z
Yue, X
Issue Date: 2024
Source: In LW Ku, A Martins, V Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 8403-8419. Kerrville, TX, USA: Association for Computational Linguistics (ACL), 2024
Abstract: This study investigates the concept of the ‘right to be forgotten’ within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over 10⁵ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development.
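Note: The abstract's key technical finding, that combining gradient ascent on the data to be forgotten with gradient descent on in-distribution retain data improves hyperparameter robustness, corresponds to a simple combined update rule. The sketch below is a minimal illustration assuming a Hugging Face-style causal-LM interface; the function name, the retain_weight coefficient, and the batch structure are illustrative assumptions, not the authors' implementation.

    import torch
    # from transformers import AutoModelForCausalLM  # assumed HF-style model interface
    from torch.optim import AdamW

    def unlearning_step(model, forget_batch, retain_batch, optimizer, retain_weight=1.0):
        # One combined update: gradient ascent on the forget set plus
        # gradient descent on in-distribution retain data (hypothetical sketch).
        optimizer.zero_grad()
        # Causal-LM loss on the forget set; negating it turns the usual
        # descent step into ascent, pushing the model away from this data.
        forget_loss = model(**forget_batch, labels=forget_batch["input_ids"]).loss
        # Ordinary descent term on in-distribution data, which the paper
        # reports stabilizes unlearning across hyperparameter choices.
        retain_loss = model(**retain_batch, labels=retain_batch["input_ids"]).loss
        (-forget_loss + retain_weight * retain_loss).backward()
        optimizer.step()
        return forget_loss.item(), retain_loss.item()

    # Hypothetical usage (loaders and learning rate are assumptions):
    # model = AutoModelForCausalLM.from_pretrained("gpt2")
    # optimizer = AdamW(model.parameters(), lr=1e-5)
    # for fb, rb in zip(forget_loader, retain_loader):
    #     unlearning_step(model, fb, rb, optimizer)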
Publisher: Association for Computational Linguistics
DOI: 10.18653/v1/2024.acl-long.457
Description: The 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, August 11-16, 2024
Rights: ©2024 Association for Computational Linguistics
Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
The following publication Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, and Xiang Yue. 2024. Machine Unlearning of Pre-trained Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8403–8419, Bangkok, Thailand. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2024.acl-long.457.
Appears in Collections: Conference Paper

Files in This Item:
File: 2024.acl-long.457.pdf · Size: 525.65 kB · Format: Adobe PDF
Open Access Information
Status: Open access
File Version: Version of Record

Page views: 21 (as of Dec 22, 2024)
Downloads: 9 (as of Dec 22, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.