Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/110087
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electrical and Electronic Engineering | en_US |
dc.creator | Yao, J | en_US |
dc.creator | Chien, E | en_US |
dc.creator | Du, M | en_US |
dc.creator | Niu, X | en_US |
dc.creator | Wang, T | en_US |
dc.creator | Cheng, Z | en_US |
dc.creator | Yue, X | en_US |
dc.date.accessioned | 2024-11-25T08:15:55Z | - |
dc.date.available | 2024-11-25T08:15:55Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/110087 | - |
dc.description | The 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, August 11-16, 2024 | en_US |
dc.language.iso | en | en_US |
dc.publisher | Association for Computational Linguistics | en_US |
dc.rights | ©2024 Association for Computational Linguistics | en_US |
dc.rights | Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, and Xiang Yue. 2024. Machine Unlearning of Pre-trained Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8403–8419, Bangkok, Thailand. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2024.acl-long.457. | en_US |
dc.title | Machine unlearning of pre-trained large language models | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 8403 | en_US |
dc.identifier.epage | 8419 | en_US |
dc.identifier.doi | 10.18653/v1/2024.acl-long.457 | en_US |
dcterms.abstract | This study investigates the concept of the ‘right to be forgotten’ within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over 10^5 times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | In LW Ku, A Martins, V Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 8403-8419. Kerrville, TX, USA: Association for Computational Linguistics (ACL), 2024 | en_US |
dcterms.issued | 2024 | - |
dc.relation.conference | Annual Meeting of the Association for Computational Linguistics [ACL] | en_US |
dc.description.validate | 202411 bcch | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Others | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | NSF | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
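The abstract above notes that combining gradient ascent on the forget set with gradient descent on in-distribution data improves hyperparameter robustness. The sketch below is a minimal, hypothetical illustration of such a combined update, not the authors' exact procedure: the model name, learning rate, ascent weight, and example text are placeholder assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch (assumed configuration, not the paper's implementation):
# one unlearning update that ascends on a forget-set batch and descends on
# an in-distribution retain batch.
model = AutoModelForCausalLM.from_pretrained("gpt2")        # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # assumed learning rate
ascent_weight = 1.0                                         # assumed weighting

def unlearning_step(forget_batch, retain_batch):
    """Gradient ascent on forget data, gradient descent on retain data."""
    optimizer.zero_grad()

    # Gradient ascent: negate the causal-LM loss on the data to be forgotten.
    forget_loss = model(**forget_batch, labels=forget_batch["input_ids"]).loss
    # Gradient descent on in-distribution data helps preserve general ability.
    retain_loss = model(**retain_batch, labels=retain_batch["input_ids"]).loss

    total = -ascent_weight * forget_loss + retain_loss
    total.backward()
    optimizer.step()
    return total.item()

# Toy usage with single-sentence batches (illustrative text only).
forget_batch = tokenizer("A passage requested for removal.", return_tensors="pt")
retain_batch = tokenizer("An in-distribution passage to keep.", return_tensors="pt")
unlearning_step(forget_batch, retain_batch)
```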
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
---|---|---|---
2024.acl-long.457.pdf | | 525.65 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.