Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110087
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Yao, J | en_US
dc.creator | Chien, E | en_US
dc.creator | Du, M | en_US
dc.creator | Niu, X | en_US
dc.creator | Wang, T | en_US
dc.creator | Cheng, Z | en_US
dc.creator | Yue, X | en_US
dc.date.accessioned | 2024-11-25T08:15:55Z | -
dc.date.available | 2024-11-25T08:15:55Z | -
dc.identifier.uri | http://hdl.handle.net/10397/110087 | -
dc.description | The 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, August 11-16, 2024 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics | en_US
dc.rights | ©2024 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication: Jin Yao, Eli Chien, Minxin Du, Xinyao Niu, Tianhao Wang, Zezhou Cheng, and Xiang Yue. 2024. Machine Unlearning of Pre-trained Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8403–8419, Bangkok, Thailand. Association for Computational Linguistics, is available at https://doi.org/10.18653/v1/2024.acl-long.457. | en_US
dc.title | Machine unlearning of pre-trained large language models | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 8403 | en_US
dc.identifier.epage | 8419 | en_US
dc.identifier.doi | 10.18653/v1/2024.acl-long.457 | en_US
dcterms.abstract | This study investigates the concept of the ‘right to be forgotten’ within the context of large language models (LLMs). We explore machine unlearning as a pivotal solution, with a focus on pre-trained models, a notably under-researched area. Our research delineates a comprehensive framework for machine unlearning in pre-trained LLMs, encompassing a critical analysis of seven diverse unlearning methods. Through rigorous evaluation using curated datasets from arXiv, books, and GitHub, we establish a robust benchmark for unlearning performance, demonstrating that these methods are over 10⁵ times more computationally efficient than retraining. Our results show that integrating gradient ascent with gradient descent on in-distribution data improves hyperparameter robustness. We also provide detailed guidelines for efficient hyperparameter tuning in the unlearning process. Our findings advance the discourse on ethical AI practices, offering substantive insights into the mechanics of machine unlearning for pre-trained LLMs and underscoring the potential for responsible AI development. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In LW Ku, A Martins, V Srikumar (Eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 8403-8419. Kerrville, TX, USA: Association for Computational Linguistics (ACL), 2024 | en_US
dcterms.issued | 2024 | -
dc.relation.conference | Annual Meeting of the Association for Computational Linguistics [ACL] | en_US
dc.description.validate | 202411 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Others | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | NSF | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
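The abstract reports that coupling gradient ascent on the forget set with gradient descent on in-distribution (retain) data improves hyperparameter robustness. A minimal toy sketch of that combined update on a one-dimensional least-squares model, assuming an illustrative ascent weight `alpha` (the data, model, and weighting here are assumptions for illustration, not the paper's implementation):

```python
# Toy sketch: gradient ascent on "forget" data combined with gradient
# descent on in-distribution "retain" data (illustrative only).

def grad(w, x, y):
    # d/dw of the squared error (w*x - y)^2
    return 2 * (w * x - y) * x

def loss(w, data):
    # mean squared error of the 1-D linear model w*x over a dataset
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def unlearning_step(w, forget, retain, lr=0.01, alpha=0.1):
    """Descend the retain loss while ascending (with weight alpha) the forget loss."""
    g_forget = sum(grad(w, x, y) for x, y in forget) / len(forget)
    g_retain = sum(grad(w, x, y) for x, y in retain) / len(retain)
    return w - lr * (g_retain - alpha * g_forget)

retain = [(1.0, 2.0), (2.0, 4.0)]    # in-distribution data: y = 2x
forget = [(1.0, -1.0), (2.0, -2.0)]  # data to unlearn: y = -x
w0 = 0.5
w = w0
for _ in range(200):
    w = unlearning_step(w, forget, retain)
# After these updates the model fits the retain data better and the
# forget data worse than at the starting point w0.
```

The ascent term pushes the parameters away from the forget set while the descent term on in-distribution data anchors them near the retained distribution, which is the stabilizing interplay the abstract describes.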
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
2024.acl-long.457.pdf | | 525.65 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 21 (as of Dec 22, 2024)
Downloads: 9 (as of Dec 22, 2024)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.