Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116617
DC Field: Value (Language)
dc.contributor: Department of Computing (en_US)
dc.creator: Gu, Z (en_US)
dc.creator: Fan, Q (en_US)
dc.creator: Sun, L (en_US)
dc.creator: Liu, Y (en_US)
dc.creator: Ye, X (en_US)
dc.date.accessioned: 2026-01-06T07:52:12Z
dc.date.available: 2026-01-06T07:52:12Z
dc.identifier.issn: 2154-817X (en_US)
dc.identifier.uri: http://hdl.handle.net/10397/116617
dc.language.iso: en (en_US)
dc.publisher: Association for Computing Machinery (en_US)
dc.rights: © 2025 Copyright held by the owner/author(s). (en_US)
dc.rights: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). (en_US)
dc.rights: The following publication Gu, Z., Fan, Q., Sun, L., Liu, Y., & Ye, X. (2025, August). VFLAIR-LLM: A Comprehensive Framework and Benchmark for Split Learning of LLMs. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2, 5470-5481 is available at https://doi.org/10.1145/3711896.3737411. (en_US)
dc.subject: Data privacy (en_US)
dc.subject: Federated learning (en_US)
dc.subject: Large language models (en_US)
dc.subject: Split learning (en_US)
dc.title: VFLAIR-LLM: a comprehensive framework and benchmark for split learning of LLMs (en_US)
dc.type: Conference Paper (en_US)
dc.identifier.spage: 5470 (en_US)
dc.identifier.epage: 5481 (en_US)
dc.identifier.volume: 2 (en_US)
dc.identifier.doi: 10.1145/3711896.3737411 (en_US)
dcterms.abstract: With the advancement of Large Language Models (LLMs), LLM applications have expanded into a growing number of fields. However, users with data-privacy concerns cannot freely use LLM APIs, while private deployments incur significant computational demands. Achieving secure LLM adaptation under constrained local resources is therefore a substantial challenge. Collaborative learning methods such as Split Learning (SL) offer a resource-efficient and privacy-preserving way to adapt LLMs to private domains. In this study, we introduce VFLAIR-LLM (available at https://github.com/FLAIR-THU/VFLAIR-LLM), an extensible and lightweight split learning framework for LLMs that enables privacy-preserving LLM inference and fine-tuning in resource-constrained environments. Our library provides two LLM partition settings and supports three task types and 18 datasets. In addition, we provide standard modules for implementing and evaluating attacks and defenses. We benchmark 5 attacks and 9 defenses under various Split Learning for LLMs (SL-LLM) settings, offering concrete insights and recommendations on the choice of model partition configurations, defense strategies, and relevant hyperparameters for real-world applications. (en_US)
dcterms.accessRights: open access (en_US)
dcterms.bibliographicCitation: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Aug. 2025, v. 2, p. 5470-5481 (en_US)
dcterms.isPartOf: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (en_US)
dcterms.issued: 2025-08
dc.identifier.scopus: 2-s2.0-105014453145
dc.relation.conference: ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 [KDD '25] (en_US)
dc.description.validate: 202601 bchy (en_US)
dc.description.oa: Version of Record (en_US)
dc.identifier.FolderNumber: OA_Others
dc.description.fundingSource: Others (en_US)
dc.description.fundingText: This work was supported by the National Key R&D Program of China under Grant No. 2022ZD0160504, and Wuxi Innovation Center of Tsinghua AIR, under Grant A20240103. (en_US)
dc.description.pubStatus: Published (en_US)
dc.description.oaCategory: CC (en_US)
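The abstract describes split learning as partitioning an LLM so that some layers run on the client (keeping private data local) while the rest run on a server. The sketch below illustrates that idea only; it is a minimal, hypothetical example and does not reflect the actual VFLAIR-LLM API or its two partition settings (all class and function names here are invented for illustration).

```python
# Minimal split-learning sketch (hypothetical; not the VFLAIR-LLM API).
# The client keeps the bottom layers and sends only intermediate
# activations ("smashed data") to the server, which runs the rest.

def make_layer(scale):
    # Stand-in for a transformer block: a simple elementwise transform.
    return lambda xs: [scale * x + 1.0 for x in xs]

class Client:
    def __init__(self, layers):
        self.layers = layers  # bottom layers stay local; raw data never leaves

    def forward(self, tokens):
        h = tokens
        for layer in self.layers:
            h = layer(h)
        return h  # only this intermediate representation is transmitted

class Server:
    def __init__(self, layers):
        self.layers = layers  # the bulk of the model runs server-side

    def forward(self, h):
        for layer in self.layers:
            h = layer(h)
        return h

# Partition a 4-"block" model: blocks 0-1 on the client, 2-3 on the server.
blocks = [make_layer(s) for s in (1.0, 2.0, 0.5, 1.0)]
client, server = Client(blocks[:2]), Server(blocks[2:])

activations = client.forward([1.0, 2.0])  # raw inputs stay on the client
logits = server.forward(activations)      # server completes the forward pass
```

The benchmarked attacks in the paper target exactly the transmitted `activations`, which is why the choice of cut point (how many blocks stay client-side) matters for both privacy and client-side compute.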
Appears in Collections: Conference Paper
Files in This Item:
File: 3711896_3737411.pdf (1.99 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.