Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115758
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Xiao, Y | -
dc.creator | Hu, H | -
dc.creator | Ye, Q | -
dc.creator | Tang, L | -
dc.creator | Liang, Z | -
dc.creator | Zheng, H | -
dc.date.accessioned | 2025-10-28T02:00:41Z | -
dc.date.available | 2025-10-28T02:00:41Z | -
dc.identifier.issn | 1545-5971 | -
dc.identifier.uri | http://hdl.handle.net/10397/115758 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.subject | Defense against model extraction | en_US
dc.subject | Machine learning privacy | en_US
dc.subject | Model extraction attack | en_US
dc.title | Unlocking high-fidelity learning: towards neuron-grained model extraction | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.doi | 10.1109/TDSC.2025.3588857 | -
dcterms.abstract | Model extraction (ME) attacks replicate valuable black-box machine learning (ML) models via malicious query interactions. Cutting-edge attacks focus on actively designing query samples to enhance fidelity while imprudently adhering to the standard ML training procedure, which deviates from the attack's true objective: replicating the victim model rather than merely learning a model for the task. In this paper, we innovatively shift the focus from query selection to training-process optimization, aiming to boost the similarity of the copy model to the victim model from the neuron level up to the model level. We leverage neuron matching theory to attain this objective and develop a general training booster framework, MEBooster, to fully exploit it. MEBooster comprises an initial bootstrapping phase that furnishes initial parameters and an optimal model architecture, followed by a post-processing phase that employs fine-tuning for enhanced neuron matching. Notably, MEBooster can seamlessly integrate with all existing model extraction attacks, enhancing their overall performance. Performance evaluation shows up to a 58.10% fidelity gain in image classification. From a defender's perspective, we introduce a novel defensive strategy, Stochastic Norm Enlargement (SNE), which mitigates such attacks by enlarging the norm of the model's parameters during training. Performance evaluation shows up to a 58.81% reduction in extractability (i.e., fidelity). (An illustrative sketch of neuron matching and SNE follows this table.) | -
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | IEEE transactions on dependable and secure computing, Date of Publication: 15 July 2025, Early Access, https://doi.org/10.1109/TDSC.2025.3588857 | -
dcterms.isPartOf | IEEE transactions on dependable and secure computing | -
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105010973289 | -
dc.identifier.eissn | 1941-0018 | -
dc.description.validate | 202510 bcch | -
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000302/2025-08 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported by the National Natural Science Foundation of China (Grant No. 92270123 and 62372122), the Research Grants Council, Hong Kong SAR, China (Grant No. 15226221, 15225921, and 15208923), and the Innovation and Technology Fund (Grant No. ITS-140-23FP). | en_US
dc.description.pubStatus | Early release | en_US
dc.date.embargo | 0000-00-00 (to be updated) | en_US
dc.description.oaCategory | Green (AAM) | en_US
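The abstract describes two mechanisms at a high level: neuron matching, which MEBooster uses to align the copy model with the victim from the neuron level up, and the SNE defense, which enlarges the norm of the model's parameters during training. Below is a minimal, hedged Python/PyTorch sketch of both ideas; the function names (match_neurons, sne_objective) and hyperparameters (lam, p_apply) are illustrative assumptions, not the paper's actual formulations, which this record does not give.

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def match_neurons(w_copy, w_victim):
    # Illustrative neuron matching: pair each neuron (row) of the copy
    # layer's weight matrix with the most similar victim neuron, using
    # cosine similarity and an optimal one-to-one assignment. This is a
    # generic weight-matching sketch, not the paper's algorithm.
    sim = F.normalize(w_copy, dim=1) @ F.normalize(w_victim, dim=1).T
    _, col = linear_sum_assignment(sim.detach().cpu().numpy(), maximize=True)
    return torch.as_tensor(col)  # copy neuron i pairs with victim neuron col[i]

def sne_objective(model, inputs, targets, lam=0.1, p_apply=0.5):
    # Illustrative Stochastic-Norm-Enlargement-style objective for the
    # defender: the usual task loss minus a randomly applied reward on
    # the parameter norm, nudging training toward larger weight norms.
    loss = F.cross_entropy(model(inputs), targets)
    if torch.rand(()) < p_apply:  # apply the norm term stochastically
        norm = sum(p.norm(2) for p in model.parameters())
        loss = loss - lam * norm  # subtracting the norm term enlarges it
    return loss

On this reading, a defender would simply minimize sne_objective in place of the plain cross-entropy loss when training the victim model; how MEBooster applies neuron matching without white-box access to the victim is not detailed in this record.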
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 0000-00-00 (to be updated)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.