Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95544
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | en_US
dc.creator | Zheng, H | en_US
dc.creator | Ye, Q | en_US
dc.creator | Hu, H | en_US
dc.creator | Fang, C | en_US
dc.creator | Shi, J | en_US
dc.date.accessioned | 2022-09-21T01:40:51Z | -
dc.date.available | 2022-09-21T01:40:51Z | -
dc.identifier.issn | 1545-5971 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95544 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0 | en_US
dc.rights | The following publication H. Zheng, Q. Ye, H. Hu, C. Fang and J. Shi, "Protecting Decision Boundary of Machine Learning Model With Differentially Private Perturbation," in IEEE Transactions on Dependable and Secure Computing, vol. 19, no. 3, pp. 2007-2022, 1 May-June 2022 is available at https://doi.org/10.1109/TDSC.2020.3043382 | en_US
dc.subject | Adversarial machine learning | en_US
dc.subject | Boundary differential privacy | en_US
dc.subject | Model defense | en_US
dc.subject | Model extraction | en_US
dc.title | Protecting decision boundary of machine learning model with differentially private perturbation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 2007 | en_US
dc.identifier.epage | 2022 | en_US
dc.identifier.volume | 19 | en_US
dc.identifier.issue | 3 | en_US
dc.identifier.doi | 10.1109/TDSC.2020.3043382 | en_US
dcterms.abstract | Machine learning service APIs allow model owners to monetize proprietary models by offering prediction services to third-party users. However, existing literature shows that model parameters are vulnerable to extraction attacks, which accumulate prediction queries and their responses to train a replica model. As countermeasures, researchers have proposed to reduce the rich API output, for example by hiding the precise confidence. Nonetheless, even when the response is only one bit, an adversary can still exploit fine-tuned queries with differential property to infer the decision boundary of the underlying model. In this paper, we propose boundary differential privacy (BDP) against such attacks, which obfuscates the prediction responses with noise. BDP guarantees that an adversary cannot learn the decision boundary between any two classes to a predefined precision, no matter how many queries are issued to the prediction API. We first design a perturbation algorithm called boundary randomized response for a binary model and prove that it satisfies ε-BDP, then generalize this algorithm to a multiclass model. Finally, we generalize the hard boundary to a soft boundary and design an adaptive perturbation algorithm that still works in the latter case. The effectiveness and high utility of our solution are verified by extensive experiments on both linear and non-linear models. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on dependable and secure computing, May-June 2022, v. 19, no. 3, p. 2007-2022 | en_US
dcterms.isPartOf | IEEE transactions on dependable and secure computing | en_US
dcterms.issued | 2022-05 | -
dc.identifier.scopus | 2-s2.0-85097959198 | -
dc.identifier.eissn | 1941-0018 | en_US
dc.description.validate | 202209 bcfc | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | EIE-0261 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | NSFC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 43301512 | -
dc.description.oaCategory | CC | en_US
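The abstract above describes boundary randomized response, which perturbs a binary model's one-bit prediction so that no number of API queries can pin down the decision boundary to a given precision. As a rough illustration only, here is a minimal sketch of the classical ε-differentially-private randomized response that such a mechanism builds on; the paper's actual algorithm additionally adapts the flipping probability to each query's proximity to the boundary, which this simplified version (with the hypothetical helper name `randomized_response`) does not model.

```python
import math
import random

def randomized_response(label: int, epsilon: float) -> int:
    """Classical randomized response on a binary label (0 or 1).

    Keeps the label with probability e^eps / (1 + e^eps) and flips
    it otherwise, so the ratio of the two output probabilities is
    bounded by e^eps (the eps-DP guarantee).
    """
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < p_keep else 1 - label

# Example: with a small epsilon, roughly a quarter of the one-bit
# responses returned by the prediction API would be flipped.
noisy = [randomized_response(1, 1.0) for _ in range(10)]
```

A smaller ε flips more responses (stronger protection of the boundary, lower utility); the paper's adaptive variant spends this perturbation budget mainly on queries close to the decision boundary.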
Appears in Collections: Journal/Magazine Article

Files in This Item:
File | Description | Size | Format
Zheng_Protecting_Decision_Boundary.pdf |  | 3.78 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 71 (last week: 1; as of Oct 13, 2024)
Downloads: 89 (as of Oct 13, 2024)
SCOPUS™ citations: 17 (as of Oct 17, 2024)
Web of Science™ citations: 15 (as of Oct 17, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.