Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/88239
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | en_US
dc.creator | Zheng, H | en_US
dc.creator | Ye, Q | en_US
dc.creator | Hu, H | en_US
dc.creator | Fang, C | en_US
dc.creator | Shi, J | en_US
dc.date.accessioned | 2020-09-28T08:33:27Z | -
dc.date.available | 2020-09-28T08:33:27Z | -
dc.identifier.isbn | 978-3-030-29958-3 (print) | en_US
dc.identifier.isbn | 978-3-030-29959-0 (online) | en_US
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/88239 | -
dc.description | 24th European Symposium on Research in Computer Security, Luxembourg, September 23-27, 2019 | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.rights | © Springer Nature Switzerland AG 2019 | en_US
dc.rights | Zheng H., Ye Q., Hu H., Fang C., Shi J. (2019) BDPL: A Boundary Differentially Private Layer Against Machine Learning Model Extraction Attacks. In: Sako K., Schneider S., Ryan P. (eds) Computer Security – ESORICS 2019. ESORICS 2019. Lecture Notes in Computer Science, vol 11735. Springer, Cham. | en_US
dc.rights | The final authenticated version is available online at https://doi.org/10.1007/978-3-030-29959-0_4. | en_US
dc.title | BDPL: a boundary differentially private layer against machine learning model extraction attacks | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 66 | en_US
dc.identifier.epage | 83 | en_US
dc.identifier.volume | 11735 | en_US
dc.identifier.doi | 10.1007/978-3-030-29959-0_4 | en_US
dcterms.abstract | Machine learning models trained on large volumes of proprietary data with intensive computational resources are valuable assets to their owners, who merchandise these models to third-party users through prediction service APIs. However, existing literature shows that model parameters are vulnerable to extraction attacks, which accumulate a large number of prediction queries and their responses to train a replica model. As countermeasures, researchers have proposed reducing the richness of the API output, for example by hiding the precise confidence level of the prediction response. Nonetheless, even when the response is only one bit, an adversary can still exploit fine-tuned queries with differential properties to infer the decision boundary of the underlying model. In this paper, we propose boundary differential privacy (ϵ-BDP) as a solution that protects against such attacks by obfuscating the prediction responses near the decision boundary. ϵ-BDP guarantees that an adversary cannot learn the decision boundary to a predefined precision, no matter how many queries are issued to the prediction API. We design and prove a perturbation algorithm, called boundary randomized response, that achieves ϵ-BDP. The effectiveness and high utility of our solution against model extraction attacks are verified by extensive experiments on both linear and non-linear models. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics), 2019, v. 11735, p. 66-83 | en_US
dcterms.isPartOf | Lecture notes in computer science (including subseries Lecture notes in artificial intelligence and lecture notes in bioinformatics) | en_US
dcterms.issued | 2019 | -
dc.relation.conference | European Symposium on Research in Computer Security [ESORICS] | en_US
dc.identifier.eissn | 1611-3349 | en_US
dc.description.validate | bcrc 202009 | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a0483-n01 | -
dc.description.pubStatus | Published | en_US
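The abstract describes obfuscating only those prediction responses that fall near the decision boundary, using a randomized-response-style perturbation. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: the function name, the boundary-zone width `delta`, and the use of the classical randomized-response keep probability e^ϵ/(1 + e^ϵ) are all assumptions for illustration; the paper's boundary randomized response mechanism is defined in the full text.

```python
import math
import random

def boundary_randomized_response(model_score, epsilon, delta=0.1):
    """Illustrative boundary-private prediction layer (not the paper's exact algorithm).

    model_score: signed score from a binary classifier
                 (positive -> class 1, negative -> class 0);
                 its magnitude proxies distance to the decision boundary.
    epsilon:     privacy budget; smaller values mean more perturbation.
    delta:       hypothetical width of the "boundary zone" around score 0.
    """
    label = 1 if model_score >= 0 else 0
    # Outside the boundary zone, answer truthfully: such queries reveal
    # little about the exact location of the decision boundary.
    if abs(model_score) > delta:
        return label
    # Inside the zone, apply classical randomized response: keep the true
    # label with probability e^eps / (1 + e^eps), otherwise flip it.
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < keep_prob else 1 - label
```

Queries far from the boundary are unaffected, which is why such a layer can retain high utility while bounding what an adversary learns from boundary-probing queries.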
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
ESORICS19.pdf | Pre-Published version | 1.23 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 365 (last week: 3), as of Mar 24, 2024
Downloads: 495, as of Mar 24, 2024
SCOPUS™ citations: 30, as of Mar 29, 2024
Web of Science™ citations: 19, as of Mar 28, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.