Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/99847
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Pang, R | en_US
dc.creator | Xi, Z | en_US
dc.creator | Ji, S | en_US
dc.creator | Luo, X | en_US
dc.creator | Wang, T | en_US
dc.date.accessioned | 2023-07-24T01:03:00Z | -
dc.date.available | 2023-07-24T01:03:00Z | -
dc.identifier.isbn | 978-1-939133-31-1 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/99847 | -
dc.description | 31st USENIX Security Symposium, August 10–12, 2022, Boston, MA, USA | en_US
dc.language.iso | en | en_US
dc.rights | © Author(s) | en_US
dc.rights | The following publication Pang, R., Xi, Z., Ji, S., Luo, X., & Wang, T. (2022). On the security risks of AutoML. In 31st USENIX Security Symposium (USENIX Security 22) (pp. 3953-3970) is available at https://www.usenix.org/conference/usenixsecurity22/presentation/pang-ren | en_US
dc.title | On the security risks of AutoML | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3953 | en_US
dc.identifier.epage | 3970 | en_US
dcterms.abstract | Neural architecture search (NAS) represents an emerging machine learning (ML) paradigm that automatically searches for model architectures tailored to given tasks, which significantly simplifies the development of ML systems and propels the trend of ML democratization. Yet, thus far little is known about the potential security risks incurred by NAS, which is concerning given the increasing use of NAS-generated models in critical domains. | en_US
dcterms.abstract | This work represents a solid initial step towards bridging the gap. First, through an extensive empirical study of 10 popular NAS methods, we show that compared with their manually designed counterparts, NAS-generated models tend to suffer greater vulnerabilities to various malicious manipulations (e.g., adversarial evasion, model poisoning, functionality stealing). Further, with both empirical and analytical evidence, we provide possible explanations for such phenomena: given the prohibitive search space and training cost, most NAS methods favor models that converge fast at early training stages; this preference results in architectural properties associated with attack vulnerabilities (e.g., high loss smoothness, low gradient variance). Our findings not only reveal the relationships between model characteristics and attack vulnerabilities but also suggest the inherent connections underlying different attacks. Finally, we discuss potential remedies to mitigate such drawbacks, including increasing cell depth and suppressing skip connects, which lead to several promising research directions. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 31st USENIX Security Symposium, August 10–12, 2022, Boston, MA, USA, pp. 3953-3970 | en_US
dcterms.issued | 2022 | -
dc.relation.conference | USENIX Security Symposium [USENIX Security] | en_US
dc.description.validate | 202307 bcww | en_US
dc.description.oaVersion | Version of Record | en_US
dc.identifier.FolderNumber | a2291 | -
dc.identifier.SubFormID | 47373 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Copyright retained by author | en_US
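
The abstract above evaluates NAS-generated models against manipulations such as adversarial evasion. For readers unfamiliar with that attack, the following is a minimal PGD-style sketch in PyTorch. It is an illustration only, not the paper's evaluation code: the model, the [0, 1] input range, and the epsilon/alpha/steps hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Minimal PGD-style adversarial evasion sketch (illustrative only, not the
# paper's code). `model` is any image classifier, e.g. one produced by a NAS
# method; epsilon, alpha, and steps are placeholder hyperparameters.
def pgd_evasion(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    """Search for a perturbation delta with ||delta||_inf <= epsilon that
    drives model(x + delta) away from the true label y."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep inputs valid
        x_adv = x_adv.detach()
    return x_adv
```

The connection to the paper's finding: the smoother a model's loss surface, the more reliably each gradient step above increases the loss, which is one reason the fast-converging architectures favored by NAS tend to be easier to evade.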
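
The abstract also ties vulnerability to low gradient variance. Under the assumption that this is probed by comparing per-mini-batch gradients with their mean, a simple check might look like the sketch below; the loader, batch count, and gradient flattening are illustrative choices, not the paper's measurement protocol.

```python
import torch
import torch.nn.functional as F

def gradient_variance(model, loader, num_batches=32):
    """Rough estimate of how much per-batch gradients deviate from their
    mean; lower values indicate more consistent gradients across batches,
    one of the properties the abstract links to attack vulnerability."""
    grads = []
    for i, (x, y) in enumerate(loader):
        if i >= num_batches:
            break
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        # Flatten all parameter gradients into one vector per batch.
        flat = torch.cat([p.grad.reshape(-1)
                          for p in model.parameters() if p.grad is not None])
        grads.append(flat.detach().clone())
    g = torch.stack(grads)  # shape: (num_batches, num_params)
    return ((g - g.mean(dim=0)) ** 2).mean().item()
```

Comparing this statistic between a NAS-generated model and a manually designed one of similar size would, under these assumptions, give a first-order check of the paper's explanation.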
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
sec22-pang-ren.pdf | - | 768.06 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 131 (as of Oct 6, 2025)
Downloads: 44 (as of Oct 6, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.