Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113109
DC Field | Value | Language
dc.contributor | Department of Applied Mathematics | -
dc.creator | Liu, CY | en_US
dc.creator | Jiao, YL | en_US
dc.creator | Wang, JH | en_US
dc.creator | Huang, J | en_US
dc.date.accessioned | 2025-05-19T00:53:15Z | -
dc.date.available | 2025-05-19T00:53:15Z | -
dc.identifier.uri | http://hdl.handle.net/10397/113109 | -
dc.language.iso | en | en_US
dc.rights | © 2024 Society for Industrial and Applied Mathematics | en_US
dc.rights | Copyright © by SIAM. Unauthorized reproduction of this article is prohibited. | en_US
dc.rights | The following publication Liu, C., Jiao, Y., Wang, J., & Huang, J. (2024). Nonasymptotic Bounds for Adversarial Excess Risk under Misspecified Models. SIAM Journal on Mathematics of Data Science, 6(4), 847-868 is available at https://dx.doi.org/10.1137/23M1598210. | en_US
dc.subject | Adversarial attack | en_US
dc.subject | Approximation error | en_US
dc.subject | Generalization | en_US
dc.subject | Misspecified model | en_US
dc.subject | Robustness | en_US
dc.title | Nonasymptotic bounds for adversarial excess risk under misspecified models | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 847 | en_US
dc.identifier.epage | 868 | en_US
dc.identifier.volume | 6 | en_US
dc.identifier.issue | 4 | en_US
dc.identifier.doi | 10.1137/23M1598210 | en_US
dcterms.abstract | We propose a general approach to evaluating the performance of robust estimators based on adversarial losses under misspecified models. We first show that adversarial risk is equivalent to the risk induced by a distributional adversarial attack under certain smoothness conditions. This ensures that the adversarial training procedure is well-defined. To evaluate the generalization performance of the adversarial estimator, we study the adversarial excess risk. Our proposed analysis method includes investigations on both generalization error and approximation error. We then establish nonasymptotic upper bounds for the adversarial excess risk associated with Lipschitz loss functions. In addition, we apply our general results to adversarial training for classification and regression problems. For the quadratic loss in nonparametric regression, we show that the adversarial excess risk bound can be improved over that for a general loss. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | SIAM journal on mathematics of data science, 2024, v. 6, no. 4, p. 847-868 | en_US
dcterms.isPartOf | SIAM journal on mathematics of data science | en_US
dcterms.issued | 2024 | -
dc.identifier.isi | WOS:001343415400001 | -
dc.identifier.eissn | 2577-0187 | en_US
dc.description.validate | 202505 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities; research fund of KLATASDSMOE of China; CUHK Startup Grant; Hong Kong Polytechnic University. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
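The abstract above is stated in terms of the adversarial risk and the adversarial excess risk. For orientation, a standard formulation of these quantities is sketched below, assuming perturbations in an \ell_p ball of radius \varepsilon; the paper's exact perturbation set and notation may differ.

\[
R_\varepsilon(f) \;=\; \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|_p \le \varepsilon} L\big(f(X+\delta),\, Y\big)\Big],
\qquad
\mathcal{E}_\varepsilon(\hat f_n) \;=\; R_\varepsilon(\hat f_n) \;-\; \inf_{f} R_\varepsilon(f),
\]

where L is the loss, \hat f_n is the estimator produced by adversarial training on n samples, and the infimum runs over all measurable predictors, so that \mathcal{E}_\varepsilon accounts for both the generalization (stochastic) error and the approximation error incurred when the model class is misspecified.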
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
23m1598210.pdf |  | 475.73 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.