Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/115705
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Lian, J | en_US
dc.creator | Pan, J | en_US
dc.creator | Wang, L | en_US
dc.creator | Wang, Y | en_US
dc.creator | Mei, S | en_US
dc.creator | Chau, LP | en_US
dc.date.accessioned | 2025-10-23T06:04:58Z | -
dc.date.available | 2025-10-23T06:04:58Z | -
dc.identifier.issn | 0950-7051 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/115705 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.subject | Benchmark | en_US
dc.subject | Object detection | en_US
dc.subject | Physical attack | en_US
dc.title | PADetBench: Towards benchmarking texture- and patch-based physical attacks against object detection | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 329 | en_US
dc.identifier.doi | 10.1016/j.knosys.2025.114395 | en_US
dcterms.abstract | Physical attacks against object detection have gained significant attention due to their practical implications. However, conducting physical experiments is time-consuming and labor-intensive, and controlling physical dynamics and cross-domain transformations in the real world is challenging, leading to inconsistent evaluations and hindering the development of robust models. To address these issues, we rigorously explore realistic simulations to benchmark physical attacks under controlled conditions. This approach ensures fairness and resolves the problem of capturing strictly aligned adversarial images, which is challenging in the real world. Our benchmark includes 23 physical attacks, 48 object detectors, comprehensive physical dynamics, and evaluation metrics. We provide end-to-end pipelines for dataset generation, detection, evaluation, and analysis. The benchmark is flexible and scalable, allowing easy integration of new objects, attacks, models, and vision tasks. Based on this benchmark, we generate comprehensive datasets and perform over 8000 evaluations, including overall assessments and detailed ablation studies. These experiments provide detailed analyses from detection and attack perspectives, highlight limitations of existing algorithms, and offer revealing insights. The code and datasets are publicly available at https://github.com/JiaweiLian/PADetBench. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Knowledge-based systems, 4 Nov. 2025, v. 329, pt. B, 114395 | en_US
dcterms.isPartOf | Knowledge-based systems | en_US
dcterms.issued | 2025-11-04 | -
dc.identifier.scopus | 2-s2.0-105015969559 | -
dc.identifier.artn | 114395 | en_US
dc.description.validate | 202510 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000263/2025-10 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.fundingText | The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision, funded by The Hong Kong Jockey Club Charities Trust. This research was partially supported by the Global STEM Professorship Scheme of the Hong Kong Special Administrative Region and by the National Natural Science Foundation of China (Grant No. 62171381). | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2027-11-04 | en_US
dc.description.oaCategory | Green (AAM) | en_US
dc.relation.rdata | https://github.com/JiaweiLian/PADetBench | en_US
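
The abstract above describes an evaluation grid of attacks crossed with detectors, scored with detection metrics such as mAP. As a minimal illustration only, and not the actual PADetBench API, the Python sketch below shows how such a grid might be tallied into per-pair attack-effectiveness scores; every attack name, detector name, and mAP value here is a hypothetical placeholder.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class EvalResult:
    """One cell of the attack x detector evaluation grid."""
    attack: str
    detector: str
    map_clean: float     # mAP on clean (unattacked) renders
    map_attacked: float  # mAP on adversarial renders

def map_drop(r: EvalResult) -> float:
    """Relative mAP degradation, a common proxy for attack effectiveness."""
    return (r.map_clean - r.map_attacked) / r.map_clean

# Placeholder grid; the real benchmark spans 23 attacks x 48 detectors.
attacks = ["AdvPatch", "AdvTexture"]        # hypothetical attack names
detectors = ["yolov5s", "faster_rcnn_r50"]  # hypothetical detector names

results = [
    EvalResult(a, d, map_clean=0.62, map_attacked=0.35)  # dummy scores
    for a, d in product(attacks, detectors)
]

for r in results:
    print(f"{r.attack:12s} vs {r.detector:16s} mAP drop = {map_drop(r):.1%}")
```

Summarising each attack-detector pair by its relative mAP drop is one plausible way to compare results across the grid; the paper's own pipelines and metrics are defined in the repository linked above.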
Appears in Collections:Journal/Magazine Article
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.