Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115705
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.creator | Lian, J | en_US |
| dc.creator | Pan, J | en_US |
| dc.creator | Wang, L | en_US |
| dc.creator | Wang, Y | en_US |
| dc.creator | Mei, S | en_US |
| dc.creator | Chau, LP | en_US |
| dc.date.accessioned | 2025-10-23T06:04:58Z | - |
| dc.date.available | 2025-10-23T06:04:58Z | - |
| dc.identifier.issn | 0950-7051 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/115705 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier | en_US |
| dc.subject | Benchmark | en_US |
| dc.subject | Object detection | en_US |
| dc.subject | Physical attack | en_US |
| dc.title | PADetBench: towards benchmarking texture- and patch-based physical attacks against object detection | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 329 | en_US |
| dc.identifier.doi | 10.1016/j.knosys.2025.114395 | en_US |
| dcterms.abstract | Physical attacks against object detection have gained significant attention due to their practical implications. However, conducting physical experiments is time-consuming and labor-intensive, and controlling physical dynamics and cross-domain transformations in the real world is challenging, leading to inconsistent evaluations and hindering the development of robust models. To address these issues, we rigorously explore realistic simulations to benchmark physical attacks under controlled conditions. This approach ensures fairness and resolves the problem of capturing strictly aligned adversarial images, which is challenging in the real world. Our benchmark includes 23 physical attacks, 48 object detectors, comprehensive physical dynamics, and evaluation metrics. We provide end-to-end pipelines for dataset generation, detection, evaluation, and analysis. The benchmark is flexible and scalable, allowing easy integration of new objects, attacks, models, and vision tasks. Based on this benchmark, we generate comprehensive datasets and perform over 8000 evaluations, including overall assessments and detailed ablation studies. These experiments provide detailed analyses from detection and attack perspectives, highlight limitations of existing algorithms, and offer revealing insights. The code and datasets are publicly available at https://github.com/JiaweiLian/PADetBench. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Knowledge-based systems, 4 Nov. 2025, v. 329, pt. B, 114395 | en_US |
| dcterms.isPartOf | Knowledge-based systems | en_US |
| dcterms.issued | 2025-11-04 | - |
| dc.identifier.scopus | 2-s2.0-105015969559 | - |
| dc.identifier.artn | 114395 | en_US |
| dc.description.validate | 202510 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000263/2025-10 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.fundingText | The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club Charities Trust. This research was partially supported by the Global STEM Professorship Scheme from the Hong Kong Special Administrative Region and by the National Natural Science Foundation of China (Grant No. 62171381). | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2027-11-04 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| dc.relation.rdata | https://github.com/JiaweiLian/PADetBench | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.