Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/115979
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Civil and Environmental Engineering | - |
| dc.creator | Li, P | - |
| dc.creator | Peng, Y | - |
| dc.creator | Wang, SM | - |
| dc.creator | Zhong, C | - |
| dc.date.accessioned | 2025-11-18T06:48:41Z | - |
| dc.date.available | 2025-11-18T06:48:41Z | - |
| dc.identifier.uri | http://hdl.handle.net/10397/115979 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | © 2025 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.rights | The following publication P. Li, Y. Peng, S.-M. Wang and C. Zhong, "Improved RT-DETR Framework for Railway Obstacle Detection," in IEEE Access, vol. 13, pp. 125869-125880, 2025 is available at https://doi.org/10.1109/ACCESS.2025.3589159. | en_US |
| dc.subject | Convolutional neural network (CNN) | en_US |
| dc.subject | Deep learning | en_US |
| dc.subject | Obstacle intrusion detection | en_US |
| dc.subject | Railway traffic | en_US |
| dc.subject | Transformer | en_US |
| dc.title | Improved RT-DETR framework for railway obstacle detection | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.spage | 125869 | - |
| dc.identifier.epage | 125880 | - |
| dc.identifier.volume | 13 | - |
| dc.identifier.doi | 10.1109/ACCESS.2025.3589159 | - |
| dcterms.abstract | Obstacle intrusion detection in railway systems is a critical technology for ensuring the operational safety of trains. However, existing algorithms face challenges related to insufficient multiscale object detection, high model redundancy, and poor real-time performance. Building upon the RT-DETR framework, this study proposes a Multiscale Separable Deformable (MSD) module that integrates depthwise convolution with deformable convolution to enhance feature extraction capabilities while reducing computational load. Additionally, a Deformable Agent Attention (DAA) mechanism is designed to optimize attention weights through sparse queries, effectively improving detection accuracy for small targets and enhancing inference speed in complex scenarios. Experimental results demonstrate that the improved model achieves 87.9% mean average precision (mAP) on a railway dataset, with a detection speed of 90 frames per second (FPS). The proposed model achieves a +1.7% mAP improvement and 13.9% faster inference speed compared to RT-DETR, while simultaneously reducing model parameters by 24.6%. As a result, the proposed model is highly effective for multiple obstacle intrusion detection in complex real-world scenarios. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | IEEE access, 2025, v. 13, p. 125869-125880 | - |
| dcterms.isPartOf | IEEE access | - |
| dcterms.issued | 2025 | - |
| dc.identifier.scopus | 2-s2.0-105010927066 | - |
| dc.identifier.eissn | 2169-3536 | - |
| dc.description.validate | 202511 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work was supported in part by Wuyi University’s Hong Kong and Macao Joint Research and Development Fund under Grant 2019WGALH15 and Grant 2019WGALH17, and in part by the Innovation and Technology Commission of Hong Kong SAR Government to Hong Kong Branch of Chinese National Rail Transit Electrification and Automation Engineering Technology Research Center under Grant K-BBY1. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.oaCategory | CC | en_US |
Appears in Collections: Journal/Magazine Article
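The abstract above describes the MSD module as an integration of depthwise convolution with deformable convolution for multiscale feature extraction. The block below is a minimal PyTorch sketch of that general idea, not the authors' implementation: the class name `MSDBlock`, the kernel sizes, and the fusion layout are illustrative assumptions; only torchvision's real `DeformConv2d` operator is used.

```python
# Hypothetical MSD-style block: multiscale depthwise branches feed a
# deformable convolution. The structure is inferred from the abstract,
# not taken from the paper's actual architecture.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class MSDBlock(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise branch per scale (groups == channels keeps the
        # parameter count low, matching the abstract's efficiency claim).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        # Pointwise fusion of the concatenated multiscale features.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)
        # Offset field for a 3x3 deformable conv: 2 * 3 * 3 = 18 channels.
        self.offset = nn.Conv2d(channels, 18, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, 3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.act(self.fuse(multiscale))
        # Deformable sampling lets the kernel adapt to object geometry,
        # which the abstract credits for better small-target detection.
        return self.act(self.deform(fused, self.offset(fused))) + x

# Example: feats = MSDBlock(256)(torch.randn(1, 256, 40, 40))
```

Depthwise branches keep the per-scale cost near O(C·k²) rather than O(C²·k²), which is one plausible reading of how the module "reduces computational load" while the deformable stage preserves geometric flexibility.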
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Li_Improved_RT-DETR_Framework.pdf | | 1.82 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.