Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113768
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Mechanical Engineering | - |
dc.creator | Li, H | - |
dc.creator | Chu, HK | - |
dc.creator | Sun, Y | - |
dc.date.accessioned | 2025-06-23T00:57:53Z | - |
dc.date.available | 2025-06-23T00:57:53Z | - |
dc.identifier.uri | http://hdl.handle.net/10397/113768 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | © 2025 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.rights | The following publication H. Li, H. K. Chu and Y. Sun, "Improving RGB-Thermal Semantic Scene Understanding With Synthetic Data Augmentation for Autonomous Driving," in IEEE Robotics and Automation Letters, vol. 10, no. 5, pp. 4452-4459, May 2025 is available at https://doi.org/10.1109/LRA.2025.3548399. | en_US |
dc.subject | Autonomous driving | en_US |
dc.subject | RGB-T fusion | en_US |
dc.subject | Semantic scene understanding | en_US |
dc.subject | Synthetic image generation | en_US |
dc.title | Improving RGB-Thermal semantic scene understanding with synthetic data augmentation for autonomous driving | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 4452 | - |
dc.identifier.epage | 4459 | - |
dc.identifier.volume | 10 | - |
dc.identifier.issue | 5 | - |
dc.identifier.doi | 10.1109/LRA.2025.3548399 | - |
dcterms.abstract | Semantic scene understanding is an important capability for autonomous vehicles. Despite recent advances in RGB-Thermal (RGB-T) semantic segmentation, existing methods often rely on parameter-heavy models, which are particularly constrained by the lack of precisely labeled training data. To alleviate this limitation, we propose a data-driven method, SyntheticSeg, to enhance RGB-T semantic segmentation. Specifically, we utilize generative models to generate synthetic RGB-T images from the semantic layouts in real datasets and construct a large-scale, high-fidelity synthetic dataset to provide the segmentation models with sufficient training data. We also introduce a novel metric that measures both the scarcity and segmentation difficulty of semantic layouts, guiding sampling from the synthetic dataset to alleviate class imbalance and improve the overall segmentation performance. Experimental results on a public dataset demonstrate our superior performance over state-of-the-art methods. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | IEEE robotics and automation letters, May 2025, v. 10, no. 5, p. 4452-4459 | - |
dcterms.isPartOf | IEEE robotics and automation letters | - |
dcterms.issued | 2025-05 | - |
dc.identifier.scopus | 2-s2.0-105001589534 | - |
dc.identifier.eissn | 2377-3766 | - |
dc.description.validate | 202506 bcch | - |
dc.description.oa | Accepted Manuscript | en_US |
dc.identifier.FolderNumber | a3739 | en_US |
dc.identifier.SubFormID | 50914 | en_US |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Innovation and Technology Fund | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | Green (AAM) | en_US |
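The abstract above describes a metric that combines the scarcity and segmentation difficulty of semantic layouts to guide sampling from the synthetic dataset. The sketch below is a minimal, hypothetical Python illustration of how such a sampling weight could be computed; the function name, the per-class frequency and IoU inputs, and the linear trade-off `alpha` are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch (assumed formulation, not the paper's actual metric):
# weight each semantic layout higher when it is dominated by globally rare
# classes (scarcity) and by classes a baseline model segments poorly (difficulty).
import numpy as np

def layout_sampling_weight(layout, class_pixel_freq, per_class_iou, alpha=0.5):
    """Compute an illustrative sampling weight for one semantic layout.

    layout           : 2-D integer array of class IDs (H x W).
    class_pixel_freq : dict mapping class ID -> global pixel frequency in [0, 1].
    per_class_iou    : dict mapping class ID -> validation IoU of a baseline model.
    alpha            : assumed trade-off between scarcity and difficulty.
    """
    classes, counts = np.unique(layout, return_counts=True)
    props = counts / counts.sum()

    # Scarcity term: layouts containing globally rare classes score higher.
    scarcity = sum(p * (1.0 - class_pixel_freq.get(int(c), 0.0))
                   for c, p in zip(classes, props))

    # Difficulty term: layouts containing poorly segmented classes score higher.
    difficulty = sum(p * (1.0 - per_class_iou.get(int(c), 1.0))
                     for c, p in zip(classes, props))

    return alpha * scarcity + (1.0 - alpha) * difficulty


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layout = rng.integers(0, 3, size=(4, 4))   # toy 4x4 layout with 3 classes
    freq = {0: 0.70, 1: 0.25, 2: 0.05}         # class 2 is globally rare
    iou = {0: 0.90, 1: 0.70, 2: 0.40}          # class 2 is hard to segment
    print(layout_sampling_weight(layout, freq, iou))
```

Layouts with higher weights would be sampled more often when drawing synthetic training images, which is one plausible way to act on the scarcity-and-difficulty idea stated in the abstract.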
Appears in Collections: | Conference Paper |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Li_Improving_RGB-Thermal_Semantic.pdf | Pre-Published version | 5.15 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.