Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107112
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Liu, T | -
dc.creator | Lam, KM | -
dc.creator | Zhao, R | -
dc.creator | Qiu, G | -
dc.date.accessioned | 2024-06-13T01:03:59Z | -
dc.date.available | 2024-06-13T01:03:59Z | -
dc.identifier.issn | 1051-8215 | -
dc.identifier.uri | http://hdl.handle.net/10397/107112 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication T. Liu, K.-M. Lam, R. Zhao and G. Qiu, "Deep Cross-Modal Representation Learning and Distillation for Illumination-Invariant Pedestrian Detection," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 315-329, Jan. 2022 is available at https://doi.org/10.1109/TCSVT.2021.3060162. | en_US
dc.subject | Cross-modal representation | en_US
dc.subject | Illumination-invariant pedestrian detection | en_US
dc.subject | Knowledge distillation | en_US
dc.subject | Multispectral fusion | en_US
dc.title | Deep cross-modal representation learning and distillation for illumination-invariant pedestrian detection | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 315 | -
dc.identifier.epage | 329 | -
dc.identifier.volume | 32 | -
dc.identifier.issue | 1 | -
dc.identifier.doi | 10.1109/TCSVT.2021.3060162 | -
dcterms.abstract | Integrating multispectral data has been demonstrated to be an effective solution for illumination-invariant pedestrian detection; in particular, RGB and thermal images can provide complementary information to handle light variations. However, most current multispectral detectors fuse the multimodal features by simple concatenation, without discovering their latent relationships. In this paper, we propose a cross-modal feature learning (CFL) module, based on a split-and-aggregation strategy, to explicitly explore both the shared and modality-specific representations between paired RGB and thermal images. We insert the proposed CFL module into multiple layers of a two-branch-based pedestrian detection network, to learn the cross-modal representations at diverse semantic levels. By introducing a segmentation-based auxiliary task, the multimodal network is trained end-to-end by jointly optimizing a multi-task loss. On the other hand, to alleviate the reliance of existing multispectral pedestrian detectors on thermal images, we propose a knowledge distillation framework to train a student detector, which only receives RGB images as input and distills the cross-modal representations guided by a well-trained multimodal teacher detector. To facilitate the cross-modal knowledge distillation, we design different distillation loss functions for the feature, detection and segmentation levels. Experimental results on the public KAIST multispectral pedestrian benchmark validate that the proposed cross-modal representation learning and distillation method achieves robust performance. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on circuits and systems for video technology, Jan. 2022, v. 32, no. 1, p. 315-329 | -
dcterms.isPartOf | IEEE transactions on circuits and systems for video technology | -
dcterms.issued | 2022-01 | -
dc.identifier.scopus | 2-s2.0-85101738767 | -
dc.identifier.eissn | 1558-2205 | -
dc.description.validate | 202403 bckw | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0101 | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Key-Area Research and Development Program of Guangdong Province 2020 under Project 76; Education Department of Guangdong Province, China | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 55021281 | en_US
dc.description.oaCategory | Green (AAM) | en_US
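The abstract's two core ideas — a split-and-aggregation step that separates shared and modality-specific parts of paired RGB/thermal features, and a feature-level distillation loss that pulls an RGB-only student toward the multimodal teacher — can be sketched in plain Python. This is a toy illustration under assumed names and shapes: the actual CFL module learns these decompositions with convolutional layers, whereas here the shared part is simply the element-wise mean and the specific parts are residuals.

```python
def split_and_aggregate(rgb_feat, thermal_feat):
    """Toy split-and-aggregation: split paired features into a shared part
    and per-modality specific parts, then aggregate shared + specific.
    (The paper's CFL module learns this split; the mean/residual choice
    here is an illustrative assumption.)"""
    shared = [(r + t) / 2.0 for r, t in zip(rgb_feat, thermal_feat)]
    rgb_specific = [r - s for r, s in zip(rgb_feat, shared)]
    thermal_specific = [t - s for t, s in zip(thermal_feat, shared)]
    # Aggregation recombines shared and modality-specific components.
    rgb_out = [s + sp for s, sp in zip(shared, rgb_specific)]
    thermal_out = [s + sp for s, sp in zip(shared, thermal_specific)]
    return shared, rgb_out, thermal_out


def feature_distillation_loss(student_feat, teacher_feat):
    """Feature-level distillation as mean-squared error between the
    student's and the teacher's feature vectors (a common choice; the
    paper also defines detection- and segmentation-level losses)."""
    n = len(student_feat)
    return sum((s - t) ** 2 for s, t in zip(student_feat, teacher_feat)) / n


rgb = [0.2, 0.8, 0.5]        # hypothetical RGB feature vector
thermal = [0.6, 0.4, 0.5]    # hypothetical paired thermal feature vector
shared, rgb_out, thermal_out = split_and_aggregate(rgb, thermal)

student = [0.3, 0.7, 0.5]    # hypothetical RGB-only student feature
loss = feature_distillation_loss(student, shared)
```

With this toy split, the aggregation exactly reconstructs each input (shared + specific = original), which is the sanity property the decomposition should preserve; the learned version additionally forces the shared branch to carry the cross-modal information the RGB-only student can later distill.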
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Liu_Deep_Cross-Modal_Representation.pdf | Pre-Published version | 3.18 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 2 (as of Jun 30, 2024)
Scopus citations: 48 (as of Jun 21, 2024)
Web of Science citations: 39 (as of Jun 27, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.