Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107112
Title: Deep cross-modal representation learning and distillation for illumination-invariant pedestrian detection
Authors: Liu, T 
Lam, KM 
Zhao, R 
Qiu, G
Issue Date: Jan-2022
Source: IEEE transactions on circuits and systems for video technology, Jan. 2022, v. 32, no. 1, p. 315-329
Abstract: Integrating multispectral data has been demonstrated to be an effective solution for illumination-invariant pedestrian detection; in particular, RGB and thermal images provide complementary information for handling light variations. However, most current multispectral detectors fuse multimodal features by simple concatenation, without discovering their latent relationships. In this paper, we propose a cross-modal feature learning (CFL) module, based on a split-and-aggregation strategy, to explicitly explore both the shared and modality-specific representations of paired RGB and thermal images. We insert the proposed CFL module into multiple layers of a two-branch pedestrian detection network to learn cross-modal representations at diverse semantic levels. By introducing a segmentation-based auxiliary task, the multimodal network is trained end-to-end by jointly optimizing a multi-task loss. Furthermore, to alleviate the reliance of existing multispectral pedestrian detectors on thermal images, we propose a knowledge distillation framework to train a student detector that receives only RGB images as input and distills the cross-modal representations under the guidance of a well-trained multimodal teacher detector. To facilitate cross-modal knowledge distillation, we design different distillation loss functions at the feature, detection, and segmentation levels. Experimental results on the public KAIST multispectral pedestrian benchmark validate that the proposed cross-modal representation learning and distillation method achieves robust performance.
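Note: this record gives only the abstract, so the exact layer design of the CFL module and the form of the distillation losses are not specified here. The PyTorch sketch below is a rough illustration of one plausible reading of the split-and-aggregation idea (a shared representation plus modality-specific representations) and of a feature-level distillation term. The class and function names, the 1x1-convolution projections, and the MSE loss are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFeatureLearning(nn.Module):
    """Hypothetical sketch of a split-and-aggregation CFL block.

    Paired RGB/thermal feature maps are split into a shared representation
    and modality-specific representations, then aggregated back into each
    branch of the two-branch detection network.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions projecting each modality into shared/specific
        # subspaces (an assumption; the abstract does not fix this design).
        self.shared_rgb = nn.Conv2d(channels, channels, kernel_size=1)
        self.shared_thr = nn.Conv2d(channels, channels, kernel_size=1)
        self.specific_rgb = nn.Conv2d(channels, channels, kernel_size=1)
        self.specific_thr = nn.Conv2d(channels, channels, kernel_size=1)
        self.agg_rgb = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.agg_thr = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_rgb: torch.Tensor, f_thr: torch.Tensor):
        # Split: one shared representation estimated from both modalities,
        # plus one specific representation per modality.
        shared = 0.5 * (self.shared_rgb(f_rgb) + self.shared_thr(f_thr))
        spec_rgb = self.specific_rgb(f_rgb)
        spec_thr = self.specific_thr(f_thr)
        # Aggregation: each branch fuses the shared part with its own
        # modality-specific part.
        out_rgb = self.agg_rgb(torch.cat([shared, spec_rgb], dim=1))
        out_thr = self.agg_thr(torch.cat([shared, spec_thr], dim=1))
        return out_rgb, out_thr

def feature_distillation_loss(student_feat: torch.Tensor,
                              teacher_feat: torch.Tensor) -> torch.Tensor:
    # Feature-level distillation: push the RGB-only student's features toward
    # the multimodal teacher's fused features. MSE is a placeholder choice;
    # the paper designs its own losses at the feature, detection, and
    # segmentation levels.
    return F.mse_loss(student_feat, teacher_feat.detach())
```

In this reading, the teacher features would come from the well-trained two-branch multimodal network, while the student sees only the RGB input, matching the distillation setup described in the abstract.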
Keywords: Cross-modal representation
Illumination-invariant pedestrian detection
Knowledge distillation
Multispectral fusion
Publisher: Institute of Electrical and Electronics Engineers
Journal: IEEE transactions on circuits and systems for video technology 
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2021.3060162
Rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
The following publication T. Liu, K. -M. Lam, R. Zhao and G. Qiu, "Deep Cross-Modal Representation Learning and Distillation for Illumination-Invariant Pedestrian Detection," in IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 1, pp. 315-329, Jan. 2022 is available at https://doi.org/10.1109/TCSVT.2021.3060162.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Liu_Deep_Cross-Modal_Representation.pdf
Description: Pre-Published version
Size: 3.18 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 2 (as of Jun 30, 2024)
Scopus citations: 48 (as of Jun 21, 2024)
Web of Science citations: 39 (as of Jun 27, 2024)
