Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107130
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Liu, T | en_US
dc.creator | Zhao, R | en_US
dc.creator | Xiao, J | en_US
dc.creator | Lam, KM | en_US
dc.date.accessioned | 2024-06-13T01:04:05Z | -
dc.date.available | 2024-06-13T01:04:05Z | -
dc.identifier.issn | 1070-9908 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107130 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication T. Liu, R. Zhao, J. Xiao and K.-M. Lam, "Progressive Motion Representation Distillation With Two-Branch Networks for Egocentric Activity Recognition," in IEEE Signal Processing Letters, vol. 27, pp. 1320-1324, 2020 is available at https://doi.org/10.1109/LSP.2020.3011326. | en_US
dc.subject | Egocentric activity recognition | en_US
dc.subject | Knowledge distillation | en_US
dc.subject | Motion representation | en_US
dc.subject | Two-branch networks | en_US
dc.title | Progressive motion representation distillation with two-branch networks for egocentric activity recognition | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1320 | en_US
dc.identifier.epage | 1324 | en_US
dc.identifier.volume | 27 | en_US
dc.identifier.doi | 10.1109/LSP.2020.3011326 | en_US
dcterms.abstract | Video-based egocentric activity recognition involves fine-grained spatiotemporal human-object interactions. State-of-the-art methods, based on the two-branch architecture, rely on pre-calculated optical flows to provide motion information. However, this two-stage strategy is computationally intensive, storage demanding, and not task-oriented, which hinders its deployment in real-world applications. Although there have been numerous attempts to explore other motion representations to replace optical flows, most of these methods were designed for third-person activities and do not capture fine-grained cues. To tackle these issues, in this letter, we propose a progressive motion representation distillation (PMRD) method, based on two-branch networks, for egocentric activity recognition. We exploit a generalized knowledge distillation framework to train a hallucination network, which receives RGB frames as input and produces motion cues under the guidance of the optical-flow network. Specifically, we propose a progressive metric loss, which distills local fine-grained motion patterns at each temporal progress level. To further encourage the distillation framework to concentrate on informative frames, we integrate a temporal attention mechanism into the metric loss. Moreover, a multi-stage training procedure is employed for the efficient learning of the hallucination network. Experimental results on three egocentric activity benchmarks demonstrate the state-of-the-art performance of the proposed method. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE signal processing letters, 2020, v. 27, p. 1320-1324 | en_US
dcterms.isPartOf | IEEE signal processing letters | en_US
dcterms.issued | 2020 | -
dc.identifier.scopus | 2-s2.0-85089947593 | -
dc.identifier.eissn | 1558-2361 | en_US
dc.description.validate | 202403 bckw | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0188 | -
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 50281666 | -
dc.description.oaCategory | Green (AAM) | en_US
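The abstract describes training a hallucination network on RGB frames to mimic an optical-flow teacher, using a progressive metric loss weighted by temporal attention. As an illustration only, here is a minimal sketch of such a loss; the exact formulation in the paper may differ, and the function name, arguments, and averaging scheme are assumptions made for this example.

```python
def progressive_distillation_loss(hallucinated, flow, attention):
    """Sketch of a progressive, attention-weighted metric loss.

    hallucinated, flow: lists of per-frame feature vectors (lists of floats)
        from the hallucination (RGB) branch and the optical-flow branch.
    attention: per-frame weights emphasising informative frames.

    At each temporal progress level t, the features of frames 0..t are
    averaged in each branch, and the squared L2 distance between the two
    averages is accumulated, weighted by the attention on frame t.
    """
    assert len(hallucinated) == len(flow) == len(attention)
    dim = len(flow[0])
    total = 0.0
    for t in range(len(flow)):
        # Aggregate the features observed up to progress level t.
        h_avg = [sum(f[d] for f in hallucinated[:t + 1]) / (t + 1)
                 for d in range(dim)]
        f_avg = [sum(f[d] for f in flow[:t + 1]) / (t + 1)
                 for d in range(dim)]
        # Squared L2 distance between the two branches at this level.
        dist = sum((h - g) ** 2 for h, g in zip(h_avg, f_avg))
        total += attention[t] * dist
    return total
```

When the hallucinated features match the flow features exactly, the loss is zero; frames with larger attention weights contribute more to the gradient, which mirrors the role the abstract assigns to the temporal attention mechanism.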
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Liu_Progressive_Motion_Representation.pdf | Pre-Published version | 872.05 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 2 (as of Jun 30, 2024)
Downloads: 1 (as of Jun 30, 2024)
SCOPUS Citations: 3 (as of Jun 21, 2024)
Web of Science Citations: 3 (as of Jun 27, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.