Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/92141
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Liu, ZS | -
dc.creator | Siu, WC | -
dc.creator | Chan, YL | -
dc.date.accessioned | 2022-02-08T02:18:14Z | -
dc.date.available | 2022-02-08T02:18:14Z | -
dc.identifier.uri | http://hdl.handle.net/10397/92141 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication Z.-S. Liu, W.-C. Siu and Y.-L. Chan, "Efficient Video Super-Resolution via Hierarchical Temporal Residual Networks," in IEEE Access, vol. 9, pp. 106049-106064, 2021 is available at https://doi.org/10.1109/ACCESS.2021.3098326 | en_US
dc.subject | Deep learning | en_US
dc.subject | Hierarchical structure | en_US
dc.subject | Residual network | en_US
dc.subject | Super-resolution | en_US
dc.subject | Video | en_US
dc.title | Efficient video super-resolution via hierarchical temporal residual networks | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 106049 | -
dc.identifier.epage | 106064 | -
dc.identifier.volume | 9 | -
dc.identifier.doi | 10.1109/ACCESS.2021.3098326 | -
dcterms.abstract | Super-resolving (SR) video is more challenging than image super-resolution because of its demanding computation time. To enlarge a low-resolution video, the temporal relationship among frames must be fully exploited. We can model video SR as a multi-frame SR problem and use deep learning methods to estimate the spatial and temporal information. This paper proposes a lighter residual network based on multi-stage back projection for multi-frame SR. We improve the back-projection-based residual block by adding weights for adaptive feature tuning, and we add global and local connections to explore deeper feature representations. We jointly learn spatial-temporal feature maps by using the proposed Spatial Convolution Packing scheme as an attention mechanism that extracts more information from both the spatial and temporal domains. Unlike other approaches, our proposed network takes multiple low-resolution frames as input and produces multiple super-resolved frames simultaneously. We can further improve the video SR quality through self-ensemble enhancement, which handles videos with different motions and distortions. Extensive experimental results show that our proposed approaches give a large improvement over other state-of-the-art video SR methods. Compared with recent CNN-based video SR works, our approaches save up to 60% of computation time and achieve a 0.6 dB PSNR improvement. (Illustrative sketches of the main components follow this record.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE Access, 2021, v. 9, p. 106049-106064 | -
dcterms.isPartOf | IEEE Access | -
dcterms.issued | 2021 | -
dc.identifier.scopus | 2-s2.0-85111050362 | -
dc.identifier.eissn | 2169-3536 | -
dc.description.validate | 202202 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was supported in part by the Hong Kong Polytechnic University Internal Research Grant ZZHR, and in part by the RGC Project of the Hong Kong Special Administrative Region, China, under Grant University Grants Committee (UGC)/IDS(C)11/E01/20. | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
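
The abstract above describes an improved back-projection residual block with added weights for adaptive feature tuning. The paper's exact architecture is not reproduced here; the following is a minimal PyTorch sketch of a generic back-projection block, where the layer shapes, the single learnable scalar `alpha`, and the class name are illustrative assumptions rather than the authors' code.

```python
# Hypothetical back-projection residual block with a learnable residual
# weight ("adaptive feature tuning"); a sketch, not the paper's network.
import torch
import torch.nn as nn

class BackProjectionResidualBlock(nn.Module):
    def __init__(self, channels=64, scale=2):
        super().__init__()
        # Kernel/stride/padding chosen so spatial size scales exactly by `scale`.
        k, s, p = (6, 2, 2) if scale == 2 else (8, 4, 2)
        self.up = nn.ConvTranspose2d(channels, channels, k, s, p)    # LR -> HR projection
        self.down = nn.Conv2d(channels, channels, k, s, p)           # HR -> LR back projection
        self.refine = nn.ConvTranspose2d(channels, channels, k, s, p)
        # Learnable weight on the residual path (the adaptive tuning).
        self.alpha = nn.Parameter(torch.ones(1))

    def forward(self, lr_feat):
        hr = self.up(lr_feat)                 # initial up-projection
        lr_back = self.down(hr)               # project back to LR space
        residual = lr_back - lr_feat          # LR-domain reconstruction error
        hr_residual = self.refine(residual)   # lift the error back to HR space
        return hr + self.alpha * hr_residual  # weighted residual correction
```

In a multi-stage design such as the one the abstract sketches, several of these blocks would be cascaded, with the global and local skip connections it mentions wired around and between stages.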
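The abstract also names a "Spatial Convolution Packing" scheme used as an attention mechanism over joint spatial-temporal features. The paper's exact operator is not reproduced here; below is a generic stand-in under the assumption that packing means folding spatial neighborhoods into channels (pixel unshuffle) across stacked frames, followed by squeeze-and-excitation-style channel attention. All names and the wiring are illustrative.

```python
# A plausible reading of spatial-temporal "packing + attention"; a sketch
# under stated assumptions, not the paper's Spatial Convolution Packing.
import torch
import torch.nn as nn

class PackedSpatialTemporalAttention(nn.Module):
    def __init__(self, frames=3, channels=3, pack=2, reduction=4):
        super().__init__()
        packed = frames * channels * pack * pack   # channel count after packing
        self.unshuffle = nn.PixelUnshuffle(pack)   # fold pack x pack patches into channels
        self.attn = nn.Sequential(                 # SE-style channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(packed, packed // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(packed // reduction, packed, 1),
            nn.Sigmoid(),
        )

    def forward(self, frames):
        # frames: (N, T, C, H, W); H and W are assumed divisible by `pack`.
        n, t, c, h, w = frames.shape
        x = frames.reshape(n, t * c, h, w)         # stack time into channels
        x = self.unshuffle(x)                      # (N, T*C*pack^2, H/pack, W/pack)
        return x * self.attn(x)                    # reweight spatial-temporal channels
```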
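Finally, the abstract mentions self-ensemble enhancement for videos with different motions and distortions. The sketch below follows the common geometric self-ensemble recipe (run the model on flipped/rotated copies, invert each transform, average); whether the paper uses exactly these eight transforms is an assumption.

```python
# Minimal geometric self-ensemble sketch; `model` is any SR network that
# maps an LR batch to an SR batch.
import torch

def self_ensemble(model, x):
    """x: (N, C, H, W) low-resolution batch; returns the averaged SR batch."""
    outputs = []
    for rot in range(4):                           # 0/90/180/270 degree rotations
        for flip in (False, True):                 # optional horizontal flip
            t = torch.rot90(x, rot, dims=(-2, -1))
            if flip:
                t = torch.flip(t, dims=(-1,))
            y = model(t)
            if flip:                               # invert transforms in reverse order
                y = torch.flip(y, dims=(-1,))
            y = torch.rot90(y, -rot, dims=(-2, -1))
            outputs.append(y)
    return torch.stack(outputs).mean(dim=0)        # average the aligned predictions
```

Self-ensembling multiplies inference cost by the number of transforms (eight here), which is the usual trade-off against the small PSNR gain it buys.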
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Liu_Efficient_Video_Super-resolution.pdf | | 2.72 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 117 (last week: 1) as of Nov 9, 2025
Downloads: 86 as of Nov 9, 2025
Scopus™ citations: 4 as of Dec 19, 2025
Web of Science™ citations: 1 as of Dec 18, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.