Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105644
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Gu, S | en_US
dc.creator | Zuo, W | en_US
dc.creator | Guo, S | en_US
dc.creator | Chen, Y | en_US
dc.creator | Chen, C | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-04-15T07:35:38Z | -
dc.date.available | 2024-04-15T07:35:38Z | -
dc.identifier.isbn | 978-1-5386-0457-1 (Electronic) | en_US
dc.identifier.isbn | 978-1-5386-0458-8 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105644 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication S. Gu, W. Zuo, S. Guo, Y. Chen, C. Chen and L. Zhang, "Learning Dynamic Guidance for Depth Image Enhancement," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 712-721 is available at https://doi.org/10.1109/CVPR.2017.83. | en_US
dc.title | Learning dynamic guidance for depth image enhancement | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 712 | en_US
dc.identifier.epage | 721 | en_US
dc.identifier.doi | 10.1109/CVPR.2017.83 | en_US
dcterms.abstract | Depth images acquired by consumer depth sensors (e.g., Kinect and ToF cameras) are usually of low resolution and insufficient quality. One natural solution is to incorporate a high-resolution RGB camera and exploit the statistical correlation between intensity and depth. However, most existing methods are intuitive and limited in characterizing the complex and dynamic dependency between intensity and depth images. To address these limitations, we propose a weighted analysis representation model for guided depth image enhancement, which advances conventional methods in two aspects: (i) task-driven learning and (ii) dynamic guidance. First, we generalize the analysis representation model by including a guided weight function for dependency modeling, and introduce a task-driven learning formulation to obtain guidance optimized for the specific enhancement task. Second, the depth image is gradually enhanced over the iterations, so the guidance should also be adjusted dynamically to account for the updated depth image. To this end, stage-wise parameters are learned to provide dynamic guidance. Experiments on guided depth image upsampling and noisy depth image restoration validate the effectiveness of our method. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21-26 July 2017, Honolulu, Hawaii, p. 712-721 | en_US
dcterms.issued | 2017 | -
dc.identifier.scopus | 2-s2.0-85044282356 | -
dc.relation.conference | Conference on Computer Vision and Pattern Recognition [CVPR] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | COMP-1084 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | NSFC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 13899670 | -
dc.description.oaCategory | Green (AAM) | en_US
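The abstract describes guidance that is recomputed stage by stage as the depth map is iteratively refined. As a purely illustrative sketch of that general idea (this is NOT the paper's weighted analysis representation model; the function name `dynamic_guided_enhance`, the 4-neighbour exponential weights, and the parameters `n_stages`, `sigma`, and `lam` are all assumptions made up for illustration), one could write an intensity-guided smoother whose per-pixel weights depend on both the RGB/intensity guide and the current depth estimate:

```python
import numpy as np

def dynamic_guided_enhance(depth, intensity, n_stages=3, sigma=0.1, lam=0.5):
    """Illustrative stage-wise guided smoothing of a noisy depth map.

    At each stage, 4-neighbour weights are recomputed from BOTH the
    intensity image and the current depth estimate, so the guidance
    adapts as the depth map improves (a toy version of 'dynamic
    guidance'; not the method proposed in the paper).
    """
    d = depth.astype(np.float64).copy()
    i_img = intensity.astype(np.float64)
    for _ in range(n_stages):
        acc = np.zeros_like(d)
        wsum = np.zeros_like(d)
        # 4-neighbour shifts (np.roll wraps at the borders)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            shifted_d = np.roll(d, (dy, dx), axis=(0, 1))
            shifted_i = np.roll(i_img, (dy, dx), axis=(0, 1))
            # guidance weight: large only where the neighbour has
            # similar intensity AND similar current depth
            diff2 = (i_img - shifted_i) ** 2 + (d - shifted_d) ** 2
            w = np.exp(-diff2 / (2 * sigma ** 2))
            acc += w * shifted_d
            wsum += w
        # blend the weighted neighbour average into the current estimate
        d = (1 - lam) * d + lam * acc / (wsum + 1e-12)
    return d
```

Because cross-edge weights vanish wherever the intensity guide has a strong edge, noise inside flat regions is averaged away while the depth discontinuity survives the iterations.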
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Gu_Learning_Dynamic_Guidance.pdf | Pre-Published version | 5.53 MB | Adobe PDF
Open Access Information
Status open access
File Version Final Accepted Manuscript

Page views: 115 (as of Apr 12, 2026)
Downloads: 145 (as of Apr 12, 2026)
Scopus citations: 96 (as of May 8, 2026)
Web of Science citations: 90 (as of Apr 23, 2026)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.