Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105454
DC Field: Value
dc.contributor: Department of Computing
dc.creator: Zheng, H
dc.creator: Yong, H
dc.creator: Zhang, L
dc.date.accessioned: 2024-04-15T07:34:28Z
dc.date.available: 2024-04-15T07:34:28Z
dc.identifier.isbn: 978-1-6654-4509-2 (Electronic)
dc.identifier.isbn: 978-1-6654-4510-8 (Print on Demand (PoD))
dc.identifier.uri: http://hdl.handle.net/10397/105454
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.rights: The following publication H. Zheng, H. Yong and L. Zhang, "Deep Convolutional Dictionary Learning for Image Denoising," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 630-641 is available at https://doi.org/10.1109/CVPR46437.2021.00069.
dc.title: Deep convolutional dictionary learning for image denoising
dc.type: Conference Paper
dc.identifier.spage: 630
dc.identifier.epage: 641
dc.identifier.doi: 10.1109/CVPR46437.2021.00069
dcterms.abstract: Inspired by the great success of deep neural networks (DNNs), many unfolding methods have been proposed to integrate traditional image modeling techniques, such as dictionary learning (DicL) and sparse coding, into DNNs for image restoration. However, the performance of such methods remains limited for several reasons. First, the unfolded architectures do not strictly follow the image representation model of DicL and lose the desired physical meaning. Second, handcrafted priors are still used in most unfolding methods without effectively utilizing the learning capability of DNNs. Third, a universal dictionary is learned to represent all images, reducing the model representation flexibility. We propose a novel framework of deep convolutional dictionary learning (DCDicL), which follows the representation model of DicL strictly, learns the priors for both representation coefficients and the dictionaries, and can adaptively adjust the dictionary for each input image based on its content. The effectiveness of our DCDicL method is validated on the image denoising problem. DCDicL demonstrates leading denoising performance in terms of both quantitative metrics (e.g., PSNR, SSIM) and visual quality. In particular, it can reproduce the subtle image structures and textures, which are hard to recover by many existing denoising DNNs. The code is available at: https://github.com/natezhenghy/DCDicL_denoising.
dcterms.accessRights: open access
dcterms.bibliographicCitation: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19-25 June 2021, p. 630-641
dcterms.issued: 2021
dc.identifier.scopus: 2-s2.0-85120632625
dc.relation.conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition [CVPR]
dc.description.validate: 202402 bcch
dc.description.oa: Accepted Manuscript
dc.identifier.FolderNumber: COMP-0035
dc.description.fundingSource: RGC
dc.description.pubStatus: Published
dc.identifier.OPUS: 56309531
dc.description.oaCategory: Green (AAM)
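The abstract states that DCDicL strictly follows the representation model of dictionary learning (DicL). As background only, the convolutional synthesis model such methods build on writes an image as a sum of dictionary atoms convolved with sparse coefficient maps, X = Σ_k D_k ⊛ A_k. The numpy sketch below illustrates that model; all sizes, the random data, and the sparsity threshold are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(a, k):
    """Naive zero-padded 'same' 2-D convolution, numpy only."""
    p = k.shape[0]
    pad = p // 2
    ap = np.pad(a, pad)
    kf = k[::-1, ::-1]  # flip kernel for true convolution
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.sum(ap[i:i + p, j:j + p] * kf)
    return out

# Hypothetical sizes: K atoms of size p x p, image H x W.
K, p, H, W = 4, 5, 32, 32
D = rng.standard_normal((K, p, p))   # dictionary: K convolutional atoms
A = rng.standard_normal((K, H, W))   # one coefficient map per atom
A[np.abs(A) < 1.5] = 0.0             # keep only large coefficients (sparsity)

# Convolutional synthesis model: X = sum_k D_k (*) A_k
X = sum(conv2d_same(A[k], D[k]) for k in range(K))
```

In the paper's setting both the dictionary and the coefficient priors are learned by networks and the dictionary adapts per image; this sketch only shows the fixed synthesis step that the learned components plug into.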
Appears in Collections: Conference Paper
Files in This Item:
File: Zheng_Deep_Convolutional_Dictionary.pdf (Pre-Published version, 5.72 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Page views: 96 (last week: 10), as of Nov 9, 2025

Downloads: 183, as of Nov 9, 2025

SCOPUS™ citations: 122, as of Dec 19, 2025

Web of Science™ citations: 108, as of Dec 18, 2025


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.