Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/105454
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.creator | Zheng, H | en_US |
| dc.creator | Yong, H | en_US |
| dc.creator | Zhang, L | en_US |
| dc.date.accessioned | 2024-04-15T07:34:28Z | - |
| dc.date.available | 2024-04-15T07:34:28Z | - |
| dc.identifier.isbn | 978-1-6654-4509-2 (Electronic) | en_US |
| dc.identifier.isbn | 978-1-6654-4510-8 (Print on Demand (PoD)) | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/105454 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
| dc.rights | ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
| dc.rights | The following publication H. Zheng, H. Yong and L. Zhang, "Deep Convolutional Dictionary Learning for Image Denoising," 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 2021, pp. 630-641 is available at https://doi.org/10.1109/CVPR46437.2021.00069. | en_US |
| dc.title | Deep convolutional dictionary learning for image denoising | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.spage | 630 | en_US |
| dc.identifier.epage | 641 | en_US |
| dc.identifier.doi | 10.1109/CVPR46437.2021.00069 | en_US |
| dcterms.abstract | Inspired by the great success of deep neural networks (DNNs), many unfolding methods have been proposed to integrate traditional image modeling techniques, such as dictionary learning (DicL) and sparse coding, into DNNs for image restoration. However, the performance of such methods remains limited for several reasons. First, the unfolded architectures do not strictly follow the image representation model of DicL and lose the desired physical meaning. Second, handcrafted priors are still used in most unfolding methods without effectively utilizing the learning capability of DNNs. Third, a universal dictionary is learned to represent all images, reducing the model representation flexibility. We propose a novel framework of deep convolutional dictionary learning (DCDicL), which follows the representation model of DicL strictly, learns the priors for both representation coefficients and the dictionaries, and can adaptively adjust the dictionary for each input image based on its content. The effectiveness of our DCDicL method is validated on the image denoising problem. DCDicL demonstrates leading denoising performance in terms of both quantitative metrics (e.g., PSNR, SSIM) and visual quality. In particular, it can reproduce the subtle image structures and textures, which are hard to recover by many existing denoising DNNs. The code is available at: https://github.com/natezhenghy/DCDicL_denoising. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19-25 June 2021, p. 630-641 | en_US |
| dcterms.issued | 2021 | - |
| dc.identifier.scopus | 2-s2.0-85120632625 | - |
| dc.relation.conference | IEEE/CVF Conference on Computer Vision and Pattern Recognition [CVPR] | - |
| dc.description.validate | 202402 bcch | - |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | COMP-0035 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 56309531 | - |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Conference Paper
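The abstract above refers to the representation model of dictionary learning (DicL) that DCDicL unfolds: an image is synthesized as a sum of convolutions of dictionary atoms with sparse coefficient maps, and classical unfolding methods apply a handcrafted sparsity prior to those maps. As an illustration only, here is a minimal NumPy sketch of that model, not the authors' implementation; the atom shapes, map sizes, and threshold value are assumptions for the example.

```python
import numpy as np

def conv2_circ(d, a):
    # Circular 2-D convolution via FFT; output has the spatial size of `a`.
    H, W = a.shape
    return np.real(np.fft.ifft2(np.fft.fft2(d, s=(H, W)) * np.fft.fft2(a)))

def synthesize(atoms, maps):
    # DicL representation model: X is approximated by sum_k D_k * A_k.
    return sum(conv2_circ(d, a) for d, a in zip(atoms, maps))

def soft_threshold(v, tau):
    # Proximal operator of a handcrafted l1 sparsity prior; unfolding
    # methods such as DCDicL learn this prior with a DNN instead.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Illustrative usage: a delta atom reproduces its coefficient map exactly.
atoms = [np.zeros((3, 3)), np.zeros((3, 3))]
atoms[0][0, 0] = 1.0                        # identity (delta) atom
maps = [np.zeros((8, 8)), np.zeros((8, 8))]
maps[0][2, 3] = 5.0                         # a single sparse activation
X = synthesize(atoms, maps)
```

In this notation, DCDicL's contributions are to keep the synthesis model `X ≈ Σ_k D_k * A_k` strictly, learn the priors on both the maps `A_k` and the dictionary atoms `D_k` (rather than a fixed `soft_threshold`), and adapt the dictionary to each input image.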
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Zheng_Deep_Convolutional_Dictionary.pdf | Pre-Published version | 5.72 MB | Adobe PDF | View/Open |
Page views: 96 (last week: 10; last month: 10), as of Nov 9, 2025
Downloads: 183, as of Nov 9, 2025
SCOPUS™ citations: 122, as of Dec 19, 2025
Web of Science™ citations: 108, as of Dec 18, 2025
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.