Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105639
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Gu, S | en_US
dc.creator | Meng, D | en_US
dc.creator | Zuo, W | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-04-15T07:35:36Z | -
dc.date.available | 2024-04-15T07:35:36Z | -
dc.identifier.isbn | 978-1-5386-1032-9 (Electronic) | en_US
dc.identifier.isbn | 978-1-5386-1033-6 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105639 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication S. Gu, D. Meng, W. Zuo and L. Zhang, "Joint Convolutional Analysis and Synthesis Sparse Representation for Single Image Layer Separation," 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 1717-1725 is available at https://doi.org/10.1109/ICCV.2017.189. | en_US
dc.title | Joint convolutional analysis and synthesis sparse representation for single image layer separation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1717 | en_US
dc.identifier.epage | 1725 | en_US
dc.identifier.doi | 10.1109/ICCV.2017.189 | en_US
dcterms.abstract | Analysis sparse representation (ASR) and synthesis sparse representation (SSR) are two representative approaches for sparsity-based image modeling. An image is described mainly by the non-zero coefficients in SSR, while it is mainly characterized by the indices of zero coefficients in ASR. To exploit the complementary representation mechanisms of ASR and SSR, we integrate the two models and propose a joint convolutional analysis and synthesis (JCAS) sparse representation model. A convolutional implementation is adopted to exploit global image information more effectively. In JCAS, a single image is decomposed into two layers: one is approximated by ASR to represent large-scale image structures, and the other by SSR to represent fine-scale image textures. The synthesis dictionary is adaptively learned in JCAS to describe the texture patterns of different single image layer separation tasks. We evaluate the proposed JCAS model on a variety of applications, including rain streak removal and high dynamic range image tone mapping. The results show that our JCAS method outperforms state-of-the-art methods in these applications in terms of both quantitative measures and visual quality. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2017 IEEE International Conference on Computer Vision (ICCV), 22–29 October 2017, Venice, Italy, p. 1717-1725 | en_US
dcterms.issued | 2017 | -
dc.identifier.scopus | 2-s2.0-85041920505 | -
dc.relation.conference | International Conference on Computer Vision [ICCV] | -
dc.description.validate | 202402 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | COMP-1050 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | NSFC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 13899748 | -
dc.description.oaCategory | Green (AAM) | en_US
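The abstract describes splitting an image into a structure layer (analysis sparsity) and a texture layer (synthesis sparsity). The toy sketch below illustrates only that two-layer alternating idea: it stands in a quadratic gradient penalty for the analysis prior and a soft-thresholded residual for the synthesis part. It is a hypothetical illustration, not the paper's learned convolutional dictionaries or its actual solver; `jcas_sketch`, `lam`, and `tau` are names invented here.

```python
import numpy as np

def jcas_sketch(y, lam=2.0, tau=0.1, iters=10):
    """Toy alternating scheme loosely inspired by JCAS layer separation.

    Structure layer x_s: analysis sparsity on gradients, relaxed here to a
    quadratic (Tikhonov) penalty solved in closed form in the Fourier domain.
    Texture layer x_t: a soft-thresholded residual standing in for a learned
    convolutional synthesis representation. Illustrative sketch only.
    """
    H, W = y.shape
    # Frequency responses of horizontal/vertical circular difference filters.
    dx = np.zeros((H, W)); dx[0, 0], dx[0, -1] = 1.0, -1.0
    dy = np.zeros((H, W)); dy[0, 0], dy[-1, 0] = 1.0, -1.0
    D = np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2
    x_t = np.zeros_like(y)
    for _ in range(iters):
        # Structure update: argmin_x ||y - x_t - x||^2 + lam * ||grad x||^2.
        x_s = np.real(np.fft.ifft2(np.fft.fft2(y - x_t) / (1.0 + lam * D)))
        # Texture update: keep only the sparse part of the residual.
        r = y - x_s
        x_t = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
    return x_s, x_t

# Usage: separate a smooth ramp (structure) plus sinusoidal stripes (texture).
ramp = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
img = ramp + 0.3 * np.sin(2.0 * np.arange(64))[None, :]
xs, xt = jcas_sketch(img)
```

After the loop, `xs` is noticeably smoother than the input while `xt` absorbs the stripe pattern, mirroring the structure/texture split the abstract describes at a much simpler level.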
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Gu_Joint_Convolutional_Analysis.pdf | Pre-Published version | 2.62 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.