Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/65935
Title: Joint learning of multiple regressors for single image super-resolution
Authors: Zhang, K
Wang, B
Zuo, W
Zhang, H
Zhang, L
Keywords: Image super-resolution
Joint learning
Linear regression
Local learning
Mixture of experts
Issue Date: 2016
Publisher: Institute of Electrical and Electronics Engineers
Source: IEEE signal processing letters, 2016, v. 23, no. 1, 7339441, p. 102-106
Journal: IEEE signal processing letters 
Abstract: Using a global regression model for single image super-resolution (SISR) generally fails to produce visually pleasing output. The recently developed local learning methods provide a remedy by partitioning the feature space into a number of clusters and learning a simple local model for each cluster. However, in these methods the space partition is conducted separately from local model learning, so a large number of local models is required to achieve satisfactory performance. To address this problem, we propose a mixture of experts (MoE) method to jointly learn the feature space partition and the local regression models. Our MoE consists of two components: gating network learning and local regressor learning. An expectation-maximization (EM) algorithm is adopted to train the MoE on a large set of low-resolution/high-resolution (LR/HR) patch pairs. Experimental results demonstrate that the proposed method uses far fewer local models and less time to achieve results comparable or superior to state-of-the-art SISR methods, providing a highly practical solution for real applications.
URI: http://hdl.handle.net/10397/65935
ISSN: 1070-9908
EISSN: 1558-2361
DOI: 10.1109/LSP.2015.2504121
Appears in Collections: Journal/Magazine Article
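
To illustrate the approach summarized in the abstract, the sketch below fits a small mixture of linear-regression experts with a softmax gating network using EM, mapping LR patch features to HR patch targets. This is a hypothetical, simplified reconstruction for intuition only; the function names, the ridge and step-size constants, and the synthetic data are assumptions and do not reproduce the authors' implementation or results.

```python
# Minimal sketch (not the authors' code): EM for a mixture of linear regressors
# with a softmax gating network, in the spirit of the abstract.
import numpy as np


def log_softmax(Z):
    """Row-wise log-softmax, numerically stabilized."""
    Z = Z - Z.max(axis=1, keepdims=True)
    return Z - np.log(np.exp(Z).sum(axis=1, keepdims=True))


def fit_moe(X, Y, K=8, n_iter=30, ridge=1e-3, gate_lr=0.5, gate_steps=5, seed=0):
    """X: (N, d) LR patch features; Y: (N, p) HR patch targets.
    Returns expert weights W (K, d+1, p) and gating weights V (d+1, K)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    p = Y.shape[1]
    Xa = np.hstack([X, np.ones((N, 1))])            # append bias feature
    W = rng.normal(scale=0.01, size=(K, d + 1, p))  # per-expert linear regressors
    V = np.zeros((d + 1, K))                        # gating network weights
    sigma2 = np.ones(K)                             # per-expert noise variance

    for _ in range(n_iter):
        # E-step: responsibilities from gate prior times Gaussian likelihood.
        preds = np.einsum('nd,kdp->knp', Xa, W)               # (K, N, p)
        sq_err = ((Y[None] - preds) ** 2).sum(axis=2)         # (K, N)
        log_lik = -0.5 * (sq_err.T / sigma2 + p * np.log(2 * np.pi * sigma2))
        log_post = log_softmax(Xa @ V) + log_lik              # (N, K)
        log_post -= log_post.max(axis=1, keepdims=True)
        r = np.exp(log_post)
        r /= r.sum(axis=1, keepdims=True)

        # M-step (experts): responsibility-weighted ridge regression per expert.
        for k in range(K):
            Rk = r[:, k]
            A = Xa.T @ (Rk[:, None] * Xa) + ridge * np.eye(d + 1)
            B = Xa.T @ (Rk[:, None] * Y)
            W[k] = np.linalg.solve(A, B)
            resid = Y - Xa @ W[k]
            sigma2[k] = max((Rk * (resid ** 2).sum(axis=1)).sum()
                            / (p * Rk.sum() + 1e-12), 1e-6)

        # M-step (gate): a few gradient ascent steps on the expected log-likelihood.
        for _ in range(gate_steps):
            G = np.exp(log_softmax(Xa @ V))
            V += gate_lr * Xa.T @ (r - G) / N
    return W, V


def predict(X, W, V):
    """Gate-weighted combination of the experts' HR predictions."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    G = np.exp(log_softmax(Xa @ V))                 # (N, K) gate weights
    preds = np.einsum('nd,kdp->nkp', Xa, W)         # (N, K, p)
    return (G[:, :, None] * preds).sum(axis=1)


if __name__ == '__main__':
    # Toy stand-in for LR-feature / HR-patch pairs: two linear regimes.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(4000, 16))
    A1, A2 = rng.normal(size=(2, 16, 25))
    Y = np.where(X[:, :1] > 0, X @ A1, X @ A2) + 0.05 * rng.normal(size=(4000, 25))
    W, V = fit_moe(X, Y, K=4)
    print('training MSE:', ((predict(X, W, V) - Y) ** 2).mean())
```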

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.