Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106928
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Li, H | -
dc.creator | Lam, KM | -
dc.creator | Wang, M | -
dc.date.accessioned | 2024-06-07T00:58:56Z | -
dc.date.available | 2024-06-07T00:58:56Z | -
dc.identifier.issn | 0923-5965 | -
dc.identifier.uri | http://hdl.handle.net/10397/106928 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier BV | en_US
dc.rights | © 2018 Elsevier B.V. All rights reserved. | en_US
dc.rights | © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US
dc.rights | The following publication Li, H., Lam, K. M., & Wang, M. (2019). Image super-resolution via feature-augmented random forest. Signal Processing: Image Communication, 72, 25-34 is available at https://doi.org/10.1016/j.image.2018.12.001. | en_US
dc.subject | Clustering and regression | en_US
dc.subject | Gradient magnitude filter | en_US
dc.subject | Image super-resolution | en_US
dc.subject | Random forest | en_US
dc.subject | Weighted ridge regression | en_US
dc.title | Image super-resolution via feature-augmented random forest | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 25 | -
dc.identifier.epage | 34 | -
dc.identifier.volume | 72 | -
dc.identifier.doi | 10.1016/j.image.2018.12.001 | -
dcterms.abstract | Recent random-forest (RF)-based image super-resolution approaches inherit some properties from dictionary-learning-based algorithms, but the effectiveness of the features used in an RF has been overlooked in the literature. In this paper, we present a novel feature-augmented random forest (FARF) method for image super-resolution, in which conventional gradient-based features are augmented and different feature recipes are formulated for the different processing stages of an RF. The advantages of our method are threefold. First, the dictionary-learning-based features are enhanced by adding gradient magnitudes, based on the observation that non-linear gradient magnitudes are highly discriminative. Second, generalized locality-sensitive hashing (LSH) replaces principal component analysis (PCA) for feature dimensionality reduction when constructing the trees, while the original high-dimensional features, rather than the compressed LSH features, are employed for the leaf-node regressors; with these higher-dimensional features, the regressors achieve better learning performance. Finally, we present a generalized weighted ridge regression (GWRR) model for the leaf-node regressors. Experimental results on several public benchmark datasets show that our FARF method achieves an average gain of about 0.3 dB over traditional RF-based methods. Furthermore, a fine-tuned FARF model is comparable to, and in many cases outperforms, some recent state-of-the-art deep-learning-based algorithms. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Signal processing. Image communication, Mar. 2019, v. 72, p. 25-34 | -
dcterms.isPartOf | Signal processing. Image communication | -
dcterms.issued | 2019-03 | -
dc.identifier.scopus | 2-s2.0-85058475488 | -
dc.description.validate | 202405 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0407 | en_US
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20083223 | en_US
dc.description.oaCategory | Green (AAM) | en_US
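To make the techniques named in the abstract above concrete, here is a minimal, illustrative Python sketch (not the authors' released code): augmenting first- and second-order gradient features with the non-linear gradient magnitude, a common random-projection form of LSH of the kind that could replace PCA when growing the trees, and the closed-form weighted ridge regression used at a leaf node. The filter kernels, the hashing scheme, the weight vector `w`, and all function names are assumptions for illustration; the paper's GWRR derives its weights from clustering, which this sketch abstracts as per-sample weights.

```python
# Illustrative sketch only -- assumed conventions, not the paper's exact recipe.
import numpy as np
from scipy.ndimage import convolve

def gradient_features(patch):
    """Augmented feature vector from a 2-D LR patch: first-order gradients,
    a second-order (Laplacian) response, and the non-linear gradient
    magnitude highlighted in the abstract. Kernels are a common choice."""
    gx = convolve(patch, np.array([[-1.0, 0.0, 1.0]]))       # horizontal gradient
    gy = convolve(patch, np.array([[-1.0], [0.0], [1.0]]))   # vertical gradient
    lap = convolve(patch, np.array([[0.0,  1.0, 0.0],
                                    [1.0, -4.0, 1.0],
                                    [0.0,  1.0, 0.0]]))      # second-order term
    mag = np.sqrt(gx**2 + gy**2)                             # gradient magnitude
    return np.concatenate([gx.ravel(), gy.ravel(), lap.ravel(), mag.ravel()])

def lsh_codes(X, n_bits=8, seed=0):
    """Random-hyperplane LSH: sign bits of X @ P give short binary codes,
    usable for node splitting instead of PCA-compressed features."""
    P = np.random.default_rng(seed).standard_normal((X.shape[1], n_bits))
    return (X @ P > 0).astype(np.uint8)                      # (n, n_bits) codes

def weighted_ridge(X, Y, w, lam=0.1):
    """Closed-form weighted ridge regression for a leaf-node regressor:
    C = (X^T W X + lam*I)^{-1} X^T W Y, with W = diag(w).
    X: (n, d) LR features, Y: (n, k) HR targets, w: (n,) sample weights."""
    XtW = X.T * w                              # X^T W without forming diag(w)
    A = XtW @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, XtW @ Y)         # (d, k) projection matrix

# Toy usage: learn one leaf regressor from random data.
rng = np.random.default_rng(0)
X = np.stack([gradient_features(rng.standard_normal((6, 6))) for _ in range(200)])
Y = rng.standard_normal((200, 36))             # e.g. flattened 6x6 HR patches
w = rng.uniform(0.5, 1.0, size=200)            # assumed per-sample weights
C = weighted_ridge(X, Y, w)
print(C.shape)                                 # (144, 36)
```

At inference, a leaf would map an augmented LR feature vector `f` to HR patch coefficients via `f @ C`; note that, per the abstract, the full-dimensional features feed these regressors, while the compressed LSH codes are used only when constructing the trees.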
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Li_Image_Super-Resolution_Via.pdf | Pre-Published version | 2.58 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Access: View full-text via PolyU eLinks SFX Query

Page views: 4 (as of Jun 30, 2024)
Downloads: 2 (as of Jun 30, 2024)
Scopus™ citations: 15 (as of Jun 21, 2024)
Web of Science™ citations: 13 (as of Jun 27, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.