DC Field | Value | Language
dc.contributor | School of Nursing | en_US
dc.contributor | Department of Computing | en_US
dc.creator | Hang, W | en_US
dc.creator | Liang, S | en_US
dc.creator | Choi, KS | en_US
dc.creator | Chung, FL | en_US
dc.creator | Wang, S | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication W. Hang, S. Liang, K.-S. Choi, F.-L. Chung and S. Wang, "Selective Transfer Classification Learning With Classification-Error-Based Consensus Regularization," in IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 5, no. 2, pp. 178-190, April 2021 is available at | en_US
dc.subject | Classification-error-based consensus regularization (CCR) | en_US
dc.subject | Leave-one-out cross-validation | en_US
dc.subject | Least square-support vector machine (LS-SVM) | en_US
dc.subject | Transfer learning | en_US
dc.title | Selective transfer classification learning with classification-error-based consensus regularization | en_US
dc.type | Journal/Magazine Article | en_US
dcterms.abstract | Transfer learning methods conventionally use abundant labeled data in a source domain to build an accurate classifier for a target domain with scarce labeled data. However, most current transfer learning methods assume that all the source data are relevant to the target domain, which can induce a negative transfer effect when this assumption fails, as it does in many practical scenarios. To tackle this issue, the key is to accurately and quickly select the correlated source data and the corresponding weights. In this paper, we make use of the least square-support vector machine (LS-SVM) framework to identify the correlated data and their weights in the source domain. By keeping the distributions of the classification errors of the source and target domains consistent, we first propose classification-error-based consensus regularization (CCR), which guarantees a performance improvement of the target classifier. Building on this approach, we then develop a novel CCR-based selective transfer classification learning method (CSTL) that autonomously and quickly chooses the correlated source data and their weights, exploiting the transferred knowledge by solving the LS-SVM-based objective function. This method minimizes the leave-one-out cross-validation error despite scarce target training data. The advantages of CSTL are demonstrated by evaluating its performance on public image and text datasets and comparing it with state-of-the-art transfer learning methods. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on emerging topics in computational intelligence, Apr. 2021, v. 5, no. 2, p. 178-190 | en_US
dcterms.isPartOf | IEEE transactions on emerging topics in computational intelligence | en_US
dc.description.ros | 202103 bcrc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.description.fundingText | PolyU 152040/16E | en_US
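Background note: the abstract builds on the LS-SVM, whose training reduces to solving a single linear system rather than a quadratic program; this structure is also what makes leave-one-out error estimates cheap, which is presumably why the abstract's method can minimize the LOOCV error despite scarce target data. As a point of reference only (this is not the paper's CSTL algorithm; the RBF kernel and the C and gamma values are illustrative assumptions), a minimal LS-SVM classifier can be sketched as:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def lssvm_fit(X, y, C=10.0, gamma=1.0):
    """Train an LS-SVM classifier by solving its dual linear system:
        [ 0   1^T     ] [ b     ]   [ 0 ]
        [ 1   K + I/C ] [ alpha ] = [ y ]
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    """Sign of the LS-SVM decision function at the new points."""
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Toy demo: two well-separated clusters labeled -1 / +1.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.],
              [5., 5.], [5., 6.], [6., 5.], [6., 6.]])
y = np.array([-1.] * 4 + [1.] * 4)
b, alpha = lssvm_fit(X, y)
```

Because the fit is one dense solve, refitting under reweighted or removed source samples (as selective transfer requires) stays inexpensive compared with retraining a standard SVM.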
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
a0597-n20_462.pdf | Pre-Published version | 2.36 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.