Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/101790
DC Field | Value | Language
dc.contributor | Department of Land Surveying and Geo-Informatics | -
dc.creator | Liu, W | en_US
dc.creator | Liu, J | en_US
dc.creator | Luo, Z | en_US
dc.creator | Zhang, H | en_US
dc.creator | Gao, K | en_US
dc.creator | Li, J | en_US
dc.date.accessioned | 2023-09-18T07:44:45Z | -
dc.date.available | 2023-09-18T07:44:45Z | -
dc.identifier.issn | 1569-8432 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/101790 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US
dc.rights | The following publication Liu, W., Liu, J., Luo, Z., Zhang, H., Gao, K., & Li, J. (2022). Weakly supervised high spatial resolution land cover mapping based on self-training with weighted pseudo-labels. International Journal of Applied Earth Observation and Geoinformation, 112, 102931 is available at https://doi.org/10.1016/j.jag.2022.102931. | en_US
dc.subject | Land cover mapping | en_US
dc.subject | Pseudo-learning | en_US
dc.subject | Self-training | en_US
dc.subject | Semantic segmentation | en_US
dc.subject | Unsupervised domain adaptation | en_US
dc.title | Weakly supervised high spatial resolution land cover mapping based on self-training with weighted pseudo-labels | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 112 | en_US
dc.identifier.doi | 10.1016/j.jag.2022.102931 | en_US
dcterms.abstract | Despite its success, deep learning for land cover mapping requires massive amounts of pixel-wise labeled images and typically assumes that the training and test scenes share a similar data distribution. A model trained on one dataset can therefore degrade significantly on a new dataset because of the domain shift (domain gap) across datasets, so each new dataset again demands labor-intensive manual pixel-wise labeling. This paper proposes a land cover mapping framework that combines a Feature Pyramid Network (FPN) with self-training. In the FPN, we integrate ConvNeXt with a Pyramid Pooling Module (PPM); combining the FPN and the PPM improves segmentation performance through multiscale aggregation of pyramid features. To fully exploit pseudo-labels, we design an Unsupervised Domain Adaptation (UDA) land cover mapping scheme that self-trains on weighted pseudo-labels of the target samples (see the illustrative sketch after this record). Comparison results on the LoveDA dataset, the latest large-scale UDA dataset for land cover mapping, demonstrate that our approach significantly outperforms the baselines in both UDA scenarios, i.e., Urban → Rural and Rural → Urban. The models of this paper are publicly available on GitHub. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | International Journal of Applied Earth Observation and Geoinformation, Aug. 2022, v. 112, 102931 | en_US
dcterms.isPartOf | International journal of applied earth observation and geoinformation | en_US
dcterms.issued | 2022-08 | -
dc.identifier.scopus | 2-s2.0-85135810626 | -
dc.identifier.eissn | 1872-826X | en_US
dc.identifier.artn | 102931 | en_US
dc.description.validate | 202309 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Natural Science Foundation of Jiangxi Province; Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
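
The abstract's core idea, self-training with confidence-weighted pseudo-labels, can be sketched in a few lines. The following is a minimal illustrative PyTorch sketch, not the authors' released implementation: the function name weighted_pseudo_label_loss is hypothetical, and weighting each pixel's cross-entropy by the teacher's softmax confidence is one common recipe; the paper's exact weighting scheme may differ.

    import torch
    import torch.nn.functional as F

    def weighted_pseudo_label_loss(student_logits, teacher_logits):
        # Hypothetical helper, for illustration only; the paper's exact
        # weighting scheme may differ from this common recipe.
        # Both logits tensors have shape (B, C, H, W): segmentation
        # outputs on unlabeled target-domain images.
        with torch.no_grad():
            probs = torch.softmax(teacher_logits, dim=1)
            conf, pseudo = probs.max(dim=1)  # per-pixel confidence and pseudo-label, (B, H, W)
        ce = F.cross_entropy(student_logits, pseudo, reduction="none")  # (B, H, W)
        return (conf * ce).mean()  # confidence-weighted self-training loss

In a full self-training loop, the teacher would typically be a frozen or slowly updated (EMA) copy of the student, and this target-domain loss would be combined with ordinary supervised cross-entropy on labeled source-domain images.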
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S1569843222001297-main.pdf | - | 1.76 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Page views: 88 (as of Apr 14, 2025)
Downloads: 52 (as of Apr 14, 2025)
Scopus citations: 14 (as of Sep 12, 2025)
Web of Science citations: 8 (as of Nov 14, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.