Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/114989
DC Field | Value | Language
dc.contributor | Department of Land Surveying and Geo-Informatics | -
dc.contributor | Department of Land Surveying and Geo-Informatics | -
dc.contributor | Research Institute for Land and Space | -
dc.creator | Cao, YX | -
dc.creator | Huang, X | -
dc.creator | Weng, QH | -
dc.date.accessioned | 2025-09-02T00:31:58Z | -
dc.date.available | 2025-09-02T00:31:58Z | -
dc.identifier.issn | 1569-8432 | -
dc.identifier.uri | http://hdl.handle.net/10397/114989 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2025 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/). | en_US
dc.rights | The following publication Cao, Y., Huang, X., & Weng, Q. (2025). A SAM-adapted weakly-supervised semantic segmentation method constrained by uncertainty and transformation consistency. International Journal of Applied Earth Observation and Geoinformation, 137, 104440 is available at https://dx.doi.org/10.1016/j.jag.2025.104440. | en_US
dc.subject | Segment anything model (SAM) | en_US
dc.subject | Weakly supervised learning | en_US
dc.subject | Uncertainty | en_US
dc.subject | Transformation consistency | en_US
dc.subject | Semantic segmentation | en_US
dc.title | A SAM-adapted weakly-supervised semantic segmentation method constrained by uncertainty and transformation consistency | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 137 | -
dc.identifier.doi | 10.1016/j.jag.2025.104440 | -
dcterms.abstract | Semantic segmentation of remote sensing imagery is a fundamental task for generating pixel-wise category maps. Existing deep learning networks rely heavily on dense pixel-wise labels, which are costly to acquire. Given this challenge, this study introduces sparse point labels, a cost-effective type of weak label, for semantic segmentation. Existing weakly-supervised methods often leverage low-level visual or high-level semantic features from networks to generate supervision for unlabeled pixels, which can easily introduce label noise. Furthermore, these methods rarely explore the segment anything model (SAM), a general-purpose foundation model with strong zero-shot generalization capacity in image segmentation. In this paper, we propose a SAM-adapted weakly-supervised method with three components: 1) an adapted EfficientViT-SAM network (AESAM) for semantic segmentation guided by point labels, 2) an uncertainty-based pseudo-label generation module that selects reliable pseudo-labels for supervising unlabeled pixels, and 3) a transformation consistency constraint that enhances AESAM's robustness to data perturbations. The proposed method was tested on the ISPRS Vaihingen dataset (airborne), the Zurich Summer dataset (satellite), and the UAVid dataset (drone). Results demonstrated a significant improvement in mean F1 (by 5.89%-10.56%) and mean IoU (by 5.95%-11.13%) over the baseline method. Compared to the closest competitors, mean F1 increased by 0.83%-5.29% and mean IoU by 1.04%-6.54%. Furthermore, our approach requires fine-tuning only a small number of parameters (0.9 M) using cheap point labels, making it promising for scenarios with limited labeling budgets. The code is available at https://github.com/lauraset/SAM-UTC-WSSS. | -
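As a rough illustration of two ideas the abstract describes, the sketch below shows entropy-based pseudo-label selection (keep a pixel's predicted label only when the model is confident) and a transformation-consistency check (the prediction for a transformed image should match the transformed prediction). Function names, the entropy threshold `tau`, and the NumPy formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def entropy_map(probs):
    """Per-pixel entropy of class probabilities; probs has shape (C, H, W)."""
    return -np.sum(probs * np.log(probs + 1e-8), axis=0)

def select_pseudo_labels(probs, tau):
    """Keep argmax labels only where entropy (uncertainty) is below tau.

    Returns an (H, W) label map with -1 marking pixels left unsupervised.
    """
    ent = entropy_map(probs)
    labels = probs.argmax(axis=0)
    labels[ent >= tau] = -1
    return labels

def consistency_loss(pred_orig, pred_transformed, transform):
    """Mean absolute gap between transform(f(x)) and f(transform(x))."""
    return float(np.mean(np.abs(transform(pred_orig) - pred_transformed)))
```

For example, with a horizontal flip as the transform, a perfectly equivariant network gives a consistency loss of zero, and a pixel with a uniform class distribution (maximum entropy) is excluded from the pseudo-labels.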
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | International journal of applied earth observation and geoinformation, Mar. 2025, v. 137, 104440 | -
dcterms.isPartOf | International journal of applied earth observation and geoinformation | -
dcterms.issued | 2025-03 | -
dc.identifier.isi | WOS:001434737600001 | -
dc.identifier.eissn | 1872-826X | -
dc.identifier.artn | 104440 | -
dc.description.validate | 202509 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Global STEM Professorship, Hong Kong SAR Government; Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
1-s2.0-S1569843225000871-main.pdf | | 18.78 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Web of Science™ citations: 4 (as of Apr 23, 2026)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.