Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113846
Title: One Class One Click: quasi scene-level weakly supervised point cloud semantic segmentation with active learning
Authors: Wang, P 
Yao, W 
Shao, J 
Issue Date: Oct-2023
Source: ISPRS journal of photogrammetry and remote sensing, Oct. 2023, v. 204, p. 89-104
Abstract: Reliance on vast annotations to achieve leading performance severely restricts the practicality of large-scale point cloud semantic segmentation. To reduce data annotation costs, effective labeling schemes have been developed that attain competitive results under a weak supervision strategy. Revisiting current weak label forms, we introduce One Class One Click (OCOC), a low-cost yet informative quasi scene-level label that encapsulates both point-level and scene-level annotations. An active weakly supervised framework is proposed to leverage these scarce labels by incorporating weak supervision from both global and local perspectives. Contextual constraints are imposed by an auxiliary scene classification task, applied respectively to a global feature embedding and to the aggregation of point-wise predictions, which restricts model predictions within a sub-cloud to its OCOC labels. Furthermore, we design a context-aware pseudo labeling strategy that effectively supplements the point-level supervisory signals subject to OCOC labels. Finally, an active learning scheme built on an uncertainty measure, temporal output discrepancy, is integrated to identify informative samples and guide sub-cloud queries, which helps quickly attain desirable OCOC annotations and drives the labeling cost down to an extremely low level. Extensive experimental analysis on three LiDAR benchmarks collected from airborne, mobile, and ground platforms, respectively, demonstrates that the proposed method achieves very promising results despite relying on scarce labels. It considerably outperforms genuine scene-level weakly supervised methods by up to 25% in terms of average F1 score and achieves competitive results against full supervision schemes. On the terrestrial LiDAR dataset Semantic3D, using approximately 2‱ of the labels, our method achieves an average F1 score of 85.2%, an increase of 11.58% over the baseline model. Code is publicly available at https://github.com/PuzoW/One-Class-One-Click.
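To make the two central mechanisms described in the abstract more concrete, the following Python (PyTorch) sketch illustrates how a quasi scene-level OCOC constraint and a temporal-output-discrepancy uncertainty score could be computed. This is a minimal illustration reconstructed from the abstract only, not the authors' implementation (see the linked repository for the actual code); all function and variable names are hypothetical.

import torch
import torch.nn.functional as F

def ococ_scene_constraint_loss(point_logits: torch.Tensor,
                               scene_labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical quasi scene-level constraint for one sub-cloud.

    point_logits: (N, C) per-point class logits.
    scene_labels: (C,) multi-hot vector, 1 for classes given an OCOC click.
    Aggregates point-wise predictions over the sub-cloud and penalizes
    probability mass assigned to classes absent from its OCOC label set.
    """
    point_probs = F.softmax(point_logits, dim=-1)   # (N, C)
    scene_probs = point_probs.mean(dim=0)           # (C,) aggregated scene prediction
    absent = scene_labels == 0                      # classes with no OCOC label
    if not absent.any():                            # every class labeled: nothing to constrain
        return point_logits.new_zeros(())
    return -torch.log(1.0 - scene_probs[absent] + 1e-6).mean()

def temporal_output_discrepancy(probs_now: torch.Tensor,
                                probs_prev: torch.Tensor) -> torch.Tensor:
    """Hypothetical uncertainty score of a sub-cloud: L2 distance between
    point-wise predictions of the model at two consecutive training stages,
    averaged over points. High-discrepancy sub-clouds are candidates to be
    queried for new OCOC annotations in the active learning loop."""
    return (probs_now - probs_prev).norm(p=2, dim=-1).mean()

# Hypothetical usage inside a training / query loop:
# loss  = seg_loss + lambda_scene * ococ_scene_constraint_loss(logits, scene_vec)
# score = temporal_output_discrepancy(probs_epoch_t, probs_epoch_t_minus_1)

Note that the paper describes the scene-level constraint as being imposed both on a global feature embedding and on aggregated point-wise predictions; the sketch above shows only the latter form.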
Keywords: Active learning
Point cloud
Semantic segmentation
Weakly supervised learning
Publisher: Elsevier BV
Journal: ISPRS journal of photogrammetry and remote sensing 
ISSN: 0924-2716
EISSN: 1872-8235
DOI: 10.1016/j.isprsjprs.2023.09.002
Rights: © 2023 The Authors. Published by Elsevier B.V. on behalf of International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
The following publication Wang, P., Yao, W., & Shao, J. (2023). One Class One Click: Quasi scene-level weakly supervised point cloud semantic segmentation with active learning. ISPRS Journal of Photogrammetry and Remote Sensing, 204, 89-104 is available at https://doi.org/10.1016/j.isprsjprs.2023.09.002.
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: 1-s2.0-S0924271623002344-main.pdf (4.74 MB, Adobe PDF)
Open Access Information:
Status: open access
File Version: Version of Record

Scopus citations: 21 (as of Dec 19, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.