Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109488
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Li, R | en_US
dc.creator | Li, S | en_US
dc.creator | He, C | en_US
dc.creator | Zhang, Y | en_US
dc.creator | Jia, X | en_US
dc.creator | Zhang, L | en_US
dc.date.accessioned | 2024-11-01T08:04:35Z | -
dc.date.available | 2024-11-01T08:04:35Z | -
dc.identifier.isbn | 978-1-6654-6946-3 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/109488 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication R. Li, S. Li, C. He, Y. Zhang, X. Jia and L. Zhang, "Class-Balanced Pixel-Level Self-Labeling for Domain Adaptive Semantic Segmentation," 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 2022, pp. 11583-11593 is available at https://doi.org/10.1109/CVPR52688.2022.01130. | en_US
dc.title | Class-balanced pixel-level self-labeling for domain adaptive semantic segmentation | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 11583 | en_US
dc.identifier.epage | 11593 | en_US
dc.identifier.doi | 10.1109/CVPR52688.2022.01130 | en_US
dcterms.abstract | Domain adaptive semantic segmentation aims to learn a model with the supervision of source domain data, and produce satisfactory dense predictions on unlabeled target domain. One popular solution to this challenging task is self-training, which selects high-scoring predictions on target samples as pseudo labels for training. However, the produced pseudo labels often contain much noise because the model is biased to source domain as well as majority categories. To address the above issues, we propose to directly explore the intrinsic pixel distributions of target domain data, instead of heavily relying on the source domain. Specifically, we simultaneously cluster pixels and rectify pseudo labels with the obtained cluster assignments. This process is done in an online fashion so that pseudo labels could co-evolve with the segmentation model without extra training rounds. To overcome the class imbalance problem on long-tailed categories, we employ a distribution alignment technique to enforce the marginal class distribution of cluster assignments to be close to that of pseudo labels. The proposed method, namely Class-balanced Pixel-level Self-Labeling (CPSL), improves the segmentation performance on target domain over state-of-the-arts by a large margin, especially on long-tailed categories. The source code is available at https://github.com/lslrh/CPSL. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition : New Orleans, Louisiana, 19 - 24 June 2022, p. 11583-11593 | en_US
dcterms.issued | 2022 | -
dc.identifier.scopus | 2-s2.0-85138622387 | -
dc.relation.ispartofbook | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition : New Orleans, Louisiana, 19 - 24 June 2022 | en_US
dc.relation.conference | Conference on Computer Vision and Pattern Recognition [CVPR] | -
dc.description.validate | 202411 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | OA_Others | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
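
Note on the method described in the abstract: the core idea is to cluster target-domain pixels online and use the cluster assignments, after aligning their marginal class distribution with that of the pseudo labels, to rectify noisy pseudo labels. The sketch below is a rough, hypothetical illustration of that flow only, not the authors' implementation (their official code is at https://github.com/lslrh/CPSL). It assumes a segmentation head and a per-pixel clustering head sharing the same C classes; the function name rectify_pseudo_labels and the alpha and conf_thresh parameters are invented for this example.

# Hypothetical sketch of pseudo-label rectification with distribution-aligned
# cluster assignments, as described in the abstract. Not the authors' code.
import torch
import torch.nn.functional as F

def rectify_pseudo_labels(seg_logits, cluster_logits, alpha=0.5, conf_thresh=0.9):
    """seg_logits, cluster_logits: (B, C, H, W) outputs of the segmentation head
    and a per-pixel clustering head assumed to share the same C classes."""
    seg_prob = F.softmax(seg_logits, dim=1)      # model predictions on target pixels
    clu_prob = F.softmax(cluster_logits, dim=1)  # soft cluster assignments

    # Distribution alignment: rescale cluster assignments so their marginal class
    # distribution matches that of the softmax pseudo labels, then renormalize.
    p_seg = seg_prob.mean(dim=(0, 2, 3))         # marginal class distribution, shape (C,)
    p_clu = clu_prob.mean(dim=(0, 2, 3))
    clu_prob = clu_prob * (p_seg / (p_clu + 1e-8)).view(1, -1, 1, 1)
    clu_prob = clu_prob / clu_prob.sum(dim=1, keepdim=True)

    # Rectified soft label: mix prediction with the aligned cluster assignment.
    mixed = alpha * seg_prob + (1 - alpha) * clu_prob
    conf, pseudo = mixed.max(dim=1)              # (B, H, W) confidence and hard label
    pseudo[conf < conf_thresh] = 255             # ignore low-confidence pixels
    return pseudo

The actual CPSL method is more elaborate; this sketch only mirrors the "align marginals, then rectify pseudo labels with cluster assignments" step that the abstract outlines.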
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Li_Class-balanced_Pixel-level_Self-Labeling.pdf | Pre-Published version | 12.64 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 38 (as of Apr 14, 2025)
Downloads: 17 (as of Apr 14, 2025)
SCOPUS™ citations: 85 (as of May 29, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.