Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110271
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | Zhang, Y | -
dc.creator | Zhang, Z | -
dc.creator | Liu, T | -
dc.creator | Kong, J | -
dc.date.accessioned | 2024-12-03T03:09:09Z | -
dc.date.available | 2024-12-03T03:09:09Z | -
dc.identifier.issn | 1370-4621 | -
dc.identifier.uri | http://hdl.handle.net/10397/110271 | -
dc.language.iso | en | en_US
dc.publisher | Springer New York LLC | en_US
dc.rights | © The Author(s) 2024 | en_US
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Zhang, Y., Zhang, Z., Liu, T. et al. Category-Aware Saliency Enhance Learning Based on CLIP for Weakly Supervised Salient Object Detection. Neural Process Lett 56, 49 (2024) is available at https://doi.org/10.1007/s11063-024-11530-2. | en_US
dc.subject | Category-aware Saliency Enhance Learning | en_US
dc.subject | CLIP | en_US
dc.subject | Salient object detection | en_US
dc.subject | Weakly supervised | en_US
dc.title | Category-aware saliency enhance learning based on CLIP for weakly supervised salient object detection | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 56 | -
dc.identifier.issue | 2 | -
dc.identifier.doi | 10.1007/s11063-024-11530-2 | -
dcterms.abstract | Weakly supervised salient object detection (SOD) using image-level category labels has been proposed to reduce the annotation cost of pixel-level labels. However, existing methods mostly train a classification network to generate a class activation map, which suffers from coarse localization and difficult pseudo-label updating. To address these issues, we propose a novel Category-aware Saliency Enhance Learning (CSEL) method based on contrastive vision-language pre-training (CLIP), which performs image-text classification and pseudo-label updating simultaneously. Our method transforms image-text classification into pixel-text matching and generates a category-aware saliency map, whose quality is evaluated by the classification accuracy. Moreover, CSEL assesses the quality of both the category-aware saliency map and the pseudo saliency map, and uses the resulting quality confidence scores as weights to update the pseudo labels. The two maps enhance each other and guide the pseudo saliency map in the correct direction. Our SOD network can then be trained jointly under the supervision of the updated pseudo saliency maps. We test our model on several well-known RGB-D and RGB SOD datasets. Our model achieves an S-measure of 87.6% on the RGB-D NLPR dataset and 84.3% on the RGB ECSSD dataset, and obtains satisfactory performance on the E-measure, F-measure, and mean absolute error metrics on the other datasets under weak supervision. These results demonstrate the effectiveness of our model. [An illustrative sketch of the confidence-weighted pseudo-label update follows this record.] | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Neural processing letters, Apr. 2024, v. 56, no. 2, 49 | -
dcterms.isPartOf | Neural processing letters | -
dcterms.issued | 2024-04 | -
dc.identifier.scopus | 2-s2.0-85185410786 | -
dc.identifier.eissn | 1573-773X | -
dc.identifier.artn | 49 | -
dc.description.validate | 202412 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China; Scientific and Technological Aid Program of Xinjiang; 111 Projects | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
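
Illustrative note: the abstract above describes a pseudo-label update in which the quality of the category-aware saliency map and of the current pseudo saliency map is scored, and the two quality confidence scores are used as weights when updating the pseudo labels. The short Python sketch below shows one plausible reading of that step. It is not the authors' implementation: the quality-confidence measure (mean foreground/background separation), the map names, and the simple weighted blend are assumptions made only for illustration.

# Hedged sketch of a confidence-weighted pseudo-label update, loosely following
# the abstract of CSEL. The confidence measure and the blending rule below are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np


def quality_confidence(saliency: np.ndarray, thr: float = 0.5) -> float:
    """Toy quality score: how cleanly the map separates foreground from background."""
    fg = saliency[saliency >= thr]
    bg = saliency[saliency < thr]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    # A larger gap between mean foreground and mean background saliency -> higher confidence.
    return float(fg.mean() - bg.mean())


def update_pseudo_label(category_map: np.ndarray, pseudo_map: np.ndarray) -> np.ndarray:
    """Blend the category-aware map and the pseudo map, weighted by their confidence."""
    w_cat = quality_confidence(category_map)
    w_pse = quality_confidence(pseudo_map)
    weights = np.array([w_cat, w_pse])
    weights = weights / (weights.sum() + 1e-8)  # normalise so the weights sum to 1
    updated = weights[0] * category_map + weights[1] * pseudo_map
    return np.clip(updated, 0.0, 1.0)


if __name__ == "__main__":
    # Maps are H x W arrays of per-pixel saliency values in [0, 1].
    rng = np.random.default_rng(0)
    category_map = rng.random((64, 64))
    pseudo_map = rng.random((64, 64))
    new_pseudo = update_pseudo_label(category_map, pseudo_map)
    print(new_pseudo.shape, float(new_pseudo.min()), float(new_pseudo.max()))

In the paper the updated pseudo saliency maps then supervise the SOD network during joint training; here the update is reduced to a single per-image weighted average so that the weighting idea is easy to see.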
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
s11063-024-11530-2.pdf |  | 1.87 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record