Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/109635
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Wang, Y | -
dc.creator | Tu, Z | -
dc.creator | Xiang, Y | -
dc.creator | Zhou, S | -
dc.creator | Chen, X | -
dc.creator | Li, B | -
dc.creator | Zhang, T | -
dc.date.accessioned | 2024-11-08T06:10:44Z | -
dc.date.available | 2024-11-08T06:10:44Z | -
dc.identifier.isbn | 979-8-4007-0103-0 | -
dc.identifier.uri | http://hdl.handle.net/10397/109635 | -
dc.description | KDD '23: The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, August 6-10, 2023 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.rights | © 2023 Copyright held by the owner/author(s). | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Wang, Y., Tu, Z., Xiang, Y., Zhou, S., Chen, X., Li, B., & Zhang, T. (2023). Rapid Image Labeling via Neuro-Symbolic Learning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA (pp. 2467-2477) is available at https://doi.org/10.1145/3580305.3599485. | en_US
dc.subject | Active learning | en_US
dc.subject | Image labeling | en_US
dc.subject | Inductive logic learning | en_US
dc.subject | Neuro-symbolic learning | en_US
dc.title | Rapid image labeling via neuro-symbolic learning | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 2467 | -
dc.identifier.epage | 2477 | -
dc.identifier.doi | 10.1145/3580305.3599485 | -
dcterms.abstract | The success of Computer Vision (CV) relies heavily on manually annotated data. However, it is prohibitively expensive to annotate images in key domains such as healthcare, where data labeling requires significant domain expertise and cannot be easily delegated to crowd workers. To address this challenge, we propose a neuro-symbolic approach called RAPID, which infers image labeling rules from a small amount of labeled data provided by domain experts and automatically labels unannotated data using the rules. Specifically, RAPID combines pre-trained CV models and inductive logic learning to infer the logic-based labeling rules. RAPID achieves a labeling accuracy of 83.33% to 88.33% on four image labeling tasks with only 12 to 39 labeled samples. In particular, RAPID significantly outperforms fine-tuned CV models in two highly specialized tasks. These results demonstrate the effectiveness of RAPID in learning from small data and its capability to generalize among different tasks. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In KDD ’23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, p. 2467-2477. New York, NY: The Association for Computing Machinery, 2023 | -
dcterms.issued | 2023 | -
dc.identifier.scopus | 2-s2.0-85171361696 | -
dc.relation.ispartofbook | KDD ’23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining | -
dc.relation.conference | Conference on Knowledge Discovery and Data Mining [KDD] | -
dc.description.validate | 202411 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Amazon Research Award | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
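
The abstract above describes a pipeline in which pre-trained CV models turn images into symbolic descriptions and inductive logic learning then infers labeling rules from a handful of expert-provided labels. The following is a minimal, self-contained sketch of that general idea only, not the authors' RAPID implementation: the image IDs, predicate names, and the naive shortest-conjunction search are illustrative assumptions made for this example; see the paper at the DOI above for the actual method.

# Hypothetical sketch of the pipeline outlined in the abstract: a stand-in
# "perception" step maps each image to symbolic predicates (the role played by
# pre-trained CV models), a tiny inductive step searches for a conjunctive rule
# consistent with a handful of expert labels, and the induced rule then labels
# the remaining images. All image IDs, predicates, and data are illustrative;
# this is not the authors' RAPID code.

from itertools import combinations
from typing import Dict, List, Tuple

Attributes = Dict[str, bool]  # symbolic predicates a CV model might emit per image


def perceive(image_id: str) -> Attributes:
    """Stand-in for pre-trained CV models that turn an image into predicates."""
    # A real pipeline would run detectors/segmenters on the image itself.
    toy_db = {
        "img1": {"lesion_present": True,  "irregular_border": True,  "high_contrast": False},
        "img2": {"lesion_present": True,  "irregular_border": False, "high_contrast": True},
        "img3": {"lesion_present": False, "irregular_border": False, "high_contrast": False},
        "img4": {"lesion_present": True,  "irregular_border": True,  "high_contrast": True},
        "img5": {"lesion_present": False, "irregular_border": True,  "high_contrast": False},
        "img6": {"lesion_present": True,  "irregular_border": False, "high_contrast": False},
    }
    return toy_db[image_id]


def induce_rule(labeled: List[Tuple[Attributes, bool]], max_len: int = 3) -> Tuple[str, ...]:
    """Return the shortest conjunction of predicates consistent with all labels."""
    predicates = sorted(labeled[0][0])
    for size in range(1, max_len + 1):
        for clause in combinations(predicates, size):
            if all(label == all(attrs[p] for p in clause) for attrs, label in labeled):
                return clause  # first hit is a shortest consistent conjunction
    raise ValueError("no consistent conjunctive rule; more expert labels are needed")


def apply_rule(clause: Tuple[str, ...], attrs: Attributes) -> bool:
    """Label an image positively iff every predicate in the clause holds."""
    return all(attrs[p] for p in clause)


if __name__ == "__main__":
    # A few expert-labeled samples (the small-data regime the abstract targets).
    expert_labels = {"img1": True, "img2": False, "img3": False, "img5": False}
    labeled = [(perceive(i), y) for i, y in expert_labels.items()]

    rule = induce_rule(labeled)
    print("induced rule:", " AND ".join(rule))

    # Automatically label the remaining, unannotated images with the rule.
    for image_id in ("img4", "img6"):
        print(image_id, "->", apply_rule(rule, perceive(image_id)))

The division of labor is the point of the sketch: neural perception supplies symbols, symbolic induction compresses a few expert labels into a logic rule, and the rule then labels the rest of the data; the paper's actual rule language and search are, of course, far richer.
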
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
3580305.3599485.pdf | - | 1.93 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 8 (as of Nov 17, 2024)
Downloads: 7 (as of Nov 17, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.