Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106934
dc.contributor: Department of Electrical and Electronic Engineering
dc.creator: Hu, J
dc.creator: Lam, KM
dc.creator: Lou, P
dc.creator: Liu, Q
dc.creator: Deng, W
dc.date.accessioned: 2024-06-07T00:58:58Z
dc.date.available: 2024-06-07T00:58:58Z
dc.identifier.issn: 1047-3203
dc.identifier.uri: http://hdl.handle.net/10397/106934
dc.language.iso: en
dc.publisher: Academic Press
dc.rights: © 2018 Elsevier Inc. All rights reserved.
dc.rights: © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights: The following publication Hu, J., Lam, K. M., Lou, P., Liu, Q., & Deng, W. (2018). Can a machine have two systems for recognition, like human beings? Journal of Visual Communication and Image Representation, 56, 275-286 is available at https://doi.org/10.1016/j.jvcir.2018.09.008.
dc.subject: Feature-pool selection
dc.subject: Hierarchical tree structure
dc.subject: Image annotation
dc.subject: Multi-labeling
dc.title: Can a machine have two systems for recognition, like human beings?
dc.type: Journal/Magazine Article
dc.identifier.spage: 275
dc.identifier.epage: 286
dc.identifier.volume: 56
dc.identifier.doi: 10.1016/j.jvcir.2018.09.008
dcterms.abstract: Artificial intelligence has attracted much research attention in recent years. A question we always ask is: “Can machines replace human beings to some extent?” This paper explores knowledge learning for an image-annotation framework, a task that is easy for humans but tough for machines. The research is based on the assumption that machines have two systems of thinking, each handling image labels at a different level of abstraction. Based on this, a new hierarchical model for image annotation is introduced. We explore not only the relationships between labels and the features used, but also the relationships among the labels themselves. More specifically, we divide the labels into several hierarchies, constructed using the Associative Memory Sharing method proposed in this paper, for efficient and accurate labeling.
dcterms.accessRights: open access
dcterms.bibliographicCitation: Journal of visual communication and image representation, Oct. 2018, v. 56, p. 275-286
dcterms.isPartOf: Journal of visual communication and image representation
dcterms.issued: 2018-10
dc.identifier.scopus: 2-s2.0-85054467746
dc.identifier.eissn: 1095-9076
dc.description.validate: 202405 bcch
dc.description.oa: Accepted Manuscript
dc.identifier.FolderNumber: EIE-0466
dc.description.fundingSource: Self-funded
dc.description.pubStatus: Published
dc.identifier.OPUS: 20084297
dc.description.oaCategory: Green (AAM)
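
The abstract above describes a two-system, hierarchical scheme in which labels are handled at different levels of abstraction. As a rough illustration only, and not the authors' code, the Python sketch below shows one way a label hierarchy and a coarse-to-fine annotation step could be structured; names such as LabelNode and annotate, and the example scores, are hypothetical assumptions.

```python
# Illustrative sketch of a two-level label hierarchy for image annotation.
# This is NOT the paper's Associative Memory Sharing construction; it only
# mimics the coarse-to-fine idea described in the abstract.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class LabelNode:
    """A label in the hierarchy, e.g. 'animal' (coarse) -> 'dog' (fine)."""
    name: str
    children: Dict[str, "LabelNode"] = field(default_factory=dict)

    def add_child(self, name: str) -> "LabelNode":
        # Create the child once and reuse it on repeated calls.
        return self.children.setdefault(name, LabelNode(name))


def annotate(scores_coarse: Dict[str, float],
             scores_fine: Dict[str, Dict[str, float]],
             root: LabelNode,
             top_k: int = 3) -> List[str]:
    """Pick the best coarse label first (a fast, intuitive decision),
    then rank fine labels only within that branch (a slower, deliberate step)."""
    coarse = max(root.children, key=lambda c: scores_coarse.get(c, 0.0))
    fine = scores_fine.get(coarse, {})
    ranked = sorted(fine, key=fine.get, reverse=True)[:top_k]
    return [coarse] + ranked


# Usage: build a tiny hierarchy and annotate with made-up classifier scores.
root = LabelNode("root")
animal = root.add_child("animal")
animal.add_child("dog"); animal.add_child("cat")
scene = root.add_child("scene")
scene.add_child("beach"); scene.add_child("forest")

print(annotate({"animal": 0.8, "scene": 0.2},
               {"animal": {"dog": 0.7, "cat": 0.3}},
               root, top_k=1))   # -> ['animal', 'dog']
```

In this toy setup the coarse decision narrows the search before fine labels are ranked; the paper instead builds the hierarchies automatically with its Associative Memory Sharing method rather than by hand.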
Appears in Collections: Journal/Magazine Article
Files in This Item:
File: Lam_Can_Machine_Have.pdf
Description: Pre-Published version
Size: 1.91 MB
Format: Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 3 (as of Jun 30, 2024)
Downloads: 3 (as of Jun 30, 2024)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.