Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/1888
dc.contributor: Department of Electronic and Information Engineering
dc.creator: Wang, Z
dc.creator: Guan, G
dc.creator: Wang, J
dc.creator: Feng, DD
dc.date.accessioned: 2014-12-11T08:26:44Z
dc.date.available: 2014-12-11T08:26:44Z
dc.identifier.isbn: 978-1-4244-2295-1
dc.identifier.uri: http://hdl.handle.net/10397/1888
dc.language.iso: en
dc.publisher: IEEE
dc.rights: © 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
dc.rights: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
dc.subject: Gaussian processes
dc.subject: Content-based retrieval
dc.subject: Multimedia systems
dc.subject: Search engines
dc.subject: Semantic Web
dc.subject: Video retrieval
dc.title: Measuring semantic similarity between concepts in visual domain
dc.type: Conference Paper
dc.description.otherinformation: Author name used in this publication: Dagan Feng
dc.description.otherinformation: Refereed conference paper
dc.identifier.doi: 10.1109/MMSP.2008.4665152
dcterms.abstract: Concept similarity has been intensively researched in the natural language processing domain due to its important role in many applications such as language modeling and information retrieval. There are few studies on measuring concept similarity in the visual domain, although concept-based multimedia information retrieval has attracted a lot of attention. In this paper, we present a scalable framework for this purpose, which differs from traditional approaches that explore correlations among concepts in the image/video annotation domain. For each concept, a model based on feature distribution is built using sample images collected from the Internet, and the similarity between concepts is measured by the similarity between their models. To this end, a Gaussian Mixture Model (GMM) is employed to model each concept, and two similarity measures are investigated. Experimental results on 13,974 images of 16 concepts collected through image search engines demonstrate that the measured similarity between concepts is very close to human perception. In addition, the entropy of GMM cluster distributions can be a good indicator for selecting concepts for image/video annotation.
dcterms.accessRights: open access
dcterms.bibliographicCitation: Proceedings of the 2008 IEEE 10th Workshop on Multimedia Signal Processing: 8-10 October 2008, Cairns, Australia, p. 628-633
dcterms.issued: 2008
dc.identifier.scopus: 2-s2.0-58049114918
dc.identifier.rosgroupid: r43956
dc.description.ros: 2008-2009 > Academic research: refereed > Refereed conference paper
dc.description.oa: Version of Record
dc.identifier.FolderNumber: OA_IR/PIRA
dc.description.pubStatus: Published
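The abstract describes modeling each concept as a GMM over image features and measuring concept similarity as a similarity between the models. The paper's two specific measures are not named in this record; as a hedged illustration only, the sketch below computes a symmetrized KL divergence between two single-Gaussian concept models (the one-component special case of a GMM) using NumPy. The function names and the toy 2-D feature models are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N0 || N1) between two Gaussians."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)          # trace term
        + diff @ cov1_inv @ diff           # Mahalanobis term for the mean shift
        - d                                # dimensionality offset
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))  # log-det ratio
    )

def concept_distance(mu0, cov0, mu1, cov1):
    """Symmetrized KL as a dissimilarity between two concept models."""
    return 0.5 * (gaussian_kl(mu0, cov0, mu1, cov1)
                  + gaussian_kl(mu1, cov1, mu0, cov0))

# Toy 2-D feature models for two hypothetical concepts.
mu_a, cov_a = np.zeros(2), np.eye(2)
mu_b, cov_b = np.array([1.0, 0.0]), np.eye(2)
print(concept_distance(mu_a, cov_a, mu_b, cov_b))  # -> 0.5 for unit covariances
```

For full mixtures, the KL divergence has no closed form; a Monte Carlo estimate over samples drawn from one mixture is a common workaround, and which measure the authors actually used would need to be checked against the paper itself.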
Appears in Collections: Conference Paper

Files in This Item:
Wang_et_al_Measuring_Semantic_Similarity.pdf (1.32 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
Page views: 92 (last week: 0; as of Jul 3, 2022)
Downloads: 170 (as of Jul 3, 2022)
SCOPUS™ citations: 3 (last week: 0; last month: 0; as of Jul 7, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.