Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/89114
DC Field | Value | Language
dc.contributor | Department of Chinese and Bilingual Studies | -
dc.creator | Shwartz, V | -
dc.creator | Santus, E | -
dc.creator | Schlechtweg, D | -
dc.date.accessioned | 2021-02-04T02:39:27Z | -
dc.date.available | 2021-02-04T02:39:27Z | -
dc.identifier.isbn | 9.78E+12 | -
dc.identifier.uri | http://hdl.handle.net/10397/89114 | -
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2017 Association for Computational Linguistics | en_US
dc.rights | ACL materials are Copyright © 1963–2021 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 here are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Shwartz, V., Santus, E., & Schlechtweg, D. (2017). Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. Paper presented at the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference, 1, 65-75 is available at https://dx.doi.org/10.18653/v1/e17-1007 | en_US
dc.title | Hypernyms under siege: linguistically-motivated artillery for hypernymy detection | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 65 | -
dc.identifier.epage | 75 | -
dc.identifier.volume | 1 | -
dc.identifier.doi | 10.18653/v1/e17-1007 | -
dcterms.abstract | The fundamental role of hypernymy in NLP has motivated the development of many methods for the automatic identification of this relation, most of which rely on word distribution. We investigate an extensive number of such unsupervised measures, using several distributional semantic models that differ by context type and feature weighting. We analyze the performance of the different methods based on their linguistic motivation. Comparison to the state-of-the-art supervised methods shows that while supervised methods generally outperform the unsupervised ones, the former are sensitive to the distribution of training instances, hurting their reliability. Being based on general linguistic hypotheses and independent from training data, unsupervised measures are more robust, and therefore are still useful artillery for hypernymy detection. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, 3-7 April 2017 (Vol 1, Long Papers), p. 65-75. Stroudsburg : Association for Computational Linguistics, 2017. | -
dcterms.issued | 2017 | -
dc.identifier.scopus | 2-s2.0-85021636919 | -
dc.relation.ispartofbook | Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, 3-7 April 2017 | -
dc.relation.conference | Conference of the European Chapter of the Association for Computational Linguistics [EACL] | -
dc.description.validate | 202101 bcrc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
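
As context for the abstract recorded above: the paper evaluates unsupervised distributional measures for hypernymy detection, several of which build on the distributional inclusion hypothesis (a hyponym's contexts tend to be a subset of its hypernym's contexts). The following is a minimal Python sketch of one measure in that family, in the style of WeedsPrec directional inclusion; the context vectors and weights are invented for illustration and are not taken from the paper's experiments or code.

# Illustrative sketch (not the paper's code): a WeedsPrec-style directional
# inclusion measure over sparse context vectors. If v is a hypernym of u,
# weeds_prec(u, v) should be high while the reverse direction is lower.
from typing import Dict

def weeds_prec(u: Dict[str, float], v: Dict[str, float]) -> float:
    """Share of u's total feature weight that falls on contexts also seen with v."""
    total = sum(u.values())
    if total == 0.0:
        return 0.0
    shared = sum(w for ctx, w in u.items() if ctx in v)
    return shared / total

# Hypothetical context-count vectors (context word -> weight), invented for this example.
cat = {"purr": 4.0, "fur": 3.0, "pet": 5.0, "tail": 2.0, "meow": 3.0}
animal = {"fur": 6.0, "pet": 7.0, "tail": 5.0, "purr": 1.0, "wild": 4.0, "zoo": 5.0, "herd": 3.0}

print(weeds_prec(cat, animal))  # ~0.82: most of 'cat' contexts are covered by 'animal'
print(weeds_prec(animal, cat))  # ~0.61: the measure is asymmetric, hinting at direction

Measures of this kind require no labeled training pairs, which is the robustness argument the abstract makes for unsupervised methods over supervised ones.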
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
E17-1007.pdf |  | 214.22 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 68 (as of Apr 21, 2024; 1 in the last week)
Downloads: 18 (as of Apr 21, 2024)
Scopus citations: 74 (as of Apr 19, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.