Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/77461
Title: Determining gains acquired from word embedding quantitatively using discrete distribution clustering
Authors: Ye, J 
Li, Y 
Wu, Z 
Wang, JZ 
Li, W 
Li, J 
Issue Date: 2017
Publisher: Association for Computational Linguistics (ACL)
Source: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 30 Jul - 4 Aug 2017, v. 1, p. 1847-1856
Abstract: Word embeddings have become widely used in document analysis. While a large number of models for mapping words to vector spaces have been developed, it remains undetermined how much net gain can be achieved over traditional approaches based on bag-of-words. In this paper, we propose a new document clustering approach by combining any word embedding with a state-of-the-art algorithm for clustering empirical distributions. By using the Wasserstein distance between distributions, the word-to-word semantic relationship is taken into account in a principled way. The new clustering method is easy to use and consistently outperforms other methods on a variety of data sets. More importantly, the method provides an effective framework for determining when and how much word embeddings contribute to document analysis. Experimental results with multiple embedding models are reported.
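
As a rough illustration of the distance underlying the approach described in the abstract, the sketch below computes the Wasserstein (earth mover's) distance between two documents, each represented as a discrete distribution over its word embeddings (support points = embedding vectors, weights = normalized word frequencies). This is not the authors' implementation: the toy embeddings and weights, the function name, the Euclidean ground cost, and the use of SciPy's linear-programming solver are all illustrative assumptions.

import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def wasserstein_distance(X, a, Y, b):
    # X: (n, d) word embeddings of document 1, a: (n,) normalized word frequencies
    # Y: (m, d) word embeddings of document 2, b: (m,) normalized word frequencies
    n, m = len(a), len(b)
    C = cdist(X, Y)  # Euclidean word-to-word ground cost (an assumption)
    # The transport plan T is flattened row-major into n*m LP variables.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row sums of T must equal a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column sums of T must equal b
    b_eq = np.concatenate([a, b])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun

# Toy example: two short "documents" in a 2-dimensional embedding space.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
a = np.array([0.5, 0.25, 0.25])
Y = np.array([[0.1, 0.0], [1.0, 1.0]])
b = np.array([0.6, 0.4])
print(wasserstein_distance(X, a, Y, b))

Clustering documents then amounts to grouping such distributions under this distance, which is the role of the distribution-clustering algorithm referenced in the abstract.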
URI: http://hdl.handle.net/10397/77461
ISBN: 9781945626753
DOI: 10.18653/v1/P17-1169
Appears in Collections: Conference Paper
