Title: Nine features in a random forest to learn taxonomical semantic relations
Authors: Santus, E
Lenci, A
Chiu, TS
Lu, Q 
Huang, CR 
Issue Date: 2016
Source: The 10th Language Resources and Evaluation Conference, Portorož, Slovenia, 23-28 May 2016, p.4557-4564
Abstract: ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms and random words, derived from the previously introduced ROOT13 (Santus et al., 2016). It relies on a Random Forest algorithm and nine unsupervised corpus-based features. We evaluate it with a 10-fold cross validation on 9,600 pairs, equally distributed among the three classes and involving several Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all three classes are present, ROOT9 achieves an F1 score of 90.7%, against a baseline of 57.2% (vector cosine). When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1%, and co-hyponyms-random 97.8% vs. 79.4%. In order to compare its performance with the state-of-the-art, we have also evaluated ROOT9 on subsets of the Weeds et al. (2014) datasets, showing that it is in fact competitive. Finally, we investigated whether the system learns the semantic relation or simply learns the prototypical hypernyms, as claimed by Levy et al. (2015). The second possibility seems the more likely, even though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to drastically reduce this bias.
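The vector cosine baseline mentioned in the abstract compares the distributional vectors of the two words in a pair. A minimal sketch of that similarity measure is shown below; the toy vectors and word names are hypothetical and do not reproduce the actual ROOT9 corpus features.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors:
    dot(u, v) / (||u|| * ||v||), with 0.0 returned for zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Hypothetical co-occurrence vectors for a hypernym pair.
dog = [3.0, 1.0, 0.0, 2.0]
animal = [2.0, 1.0, 1.0, 2.0]
print(cosine(dog, animal))
```

In a classifier such as ROOT9's Random Forest, a score like this would be one feature among several; on its own, as the abstract notes, it yields a much weaker baseline than the full nine-feature model.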
Keywords: Vector space models
Distributional semantic models
Semantic relations
Appears in Collections:Conference Paper

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.