Title: Terminology extraction using contextual information
Authors: Ji, Luning
Degree: M.Phil.
Issue Date: 2007
Abstract: This work investigates algorithms for automatic terminology extraction. The investigation considers two characteristics of terminology, unithood and termhood, corresponding to the two steps of terminology extraction: term extraction and terminology verification. In the first step, term extraction, two statistics-based measurements considering internal and contextual relationships are used to estimate how likely an extracted string pattern is to be a valid term. In the second step, terminology verification, window-based contextual information within a logical sentence is used. Two window-based approaches, one based on domain knowledge and one on the syntax of the contextual information, are proposed. After evaluating the merits and problems of each approach, a hybrid approach is designed that combines syntactic information and domain-specific knowledge to verify whether each extracted candidate term is terminology. Furthermore, a component-based composition algorithm is proposed to help verify the extracted terms as valid terminology. Experiments show that the hybrid approach achieves a significant improvement with the best F-measure, maintaining both good precision and good recall. Due to the special nature of Chinese, this work also investigates the effect of word segmentation on terminology extraction by comparing two preprocessing models: a character-based model and a word-based model. Limitations of segmentation and some feasible suggestions for dealing with them are also provided. Finally, this work investigates methods to construct a core lexicon for a specific domain from an existing domain lexicon. The core lexicon contains the most fundamental terms used in a domain, from which other terms in the domain can be constructed.
Three different approaches considering four characteristics of a core lexicon are proposed and implemented. Evaluations show that the automatically extracted core lexicon achieves good coverage of the domain lexicon while remaining minimal, with no redundant terms. The use of a core lexicon can reduce program runtime and memory usage in real applications.
Subjects: Hong Kong Polytechnic University -- Dissertations.
Chinese language -- Terms and phrases -- Data processing.
Natural language processing (Computer science)
Chinese language -- Data processing.
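The abstract mentions statistics-based measurements of unithood, i.e. how strongly the parts of a candidate string cohere as a single term, but does not reproduce the measures themselves. As a rough illustration of the idea only (pointwise mutual information is a common unithood statistic, not necessarily the one used in this thesis), such a score for a two-token candidate might be sketched as:

```python
import math
from collections import Counter

def unithood_pmi(tokens, candidate):
    """Score a two-token candidate term by pointwise mutual information.

    A higher score means the pair co-occurs more often than chance
    predicts, suggesting it forms a single unit (a term) rather than a
    coincidental juxtaposition of words.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    w1, w2 = candidate
    p_joint = bigrams[(w1, w2)] / (n - 1)   # P(w1 w2) as adjacent pair
    if p_joint == 0:
        return float("-inf")                # pair never observed
    p1, p2 = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_joint / (p1 * p2))

# Toy corpus (hypothetical example, not from the thesis data):
corpus = ("natural language processing is fun and "
          "natural language processing is useful and "
          "language is natural").split()

# "natural language" coheres more strongly than "language is":
print(unithood_pmi(corpus, ("natural", "language")))
print(unithood_pmi(corpus, ("language", "is")))
```

The thesis's actual measures also incorporate contextual relationships and, for Chinese, operate over character- or word-based preprocessing, which this sketch does not attempt to model.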
Pages: xii, 146 leaves : ill. ; 30 cm.
Appears in Collections: Thesis
View full-text via https://theses.lib.polyu.edu.hk/handle/200/2233
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.