Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93462
Title: Post-processing and applications of pre-trained models for natural language processing
Authors: Yang, Ruosong
Degree: Ph.D.
Issue Date: 2022
Abstract: Pre-trained models have enabled a new era in natural language processing. The first-generation pre-trained models, word embeddings, aim to encode syntactic and semantic information into low-dimensional, continuous word vectors, while the second-generation pre-trained models pre-train large language models whose architectures can be fine-tuned for various downstream tasks. However, word embedding models follow the distributional hypothesis, so they cannot distinguish antonyms, and rare words cannot learn precise representations. Pre-trained language models such as RoBERTa ignore coherence information, and the text length used during pre-training is much longer than that in many applications. Moreover, training pre-trained models requires powerful hardware. To tackle these issues, we propose to use post-processing to enhance both types of pre-trained models. In addition, we apply both types of pre-trained models to specific applications in text assessment.
In this thesis, we first review existing pre-trained models as well as prior work on text assessment. We then present four works: two on post-processing pre-trained models and two on applications to text assessment. More specifically, in the first work, we explore how to utilize a glossary to enhance word embeddings so that the post-processed embeddings capture both syntactic and semantic information better. In the second work, we utilize pre-trained word embeddings for automated post scoring; to better integrate the given topics and the quoted posts in forums, we propose a representation model and a matching model. In the third work, we propose self-supervised intermediate tasks to enhance pre-trained language models and investigate how these intermediate tasks benefit downstream tasks. In the last work, we use pre-trained language models to learn text representations and propose to combine a regression loss and a ranking loss to improve automated text scoring. Finally, we conclude the thesis and discuss future directions.
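To illustrate the training objective described in the last work, below is a minimal sketch of combining a regression loss with a pairwise ranking loss for automated text scoring. It assumes a PyTorch-style setup in which a model has already produced scalar scores for a batch of texts; the function name, the weighting factor `alpha`, and the margin are illustrative assumptions, not the thesis's exact implementation.

```python
# A minimal sketch (not the thesis's exact implementation) of combining a
# regression loss with a pairwise ranking loss for automated text scoring.
# Assumes predicted and gold scores for a batch of texts; `alpha` and
# `margin` are illustrative hyperparameters.
import torch
import torch.nn as nn


def combined_scoring_loss(pred_scores: torch.Tensor,
                          gold_scores: torch.Tensor,
                          alpha: float = 0.5,
                          margin: float = 0.0) -> torch.Tensor:
    """Weighted sum of an MSE regression loss and a pairwise ranking loss."""
    # Regression term: push predicted scores toward the gold scores.
    mse = nn.functional.mse_loss(pred_scores, gold_scores)

    # Ranking term: for every pair (i, j) in the batch, the predictions
    # should preserve the ordering of the gold scores.
    diff_pred = pred_scores.unsqueeze(1) - pred_scores.unsqueeze(0)  # [i, j] = s_i - s_j
    diff_gold = gold_scores.unsqueeze(1) - gold_scores.unsqueeze(0)  # [i, j] = y_i - y_j
    sign = torch.sign(diff_gold)                                     # +1, 0, or -1
    # Hinge penalty on wrongly ordered pairs; ties in gold scores contribute 0.
    rank = torch.clamp(margin - sign * diff_pred, min=0.0)
    rank = rank[sign != 0].mean() if (sign != 0).any() else pred_scores.new_zeros(())

    return alpha * mse + (1.0 - alpha) * rank


# Toy usage with random scores for a batch of four texts.
if __name__ == "__main__":
    preds = torch.tensor([0.2, 0.8, 0.5, 0.9])
    golds = torch.tensor([0.1, 0.9, 0.4, 0.7])
    print(combined_scoring_loss(preds, golds))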
Subjects: Natural language processing (Computer science)
Hong Kong Polytechnic University -- Dissertations
Pages: xv, 118 pages : color illustrations
Appears in Collections: Thesis
