Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/93462
DC Field: Value
dc.contributor: Department of Computing
dc.creator: Yang, Ruosong
dc.identifier.uri: https://theses.lib.polyu.edu.hk/handle/200/11707
dc.language.iso: English
dc.title: Post-processing and applications of pre-trained models for natural language processing
dc.type: Thesis
dcterms.abstract: Pre-trained models have enabled a new era in natural language processing. The first-generation pre-trained models, word embeddings, aim to encode syntactic or semantic information into low-dimensional, continuous word vectors, while the second-generation pre-trained models pre-train large language models whose architectures can be fine-tuned for various downstream tasks. However, word embedding models follow the distributional hypothesis, so they cannot distinguish antonyms, and rare words cannot learn precise representations. Pre-trained language models such as RoBERTa ignore coherence information, and the text length during training is much longer than that in applications. Also, training pre-trained models requires powerful hardware. To tackle these issues effectively, we propose to utilize post-processing to enhance the two types of pre-trained models. Besides, we also utilize the two types of pre-trained models to enhance specific applications in text assessment.
dcterms.abstract: In this thesis, we first review existing pre-trained models as well as work on text assessment. Then we present four works: two that post-process pre-trained models and two applications to text assessment. More specifically, in the first work, we explore how to utilize a glossary to enhance word embeddings so that the post-processed embeddings capture both syntactic and semantic information better. In the second work, we utilize pre-trained word embeddings for automated post scoring; to better integrate the given topics and quoted posts in forums, we propose a representation model and a matching model. In the third work, we propose to utilize self-supervised intermediate tasks to enhance pre-trained language models, and we investigate how these intermediate tasks benefit downstream tasks. In the last work, we use pre-trained language models to learn text representations and propose to combine a regression loss and a ranking loss to improve automated text scoring (a minimal sketch of such a combined loss is given after the record fields below). Finally, we conclude our work and discuss future directions.
dcterms.accessRights: open access
dcterms.educationLevel: Ph.D.
dcterms.extent: xv, 118 pages : color illustrations
dcterms.issued: 2022
dcterms.LCSH: Natural language processing (Computer science)
dcterms.LCSH: Hong Kong Polytechnic University -- Dissertations
Appears in Collections: Thesis
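
The last work described in the abstract combines a regression loss with a ranking loss for automated text scoring. The thesis's exact formulation is not given in this record, so the following is only a minimal PyTorch sketch under assumed choices: mean-squared error for the regression term, a pairwise hinge (margin) ranking term computed over each batch, and an illustrative weighting factor alpha; the function name and parameters are hypothetical.

# Minimal sketch (assumptions noted above): blend an MSE regression loss
# with a pairwise margin ranking loss over a batch of predicted scores.
import torch
import torch.nn.functional as F

def combined_scoring_loss(pred_scores: torch.Tensor,
                          gold_scores: torch.Tensor,
                          alpha: float = 0.5,
                          margin: float = 0.0) -> torch.Tensor:
    """Weighted sum of a regression term and a pairwise ranking term."""
    # Regression term: how close each predicted score is to its gold score.
    reg_loss = F.mse_loss(pred_scores, gold_scores)

    # Ranking term: for every pair (i, j) with gold_i > gold_j, penalize the
    # model if it does not rank pred_i above pred_j by at least `margin`.
    diff_pred = pred_scores.unsqueeze(1) - pred_scores.unsqueeze(0)  # [B, B]
    diff_gold = gold_scores.unsqueeze(1) - gold_scores.unsqueeze(0)  # [B, B]
    pair_mask = (diff_gold > 0).float()
    rank_loss = (F.relu(margin - diff_pred) * pair_mask).sum() / pair_mask.sum().clamp(min=1.0)

    return alpha * reg_loss + (1.0 - alpha) * rank_loss

In use, pred_scores would come from a regression head on top of a pre-trained language model encoding each text, and gold_scores are the human-assigned scores for the same batch; alpha trades off fitting the exact scores against preserving their relative order.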