Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/107877
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Computing | en_US |
dc.creator | Wang, R | en_US |
dc.creator | Li, J | en_US |
dc.creator | Li, P | en_US |
dc.date.accessioned | 2024-07-15T07:55:29Z | - |
dc.date.available | 2024-07-15T07:55:29Z | - |
dc.identifier.isbn | 979-8-89176-061-5 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/107877 | - |
dc.description | The 2023 Conference on Empirical Methods in Natural Language Processing, December 6-10, 2023, Singapore | en_US |
dc.language.iso | en | en_US |
dc.publisher | Association for Computational Linguistics (ACL) | en_US |
dc.rights | © 2023 Association for Computational Linguistics | en_US |
dc.rights | Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Renzhi Wang, Jing Li, and Piji Li. 2023. InfoDiffusion: Information Entropy Aware Diffusion Process for Non-Autoregressive Text Generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13757–13770, Singapore. Association for Computational Linguistics is available at https://aclanthology.org/2023.findings-emnlp.919/. | en_US |
dc.title | InfoDiffusion: information entropy aware diffusion process for non-autoregressive text generation | en_US |
dc.type | Conference Paper | en_US |
dc.identifier.spage | 13757 | en_US |
dc.identifier.epage | 13770 | en_US |
dcterms.abstract | Diffusion models have garnered considerable interest in the field of text generation. Several studies have explored text diffusion models with different structures and applied them to various tasks, including named entity recognition and summarization. However, there is a notable disparity between the “easy-first” generation process of current diffusion models and the “keyword-first” process by which humans produce text, and this gap has received limited attention. To bridge it, we propose InfoDiffusion, a non-autoregressive text diffusion model. Our approach introduces a “keyinfo-first” generation strategy and incorporates a noise schedule based on the amount of text information. In addition, InfoDiffusion combines self-conditioning with a newly proposed partially noising model structure. Experimental results show that InfoDiffusion outperforms the baseline model in generation quality and diversity, and exhibits higher sampling efficiency. | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | In Findings of the Association for Computational Linguistics: EMNLP 2023, p. 13757–13770, Singapore. Association for Computational Linguistics, 2023 | en_US |
dcterms.issued | 2023 | - |
dc.relation.conference | Conference on Empirical Methods in Natural Language Processing [EMNLP] | en_US |
dc.description.validate | 202407 bcwh | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | a3033 | - |
dc.identifier.SubFormID | 49241 | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | National Natural Science Foundation of China | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
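The abstract describes a noise schedule driven by the amount of information each token carries, so that key information is generated first. The following is a minimal, hypothetical sketch of that idea, not the paper's actual implementation: per-token information is estimated as the negative log of a unigram frequency, and higher-information tokens are mapped to lower noise levels so they are recovered earlier during denoising. All names here (`token_information`, `noise_schedule`, `t_min`, `t_max`) are illustrative assumptions.

```python
import math

def token_information(counts):
    """Entropy-style information per token: -log2 p(token),
    estimated from unigram counts. Rarer tokens carry more information."""
    total = sum(counts.values())
    return {tok: -math.log2(c / total) for tok, c in counts.items()}

def noise_schedule(tokens, info, t_min=0.1, t_max=0.9):
    """Map each token's information to a noise level in [t_min, t_max].
    The mapping is inverted: the highest-information token gets t_min
    (least noise, denoised first), the lowest gets t_max."""
    vals = [info[t] for t in tokens]
    lo, hi = min(vals), max(vals)
    span = (hi - lo) or 1.0  # avoid division by zero for uniform input
    return [t_max - (info[t] - lo) / span * (t_max - t_min) for t in tokens]
```

Under this sketch, a frequent function word like “the” receives the maximum noise level while a rare content word receives the minimum, which is one simple way to realize a “keyinfo-first” denoising order.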
Appears in Collections: | Conference Paper |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2023.findings-emnlp.919.pdf | | 450.29 kB | Adobe PDF |