Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112586
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Li, Y | en_US
dc.creator | Li, W | en_US
dc.creator | Nie, L | en_US
dc.date.accessioned | 2025-04-17T06:34:43Z | -
dc.date.available | 2025-04-17T06:34:43Z | -
dc.identifier.uri | http://hdl.handle.net/10397/112586 | -
dc.description | 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, May 22-27, 2022 | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics | en_US
dc.rights | ©2022 Association for Computational Linguistics | en_US
dc.rights | Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License. (https://creativecommons.org/licenses/by/4.0/) | en_US
dc.rights | The following publication Li, Y., Li, W., & Nie, L. (2022, May). MMCoQA: Conversational Question Answering over Text, Tables, and Images. In S. Muresan, P. Nakov, & A. Villavicencio, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 4220-4231 is available at https://doi.org/10.18653/v1/2022.acl-long.290. | en_US
dc.title | MMCoQA: conversational question answering over text, tables, and images | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 4220 | en_US
dc.identifier.epage | 4231 | en_US
dc.identifier.volume | 1 | en_US
dc.identifier.doi | 10.18653/v1/2022.acl-long.290 | en_US
dcterms.abstract | The rapid development of conversational assistants accelerates the study on conversational question answering (QA). However, the existing conversational QA systems usually answer users’ questions with a single knowledge source, e.g., paragraphs or a knowledge graph, but overlook the important visual cues, let alone multiple knowledge sources of different modalities. In this paper, we hence define a novel research task, i.e., multimodal conversational question answering (MMCoQA), aiming to answer users’ questions with multimodal knowledge sources via multi-turn conversations. This new task brings a series of research challenges, including but not limited to priority, consistency, and complementarity of multimodal knowledge. To facilitate the data-driven approaches in this area, we construct the first multimodal conversational QA dataset, named MMConvQA. Questions are fully annotated with not only natural language answers but also the corresponding evidence and valuable decontextualized self-contained questions. Meanwhile, we introduce an end-to-end baseline model, which divides this complex research task into question understanding, multi-modal evidence retrieval, and answer extraction. Moreover, we report a set of benchmarking results, and the results indicate that there is ample room for improvement. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In S. Muresan, P. Nakov, A. Villavicencio (Eds.), Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 4220-4231. Stroudsburg, PA: Association for Computational Linguistics (ACL), 2022 | en_US
dcterms.issued | 2022 | -
dc.identifier.scopus | 2-s2.0-85133856550 | -
dc.relation.ispartofbook | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | en_US
dc.relation.conference | Association for Computational Linguistics [ACL] | en_US
dc.description.validate | 202504 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Others | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | National Natural Science Foundation of China (62076212); PolyU internal grants (ZVQ0) | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
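
Note on the baseline described in dcterms.abstract: the paper's end-to-end baseline splits the MMCoQA task into question understanding, multimodal evidence retrieval, and answer extraction. The indented Python sketch below is only an orientation aid mirroring that three-stage decomposition; it is not the authors' released implementation, and every name in it (EvidenceItem, rewrite_question, retrieve_evidence, extract_answer, answer_turn) is a hypothetical placeholder with toy word-overlap logic standing in for trained models.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EvidenceItem:
        modality: str   # "text", "table", or "image"
        content: str    # textual surrogate: passage, linearized table, or image caption

    @dataclass
    class ConversationState:
        history: List[str] = field(default_factory=list)  # alternating questions and answers

    def rewrite_question(question: str, state: ConversationState) -> str:
        """Question understanding: build a self-contained (decontextualized) query.
        Here we simply prepend the conversation history; a real system would use a trained rewriter."""
        return (" ".join(state.history) + " " + question).strip()

    def retrieve_evidence(query: str, corpus: List[EvidenceItem], k: int = 2) -> List[EvidenceItem]:
        """Multimodal evidence retrieval: rank items from all modalities against the query.
        Word-overlap scoring stands in for a learned dense retriever."""
        def score(item: EvidenceItem) -> int:
            return len(set(query.lower().split()) & set(item.content.lower().split()))
        return sorted(corpus, key=score, reverse=True)[:k]

    def extract_answer(query: str, evidence: List[EvidenceItem]) -> str:
        """Answer extraction: read the retrieved evidence and produce an answer.
        This stub returns the top-ranked item's content as a placeholder answer."""
        return evidence[0].content if evidence else "unknown"

    def answer_turn(question: str, state: ConversationState, corpus: List[EvidenceItem]) -> str:
        """One conversational turn: understand, retrieve, extract, then update the history."""
        query = rewrite_question(question, state)
        answer = extract_answer(query, retrieve_evidence(query, corpus))
        state.history.extend([question, answer])
        return answer

    # Toy usage with a mixed-modality corpus drawn from this record's own metadata.
    corpus = [
        EvidenceItem("text", "MMCoQA was presented at ACL 2022 in Dublin."),
        EvidenceItem("table", "venue: Dublin | year: 2022 | pages: 4220-4231"),
        EvidenceItem("image", "photo caption: the Convention Centre Dublin"),
    ]
    state = ConversationState()
    print(answer_turn("Where was ACL 2022 presented?", state, corpus))
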
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
2022.acl-long.290.pdf |  | 5.93 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.