Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/105525
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Computing | - |
| dc.creator | Huang, Q | - |
| dc.creator | Wei, J | - |
| dc.creator | Cai, Y | - |
| dc.creator | Zheng, C | - |
| dc.creator | Chen, J | - |
| dc.creator | Leung, HF | - |
| dc.creator | Li, Q | - |
| dc.date.accessioned | 2024-04-15T07:34:51Z | - |
| dc.date.available | 2024-04-15T07:34:51Z | - |
| dc.identifier.isbn | 978-1-952148-25-5 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/105525 | - |
| dc.description | 58th Annual Meeting of the Association for Computational Linguistics, Online, July 5th-10th, 2020 | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | Association for Computational Linguistics (ACL) | en_US |
| dc.rights | © 2020 Association for Computational Linguistics | en_US |
| dc.rights | This publication is licensed under a Creative Commons Attribution 4.0 International License. (https://creativecommons.org/licenses/by/4.0/) | en_US |
| dc.rights | The following publication Qingbao Huang, Jielong Wei, Yi Cai, Changmeng Zheng, Junying Chen, Ho-fung Leung, and Qing Li. 2020. Aligned Dual Channel Graph Convolutional Network for Visual Question Answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7166–7176, Online. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2020.acl-main.642. | en_US |
| dc.title | Aligned dual channel graph convolutional network for visual question answering | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.spage | 7166 | - |
| dc.identifier.epage | 7176 | - |
| dc.identifier.doi | 10.18653/v1/2020.acl-main.642 | - |
| dcterms.abstract | Visual question answering aims to answer a natural language question about a given image. Existing graph-based methods focus only on the relations between objects in an image and neglect the importance of the syntactic dependency relations between words in a question. To simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question, we propose a novel dual channel graph convolutional network (DC-GCN) to better combine visual and textual information. The DC-GCN model consists of three parts: an I-GCN module to capture the relations between objects in an image, a Q-GCN module to capture the syntactic dependency relations between words in a question, and an attention alignment module to align image representations and question representations. Experimental results show that our model achieves performance comparable to state-of-the-art approaches. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7166-7176. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), 2020 | - |
| dcterms.issued | 2020 | - |
| dc.relation.ispartofbook | Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics | - |
| dc.relation.conference | Annual Meeting of the Association for Computational Linguistics [ACL] | - |
| dc.description.validate | 202402 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | COMP-0282 | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | Internal research grant from PolyU | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 49984830 | en_US |
| dc.description.oaCategory | CC | en_US |
| Appears in Collections: | Conference Paper | |
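
The abstract above describes a dual-channel architecture: an I-GCN over relations between image objects, a Q-GCN over syntactic dependency relations between question words, and an attention alignment module that ties the two representations together before answer prediction. The PyTorch sketch below illustrates that structure only; the hidden sizes, number of GCN layers, pooling, fusion scheme, and answer-vocabulary size (3,129) are assumptions for illustration and do not reproduce the authors' DC-GCN implementation.

```python
# Minimal sketch of a dual-channel GCN with attention alignment in PyTorch.
# Layer counts, hidden sizes, pooling, and the Hadamard fusion are illustrative
# assumptions, not the published DC-GCN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution step: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # h: (batch, n, in_dim); adj: (batch, n, n) normalized adjacency with self-loops
        return F.relu(torch.bmm(adj, self.linear(h)))


class AttentionAlignment(nn.Module):
    """Cross-attention that aligns image-object nodes with question-word nodes."""
    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, img, qst):
        # img: (batch, n_obj, dim); qst: (batch, n_word, dim)
        attn = torch.softmax(torch.bmm(img, qst.transpose(1, 2)) * self.scale, dim=-1)
        aligned_img = img + torch.bmm(attn, qst)                  # question-aware objects
        aligned_qst = qst + torch.bmm(attn.transpose(1, 2), img)  # object-aware words
        return aligned_img, aligned_qst


class DualChannelGCN(nn.Module):
    """I-GCN over object relations + Q-GCN over dependency relations + alignment."""
    def __init__(self, img_dim=2048, word_dim=300, hid=512, num_answers=3129):
        super().__init__()
        self.i_proj = nn.Linear(img_dim, hid)
        self.q_proj = nn.Linear(word_dim, hid)
        self.i_layers = nn.ModuleList([GCNLayer(hid, hid) for _ in range(2)])
        self.q_layers = nn.ModuleList([GCNLayer(hid, hid) for _ in range(2)])
        self.align = AttentionAlignment(hid)
        self.classifier = nn.Linear(hid, num_answers)

    def forward(self, obj_feats, obj_adj, word_embs, dep_adj):
        h_i = self.i_proj(obj_feats)
        h_q = self.q_proj(word_embs)
        for layer in self.i_layers:     # image channel: relations between objects
            h_i = layer(h_i, obj_adj)
        for layer in self.q_layers:     # question channel: syntactic dependencies
            h_q = layer(h_q, dep_adj)
        h_i, h_q = self.align(h_i, h_q)
        fused = h_i.mean(dim=1) * h_q.mean(dim=1)  # mean-pool each channel, then Hadamard fusion
        return self.classifier(fused)


if __name__ == "__main__":
    # Smoke test with random inputs: 2 images x 36 objects, 2 questions x 14 words.
    model = DualChannelGCN()
    obj = torch.randn(2, 36, 2048)
    words = torch.randn(2, 14, 300)
    adj_obj = torch.softmax(torch.randn(2, 36, 36), dim=-1)  # stand-in for a normalized object graph
    adj_dep = torch.softmax(torch.randn(2, 14, 14), dim=-1)  # stand-in for a dependency-tree adjacency
    print(model(obj, adj_obj, words, adj_dep).shape)          # torch.Size([2, 3129])
```

The residual cross-attention here (adding the attended context back onto each channel) is just one simple way to realise "alignment"; the paper's actual attention alignment module may differ.
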
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 2020.acl-main.642.pdf | | 3.81 MB | Adobe PDF | View/Open |
Page views: 98 (last week: 3, last month: 3), as of Nov 30, 2025. Downloads: 24, as of Nov 30, 2025.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.