Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/105447
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Computing | en_US
dc.creator | Wang, Y | en_US
dc.creator | Li, J | en_US
dc.creator | Lyu, M | en_US
dc.creator | King, I | en_US
dc.date.accessioned | 2024-04-15T07:34:25Z | -
dc.date.available | 2024-04-15T07:34:25Z | -
dc.identifier.isbn | 978-1-952148-60-6 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/105447 | -
dc.description | EMNLP 2020: The 2020 Conference on Empirical Methods in Natural Language Processing, 16th-20th November 2020, Online | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computational Linguistics (ACL) | en_US
dc.rights | © 2020 Association for Computational Linguistics | en_US
dc.rights | This publication is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Yue Wang, Jing Li, Michael Lyu, and Irwin King. 2020. Cross-Media Keyphrase Prediction: A Unified Framework with Multi-Modality Multi-Head Attention and Image Wordings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3311–3324, Online. Association for Computational Linguistics is available at https://doi.org/10.18653/v1/2020.emnlp-main.268. | en_US
dc.title | Cross-media keyphrase prediction : a unified framework with multi-modality multi-head attention and image wordings | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3311 | en_US
dc.identifier.epage | 3324 | en_US
dc.identifier.doi | 10.18653/v1/2020.emnlp-main.268 | en_US
dcterms.abstract | Social media produces large amounts of content every day. To help users quickly capture what they need, keyphrase prediction is receiving growing attention. Nevertheless, most prior efforts focus on text modeling, largely ignoring the rich features embedded in the matching images. In this work, we explore the joint effects of texts and images in predicting the keyphrases for a multimedia post. To better align social media style texts and images, we propose: (1) a novel Multi-Modality Multi-Head Attention (M3H-Att) to capture the intricate cross-media interactions; (2) image wordings, in the form of optical characters and image attributes, to bridge the two modalities. Moreover, we design a unified framework to leverage the outputs of keyphrase classification and generation and couple their advantages. Extensive experiments on a large-scale dataset newly collected from Twitter show that our model significantly outperforms the previous state of the art based on traditional attention mechanisms. Further analyses show that our multi-head attention is able to attend to information from various aspects and boost classification or generation in diverse scenarios. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, p. 3311-3324. Stroudsburg, PA, USA: Association for Computational Linguistics (ACL), 2020 | en_US
dcterms.issued | 2020 | -
dc.relation.ispartofbook | Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing | en_US
dc.relation.conference | Conference on Empirical Methods in Natural Language Processing [EMNLP] | en_US
dc.description.validate | 202402 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | COMP-0007 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Startup Fund | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 50290564 | -
dc.description.oaCategory | CC | en_US
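
To make the cross-media attention idea in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of multi-head attention in which text states attend over image-region and "image wording" (OCR/attribute) features. All class names, dimensions, and the residual fusion step below are assumptions for illustration; this is not the authors' M3H-Att implementation or released code.

import torch
import torch.nn as nn

class CrossMediaAttention(nn.Module):
    """Hypothetical sketch: text queries attend over image and image-wording keys."""
    def __init__(self, d_model=256, num_heads=8):
        super().__init__()
        # Multi-head attention: text states as queries, visual states as keys/values.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states, image_states):
        # text_states:  (batch, text_len,  d_model), e.g. encoded post tokens
        # image_states: (batch, n_regions, d_model), e.g. visual regions plus
        #               OCR/attribute ("image wording") embeddings
        attended, weights = self.cross_attn(text_states, image_states, image_states)
        # Residual connection plus layer norm: keep the textual signal, add visual context.
        return self.norm(text_states + attended), weights

# Usage with random tensors standing in for real encoder outputs:
fused, attn = CrossMediaAttention()(torch.randn(2, 20, 256), torch.randn(2, 36, 256))
print(fused.shape, attn.shape)  # torch.Size([2, 20, 256]) torch.Size([2, 20, 36])

In the paper's unified framework, attention-fused representations of this kind feed both keyphrase classification and keyphrase generation; the sketch shows only the attention fusion step.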
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
2020.emnlp-main.268.pdf |  | 9.02 MB | Adobe PDF (View/Open)
Open Access Information
Status: open access
File Version: Version of Record
Access: View full-text via PolyU eLinks SFX Query

Page views: 100 (Last week: 6), as of Nov 30, 2025
Downloads: 57, as of Nov 30, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.