Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107170
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Qi, Q | en_US
dc.creator | Zhao, S | en_US
dc.creator | Shen, J | en_US
dc.creator | Lam, KM | en_US
dc.date.accessioned | 2024-06-13T01:04:21Z | -
dc.date.available | 2024-06-13T01:04:21Z | -
dc.identifier.isbn | 978-1-5386-9552-4 (Electronic) | en_US
dc.identifier.isbn | 978-1-5386-9553-1 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107170 | -
dc.description | 2019 IEEE International Conference on Multimedia and Expo (ICME), 08-12 July 2019, Shanghai, China | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Q. Qi, S. Zhao, J. Shen and K.-M. Lam, "Multi-scale Capsule Attention-Based Salient Object Detection with Multi-crossed Layer Connections," 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 2019, pp. 1762-1767 is available at https://doi.org/10.1109/ICME.2019.00303. | en_US
dc.subject | Capsule attention | en_US
dc.subject | Multi-crossed layer connections | en_US
dc.subject | Salient object detection | en_US
dc.title | Multi-scale Capsule Attention-Based Salient Object Detection with Multi-crossed Layer Connections | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1762 | en_US
dc.identifier.epage | 1767 | en_US
dc.identifier.doi | 10.1109/ICME.2019.00303 | en_US
dcterms.abstract | With the popularization of convolutional networks for saliency models, saliency detection performance has improved significantly. However, how to integrate accurate and crucial features for modeling saliency is still underexplored. In this paper, we present CapSalNet, which includes a multi-scale Capsule attention module and multi-crossed layer connections for Salient object detection. We first propose a novel capsule attention model, which integrates multi-scale contextual information with dynamic routing. Then, our model adaptively learns to aggregate multi-level features by using multi-crossed skip-layer connections. Finally, the predicted results are efficiently fused to generate the final saliency map in a coarse-to-fine manner. Comprehensive experiments on four benchmark datasets demonstrate that our proposed algorithm outperforms existing state-of-the-art approaches. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of 2019 IEEE International Conference on Multimedia and Expo (ICME), 08-12 July 2019, Shanghai, China, p. 1762-1767 | en_US
dcterms.issued | 2019 | -
dc.identifier.scopus | 2-s2.0-85071014898 | -
dc.relation.conference | IEEE International Conference on Multimedia and Expo [ICME] | en_US
dc.description.validate | 202404 bckw | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0343 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20082700 | -
dc.description.oaCategory | Green (AAM) | en_US
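
The abstract above names three building blocks: a multi-scale capsule attention module with dynamic routing, multi-crossed skip-layer connections for aggregating multi-level features, and coarse-to-fine fusion of the predictions. The PyTorch-style sketch below is a minimal illustration of that general pattern only; it is not the authors' CapSalNet code, it substitutes a plain convolutional attention branch for capsule routing, and all module names, channel widths, and feature shapes are assumptions made for illustration.

```python
# Illustrative sketch only: a generic multi-scale attention block with a
# crossed skip connection and a coarse-to-fine saliency head, loosely
# following the ideas named in the abstract. Not the authors' CapSalNet
# implementation; plain convolutional attention stands in for capsule routing.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAttention(nn.Module):
    """Attention over features pooled at several scales (stand-in for the
    multi-scale capsule attention module described in the abstract)."""

    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in scales]
        )
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        ctx = []
        for scale, conv in zip(self.scales, self.branches):
            # Pool to a coarser resolution, convolve, then upsample back so
            # all scales can be concatenated at the original resolution.
            pooled = F.adaptive_avg_pool2d(x, (max(h // scale, 1), max(w // scale, 1)))
            ctx.append(F.interpolate(conv(pooled), size=(h, w),
                                     mode="bilinear", align_corners=False))
        attn = torch.sigmoid(self.fuse(torch.cat(ctx, dim=1)))
        return x * attn  # re-weight the input features by the attention map


class CrossedFusionHead(nn.Module):
    """Fuses a shallow and a deep feature level with a crossed skip
    connection and emits a one-channel saliency prediction."""

    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.low_attn = MultiScaleAttention(low_ch)
        self.high_attn = MultiScaleAttention(high_ch)
        self.cross = nn.Conv2d(low_ch + high_ch, low_ch, kernel_size=3, padding=1)
        self.predict = nn.Conv2d(low_ch, 1, kernel_size=1)

    def forward(self, low_feat, high_feat):
        low = self.low_attn(low_feat)
        high = self.high_attn(high_feat)
        # Crossed connection: upsample the deeper (coarse) feature and fuse it
        # with the shallower (fine) one before predicting the saliency map.
        high_up = F.interpolate(high, size=low.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = F.relu(self.cross(torch.cat([low, high_up], dim=1)))
        return torch.sigmoid(self.predict(fused))


if __name__ == "__main__":
    low = torch.randn(1, 64, 56, 56)    # shallow backbone feature (assumed shape)
    high = torch.randn(1, 128, 28, 28)  # deep backbone feature (assumed shape)
    saliency = CrossedFusionHead(64, 128)(low, high)
    print(saliency.shape)  # torch.Size([1, 1, 56, 56])
```

In the paper's coarse-to-fine setup, several such per-level predictions would be combined into the final saliency map; the two-level head above only illustrates one fusion step.
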
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Lam_Multi-Scale_Capsule_Attention-Based.pdf | Pre-Published version | 1.11 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Access: View full-text via PolyU eLinks SFX Query

Page views: 2 (as of Jun 30, 2024)
Scopus citations: 18 (as of Jun 21, 2024)
Web of Science citations: 15 (as of Jun 27, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.