Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/107179
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | -
dc.creator | He, CH | en_US
dc.creator | Lai, SC | en_US
dc.creator | Lam, KM | en_US
dc.date.accessioned | 2024-06-13T01:04:25Z | -
dc.date.available | 2024-06-13T01:04:25Z | -
dc.identifier.isbn | 978-1-4799-8131-1 (Electronic) | en_US
dc.identifier.isbn | 978-1-4799-8132-8 (Print on Demand (PoD)) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/107179 | -
dc.description | ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 12-17 May 2019, Brighton, UK | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | ©2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication C.-H. He, S.-C. Lai and K.-M. Lam, "Improving Object Detection with Relation Graph Inference," ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 2019, pp. 2537-2541 is available at https://doi.org/10.1109/ICASSP.2019.8682335. | en_US
dc.subject | Graph convolutional neural network | en_US
dc.subject | Object detection | en_US
dc.title | Improving object detection with relation graph inference | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 2537 | en_US
dc.identifier.epage | 2541 | en_US
dc.identifier.doi | 10.1109/ICASSP.2019.8682335 | en_US
dcterms.abstract | Many classic object detection approaches have shown that detection performance can be improved by incorporating an object's context information. However, only a few methods have attempted to exploit the object-to-object relationships during learning. The reason is that objects may appear at different locations in an image, with arbitrary sizes and scales, which makes it difficult to model them in a unified way within a network. Inspired by the Graph Convolutional Network (GCN), we propose a detection algorithm that can infer the relationships among multiple objects during inference, achieved by constructing a relation graph dynamically with a self-adopted attention mechanism. The relation graph encodes both the geometric and visual relationships between objects. This enriches the object features by aggregating information from each object and its relevant neighbors. Experiments show that our proposed module can efficiently improve the detection performance of existing object detectors. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In Proceedings of ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 12-17 May 2019, Brighton, UK, p. 2537-2541 | en_US
dcterms.issued | 2019 | -
dc.identifier.scopus | 2-s2.0-85069003239 | -
dc.relation.conference | International Conference on Acoustics, Speech, and Signal Processing [ICASSP] | -
dc.description.validate | 202404 bckw | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0382 | -
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Hong Kong Polytechnic University | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 20082841 | -
dc.description.oaCategory | Green (AAM) | en_US
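
The abstract above describes a relation-graph module that builds attention weights from both visual and geometric cues and then aggregates features from an object's relevant neighbors to enrich its representation. The code below is only a minimal sketch of such a module in PyTorch, written from the abstract's description; it is not the authors' released implementation, and every name and shape in it (RelationGraphSketch, feat_dim=256, embed_dim=64, the log-offset box encoding) is an assumption made for illustration.

# Minimal, illustrative sketch of a relation-graph attention module, assuming
# per-object appearance features of size feat_dim and (x1, y1, x2, y2) boxes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationGraphSketch(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=64):
        super().__init__()
        self.query = nn.Linear(feat_dim, embed_dim)  # visual embedding used as attention "query"
        self.key = nn.Linear(feat_dim, embed_dim)    # visual embedding used as attention "key"
        self.geo = nn.Linear(4, embed_dim)           # embeds pairwise geometric relations
        self.out = nn.Linear(feat_dim, feat_dim)     # projects aggregated messages back

    @staticmethod
    def pairwise_geometry(boxes):
        # boxes: (N, 4) as (x1, y1, x2, y2); returns (N, N, 4) relative log-offsets,
        # a common scale- and translation-invariant encoding of box geometry.
        cx = (boxes[:, 0] + boxes[:, 2]) / 2
        cy = (boxes[:, 1] + boxes[:, 3]) / 2
        w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1e-3)
        h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1e-3)
        dx = torch.log((cx[None, :] - cx[:, None]).abs() / w[:, None] + 1e-3)
        dy = torch.log((cy[None, :] - cy[:, None]).abs() / h[:, None] + 1e-3)
        dw = torch.log(w[None, :] / w[:, None])
        dh = torch.log(h[None, :] / h[:, None])
        return torch.stack([dx, dy, dw, dh], dim=-1)

    def forward(self, feats, boxes):
        # feats: (N, feat_dim) per-object appearance features; boxes: (N, 4).
        q = self.query(feats)                                         # (N, E)
        k = self.key(feats)                                           # (N, E)
        visual = q @ k.t() / (q.shape[-1] ** 0.5)                     # (N, N) visual affinity
        geometric = self.geo(self.pairwise_geometry(boxes)).sum(-1)   # (N, N) geometric affinity
        attn = F.softmax(visual + geometric, dim=-1)                  # dynamic relation graph, row-normalised
        messages = attn @ feats                                       # aggregate neighbour features
        return feats + self.out(messages)                             # residual enrichment of each object feature

In a hypothetical use, the enriched output of such a module (e.g. RelationGraphSketch()(torch.randn(8, 256), boxes) for 8 proposals) would replace the original per-proposal features before the detector's classification and box-regression heads.
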
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
He_Improving_Object_Detection.pdf | Pre-Published version | 1.46 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 2 (as of Jun 30, 2024)
Scopus citations: 18 (as of Jun 21, 2024)
Web of Science citations: 13 (as of Jun 27, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.