Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/96590
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Hu, J | en_US
dc.creator | Zheng, Y | en_US
dc.creator | Lam, KM | en_US
dc.creator | Lou, P | en_US
dc.date.accessioned | 2022-12-07T02:55:32Z | -
dc.date.available | 2022-12-07T02:55:32Z | -
dc.identifier.issn | 2169-3536 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/96590 | -
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US
dc.rights | The following publication Hu, J., Zheng, Y., Lam, K. M., & Lou, P. (2022). DWANet: Focus on Foreground Features for More Accurate Location. IEEE Access, 10, 30716-30729 is available at https://doi.org/10.1109/ACCESS.2022.3158681. | en_US
dc.subject | Feature fusion | en_US
dc.subject | Foreground features | en_US
dc.subject | Multi-scale | en_US
dc.subject | Object detection | en_US
dc.title | DWANet: focus on foreground features for more accurate location | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 30716 | en_US
dc.identifier.epage | 30729 | en_US
dc.identifier.volume | 10 | en_US
dc.identifier.doi | 10.1109/ACCESS.2022.3158681 | en_US
dcterms.abstract | Object detection locates objects in an image using bounding boxes, which facilitates classification and image understanding and enables a wide range of applications. How to mine useful features from images and detect objects of different scales has become the focus of object-detection research. In this paper, considering the importance of foreground features in object detection, a foreground feature extraction module based on deformable convolution is proposed, and an attention mechanism is integrated to suppress interference from the background. To learn effective features, and considering that different layers of a convolutional neural network contribute differently, we propose methods to learn the weights for feature fusion. Experiments on the VOC and COCO datasets show that the proposed algorithm effectively improves object-detection accuracy: 12.1% higher than Faster R-CNN, 1.5% higher than RefineDet, and 2.3% higher than the Hierarchical Shot Detector (HSD). | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE access, 2022, v. 10, p. 30716-30729 | en_US
dcterms.isPartOf | IEEE access | en_US
dcterms.issued | 2022 | -
dc.identifier.scopus | 2-s2.0-85126708658 | -
dc.description.validate | 202212 bckw | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | -
dc.description.pubStatus | Published | en_US
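The abstract above says the method learns per-layer weights for multi-scale feature fusion. As a minimal illustrative sketch only (the paper's exact scheme is not given in this record), one common way to realize learned weights is to keep one scalar logit per layer and softmax-normalize them before summing the feature maps; the function and variable names below are hypothetical:

```python
import numpy as np

def fuse_features(features, logits):
    """Fuse same-shape feature maps with one learnable weight per layer.

    `features`: list of arrays already resized to a common scale;
    `logits`: one learnable scalar per layer. The softmax normalization
    is an assumption, not necessarily DWANet's exact weighting scheme.
    """
    w = np.exp(logits - np.max(logits))  # numerically stable softmax
    w = w / w.sum()                      # weights sum to 1
    return sum(wi * fi for wi, fi in zip(w, features))

# Two toy 4x4 "feature maps" standing in for two network layers.
f1 = np.ones((4, 4))
f2 = 3 * np.ones((4, 4))
fused = fuse_features([f1, f2], np.array([0.0, 0.0]))  # equal weights
```

With equal logits each layer contributes 0.5; during training the logits would be updated by gradient descent so that more informative layers receive larger weights.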
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: Hu_DWANet_Focus_Foreground.pdf | Size: 6.16 MB | Format: Adobe PDF

Open Access Information
Status: open access
File Version: Version of Record

Page views: 64 (last week: 0) as of May 12, 2024
Downloads: 27 as of May 12, 2024
Scopus citations: 1 as of May 16, 2024
Web of Science citations: 1 as of May 16, 2024


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.