Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/96590
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Electronic and Information Engineering | - |
dc.creator | Hu, J | en_US |
dc.creator | Zheng, Y | en_US |
dc.creator | Lam, KM | en_US |
dc.creator | Lou, P | en_US |
dc.date.accessioned | 2022-12-07T02:55:32Z | - |
dc.date.available | 2022-12-07T02:55:32Z | - |
dc.identifier.issn | 2169-3536 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/96590 | - |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers | en_US |
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.rights | The following publication Hu, J., Zheng, Y., Lam, K. M., & Lou, P. (2022). DWANet: Focus on Foreground Features for More Accurate Location. IEEE Access, 10, 30716-30729 is available at https://doi.org/10.1109/ACCESS.2022.3158681. | en_US |
dc.subject | Feature fusion | en_US |
dc.subject | Foreground features | en_US |
dc.subject | Multi-scale | en_US |
dc.subject | Object detection | en_US |
dc.title | DWANet: focus on foreground features for more accurate location | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 30716 | en_US |
dc.identifier.epage | 30729 | en_US |
dc.identifier.volume | 10 | en_US |
dc.identifier.doi | 10.1109/ACCESS.2022.3158681 | en_US |
dcterms.abstract | Object detection locates objects in an image using bounding boxes, which facilitates classification and image understanding and supports a wide range of applications. How to mine useful features from images and detect objects of different scales has become a focus of object-detection research. In this paper, considering the importance of foreground features in object detection, a foreground feature extraction module based on deformable convolution is proposed, and an attention mechanism is integrated to suppress interference from the background. To learn effective features, and considering that different layers in a convolutional neural network contribute differently, we propose methods to learn the weights for feature fusion. Experiments on the VOC and COCO datasets show that the proposed algorithm effectively improves object-detection accuracy: 12.1% higher than Faster R-CNN, 1.5% higher than RefineDet, and 2.3% higher than the Hierarchical Shot Detector (HSD). | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | IEEE access, 2022, v. 10, p. 30716-30729 | en_US |
dcterms.isPartOf | IEEE access | en_US |
dcterms.issued | 2022 | - |
dc.identifier.scopus | 2-s2.0-85126708658 | - |
dc.description.validate | 202212 bckw | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | - |
dc.description.pubStatus | Published | en_US |
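The abstract mentions learning per-layer weights for multi-scale feature fusion. The record does not give the paper's exact scheme, so the following is only a minimal sketch of one common realization: softmax-normalized learnable scalars, one per feature level, weighting a sum of same-shaped feature maps. All names (`softmax`, `fuse`) and the use of flat lists in place of real feature tensors are illustrative assumptions.

```python
import math

def softmax(raw_weights):
    # Normalize raw fusion weights so they are positive and sum to 1.
    m = max(raw_weights)
    exps = [math.exp(w - m) for w in raw_weights]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(features, raw_weights):
    """Weighted sum of same-shaped feature maps (simplified to flat lists).

    features: list of L feature vectors, each of length N
    raw_weights: L scalars; in a real network these would be learned by
    backpropagation rather than fixed by hand (hypothetical simplification).
    """
    w = softmax(raw_weights)
    n = len(features[0])
    return [sum(w[l] * features[l][i] for l in range(len(features)))
            for i in range(n)]

# Toy example: equal raw weights reduce the fusion to a plain average.
levels = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print([round(v, 9) for v in fuse(levels, [0.0, 0.0, 0.0])])  # → [3.0, 4.0]
```

With unequal raw weights the softmax lets the network emphasize whichever pyramid level carries the most useful features, which is the intuition the abstract describes.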
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Hu_DWANet_Focus_Foreground.pdf | | 6.16 MB | Adobe PDF | View/Open |
Page views: 64 (last week: 0; last month: 0), as of May 12, 2024
Downloads: 27, as of May 12, 2024
SCOPUS™ citations: 1, as of May 16, 2024
Web of Science™ citations: 1, as of May 16, 2024
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.