Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/92496
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | en_US
dc.creator | Fan, J | en_US
dc.creator | Li, S | en_US
dc.creator | Zheng, P | en_US
dc.creator | Lee, CKM | en_US
dc.date.accessioned | 2022-04-07T06:33:52Z | -
dc.date.available | 2022-04-07T06:33:52Z | -
dc.identifier.isbn | 978-1-6654-1873-7 (Electronic ISBN) | en_US
dc.identifier.isbn | 978-1-6654-1872-0 (USB ISBN) | en_US
dc.identifier.isbn | 978-1-6654-4809-3 (Print on Demand (PoD) ISBN) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/92496 | -
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.rights | © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication Fan, J., Li, S., Zheng, P., & Lee, C. K. (2021, August). A High-Resolution Network-Based Approach for 6D Pose Estimation of Industrial Parts. In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE) (pp. 1452-1457). IEEE is available at https://doi.org/10.1109/CASE49439.2021.9551495 | en_US
dc.title | A high-resolution network-based approach for 6D pose estimation of industrial parts | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 1452 | en_US
dc.identifier.epage | 1457 | en_US
dc.identifier.doi | 10.1109/CASE49439.2021.9551495 | en_US
dcterms.abstract | The estimation of the 6D pose of industrial parts is a fundamental problem in smart manufacturing. Traditional approaches mainly focus on matching corresponding key-point pairs between observed 2D images and 3D object models via hand-crafted feature descriptors. However, key points are hard to discover in images when the parts are piled up in disorder or occluded by other distractors, e.g., human hands. Although emerging deep learning-based methods are capable of inferring the poses of occluded parts, their accuracy is unsatisfactory, largely due to the loss of spatial resolution caused by repeated downsampling operations inside convolutional neural networks. To overcome this challenge, this paper proposes a 6D pose estimation model consisting of a pose estimator and a pose refiner, leveraging High-Resolution Networks as the backbone. Experiments are further conducted on a dataset of industrial parts to demonstrate its effectiveness. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), p. 1452-1457. Lyon, France, August 23-27, 2021 | en_US
dcterms.issued | 2021 | -
dc.identifier.scopus | 2-s2.0-85116990231 | -
dc.relation.ispartofbook | 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE) | en_US
dc.relation.conference | International Conference on Automation Science and Engineering [CASE] | en_US
dc.description.validate | 202204 bcvc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a1291, ISE-0093 | -
dc.identifier.SubFormID | 44484 | -
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 57294581 | -
dc.description.oaCategory | Green (AAM) | en_US
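The abstract above describes a 6D pose (a 3D rotation plus a 3D translation) produced by a pose estimator and then corrected by a pose refiner. As a minimal illustrative sketch of that estimate-then-refine pattern — not the paper's actual network — the pose can be represented as a 4x4 SE(3) matrix, with refinement applied as a left-multiplied corrective transform. The helper names (`pose_matrix`, `refine`, `rotation_error_deg`) are assumptions for illustration only:

```python
import numpy as np

def pose_matrix(rotvec, t):
    """Build a 4x4 SE(3) pose from an axis-angle rotation vector and a
    translation, using Rodrigues' formula (pure NumPy)."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rotvec / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def refine(T_est, delta):
    """One refinement step: left-multiply the current estimate by a small
    corrective pose, as iterative pose refiners commonly do."""
    return delta @ T_est

def rotation_error_deg(T_a, T_b):
    """Geodesic distance between the two rotations, in degrees."""
    R = T_a[:3, :3].T @ T_b[:3, :3]
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# Ground-truth pose and a deliberately perturbed initial estimate.
T_gt = pose_matrix(np.array([0.0, 0.0, np.pi / 6]), np.array([0.1, 0.0, 0.5]))
T_est = pose_matrix(np.array([0.0, 0.0, np.pi / 6 - 0.05]), np.array([0.12, 0.0, 0.5]))

# A perfect corrective delta recovers the ground truth exactly.
delta = T_gt @ np.linalg.inv(T_est)
T_ref = refine(T_est, delta)
print(rotation_error_deg(T_ref, T_gt) < 1e-4)  # → True
```

In practice the corrective `delta` is of course not known; a learned refiner predicts it from image evidence. The sketch only shows how the 6D pose representation composes under refinement.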
Appears in Collections:Conference Paper
Files in This Item:
File | Description | Size | Format
Fan_High-Resolution_Network-Based_Approach.pdf | Pre-Published Version | 3.8 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 100 (as of May 19, 2024; last week: 0)
Downloads: 140 (as of May 19, 2024)
SCOPUS citations: 4 (as of May 17, 2024)
Web of Science citations: 1 (as of May 2, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.