Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106339
DC Field | Value | Language
dc.contributor | Department of Mechanical Engineering | -
dc.creator | Lu, B | en_US
dc.creator | Chen, W | en_US
dc.creator | Jin, YM | en_US
dc.creator | Zhang, D | en_US
dc.creator | Dou, Q | en_US
dc.creator | Chu, HK | en_US
dc.creator | Heng, PA | en_US
dc.creator | Liu, YH | en_US
dc.date.accessioned | 2024-05-09T00:52:52Z | -
dc.date.available | 2024-05-09T00:52:52Z | -
dc.identifier.isbn | 978-1-7281-6213-3 (Print on Demand (PoD)) | en_US
dc.identifier.isbn | 978-1-7281-6212-6 (Electronic) | en_US
dc.identifier.uri | http://hdl.handle.net/10397/106339 | -
dc.description | 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, USA, 25 October 2020 - 24 January 2021 | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication B. Lu et al., "A Learning-Driven Framework with Spatial Optimization For Surgical Suture Thread Reconstruction and Autonomous Grasping Under Multiple Topologies and Environmental Noises," 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 3075-3082 is available at https://doi.org/10.1109/IROS45743.2020.9341445. | en_US
dc.title | A learning-driven framework with spatial optimization for surgical suture thread reconstruction and autonomous grasping under multiple topologies and environmental noises | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 3075 | en_US
dc.identifier.epage | 3082 | en_US
dc.identifier.doi | 10.1109/IROS45743.2020.9341445 | en_US
dcterms.abstract | Surgical knot tying is one of the most fundamental and important procedures in surgery, and a high-quality knot can significantly benefit the patient's postoperative recovery. However, long operations easily fatigue surgeons, especially during the tedious wound-closure task. In this paper, we present a vision-based method to automate suture thread grasping, a sub-task of surgical knot tying and an intermediate step between the stitching and looping manipulations. Achieving this goal requires acquiring the suture's three-dimensional (3D) information. To this end, we first adopt a transfer-learning strategy that fine-tunes a pre-trained model on large legacy surgical datasets and on images obtained from the onsite equipment, yielding robust suture segmentation despite inherent environmental noise. We further leverage a searching strategy with termination policies to infer the suture's ordering sequence based on an analysis of multiple topologies. The resulting pixel-level sequence along the suture is then used for 3D shape reconstruction via our optimized shortest-path approach, from which a grasping point satisfying the suturing criterion is ultimately acquired. We extensively evaluated suture 2D segmentation and ordering-sequence inference under environmental noise, and demonstrated the automated grasping operation both in V-REP simulations and in robot experiments using a Universal Robot (UR) together with the da Vinci Research Kit (dVRK) running our learning-driven framework. (A minimal illustrative sketch of such a shortest-path matching follows this metadata listing.) | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, USA, 25 October 2020 - 24 January 2021, p. 3075-3082 | en_US
dcterms.issued | 2020 | -
dc.identifier.scopus | 2-s2.0-85102405326 | -
dc.relation.conference | Intelligent Robots and Systems [IROS] | -
dc.description.validate | 202405 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | ME-0181 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | CUHK-SJTU Joint Research Fund; VC Fund of CUHK T-Stone Robotics Institute | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 52542014 | -
dc.description.oaCategory | Green (AAM) | en_US
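
This record does not carry implementation details, so the following is only a minimal illustrative sketch, in Python, of the kind of shortest-path stereo matching the abstract alludes to: two ordered suture pixel sequences from a rectified stereo pair are aligned by dynamic programming over a matching-cost grid, and the matched pairs are triangulated into a 3D curve. The function name reconstruct_suture_3d, the row-offset matching cost, and the pinhole triangulation are assumptions made for illustration, not the authors' actual formulation.

    import numpy as np

    def reconstruct_suture_3d(left_pts, right_pts, focal, baseline):
        # left_pts, right_pts: ordered (x, y) pixel sequences of one suture,
        # traced tip-to-tail in a rectified stereo pair (hypothetical inputs).
        left_pts = np.asarray(left_pts, dtype=float)
        right_pts = np.asarray(right_pts, dtype=float)
        n, m = len(left_pts), len(right_pts)

        # Matching cost: rectified stereo puts corresponding points on the
        # same image row, so penalize the vertical (row) offset.
        cost = np.abs(left_pts[:, 1][:, None] - right_pts[:, 1][None, :])

        # Dynamic programming: cheapest monotone path through the cost grid
        # from (0, 0) to (n-1, m-1), i.e. a shortest path that respects the
        # ordering of both pixel sequences.
        acc = np.full((n, m), np.inf)
        acc[0, 0] = cost[0, 0]
        for i in range(n):
            for j in range(m):
                if i == 0 and j == 0:
                    continue
                prev = []
                if i > 0:
                    prev.append(acc[i - 1, j])
                if j > 0:
                    prev.append(acc[i, j - 1])
                if i > 0 and j > 0:
                    prev.append(acc[i - 1, j - 1])
                acc[i, j] = cost[i, j] + min(prev)

        # Backtrack the optimal correspondence path.
        i, j = n - 1, m - 1
        path = [(i, j)]
        while i > 0 or j > 0:
            steps = []
            if i > 0:
                steps.append((acc[i - 1, j], i - 1, j))
            if j > 0:
                steps.append((acc[i, j - 1], i, j - 1))
            if i > 0 and j > 0:
                steps.append((acc[i - 1, j - 1], i - 1, j - 1))
            _, i, j = min(steps)
            path.append((i, j))
        path.reverse()

        # Triangulate matched pairs with a pinhole model: z = f * b / disparity.
        points = []
        for i, j in path:
            disparity = left_pts[i, 0] - right_pts[j, 0]
            if disparity <= 0:      # skip degenerate or occluded matches
                continue
            z = focal * baseline / disparity
            points.append((left_pts[i, 0] * z / focal,
                           left_pts[i, 1] * z / focal,
                           z))
        return np.array(points)

    # Toy usage: a straight synthetic suture, 10 samples long, with a
    # constant 5-pixel disparity (all values hypothetical).
    left = [(100 + k, 50 + 10 * k) for k in range(10)]
    right = [(95 + k, 50 + 10 * k) for k in range(10)]
    curve = reconstruct_suture_3d(left, right, focal=800.0, baseline=0.05)
    print(curve.shape)  # (10, 3): an ordered 3D polyline along the suture

Under these assumptions, enforcing monotonicity along both sequences is what turns the grid search into a shortest-path problem; a real system would fold segmentation confidence and epipolar error into the cost rather than the bare row offset used here.
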
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Chu_Learning-Driven_Framework_Spatial.pdf | Pre-Published version | 19.71 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 92 (as of Nov 9, 2025; last week: 4)
Downloads: 87 (as of Nov 9, 2025)
SCOPUS™ citations: 15 (as of Dec 19, 2025)
Web of Science™ citations: 18 (as of Dec 18, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.