Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/113793
DC Field | Value | Language
dc.contributor | Department of Mechanical Engineering | -
dc.creator | Zhou, P | -
dc.creator | Qi, J | -
dc.creator | Duan, A | -
dc.creator | Huo, S | -
dc.creator | Wu, Z | -
dc.creator | Navarro-Alarcon, D | -
dc.date.accessioned | 2025-06-24T06:37:54Z | -
dc.date.available | 2025-06-24T06:37:54Z | -
dc.identifier.issn | 1551-3203 | -
dc.identifier.uri | http://hdl.handle.net/10397/113793 | -
dc.language.iso | en | en_US
dc.publisher | IEEE Computer Society | en_US
dc.rights | © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.rights | The following publication P. Zhou, J. Qi, A. Duan, S. Huo, Z. Wu and D. Navarro-Alarcon, "Imitating Tool-Based Garment Folding From a Single Visual Observation Using Hand-Object Graph Dynamics," in IEEE Transactions on Industrial Informatics, vol. 20, no. 4, pp. 6245-6256, April 2024 is available at https://doi.org/10.1109/TII.2023.3342895. | en_US
dc.subject | Cloth folding | en_US
dc.subject | Graph dynamics model | en_US
dc.subject | Hand-object graph (HoG) | en_US
dc.subject | Imitation learning (IL) | en_US
dc.subject | Tool manipulation | en_US
dc.title | Imitating tool-based garment folding from a single visual observation using hand-object graph dynamics | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 6245 | -
dc.identifier.epage | 6256 | -
dc.identifier.volume | 20 | -
dc.identifier.issue | 4 | -
dc.identifier.doi | 10.1109/TII.2023.3342895 | -
dcterms.abstract | Garment folding is a ubiquitous domestic task that is difficult to automate due to the highly deformable nature of fabrics. In this article, we propose a novel method of learning from demonstrations that enables robots to autonomously manipulate an assistive tool to fold garments. In contrast to traditional methods (which rely on low-level pixel features), our proposed solution uses a dense visual descriptor to encode the demonstration into a high-level hand-object graph (HoG) that allows us to efficiently represent the interactions between the manipulated tool and robots. With that, we leverage a graph neural network to autonomously learn the forward dynamics model from HoGs; then, given only a single demonstration, the imitation policy is optimized with a model predictive controller to accomplish the folding task. To validate the proposed approach, we conducted a detailed experimental study on a robotic platform instrumented with vision sensors and a custom-made end-effector that interacts with the folding board. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | IEEE transactions on industrial informatics, Apr. 2024, v. 20, no. 4, p. 6245-6256 | -
dcterms.isPartOf | IEEE transactions on industrial informatics | -
dcterms.issued | 2024-04 | -
dc.identifier.scopus | 2-s2.0-85181580625 | -
dc.identifier.eissn | 1941-0050 | -
dc.description.validate | 202506 bcch | -
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | a3769a | en_US
dc.identifier.SubFormID | 50996 | en_US
dc.description.fundingSource | RGC | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | Green (AAM) | en_US
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
Zhou_Imitating_Tool_Based.pdf | Pre-Published version | 11.05 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

SCOPUS™ Citations: 25 (as of Dec 19, 2025)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.