Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/95535
DC Field | Value | Language
dc.contributor | Department of Mechanical Engineering | en_US
dc.creator | Zhu, J | en_US
dc.creator | Navarro-Alarcon, D | en_US
dc.creator | Passama, R | en_US
dc.creator | Cherubini, A | en_US
dc.date.accessioned | 2022-09-21T01:40:48Z | -
dc.date.available | 2022-09-21T01:40:48Z | -
dc.identifier.issn | 0921-8890 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/95535 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.rights | © 2021 Published by Elsevier B.V. | en_US
dc.rights | © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. | en_US
dc.rights | The following publication Zhu, J., Navarro-Alarcon, D., Passama, R., & Cherubini, A. (2021). Vision-based manipulation of deformable and rigid objects using subspace projections of 2D contours. Robotics and Autonomous Systems, 142, 103798 is available at https://doi.org/10.1016/j.robot.2021.103798. | en_US
dc.subject | Deformable object manipulation | en_US
dc.subject | Sensor-based control | en_US
dc.subject | Visual servoing | en_US
dc.title | Vision-based manipulation of deformable and rigid objects using subspace projections of 2D contours | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 142 | en_US
dc.identifier.doi | 10.1016/j.robot.2021.103798 | en_US
dcterms.abstract | This paper proposes a unified vision-based manipulation framework using image contours of deformable/rigid objects. Instead of explicitly defining the features by geometries or functions, the robot automatically learns the visual features from processed vision data. Our method simultaneously generates – from the same data – both visual features and the interaction matrix that relates them to the robot control inputs. Extraction of the feature vector and control commands is done online and adaptively, and requires little data for initialization. Our method allows the robot to manipulate an object without knowing whether it is rigid or deformable. To validate our approach, we conduct numerical simulations and experiments with both deformable and rigid objects. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Robotics and autonomous systems, Aug. 2021, v. 142, 103798 | en_US
dcterms.isPartOf | Robotics and autonomous systems | en_US
dcterms.issued | 2021-08 | -
dc.identifier.scopus | 2-s2.0-85105839683 | -
dc.identifier.artn | 103798 | en_US
dc.description.validate | 202209 bcfc | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | ME-0038 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | EU H2020 research and innovation programme | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 51696642 | -
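The "subspace projections of 2D contours" named in the title and abstract can be illustrated with a minimal sketch: sampled contour points are projected onto a low-dimensional basis learned from data (here via PCA/SVD), yielding a compact visual feature vector. This is a hypothetical illustration of the general idea, not the authors' implementation; all function names and parameters are invented for the example.

```python
import numpy as np

# Hypothetical sketch of projecting 2D contours onto a learned subspace.
# Not the authors' code; names and dimensions are illustrative only.

def learn_subspace(contours, k=4):
    """contours: (N, 2M) array, each row a flattened 2D contour of M points.
    Returns the mean contour and the top-k principal directions."""
    mean = contours.mean(axis=0)
    # SVD of the centered data gives the principal subspace.
    _, _, vt = np.linalg.svd(contours - mean, full_matrices=False)
    return mean, vt[:k]                      # basis shape: (k, 2M)

def project(contour, mean, basis):
    """Compact visual feature vector s = P (c - mean)."""
    return basis @ (contour.ravel() - mean)

# Toy usage: 50 noisy elliptical contours sampled at 30 points each.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 30, endpoint=False)
data = np.stack([
    np.column_stack(((2 + 0.1 * rng.standard_normal()) * np.cos(t),
                     (1 + 0.1 * rng.standard_normal()) * np.sin(t))).ravel()
    for _ in range(50)
])
mean, basis = learn_subspace(data, k=4)
s = project(data[0], mean, basis)
print(s.shape)  # low-dimensional feature vector, here (4,)
```

Per the abstract, the framework also estimates online the interaction matrix relating such features to the robot control inputs; that adaptive estimation step is beyond this toy sketch.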
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
NavarroAlarcon_Vision-Based_Manipulation_Deformable.pdf | Pre-Published version | 8.11 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 53 (last week: 0), as of Sep 22, 2024
Downloads: 62, as of Sep 22, 2024
SCOPUS™ citations: 39, as of Sep 26, 2024
Web of Science™ citations: 32, as of Sep 26, 2024

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.