Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118126
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | en_US
dc.creator | Yin, Y | en_US
dc.creator | Wan, K | en_US
dc.creator | Li, C | en_US
dc.creator | Zheng, P | en_US
dc.date.accessioned | 2026-03-18T03:24:51Z | -
dc.date.available | 2026-03-18T03:24:51Z | -
dc.identifier.issn | 0007-8506 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/118126 | -
dc.language.iso | en | en_US
dc.publisher | Elsevier | en_US
dc.subject | Human-robot collaboration | en_US
dc.subject | Human-guided robot learning | en_US
dc.subject | Manufacturing system | en_US
dc.title | An LLM-enabled human demonstration-assisted hybrid robot skill synthesis approach for human-robot collaborative assembly | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 1 | en_US
dc.identifier.epage | 5 | en_US
dc.identifier.volume | 74 | en_US
dc.identifier.issue | 1 | en_US
dc.identifier.doi | 10.1016/j.cirp.2025.04.015 | en_US
dcterms.abstract | Effective human-robot collaborative assembly (HRCA) demands robots with advanced skill learning and communication capabilities. To address this challenge, this paper proposes a large language model (LLM)-enabled, human demonstration-assisted hybrid robot skill synthesis approach, facilitated via a mixed reality (MR) interface. Our key innovation lies in fine-tuning LLMs to directly translate human language instructions into reward functions, which guide a deep reinforcement learning (DRL) module to autonomously generate robot-executable actions. Furthermore, human demonstrations are intuitively tracked via MR, enabling more adaptive and efficient hybrid skill learning. Finally, the effectiveness of the proposed approach has been demonstrated through multiple HRCA tasks. (An illustrative sketch of the instruction-to-reward-to-DRL pipeline follows this record.) | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | CIRP annals : manufacturing technology, 2025, v. 74, no. 1, p. 1-5 | en_US
dcterms.isPartOf | CIRP annals : manufacturing technology | en_US
dcterms.issued | 2025 | -
dc.identifier.scopus | 2-s2.0-105002928032 | -
dc.identifier.eissn | 1726-0604 | en_US
dc.description.validate | 202603 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G001252/2025-12 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This work was mainly supported by the National Natural Science Foundation of China (No. 52422514), the General Research Fund (GRF) (Project Nos. PolyU 15210222 and PolyU 15206723), and the Collaborative Research Fund (CRF) (Project No. C6044-23GF) from the Research Grants Council (RGC), Hong Kong. | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2027-12-31 | en_US
dc.description.oaCategory | Green (AAM) | en_US
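The abstract above describes a pipeline in which a fine-tuned LLM translates a human language instruction into a reward function, which then guides a DRL module toward executable robot actions. The full paper is embargoed, so the following is only a minimal, hypothetical Python sketch of that pipeline shape: the LLM call is stubbed out, the 1-D "assembly line" task, the function `llm_to_reward`, and the tabular Q-learning settings are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: (stubbed) fine-tuned LLM -> reward function -> toy RL loop.
# Pure standard library so it runs anywhere; not the paper's actual method.

import random
from typing import Callable

State = int  # toy 1-D assembly line: positions 0..10, goal at position 10

def llm_to_reward(instruction: str) -> Callable[[State, int, State], float]:
    """Stand-in for a fine-tuned LLM that emits a reward function.

    A real system would prompt the tuned model with `instruction` and
    parse/execute the code it returns; here one plausible output is hard-coded.
    """
    def reward(state: State, action: int, next_state: State) -> float:
        r = -0.1          # small step penalty encourages efficient motion
        if next_state == 10:
            r += 10.0     # bonus for reaching the assembly goal
        return r
    return reward

def train(reward_fn, episodes: int = 500, eps: float = 0.2,
          alpha: float = 0.5, gamma: float = 0.95):
    """Tabular Q-learning over the toy task, guided by the generated reward."""
    q = {(s, a): 0.0 for s in range(11) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # step cap per episode
            # epsilon-greedy action selection over the two moves
            a = random.choice((-1, +1)) if random.random() < eps \
                else max((-1, +1), key=lambda a_: q[(s, a_)])
            s2 = min(10, max(0, s + a))
            r = reward_fn(s, a, s2)
            best_next = max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if s == 10:
                break
    return q

if __name__ == "__main__":
    reward = llm_to_reward("Move the part to the final assembly station.")
    q = train(reward)
    # After training, the greedy policy should step toward the goal (+1)
    # from every non-goal state.
    print([max((-1, +1), key=lambda a: q[(s, a)]) for s in range(10)])
```

In the paper's setting the reward would shape a deep RL policy for robot actions rather than a tabular one, and human demonstrations tracked via the MR interface would further constrain learning; this sketch only illustrates how a language-derived reward can plug into a standard RL update.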
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2027-12-31