Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/118126
Title: An LLM-enabled human demonstration-assisted hybrid robot skill synthesis approach for human-robot collaborative assembly
Authors: Yin, Y 
Wan, K 
Li, C 
Zheng, P 
Issue Date: 2025
Source: CIRP annals : manufacturing technology, 2025, v. 74, no. 1, p. 1-5
Abstract: Effective human-robot collaborative assembly (HRCA) demands robots with advanced skill learning and communication capabilities. To address this challenge, this paper proposes a large language model (LLM)-enabled, human demonstration-assisted hybrid robot skill synthesis approach, facilitated via a mixed reality (MR) interface. Our key innovation lies in fine-tuning LLMs to directly translate human language instructions into reward functions, which guide a deep reinforcement learning (DRL) module to autonomously generate robot-executable actions. Furthermore, human demonstrations are intuitively tracked via MR, enabling more adaptive and efficient hybrid skill learning. Finally, the effectiveness of the proposed approach has been demonstrated through multiple HRCA tasks.
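To make the pipeline described in the abstract concrete, below is a minimal, hypothetical Python sketch of the language-to-reward idea: a fine-tuned LLM (stubbed out here, since the paper's model is not public) emits reward-function source code, and a toy one-step improvement loop stands in for the DRL module. All names (query_finetuned_llm, the state keys) are illustrative assumptions, not the authors' API.

    # Illustrative sketch only: an LLM maps a language instruction to
    # reward-function source, which a learning loop then optimizes against.
    # query_finetuned_llm is a hypothetical stand-in for the paper's
    # fine-tuned model; here it returns a canned reward for demonstration.

    def query_finetuned_llm(instruction: str) -> str:
        # A real system would call the fine-tuned LLM here.
        return (
            "def reward(state):\n"
            "    # Negative distance to the target: closer is better.\n"
            "    return -abs(state['gripper_x'] - state['target_x'])\n"
        )

    def synthesize_reward(instruction: str):
        src = query_finetuned_llm(instruction)
        namespace = {}
        exec(src, namespace)  # materialize the generated reward function
        return namespace["reward"]

    reward_fn = synthesize_reward("Move the gripper to the part on the left.")

    # Toy stand-in for the DRL module: greedy one-step policy improvement.
    state = {"gripper_x": 5.0, "target_x": 1.0}
    for step in range(10):
        dx = max((-0.5, 0.0, 0.5),
                 key=lambda d: reward_fn({"gripper_x": state["gripper_x"] + d,
                                          "target_x": state["target_x"]}))
        state["gripper_x"] += dx
        print(step, round(state["gripper_x"], 2), round(reward_fn(state), 2))

In the paper, the reward produced this way guides the DRL policy during training, while MR-tracked human demonstrations supply the complementary learning signal.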
Keywords: Human-robot collaboration
Human-guided robot learning
Manufacturing system
Publisher: Elsevier
Journal: CIRP annals : manufacturing technology
ISSN: 0007-8506
EISSN: 1726-0604
DOI: 10.1016/j.cirp.2025.04.015
Appears in Collections: Journal/Magazine Article

Open Access Information
Status: Embargoed access
Embargo End Date: 2027-12-31