Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/110001
DC Field | Value | Language |
---|---|---|
dc.contributor | Department of Industrial and Systems Engineering | - |
dc.creator | Zheng, P | - |
dc.creator | Li, C | - |
dc.creator | Fan, J | - |
dc.creator | Wang, L | - |
dc.date.accessioned | 2024-11-20T07:30:48Z | - |
dc.date.available | 2024-11-20T07:30:48Z | - |
dc.identifier.issn | 0007-8506 | - |
dc.identifier.uri | http://hdl.handle.net/10397/110001 | - |
dc.language.iso | en | en_US |
dc.publisher | Elsevier BV | en_US |
dc.rights | © 2024 The Author(s). Published by Elsevier Ltd on behalf of CIRP. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). | en_US |
dc.rights | The following publication Zheng, P., Li, C., Fan, J., & Wang, L. (2024). A vision-language-guided and deep reinforcement learning-enabled approach for unstructured human-robot collaborative manufacturing task fulfilment. CIRP Annals, 73(1), 341-344 is available at https://doi.org/10.1016/j.cirp.2024.04.003. | en_US |
dc.subject | Human-guided robot learning | en_US |
dc.subject | Human-robot collaboration | en_US |
dc.subject | Manufacturing system | en_US |
dc.title | A vision-language-guided and deep reinforcement learning-enabled approach for unstructured human-robot collaborative manufacturing task fulfilment | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 341 | - |
dc.identifier.epage | 344 | - |
dc.identifier.volume | 73 | - |
dc.identifier.issue | 1 | - |
dc.identifier.doi | 10.1016/j.cirp.2024.04.003 | - |
dcterms.abstract | Human-Robot Collaboration (HRC) has emerged as a pivotal paradigm in contemporary human-centric smart manufacturing. However, fulfilling HRC tasks in unstructured scenes presents many challenges to be overcome. In this work, a mixed reality head-mounted display is adopted as an effective interface for data collection, communication, and state representation in HRC task settings. By integrating vision-language cues with a large language model, a vision-language-guided HRC task planning approach is first proposed. Then, a deep reinforcement learning-enabled mobile manipulator motion control policy is generated to fulfil HRC task primitives. Its feasibility is demonstrated in several unstructured HRC manufacturing tasks with comparative results. | - |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | CIRP annals : manufacturing technology, 2024, v. 73, no. 1, p. 341-344 | - |
dcterms.isPartOf | CIRP annals : manufacturing technology | - |
dcterms.issued | 2024 | - |
dc.identifier.scopus | 2-s2.0-85190754943 | - |
dc.identifier.eissn | 1726-0604 | - |
dc.description.validate | 202411 bcch | - |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US |
dc.description.fundingSource | Self-funded | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.oaCategory | CC | en_US |
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format
---|---|---|---
1-s2.0-S0007850624000180-main.pdf | | 1.29 MB | Adobe PDF
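The abstract above describes a two-stage pipeline: a vision-language-guided planner (vision-language scene cues combined with a large language model) decomposes an HRC task into primitives, which a deep reinforcement learning policy then executes on the mobile manipulator. The following is a minimal, hypothetical Python sketch of that control flow only; it is not the authors' implementation, and the `plan_task`/`DRLPolicyStub` names, the primitive vocabulary, and the toy planning heuristic are all assumptions made for illustration.

```python
"""Hypothetical sketch of the pipeline summarised in the abstract:
(1) a vision-language-guided planner turns scene labels plus a human
instruction into an ordered sequence of task primitives, and
(2) a DRL motion-control policy executes each primitive.
All names, primitives, and heuristics below are illustrative only."""

from dataclasses import dataclass

# Assumed primitive vocabulary; the paper's actual primitive set may differ.
PRIMITIVES = {"approach", "grasp", "handover", "place", "retreat"}


@dataclass
class Primitive:
    name: str    # e.g. "grasp"
    target: str  # object label from the vision-language scene description


def plan_task(scene_labels: list[str], instruction: str) -> list[Primitive]:
    """Stand-in for the LLM-based planner: in the paper, vision-language
    cues and the human instruction would be composed into a prompt for a
    large language model, which returns a primitive sequence. Here a toy
    keyword match replaces the real LLM call (illustration only)."""
    target = next((obj for obj in scene_labels if obj in instruction),
                  scene_labels[0])
    return [Primitive("approach", target),
            Primitive("grasp", target),
            Primitive("handover", "human_operator")]


class DRLPolicyStub:
    """Stand-in for the deep-reinforcement-learning motion control policy
    that drives the mobile manipulator for each task primitive."""

    def execute(self, primitive: Primitive) -> bool:
        assert primitive.name in PRIMITIVES, f"unknown primitive: {primitive.name}"
        print(f"executing {primitive.name} -> {primitive.target}")
        return True  # a real policy would roll out low-level motor commands


if __name__ == "__main__":
    # Scene labels as a vision-language model (e.g. fed by the MR headset's
    # camera) might report them, plus a human instruction.
    scene = ["wrench", "housing", "bolt"]
    plan = plan_task(scene, "hand me the wrench")
    policy = DRLPolicyStub()
    for step in plan:
        if not policy.execute(step):
            break
```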
Page views: 26 (as of Apr 14, 2025)
Downloads: 11 (as of Apr 14, 2025)
Scopus™ citations: 14 (as of Jul 3, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.