Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/118029
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Mechanical Engineering | - |
| dc.creator | Lee, HY | - |
| dc.creator | Zhou, P | - |
| dc.creator | Duan, A | - |
| dc.creator | Ma, W | - |
| dc.creator | Yang, C | - |
| dc.creator | Navarro-Alarcon, D | - |
| dc.date.accessioned | 2026-03-12T01:03:03Z | - |
| dc.date.available | 2026-03-12T01:03:03Z | - |
| dc.identifier.issn | 0736-5845 | - |
| dc.identifier.uri | http://hdl.handle.net/10397/118029 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Elsevier Ltd | en_US |
| dc.rights | © 2026 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC license ( http://creativecommons.org/licenses/by-nc/4.0/ ). | en_US |
| dc.rights | The following publication Lee, H.-Y., Zhou, P., Duan, A., Ma, W., Yang, C., & Navarro-Alarcon, D. (2026). Non-prehensile tool-object manipulation by integrating LLM-based planning and manoeuvrability-driven controls. Robotics and Computer-Integrated Manufacturing, 100, 103231 is available at https://doi.org/10.1016/j.rcim.2026.103231. | en_US |
| dc.subject | Human–robot collaboration | en_US |
| dc.subject | Large Language Models (LLMs) | en_US |
| dc.subject | Symbolic planning | en_US |
| dc.title | Non-prehensile tool-object manipulation by integrating LLM-based planning and manoeuvrability-driven controls | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 100 | - |
| dc.identifier.doi | 10.1016/j.rcim.2026.103231 | - |
| dcterms.abstract | The ability to wield tools was once considered exclusive to human intelligence, but it is now known that many other animals, like crows, possess this capability. Yet, robotic systems still fall short of matching biological dexterity. In this paper, we investigate the use of Large Language Models (LLMs), tool affordances, and object manoeuvrability for non-prehensile tool-based manipulation tasks. Our novel method leverages LLMs based on scene information and natural language instructions to enable symbolic task planning for tool-object manipulation. This approach allows the system to convert a human language sentence into a sequence of feasible motion functions. We have developed a novel manoeuvrability-driven controller using a new tool affordance model derived from visual feedback. This controller helps guide the robot’s tool utilization and manipulation actions, even within confined areas, using a stepping incremental approach. The proposed methodology is evaluated with experiments to prove its effectiveness under various manipulation scenarios. | - |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Robotics and computer-integrated manufacturing, Aug. 2026, v. 100, 103231 | - |
| dcterms.isPartOf | Robotics and computer-integrated manufacturing | - |
| dcterms.issued | 2026-08 | - |
| dc.identifier.scopus | 2-s2.0-105027635194 | - |
| dc.identifier.eissn | 1879-2537 | - |
| dc.identifier.artn | 103231 | - |
| dc.description.validate | 202603 bcch | - |
| dc.description.oa | Version of Record | en_US |
| dc.identifier.FolderNumber | OA_TA | en_US |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This work is supported in part by the Research Grants Council of Hong Kong under grant C4042-23GF, and in part by the National Natural Science Foundation of China (NSFC) under Grant No. 62403211. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.description.TA | Elsevier (2026) | en_US |
| dc.description.oaCategory | TA | en_US |
Appears in Collections: Journal/Magazine Article
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 1-s2.0-S0736584526000116-main.pdf | | 3.5 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.