Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117274
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Industrial and Systems Engineering | en_US |
| dc.creator | Gui, H | en_US |
| dc.creator | Li, M | en_US |
| dc.date.accessioned | 2026-02-09T07:45:04Z | - |
| dc.date.available | 2026-02-09T07:45:04Z | - |
| dc.identifier.issn | 0736-5845 | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/117274 | - |
| dc.language.iso | en | en_US |
| dc.publisher | Pergamon Press | en_US |
| dc.subject | Diffusion model | en_US |
| dc.subject | Digital twin | en_US |
| dc.subject | Human motion prediction | en_US |
| dc.subject | Human robot fabric cutting | en_US |
| dc.subject | Spatiotemporal modelling | en_US |
| dc.title | A temporal spatial human digital twin approach for modeling human behavior with uncertainty | en_US |
| dc.type | Journal/Magazine Article | en_US |
| dc.identifier.volume | 99 | en_US |
| dc.identifier.doi | 10.1016/j.rcim.2025.103203 | en_US |
| dcterms.abstract | Modeling human behavior is paramount in the process of human-robot interaction (HRI). Human motion during HRI is uncertain: even when the same individual performs the same action, the motion trajectory will not be identical every time. These factors make modeling human behavior extremely challenging. Beyond human uncertainty, there are dynamic temporal spatial dependencies in HRI. Effectively capturing uncertainty while fully integrating temporal spatial features presents a significant challenge. Moreover, representing human behavior solely through the human skeleton is insufficient. Recently, human digital twins have been developed to represent human geometry. However, current human digital twins are not well-suited for dynamic HRI scenarios, as they struggle to accurately depict high-dimensional human parameters, leading to issues such as nonlinear mapping and joint drift. In summary, existing methods find it difficult to address human uncertainty, temporal spatial dependencies, and high-dimensional human parameters. To address these challenges, this study proposes a temporal spatial human digital twin (TSHDT) for modeling human behavior in HRI. The TSHDT is based on predicted human skeletons and integrates forward and inverse kinematics along with a diffusion prior distribution to represent high-dimensional human parameters, thus preventing joint drift and nonlinear mapping between joints. In developing the TSHDT, we introduce the human robot temporal spatial (HRTS) diffusion model to mitigate the uncertainty in human motion. The diffusion and denoising processes of the HRTS diffusion model effectively submerge uncertainty in noise and accurately predict human motion during subsequent denoising steps. To ensure that the denoising process favors accuracy over diversity, we propose the temporal spatial fusion graph convolutional network (TSFGCN) to capture temporal spatial features between humans and robots, embedding them into the HRTS diffusion model. Finally, the effectiveness of the TSHDT was validated via predictive collision detection in human-robot fabric cutting experiments. Results demonstrate that the proposed method accurately models human behavior in collision detection experiments, achieving outstanding F1 scores. | en_US |
| dcterms.accessRights | embargoed access | en_US |
| dcterms.bibliographicCitation | Robotics and computer-integrated manufacturing, June 2026, v. 99, 103203 | en_US |
| dcterms.isPartOf | Robotics and computer-integrated manufacturing | en_US |
| dcterms.issued | 2026-06 | - |
| dc.identifier.scopus | 2-s2.0-105024559706 | - |
| dc.identifier.artn | 103203 | en_US |
| dc.description.validate | 202602 bchy | en_US |
| dc.description.oa | Not applicable | en_US |
| dc.identifier.SubFormID | G000856/2026-01 | - |
| dc.description.fundingSource | RGC | en_US |
| dc.description.fundingSource | Others | en_US |
| dc.description.fundingText | This research is partially supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Nos. T32-707/22-N and C7076-22G), and the Innovation and Technology Commission of the HKSAR Government through the InnoHK initiative. | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.date.embargo | 2028-06-30 | en_US |
| dc.description.oaCategory | Green (AAM) | en_US |
| Appears in Collections: | Journal/Magazine Article | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.