Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/117274
DC Field | Value | Language
dc.contributor | Department of Industrial and Systems Engineering | en_US
dc.creator | Gui, H | en_US
dc.creator | Li, M | en_US
dc.date.accessioned | 2026-02-09T07:45:04Z | -
dc.date.available | 2026-02-09T07:45:04Z | -
dc.identifier.issn | 0736-5845 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/117274 | -
dc.language.iso | en | en_US
dc.publisher | Pergamon Press | en_US
dc.subject | Diffusion model | en_US
dc.subject | Digital twin | en_US
dc.subject | Human motion prediction | en_US
dc.subject | Human robot fabric cutting | en_US
dc.subject | Spatiotemporal modelling | en_US
dc.title | A temporal spatial human digital twin approach for modeling human behavior with uncertainty | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 99 | en_US
dc.identifier.doi | 10.1016/j.rcim.2025.103203 | en_US
dcterms.abstract | Modeling human behavior is paramount in human-robot interaction (HRI). Human motion during HRI is uncertain: even when the same individual performs the same action, the motion trajectory is never identical from one repetition to the next. This uncertainty makes modeling human behavior extremely challenging. Beyond human uncertainty, there are dynamic temporal spatial dependencies in HRI, and effectively capturing uncertainty while fully integrating temporal spatial features presents a significant challenge. Moreover, representing human behavior solely through the human skeleton is insufficient. Recently, human digital twins have been developed to represent human geometry. However, current human digital twins are not well suited to dynamic HRI scenarios, as they struggle to accurately depict high-dimensional human parameters, leading to issues such as nonlinear mapping and joint drift. In summary, existing methods struggle to address human uncertainty, temporal spatial dependencies, and high-dimensional human parameters. To address these challenges, this study proposes a temporal spatial human digital twin (TSHDT) for modeling human behavior in HRI. The TSHDT is based on predicted human skeletons and integrates forward and inverse kinematics with a diffusion prior distribution to represent high-dimensional human parameters, thus preventing joint drift and nonlinear mapping between joints. In developing the TSHDT, we introduce the human robot temporal spatial (HRTS) diffusion model to mitigate the uncertainty in human motion. The diffusion and denoising processes of the HRTS diffusion model effectively submerge uncertainty in noise and accurately predict human motion during the subsequent denoising steps. To ensure that the denoising process favors accuracy over diversity, we propose the temporal spatial fusion graph convolutional network (TSFGCN) to capture temporal spatial features between humans and robots and embed them into the HRTS diffusion model. Finally, the effectiveness of the TSHDT was validated via predictive collision detection in human-robot fabric cutting experiments. Results demonstrate that the proposed method accurately models human behavior in these experiments, achieving outstanding F1 scores. | en_US
dcterms.accessRights | embargoed access | en_US
dcterms.bibliographicCitation | Robotics and Computer-Integrated Manufacturing, June 2026, v. 99, 103203 | en_US
dcterms.isPartOf | Robotics and Computer-Integrated Manufacturing | en_US
dcterms.issued | 2026-06 | -
dc.identifier.scopus | 2-s2.0-105024559706 | -
dc.identifier.artn | 103203 | en_US
dc.description.validate | 202602 bchy | en_US
dc.description.oa | Not applicable | en_US
dc.identifier.SubFormID | G000856/2026-01 | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This research is partially supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Nos. T32-707/22-N and C7076-22G), and by the Innovation and Technology Commission of the HKSAR Government through the InnoHK initiative. | en_US
dc.description.pubStatus | Published | en_US
dc.date.embargo | 2028-06-30 | en_US
dc.description.oaCategory | Green (AAM) | en_US
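The abstract above describes a diffusion model whose denoising steps recover a motion prediction from noise. As background only, the sketch below shows one standard DDPM-style reverse (denoising) step in Python; it is not the paper's HRTS model, and the noise schedule, the eps_model placeholder, and the 17-joint skeleton shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule over T steps (an assumption; the abstract does not
# specify the schedule used by the HRTS diffusion model).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x_t, t):
    """Hypothetical noise predictor. In the paper this role is played by a
    network conditioned on temporal spatial human-robot features (TSFGCN);
    here it is a trivial placeholder so the sketch runs standalone."""
    return np.zeros_like(x_t)

def denoise_step(x_t, t):
    """One standard DDPM ancestral-sampling step: x_t -> x_{t-1}."""
    eps = eps_model(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

# Example: denoise a 17-joint, 3-D "skeleton" drawn from pure noise. The
# uncertainty of human motion is submerged in the initial Gaussian sample
# and progressively removed by the reverse steps.
x = rng.standard_normal((17, 3))
for t in reversed(range(T)):
    x = denoise_step(x, t)
print(x.shape)  # (17, 3)
```

In the paper's setting, the placeholder noise predictor would be replaced by a network conditioned on the temporal spatial human-robot features extracted by the TSFGCN, which is what biases the denoising process toward accuracy rather than diversity.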
Appears in Collections: Journal/Magazine Article
Open Access Information
Status: embargoed access
Embargo End Date: 2028-06-30