Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/111360
DC Field | Value | Language |
---|---|---|
dc.contributor | School of Fashion and Textiles | en_US |
dc.contributor | Research Centre of Textiles for Future Fashion | en_US |
dc.contributor | Research Institute for Sports Science and Technology | en_US |
dc.creator | Peng, J | en_US |
dc.creator | Zhou, Y | en_US |
dc.creator | Mok, PY | en_US |
dc.date.accessioned | 2025-02-20T04:09:55Z | - |
dc.date.available | 2025-02-20T04:09:55Z | - |
dc.identifier.issn | 0167-8655 | en_US |
dc.identifier.uri | http://hdl.handle.net/10397/111360 | - |
dc.language.iso | en | en_US |
dc.publisher | Elsevier | en_US |
dc.rights | © 2025 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | en_US |
dc.rights | The following publication Peng, J., Zhou, Y., & Mok, P. Y. (2025). A cross-feature interaction network for 3D human pose estimation. Pattern Recognition Letters, 189, 175-181 is available at https://doi.org/10.1016/j.patrec.2025.01.016. | en_US |
dc.subject | 3D human pose estimation | en_US |
dc.subject | Cross-attention | en_US |
dc.subject | Graph convolutional network (GCN) | en_US |
dc.subject | Self-attention | en_US |
dc.title | A cross-feature interaction network for 3D human pose estimation | en_US |
dc.type | Journal/Magazine Article | en_US |
dc.identifier.spage | 175 | en_US |
dc.identifier.epage | 181 | en_US |
dc.identifier.volume | 189 | en_US |
dc.identifier.doi | 10.1016/j.patrec.2025.01.016 | en_US |
dcterms.abstract | The task of estimating 3D human poses from single monocular images is challenging because, unlike video sequences, single images can hardly provide any temporal information for the prediction. Most existing methods attempt to predict 3D poses by modeling the spatial dependencies inherent in the anatomical structure of the human skeleton, yet these methods fail to capture the complex local and global relationships that exist among various joints. To solve this problem, we propose a novel Cross-Feature Interaction Network to effectively model spatial correlations between body joints. Specifically, we exploit graph convolutional networks (GCNs) to learn the local features between neighboring joints and the self-attention structure to learn the global features among all joints. We then design a cross-feature interaction (CFI) module to facilitate cross-feature communications among the three different features, namely the local features, global features, and initial 2D pose features, aggregating them to form enhanced spatial representations of human pose. Furthermore, a novel graph-enhanced module (GraMLP) with parallel GCN and multi-layer perceptron is introduced to inject the skeletal knowledge of the human body into the final representation of 3D pose. Extensive experiments on two datasets (Human3.6M (Ionescu et al., 2013) and MPI-INF-3DHP (Mehta et al., 2017)) show the superior performance of our method in comparison to existing state-of-the-art (SOTA) models. The code and data are shared at https://github.com/JihuaPeng/CFI-3DHPE | en_US |
dcterms.accessRights | open access | en_US |
dcterms.bibliographicCitation | Pattern recognition letters, Mar. 2025, v. 189, p. 175-181 | en_US |
dcterms.isPartOf | Pattern recognition letters | en_US |
dcterms.issued | 2025-03 | - |
dc.identifier.scopus | 2-s2.0-85216872510 | - |
dc.identifier.eissn | 1872-7344 | en_US |
dc.description.validate | 202502 bcwh | en_US |
dc.description.oa | Version of Record | en_US |
dc.identifier.FolderNumber | OA_TA | - |
dc.description.fundingSource | Others | en_US |
dc.description.fundingText | Hong Kong Polytechnic University; Laboratory for Artificial Intelligence in Design under InnoHK Research Clusters, Hong Kong SAR. | en_US |
dc.description.pubStatus | Published | en_US |
dc.description.TA | Elsevier (2025) | en_US |
dc.description.oaCategory | TA | en_US |
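The abstract above describes fusing three feature streams (local GCN features, global self-attention features, and initial 2D pose features) through a cross-feature interaction module. The following is a minimal, hypothetical PyTorch sketch of such a cross-attention fusion step; it is not the authors' implementation, and all module names, dimensions, and wiring are assumptions. The official code is at https://github.com/JihuaPeng/CFI-3DHPE.

```python
# Hypothetical sketch of cross-feature interaction via cross-attention.
# NOT the authors' code; names, dimensions and wiring are assumptions.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Queries from one feature stream attend over another stream."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat, context_feat):
        # query_feat, context_feat: (batch, num_joints, dim)
        out, _ = self.attn(query_feat, context_feat, context_feat)
        return self.norm(query_feat + out)  # residual connection

class CrossFeatureInteraction(nn.Module):
    """Fuses local (GCN), global (self-attention) and 2D-pose features."""
    def __init__(self, dim):
        super().__init__()
        self.local_from_global = CrossAttention(dim)
        self.global_from_local = CrossAttention(dim)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, local_feat, global_feat, pose2d_feat):
        # Each stream attends over the other, then all three are concatenated.
        local_enh = self.local_from_global(local_feat, global_feat)
        global_enh = self.global_from_local(global_feat, local_feat)
        fused = torch.cat([local_enh, global_enh, pose2d_feat], dim=-1)
        return self.fuse(fused)  # (batch, num_joints, dim)

if __name__ == "__main__":
    batch, joints, dim = 2, 17, 64  # 17 joints, as in Human3.6M
    local_feat = torch.randn(batch, joints, dim)
    global_feat = torch.randn(batch, joints, dim)
    pose2d_feat = torch.randn(batch, joints, dim)
    cfi = CrossFeatureInteraction(dim)
    print(cfi(local_feat, global_feat, pose2d_feat).shape)  # torch.Size([2, 17, 64])
```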
Appears in Collections: | Journal/Magazine Article |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
1-s2.0-S0167865525000157-main.pdf | | 1.78 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.