Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/112581
DC Field: Value (Language)
dc.contributor: School of Fashion and Textiles (en_US)
dc.contributor: Research Centre of Textiles for Future Fashion (en_US)
dc.creator: Peng, J (en_US)
dc.creator: Zhou, Y (en_US)
dc.creator: Mok, PY (en_US)
dc.date.accessioned: 2025-04-17T06:34:40Z
dc.date.available: 2025-04-17T06:34:40Z
dc.identifier.issn: 0178-2789 (en_US)
dc.identifier.uri: http://hdl.handle.net/10397/112581
dc.language.iso: en (en_US)
dc.publisher: Springer (en_US)
dc.rights: © The Author(s) 2024 (en_US)
dc.rights: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. (en_US)
dc.rights: The following publication Peng, J., Zhou, Y. & Mok, P.Y. EHFusion: an efficient heterogeneous fusion model for group-based 3D human pose estimation. Vis Comput 41, 5323–5345 (2025) is available at https://doi.org/10.1007/s00371-024-03724-5. (en_US)
dc.subject: 3D human pose estimation (en_US)
dc.subject: Efficient network (en_US)
dc.subject: Feature fusion (en_US)
dc.subject: Topology-based grouping strategy (en_US)
dc.title: EHFusion: an efficient heterogeneous fusion model for group-based 3D human pose estimation (en_US)
dc.type: Journal/Magazine Article (en_US)
dc.identifier.spage: 5323 (en_US)
dc.identifier.epage: 5345 (en_US)
dc.identifier.volume: 41 (en_US)
dc.identifier.issue: 8 (en_US)
dc.identifier.doi: 10.1007/s00371-024-03724-5 (en_US)
dcterms.abstract: Stimulated by its important applications in animation, gaming, virtual reality, augmented reality, and healthcare, 3D human pose estimation has received considerable attention in recent years. To improve estimation accuracy, most approaches convert this challenging task into a local pose estimation problem by dividing the joints of the human body into groups based on body topology. The joint features of the different groups are then fused to predict the pose of the whole body, which requires a joint feature fusion scheme. Nevertheless, the fusion schemes adopted in existing methods involve learning a large number of parameters and are hence computationally expensive. This paper reports a new topology-based grouping method, ‘EHFusion’, for 3D human pose estimation, which involves a heterogeneous feature fusion (HFF) module that integrates grouped pose features. The HFF module reduces the computational complexity of the model while achieving promising accuracy. Moreover, we introduce motion amplitude information and a camera intrinsic embedding module to provide better global information and 2D-to-3D conversion knowledge, thereby improving the overall robustness and accuracy of the method. In contrast to previous methods, the proposed network can be trained end-to-end in a single stage. Experimental results not only demonstrate the advantageous trade-off between estimation accuracy and computational complexity achieved by our method but also show competitive performance against various state-of-the-art methods (e.g., transformer-based) on two public datasets, Human3.6M and HumanEva. The data and code are available at doi:10.5281/zenodo.11113132. (en_US)
dcterms.accessRights: open access (en_US)
dcterms.bibliographicCitation: Visual computer, June 2025, v. 41, no. 8, p. 5323–5345 (en_US)
dcterms.isPartOf: Visual computer (en_US)
dcterms.issued: 2025-06
dc.identifier.scopus: 2-s2.0-85210491231
dc.identifier.eissn: 1432-2315 (en_US)
dc.description.validate: 202504 bcch (en_US)
dc.description.oa: Version of Record (en_US)
dc.identifier.FolderNumber: OA_TA
dc.description.fundingSource: Others (en_US)
dc.description.fundingText: Hong Kong Polytechnic University (Project code: CD6P); Laboratory for Artificial Intelligence in Design (Project Code: RP1-1) under InnoHK Research Clusters, Hong Kong Special Administrative Region; National Natural Science Foundation of China (Grant No. 62202241); Jiangsu Province Natural Science Foundation for Young Scholars, China (Grant No. BK20210586) (en_US)
dc.description.pubStatus: Published (en_US)
dc.description.TA: Springer Nature (2024) (en_US)
dc.description.oaCategory: TA (en_US)
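To make the grouping-and-fusion idea described in the abstract above concrete, the following is a minimal, purely illustrative PyTorch sketch. It is not the authors' EHFusion implementation (which is released at doi:10.5281/zenodo.11113132): the joint grouping, layer sizes, and names such as GROUPS and GroupedPoseFusion are assumptions made for illustration only.

```python
# Illustrative sketch only: a topology-based grouping of the 17
# Human3.6M joints and a lightweight fusion of the per-group features.
# Group membership and all module shapes are assumptions, not the
# authors' EHFusion code.
import torch
import torch.nn as nn

# Hypothetical partition of the 17 Human3.6M joints by body topology.
GROUPS = {
    "torso":     [0, 7, 8, 9, 10],   # pelvis, spine, thorax, neck, head
    "right_leg": [1, 2, 3],
    "left_leg":  [4, 5, 6],
    "left_arm":  [11, 12, 13],
    "right_arm": [14, 15, 16],
}

class GroupedPoseFusion(nn.Module):
    """Encode each joint group separately, then combine the group
    features with a single linear layer (a cheap stand-in for the
    paper's heterogeneous feature fusion module)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # One small encoder per joint group; input is the group's
        # flattened 2D keypoints.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(len(idx) * 2, feat_dim),
                nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )
            for name, idx in GROUPS.items()
        })
        # Fuse the concatenated group features into a full 3D pose.
        self.fuse = nn.Linear(feat_dim * len(GROUPS), 17 * 3)

    def forward(self, joints_2d: torch.Tensor) -> torch.Tensor:
        # joints_2d: (batch, 17, 2) detected 2D keypoints
        feats = [
            enc(joints_2d[:, GROUPS[name]].flatten(1))
            for name, enc in self.encoders.items()
        ]
        return self.fuse(torch.cat(feats, dim=1)).view(-1, 17, 3)

model = GroupedPoseFusion()
pose_3d = model(torch.randn(8, 17, 2))  # -> (8, 17, 3)
```

A single linear layer stands in for the HFF module simply to show where the grouped features are combined; the paper's actual fusion module, motion amplitude input, and camera intrinsic embedding are not modeled here.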
Appears in Collections: Journal/Magazine Article

Files in This Item:
File: s00371-024-03724-5.pdf (2.31 MB, Adobe PDF)
Open Access Information
Status: open access
File Version: Version of Record
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.