Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/108839
DC Field | Value | Language
dc.contributor | School of Fashion and Textiles | en_US
dc.contributor | Laboratory for Artificial Intelligence in Design (AiDLab) | en_US
dc.creator | Ding, Y | en_US
dc.creator | Mok, PY | en_US
dc.date.accessioned | 2024-08-27T04:41:18Z | -
dc.date.available | 2024-08-27T04:41:18Z | -
dc.identifier.issn | 0941-0643 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/108839 | -
dc.language.iso | en | en_US
dc.publisher | Springer UK | en_US
dc.rights | © The Author(s) 2024 | en_US
dc.rights | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | en_US
dc.rights | The following publication Ding, Y., Mok, P.Y. Improved 3D human face reconstruction from 2D images using blended hard edges. Neural Comput & Applic 36, 14967–14987 (2024) is available at https://doi.org/10.1007/s00521-024-09868-8. | en_US
dc.subject | 3D face reconstruction | en_US
dc.subject | Blended hard edges | en_US
dc.subject | Computer graphics | en_US
dc.subject | Deep neural network | en_US
dc.title | Improved 3D human face reconstruction from 2D images using blended hard edges | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.spage | 14967 | en_US
dc.identifier.epage | 14987 | en_US
dc.identifier.volume | 36 | en_US
dc.identifier.issue | 24 | en_US
dc.identifier.doi | 10.1007/s00521-024-09868-8 | en_US
dcterms.abstract | This study reports an effective and robust edge-based scheme for reconstructing 3D human faces from single input images, addressing the drawbacks of existing methods under large face pose angles or noisy input images. Accurate 3D face reconstruction from 2D images is important, as it enables a wide range of applications, such as face recognition, animation, games and AR/VR systems. Edge features extracted from 2D images contain rich and robust 3D geometric information, and have been used together with landmarks for face reconstruction. However, accurately reconstructing 3D faces from contour features is a challenging task, since traditional edge or contour detection algorithms introduce a great deal of noise, which adversely affects the reconstruction. This paper reports on the use of a hard-blended face contour feature, combining the output of a neural network with a Canny edge extractor, for face reconstruction. The quantitative results indicate that our method achieves a notable improvement in face reconstruction, with a Euclidean distance error of 1.64 mm and a normal vector distance error of 1.27 mm against the ground truth, outperforming both traditional and other deep learning-based methods. The improvement is particularly significant for face shape reconstruction under large pose angles. The method also achieved higher accuracy and robustness on in-the-wild images under conditions of blurring, makeup, occlusion and poor illumination. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Neural computing and applications, Aug. 2024, v. 36, no. 24, p. 14967-14987 | en_US
dcterms.isPartOf | Neural computing and applications | en_US
dcterms.issued | 2024-08 | -
dc.identifier.scopus | 2-s2.0-85192858050 | -
dc.identifier.eissn | 1433-3058 | en_US
dc.description.validate | 202408 bcch | en_US
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_TA | -
dc.description.fundingSource | RGC | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | Laboratory for Artificial Intelligence in Design; InnoHK Research Cluster, Hong Kong Special Administrative Region | en_US
dc.description.pubStatus | Published | en_US
dc.description.TA | Springer Nature (2024) | en_US
dc.description.oaCategory | TA | en_US
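The abstract above describes blending a learned face-contour map with a classical Canny edge extractor. As a rough, hypothetical illustration of the classical edge-extraction step only (not the paper's actual implementation, which uses a full Canny pipeline and a neural contour network), a simplified gradient-magnitude edge detector can be sketched in NumPy:

```python
import numpy as np

def sobel_edges(img, threshold=0.3):
    """Simplified gradient-magnitude edge map (illustration only).

    A full Canny extractor, as used in the paper, would add Gaussian
    smoothing, non-maximum suppression and hysteresis thresholding.
    """
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    # Replicate-pad the border so every pixel has a full 3x3 patch
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    # Normalized gradient magnitude, binarized by a single threshold
    mag = np.hypot(gx, gy)
    mag /= max(mag.max(), 1e-8)
    return mag > threshold

# Toy example: an 8x8 image with a vertical step edge at column 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)  # True only along the step boundary
```

On this toy input, only the pixels adjacent to the intensity step (columns 3 and 4) are marked as edges; in practice such a raw edge map is noisy on real photographs, which is the motivation the abstract gives for blending it with a learned contour feature.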
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
s00521-024-09868-8.pdf | | 2.91 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.