Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/116953
DC Field | Value | Language
dc.contributor | Department of Computing | -
dc.creator | Yu, C | -
dc.creator | Zhang, Z | -
dc.creator | Li, H | -
dc.creator | Liu, C | -
dc.date.accessioned | 2026-01-21T03:54:17Z | -
dc.date.available | 2026-01-21T03:54:17Z | -
dc.identifier.uri | http://hdl.handle.net/10397/116953 | -
dc.language.iso | en | en_US
dc.publisher | MDPI AG | en_US
dc.rights | Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Yu, C., Zhang, Z., Li, H., & Liu, C. (2025). Enhancing 3D Face Recognition: Achieving Significant Gains via 2D-Aided Generative Augmentation. Sensors, 25(16), 5049 is available at https://doi.org/10.3390/s25165049. | en_US
dc.subject | 3D face recognition | en_US
dc.subject | Generative augmentation | en_US
dc.subject | Pattern recognition | en_US
dc.title | Enhancing 3D face recognition : achieving significant gains via 2D-aided generative augmentation | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 25 | -
dc.identifier.issue | 16 | -
dc.identifier.doi | 10.3390/s25165049 | -
dcterms.abstract | The development of deep learning-based 3D face recognition has been constrained by the limited availability of large-scale 3D facial datasets, which are costly and labor-intensive to acquire. To address this challenge, we propose a novel 2D-aided framework that reconstructs 3D face geometries from abundant 2D images, enabling scalable and cost-effective data augmentation for 3D face recognition. Our pipeline integrates 3D face reconstruction with normal component image encoding and fine-tunes a deep face recognition model to learn discriminative representations from synthetic 3D data. Experimental results on four public benchmarks, i.e., the BU-3DFE, FRGC v2, Bosphorus, and BU-4DFE databases, demonstrate competitive rank-1 accuracies of 99.2%, 98.4%, 99.3%, and 96.5%, respectively, despite the absence of real 3D training data. We further evaluate the impact of alternative reconstruction methods and empirically demonstrate that higher-fidelity 3D inputs improve recognition performance. While synthetic 3D face data may lack certain fine-grained geometric details, our results validate their effectiveness for practical recognition tasks under diverse expressions and demographic conditions. This work provides an efficient and scalable paradigm for 3D face recognition by leveraging widely available face images, offering new insights into data-efficient training strategies for biometric systems. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Sensors, Aug. 2025, v. 25, no. 16, 5049 | -
dcterms.isPartOf | Sensors | -
dcterms.issued | 2025-08 | -
dc.identifier.scopus | 2-s2.0-105014289205 | -
dc.identifier.pmid | 40871909 | -
dc.identifier.eissn | 1424-8220 | -
dc.identifier.artn | 5049 | -
dc.description.validate | 202601 bcch | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.fundingSource | Others | en_US
dc.description.fundingText | This research was funded by the Natural Science Foundation of Shaanxi Province (2024JC-ZDXM-49). | en_US
dc.description.pubStatus | Published | en_US
dc.description.oaCategory | CC | en_US
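The abstract describes encoding reconstructed 3D geometry as normal component images before fine-tuning a 2D face recognition network. A minimal sketch of that general idea follows, assuming normals are estimated from a depth map by finite differences and their x/y/z components are packed into an 8-bit three-channel image; the function names and the toy spherical-cap input are illustrative, not the authors' implementation:

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map via finite differences."""
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # The (unnormalized) normal of a surface z = f(x, y) is (-df/dx, -df/dy, 1).
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float64)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

def encode_normal_image(normals):
    """Map normal components from [-1, 1] into an 8-bit three-channel image."""
    return np.rint((normals + 1.0) * 0.5 * 255).astype(np.uint8)

# Toy example: a spherical-cap depth map standing in for a face scan.
h, w = 64, 64
xx, yy = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
depth = np.sqrt(np.clip(1.0 - xx**2 - yy**2, 0.0, None))
nci = encode_normal_image(normals_from_depth(depth))
print(nci.shape, nci.dtype)  # (64, 64, 3) uint8
```

The resulting three-channel image can be fed to any pretrained 2D face recognition backbone, which is what makes this encoding attractive as a bridge between 3D geometry and 2D networks.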
Appears in Collections: Journal/Magazine Article
Files in This Item:
File | Size | Format
sensors-25-05049.pdf | 706.25 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Scopus™ citations: 1 (as of May 8, 2026)
Web of Science™ citations: 1 (as of Apr 23, 2026)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.