Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106997
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Zeng, X | en_US
dc.creator | Li, D | en_US
dc.creator | Zhang, Y | en_US
dc.creator | Lam, KM | en_US
dc.date.accessioned | 2024-06-07T00:59:31Z | -
dc.date.available | 2024-06-07T00:59:31Z | -
dc.identifier.isbn | 978-981-10-7301-4 | en_US
dc.identifier.isbn | 978-981-10-7302-1 (eBook) | en_US
dc.identifier.issn | 1865-0929 | en_US
dc.identifier.uri | http://hdl.handle.net/10397/106997 | -
dc.description | Second CCF Chinese Conference on Computer Vision, CCCV 2017, Tianjin, China, October 11-14, 2017 | en_US
dc.language.iso | en | en_US
dc.publisher | Springer | en_US
dc.rights | © Springer Nature Singapore Pte Ltd. 2017 | en_US
dc.rights | This version of the proceeding paper has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use (https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms), but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/978-981-10-7302-1_3. | en_US
dc.subject | 3D morphable model | en_US
dc.subject | 3DDFA | en_US
dc.subject | Dataset | en_US
dc.subject | Pore-scale facial features | en_US
dc.subject | PSIFT | en_US
dc.title | Pore-scale facial features matching under 3D morphable model constraint | en_US
dc.type | Conference Paper | en_US
dc.identifier.spage | 29 | en_US
dc.identifier.epage | 39 | en_US
dc.identifier.volume | 772 | en_US
dc.identifier.doi | 10.1007/978-981-10-7302-1_3 | en_US
dcterms.abstract | Similar to irises and fingerprints, pore-scale facial features are effective for distinguishing human identities. Recently, local feature extraction based on deep network architectures has been proposed, which requires a large dataset for training. However, no large database of pore-scale facial features exists, and building one is difficult because the images in existing high-resolution face databases are uncalibrated and unsynchronized, and human faces are non-rigid. To solve this problem, we propose a method to establish a large pore-to-pore correspondence dataset. We adopt the Pore Scale-Invariant Feature Transform (PSIFT) to extract pore-scale facial features from face images, and use 3D Dense Face Alignment (3DDFA) to obtain a fitted 3D morphable model that constrains the keypoint matching. Through our experiments, we establish a large pore-to-pore correspondence dataset containing 17,136 classes of matched pore-keypoint pairs. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Communications in computer and information science, 2017, v. 772, p. 29-39 | en_US
dcterms.isPartOf | Communications in computer and information science | en_US
dcterms.issued | 2017 | -
dc.identifier.scopus | 2-s2.0-85038004108 | -
dc.relation.conference | CCF Chinese Conference on Computer Vision [CCCV] | en_US
dc.identifier.eissn | 1865-0937 | en_US
dc.description.validate | 202405 bcch | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0775 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 9609900 | -
dc.description.oaCategory | Green (AAM) | en_US
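
Illustrative sketch of the matching pipeline described in the abstract above: PSIFT keypoints are extracted from high-resolution face images and their matches are constrained by a 3D morphable model fitted with 3DDFA. The code below is a rough, hypothetical approximation, not the authors' implementation: plain OpenCV SIFT stands in for PSIFT, and a RANSAC homography check stands in for the 3D-morphable-model constraint; the function name and image paths are placeholders.

import cv2
import numpy as np

def match_pore_keypoints(path_a, path_b, ratio=0.75):
    """Return geometrically consistent SIFT matches between two face crops.
    Plain SIFT and a RANSAC homography are simplifications of the PSIFT
    features and the 3DDFA-fitted 3D morphable model constraint in the paper."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kps_a, desc_a = sift.detectAndCompute(img_a, None)
    kps_b, desc_b = sift.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return []

    # Lowe's ratio test on 2-nearest-neighbour descriptor matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    if len(good) < 4:
        return good

    # Geometric pruning: keep matches consistent with a RANSAC homography
    # (the paper instead constrains matching with a fitted 3D morphable model).
    pts_a = np.float32([kps_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_b = np.float32([kps_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]

A dense, pore-level version of this idea would run per local patch on calibrated high-resolution images; the sketch only illustrates the detect, ratio-test, and geometric-pruning stages.
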
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Lam_Pore-Scale_Facial_Features.pdf | Pre-Published version | 3.42 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.