Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/106905
View/Download Full Text
DC Field | Value | Language
dc.contributor | Department of Electrical and Electronic Engineering | en_US
dc.creator | Animasahun, IO | en_US
dc.creator | Lam, KM | en_US
dc.date.accessioned | 2024-06-07T00:58:46Z | -
dc.date.available | 2024-06-07T00:58:46Z | -
dc.identifier.isbn | 978-1-5106-3835-8 | en_US
dc.identifier.isbn | 978-1-5106-3836-5 (electronic) | en_US
dc.identifier.issn | 0277-786X | en_US
dc.identifier.uri | http://hdl.handle.net/10397/106905 | -
dc.description | International Workshop on Advanced Imaging Technology (IWAIT) 2020, 5-7 January 2020, Yogyakarta, Indonesia | en_US
dc.language.iso | en | en_US
dc.publisher | SPIE - International Society for Optical Engineering | en_US
dc.rights | © (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited. | en_US
dc.rights | The following publication I. O. Animasahun and Kin-Man Lam "Deep residual convolutional neural network with curriculum learning for source camera identification", Proc. SPIE 11515, International Workshop on Advanced Imaging Technology (IWAIT) 2020, 1151533 (1 June 2020) is available at https://doi.org/10.1117/12.2566890. | en_US
dc.subject | Curriculum learning | en_US
dc.subject | Deep learning | en_US
dc.subject | Photo-response non-uniformity | en_US
dc.subject | Residual convolutional neural network | en_US
dc.subject | Source camera identification | en_US
dc.title | Deep residual convolutional neural network with curriculum learning for source camera identification | en_US
dc.type | Conference Paper | en_US
dc.identifier.volume | 11515 | en_US
dc.identifier.doi | 10.1117/12.2566890 | en_US
dcterms.abstract | Source camera identification is a fundamental area in forensic science, which deals with attributing a photo to the camera device that captured it. It provides useful information for further forensic analysis, and also for verifying evidential images, such as in child pornography cases. Source camera identification is a difficult task, especially in cases involving small-sized query images. Recently, many deep learning-based methods have been developed for camera identification by learning the camera processing pipeline directly from images of the camera under consideration. However, most of these methods achieve good accuracy in identifying camera models, but give less accurate results on individual, instance-based source camera identification. In this paper, we propose to train an accurate deep residual convolutional neural network (ResNet) with curriculum learning (CL) on preprocessed noise residues of camera images, so as to suppress contamination of the camera fingerprints and extract highly discriminative features for camera identification. The proposed ResNet consists of five convolutional layers and two fully connected layers with residual connections. For curriculum learning, we propose both a manual and an automatic curriculum learning algorithm. Furthermore, after training the proposed ResNet with CL, the flattened output of the last convolutional layer is extracted to form the deep features, which are then used to train one-vs-rest linear support vector machines for predicting the camera classes. Experimental results on 10 cameras from the Dresden database show the efficiency and accuracy of the proposed methods compared with existing state-of-the-art methods. | en_US
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Proceedings of SPIE : the International Society for Optical Engineering, 2020, v. 11515, 1151533 | en_US
dcterms.isPartOf | Proceedings of SPIE : the International Society for Optical Engineering | en_US
dcterms.issued | 2020 | -
dc.identifier.scopus | 2-s2.0-85086631139 | -
dc.relation.conference | International Workshop on Advanced Imaging Technology [IWAIT] | en_US
dc.identifier.eissn | 1996-756X | en_US
dc.identifier.artn | 1151533 | en_US
dc.description.validate | 202405 bcch | en_US
dc.description.oa | Accepted Manuscript | en_US
dc.identifier.FolderNumber | EIE-0195 | -
dc.description.fundingSource | Self-funded | en_US
dc.description.pubStatus | Published | en_US
dc.identifier.OPUS | 26683749 | -
dc.description.oaCategory | Green (AAM) | en_US
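
The abstract above describes the full pipeline: noise residues are extracted from the camera images, a small residual CNN (five convolutional layers plus two fully connected layers with residual connections) is trained on them with curriculum learning, and the flattened output of the last convolutional layer is then classified with one-vs-rest linear SVMs. Below is a minimal PyTorch sketch of such a network; the channel widths, kernel sizes, pooling, and the 64x64 input patch size are assumptions for illustration only, as this record does not specify them.

import torch
import torch.nn as nn

class ResidualCameraNet(nn.Module):
    """Hypothetical sketch: five conv layers and two FC layers with residual connections."""

    def __init__(self, num_cameras=10):
        super().__init__()
        # Stem convolution on a single-channel noise-residue patch (assumed 64x64).
        self.conv1 = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        # Two residual pairs, (conv2, conv3) and (conv4, conv5), each with an identity skip.
        self.conv2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32))
        self.conv4 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.conv5 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32))
        self.pool = nn.AdaptiveAvgPool2d(8)            # fix the spatial size before flattening
        self.relu = nn.ReLU()
        self.fc1 = nn.Linear(32 * 8 * 8, 256)          # two fully connected layers
        self.fc2 = nn.Linear(256, num_cameras)

    def features(self, x):
        # Flattened output of the last convolutional stage: the "deep features"
        # that are later classified with one-vs-rest linear SVMs.
        x = self.conv1(x)
        x = self.relu(self.conv3(self.conv2(x)) + x)   # residual connection 1
        x = self.relu(self.conv5(self.conv4(x)) + x)   # residual connection 2
        return torch.flatten(self.pool(x), 1)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(self.features(x))))

# Example: score a batch of four 64x64 noise-residue patches against 10 cameras.
net = ResidualCameraNet(num_cameras=10)
logits = net(torch.randn(4, 1, 64, 64))                # shape: (4, 10)

After curriculum-ordered training, the output of features() would be fed to one-vs-rest linear SVMs (for example scikit-learn's LinearSVC, which uses a one-vs-rest scheme by default) to predict the camera class. The curriculum itself, whether manual or automatic, would only change the order in which training patches are presented, typically from easy to hard.
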
Appears in Collections: Conference Paper
Files in This Item:
File | Description | Size | Format
Animasahun_Deep_Residual_Convolutional.pdf | Pre-Published version | 937.19 kB | Adobe PDF
Open Access Information
Status: open access
File Version: Final Accepted Manuscript

Page views: 71 (last week: 3), as of Nov 9, 2025
Downloads: 40, as of Nov 9, 2025

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.