Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/106905
| DC Field | Value | Language |
|---|---|---|
| dc.contributor | Department of Electrical and Electronic Engineering | en_US |
| dc.creator | Animasahun, IO | en_US |
| dc.creator | Lam, KM | en_US |
| dc.date.accessioned | 2024-06-07T00:58:46Z | - |
| dc.date.available | 2024-06-07T00:58:46Z | - |
| dc.identifier.isbn | 978-1-5106-3835-8 | en_US |
| dc.identifier.isbn | 978-1-5106-3836-5 (electronic) | en_US |
| dc.identifier.issn | 0277-786X | en_US |
| dc.identifier.uri | http://hdl.handle.net/10397/106905 | - |
| dc.description | International Workshop on Advanced Imaging Technology (IWAIT) 2020, 5-7 January 2020, Yogyakarta, Indonesia | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | SPIE - International Society for Optical Engineering | en_US |
| dc.rights | © (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this publication for a fee or for commercial purposes, and modification of the contents of the publication are prohibited. | en_US |
| dc.rights | The following publication I. O. Animasahun and Kin-Man Lam "Deep residual convolutional neural network with curriculum learning for source camera identification", Proc. SPIE 11515, International Workshop on Advanced Imaging Technology (IWAIT) 2020, 1151533 (1 June 2020) is available at https://doi.org/10.1117/12.2566890. | en_US |
| dc.subject | Curriculum learning | en_US |
| dc.subject | Deep learning | en_US |
| dc.subject | Photo-response non-uniformity | en_US |
| dc.subject | Residual convolutional neural network | en_US |
| dc.subject | Source camera identification | en_US |
| dc.title | Deep residual convolutional neural network with curriculum learning for source camera identification | en_US |
| dc.type | Conference Paper | en_US |
| dc.identifier.volume | 11515 | en_US |
| dc.identifier.doi | 10.1117/12.2566890 | en_US |
| dcterms.abstract | Source camera identification is a fundamental area in forensic science, which deals with attributing a photo to the camera device that has captured it. It provides useful information for further forensic analysis, and also in the verification of evidential images involving child pornography cases. Source camera identification is a difficult task, especially in cases involving small-sized query images. Recently, many deep learning-based methods have been developed for camera identification, by learning the camera processing pipeline directly from the images of the camera under consideration. However, most of the proposed methods have considerably good identification accuracy for identifying the camera models, but less accurate results on individual or instance-based source camera identification. In this paper, we propose to train an accurate deep residual convolutional neural network (ResNet), with the use of curriculum learning (CL) and preprocessed noise residues of camera images, so as to suppress contamination of camera fingerprints and extract highly discriminative features for camera identification. The proposed ResNet consists of five convolutional layers and two fully connected layers with residual connections. For the curriculum learning in this paper, we propose a manual and an automatic curriculum learning algorithm. Furthermore, after training the proposed ResNet with CL, the flattened output of the last convolutional layer is extracted to form the deep features, which are then used to learn one-vs-rest linear support vector machines for predicting the camera classes. Experimental results on 10 cameras from the Dresden database show the efficiency and accuracy of the proposed methods, when compared with some existing state-of-the-art methods. | en_US |
| dcterms.accessRights | open access | en_US |
| dcterms.bibliographicCitation | Proceedings of SPIE : the International Society for Optical Engineering, 2020, v. 11515, 1151533 | en_US |
| dcterms.isPartOf | Proceedings of SPIE : the International Society for Optical Engineering | en_US |
| dcterms.issued | 2020 | - |
| dc.identifier.scopus | 2-s2.0-85086631139 | - |
| dc.relation.conference | International Workshop on Advanced Imaging Technology [IWAIT] | en_US |
| dc.identifier.eissn | 1996-756X | en_US |
| dc.identifier.artn | 1151533 | en_US |
| dc.description.validate | 202405 bcch | en_US |
| dc.description.oa | Accepted Manuscript | en_US |
| dc.identifier.FolderNumber | EIE-0195 | - |
| dc.description.fundingSource | Self-funded | en_US |
| dc.description.pubStatus | Published | en_US |
| dc.identifier.OPUS | 26683749 | - |
| dc.description.oaCategory | Green (AAM) | en_US |
Appears in Collections: Conference Paper
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| Animasahun_Deep_Residual_Convolutional.pdf | Pre-Published version | 937.19 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.