Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/91469
DC Field | Value | Language
dc.contributor | Department of Electronic and Information Engineering | -
dc.creator | Liu, Y | -
dc.creator | Zou, Z | -
dc.creator | Yang, Y | -
dc.creator | Law, NFB | -
dc.creator | Bharath, AA | -
dc.date.accessioned | 2021-11-03T06:53:56Z | -
dc.date.available | 2021-11-03T06:53:56Z | -
dc.identifier.uri | http://hdl.handle.net/10397/91469 | -
dc.language.iso | en | en_US
dc.publisher | Molecular Diversity Preservation International (MDPI) | en_US
dc.rights | © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.rights | The following publication Liu, Y.; Zou, Z.; Yang, Y.; Law, N.-F.B.; Bharath, A.A. Efficient Source Camera Identification with Diversity-Enhanced Patch Selection and Deep Residual Prediction. Sensors 2021, 21, 4701 is available at https://doi.org/10.3390/s21144701 | en_US
dc.subject | Convolutional neural network | en_US
dc.subject | Deep learning | en_US
dc.subject | Image forensics | en_US
dc.subject | Imaging sensors | en_US
dc.subject | Source camera identification | en_US
dc.title | Efficient source camera identification with diversity-enhanced patch selection and deep residual prediction | en_US
dc.type | Journal/Magazine Article | en_US
dc.identifier.volume | 21 | -
dc.identifier.issue | 14 | -
dc.identifier.doi | 10.3390/s21144701 | -
dcterms.abstract | Source camera identification has long been a hot topic in the field of image forensics. Besides conventional feature-engineering algorithms based on studying the traces left during image capture, several deep-learning-based methods have also emerged recently. However, identification performance is susceptible to image content and is far from satisfactory for small image patches in demanding real-world applications. In this paper, an efficient patch-level source camera identification method is proposed based on a convolutional neural network. First, in order to obtain improved robustness with reduced training cost, representative patches are selected according to multiple criteria for enhanced diversity in training data. Second, a fine-grained multiscale deep residual prediction module is proposed to reduce the impact of scene content. Finally, a modified VGG network is proposed for source camera identification at brand, model, and instance levels. A more critical patch-level evaluation protocol is also proposed for fair performance comparison. Extensive experimental results show that the proposed method achieves better results than state-of-the-art algorithms. | -
dcterms.accessRights | open access | en_US
dcterms.bibliographicCitation | Sensors, July 2021, v. 21, no. 14, 4701 | -
dcterms.isPartOf | Sensors | -
dcterms.issued | 2021-07 | -
dc.identifier.scopus | 2-s2.0-85109331482 | -
dc.identifier.eissn | 1424-8220 | -
dc.identifier.artn | 4701 | -
dc.description.validate | 202110 bcvc | -
dc.description.oa | Version of Record | en_US
dc.identifier.FolderNumber | OA_Scopus/WOS | en_US
dc.description.pubStatus | Published | en_US
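The abstract above describes a diversity-enhanced, multi-criterion patch-selection step. As a rough illustration only (the paper's actual criteria, weights, and patch sizes are not given in this record), a minimal NumPy sketch that scores candidate patches by brightness quality and texture variance might look like:

```python
import numpy as np

def select_diverse_patches(image, patch_size=64, num_patches=8):
    """Score non-overlapping patches and keep the highest-scoring ones.

    Hypothetical criteria (the paper's exact criteria may differ):
    penalise saturated or overly dark patches, reward texture variance.
    """
    h, w = image.shape[:2]
    candidates = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            # Quality term: prefer mid-range brightness (avoid clipped patches)
            quality = 1.0 - abs(patch.mean() - 127.5) / 127.5
            # Texture term: normalised standard deviation as a diversity proxy
            texture = patch.std() / 127.5
            score = 0.5 * quality + 0.5 * texture
            candidates.append((score, (y, x)))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [pos for _, pos in candidates[:num_patches]]

# Toy usage on a synthetic 256x256 grayscale image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
coords = select_diverse_patches(img, patch_size=64, num_patches=4)
print(len(coords))  # 4
```

The returned coordinates would then feed the residual-prediction and classification stages; the criteria and weights here are placeholders chosen for readability, not the published method.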
Appears in Collections:Journal/Magazine Article
Files in This Item:
File | Description | Size | Format
sensors-21-04701-v2.pdf | | 20.54 MB | Adobe PDF
Open Access Information
Status: open access
File Version: Version of Record

Page views: 48 (last week: 0; as of Apr 21, 2024)
Downloads: 16 (as of Apr 21, 2024)
Scopus citations: 26 (as of Apr 26, 2024)
Web of Science citations: 20 (as of Apr 25, 2024)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.