Title: Steganalysis via deep residual network
Authors: Wu, ST
Zhong, SH
Liu, Y 
Keywords: Steganalysis
Convolutional neural network
Deep residual network
Residual learning
Issue Date: 2016
Publisher: Institute of Electrical and Electronics Engineers
Source: 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS), Dec 13-16, 2016, Wuhan, People's Republic of China, p. 1233-1236
Abstract: Recent studies have demonstrated that a well-designed deep convolutional neural network (CNN) model achieves competitive performance in detecting the presence of secret messages in digital images, compared with classical rich-model-based steganalysis. In this paper, we investigate a category of very deep CNN model, the deep residual network (DRN), for steganalysis. DRN is suitable for steganalysis in two respects. First, the DRN model usually contains a large number of network layers, which proves effective for capturing the complex statistics of digital images. Second, DRN's residual learning (ResL) method actively strengthens the signal coming from secret messages, which is extremely beneficial for discriminating between cover images and stego images. Comprehensive experiments on a standard dataset show that the DRN model achieves very low detection error rates for state-of-the-art steganographic algorithms. It also outperforms the classical rich model method and several recently proposed CNN-based methods.
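The residual-learning point in the abstract can be illustrated with a minimal sketch: a residual block computes y = F(x) + x, so the identity shortcut passes the faint perturbation introduced by message embedding through the block unattenuated. This is a generic toy illustration of the residual-block idea, not the paper's actual DRN architecture; all names and values below are hypothetical.

```python
import numpy as np

def residual_block(x, transform):
    # Residual learning: the block learns a residual mapping F(x)
    # and adds an identity shortcut, giving y = F(x) + x.
    return transform(x) + x

# Toy "cover" signal and a faint "stego" perturbation
# standing in for the weak signal left by message embedding.
cover = np.zeros(8)
stego = cover + 0.01

# A residual mapping near zero (e.g. freshly initialized layers).
f = lambda x: 0.0 * x

# The identity shortcut preserves the weak stego signal, so stacking
# many such blocks does not attenuate the cover/stego difference.
diff = residual_block(stego, f) - residual_block(cover, f)
```

Here `diff` equals the original 0.01 perturbation, which sketches why very deep residual stacks can remain sensitive to embedding artifacts that plain deep networks tend to wash out.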
Description: 22nd IEEE International Conference on Parallel and Distributed Systems (ICPADS), Wuhan, People's Republic of China, Dec 13-16, 2016
ISBN: 978-1-5090-4457-3
ISSN: 1521-9097
DOI: 10.1109/ICPADS.2016.165
Appears in Collections: Conference Paper