Title: Output sensitivity of MLPs derived from statistical expectation
Authors: Zeng, Xiaoqin
Keywords: Neural networks (Computer science); Hong Kong Polytechnic University -- Dissertations
Issue Date: 2002
Publisher: The Hong Kong Polytechnic University
Abstract: The sensitivity of a neural network's output to perturbation of its parameters is an important issue in the design and implementation of neural networks. What are the effects of parameter perturbation on a network's output? How can the degree of a network's response to parameter perturbation be measured? The objective of this dissertation is to analyse and quantify the sensitivity of the most popular and general feedforward neural network, the Multilayer Perceptron (MLP), to input and weight perturbations.
Based on the structural features of the MLP, a bottom-up approach is followed: the sensitivity of each neuron is computed in order from the first layer to the last, the results for the neurons in a layer are collected to form that layer's sensitivity, and the sensitivity of the output layer is taken as the sensitivity of the entire network.
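The bottom-up scheme described above can be illustrated with a small numerical sketch. This is not the dissertation's analytical derivation; it is a Monte Carlo estimate under assumed distributions (inputs uniform on [0, 1], perturbations uniform on symmetric intervals), and all names are mine:

```python
import math
import random

def neuron_output(x, w):
    # Logistic activation over the weighted sum, the usual MLP choice.
    s = sum(xi * wi for xi, wi in zip(x, w))
    return 1.0 / (1.0 + math.exp(-s))

def layer_output(x, layer_weights):
    # One neuron per weight vector in the layer.
    return [neuron_output(x, w) for w in layer_weights]

def network_sensitivity(weights, dw, dx, trials=2000, rng=None):
    """Monte Carlo estimate, layer by layer, of the expected absolute
    output deviation when inputs are perturbed within [-dx, dx] and
    weights within [-dw, dw]; inputs drawn uniformly from [0, 1].
    `weights` is a list of layers; each layer is a list of weight vectors.
    Returns one figure per layer; the last is the network sensitivity."""
    rng = rng or random.Random(0)
    n_in = len(weights[0][0])
    per_layer = [0.0] * len(weights)
    for _ in range(trials):
        x = [rng.uniform(0.0, 1.0) for _ in range(n_in)]
        xp = [xi + rng.uniform(-dx, dx) for xi in x]
        y, yp = x, xp
        for li, layer in enumerate(weights):
            layer_p = [[wi + rng.uniform(-dw, dw) for wi in w] for w in layer]
            y = layer_output(y, layer)        # unperturbed forward pass
            yp = layer_output(yp, layer_p)    # perturbed forward pass
            dev = sum(abs(a - b) for a, b in zip(y, yp)) / len(y)
            per_layer[li] += dev / trials
    return per_layer
```

Running this on a small two-layer net shows the qualitative behaviour the abstract reports: the estimated sensitivity grows with the perturbation magnitude.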
Sensitivity is defined as the mathematical expectation of the output deviation due to input and weight deviations, taken over all input and weight values in a given continuous interval. An analytical expression, a function of the input and weight deviations, is derived approximately for the sensitivity of a single neuron. Two algorithms are then presented to compute the sensitivity of an entire neural network. By analysing the derived analytical formula and executing one of the given algorithms, some significant observations on the behaviour of the MLP under input and weight perturbations are made, which can serve as guidelines in designing an MLP. As intuitively expected, the sensitivity increases with input and weight perturbations, but the increase has an upper bound determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal number of neurons per layer that yields the highest sensitivity value. The sensitivity decreases as the number of layers increases, and the decrease almost levels off when the number of layers becomes large.
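The single-neuron quantity defined above, the expectation of the output deviation over inputs and weights drawn from given intervals, can also be estimated numerically. A minimal sketch under assumed distributions (inputs uniform on [0, 1], weights uniform on [-1, 1]); the function name and interval choices are mine, not from the dissertation:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def neuron_sensitivity(n_inputs, dx, dw, trials=5000, seed=0):
    """Estimate E|y(X + dX, W + dW) - y(X, W)| for a single sigmoid
    neuron, with X uniform on [0, 1], W uniform on [-1, 1], and the
    perturbations dX, dW uniform on [-dx, dx] and [-dw, dw]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = [rng.uniform(0.0, 1.0) for _ in range(n_inputs)]
        w = [rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
        y = sigmoid(sum(a * b for a, b in zip(x, w)))
        yp = sigmoid(sum((a + rng.uniform(-dx, dx)) * (b + rng.uniform(-dw, dw))
                         for a, b in zip(x, w)))
        total += abs(yp - y)
    return total / trials
```

As the abstract's first observation predicts, the estimate grows with the perturbation magnitudes dx and dw.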
Similarly, a quantified sensitivity measure to input deviation is developed for a specific MLP with fixed weights and thus a fixed network architecture. Based on the derived analytical expressions, two algorithms are given for computing the sensitivity of a single neuron and the sensitivity of an entire neural network. The sensitivity measure is a useful means of evaluating a network's performance, such as its error tolerance and generalization capabilities.
The application of the sensitivity analysis to hardware design, and of the sensitivity measure to the selection of weights for a more robust MLP, are also discussed.
Description: viii, 98 leaves : ill. ; 30 cm.
PolyU Library Call No.: [THS] LG51 .H577P COMP 2002 Zeng
URI: http://hdl.handle.net/10397/1028
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
b16165949_link.htm (HTML, 179 B): For PolyU Users
b16165949_ir.pdf (Adobe PDF, 1.95 MB): For All Users (Non-printable)
Citations as of Oct 15, 2018