1. FEEDFORWARD NEURAL NETWORK
The input layer of the neural network is fed with the samples. These sample values are multiplied by the corresponding weights and added up. A bias is added to the resulting sum, and the sum is passed to an activation function. Each neuron in the network is associated with a threshold value, so a neuron is activated only if the generated value is greater than that threshold. A neuron checks for its activation through an activation function. There are various activation functions, such as the threshold function, the piecewise linear function, the linear function, the sigmoid function, and the hyperbolic tangent function. The output of the activation function is fed to the next layer of the neural network, and this process is carried out until the last layer is reached.
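To make the activation step concrete, here is a minimal sketch of some of these activation functions in Python with NumPy (the exact piecewise linear form varies between textbooks; the clipped version below is one common choice):

import numpy as np

def threshold(x):
    # Threshold (step) function: outputs 1 once the input exceeds 0.
    return np.where(x > 0, 1.0, 0.0)

def piecewise_linear(x):
    # Piecewise linear: linear inside [-1, 1], clipped outside.
    return np.clip(x, -1.0, 1.0)

def linear(x):
    # Linear (identity) function.
    return x

def sigmoid(x):
    # Sigmoid squashes any real value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic_tangent(x):
    # Hyperbolic tangent squashes any real value into the range (-1, 1).
    return np.tanh(x)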
Since the feedforward and backpropagation algorithms use labelled data, this is called supervised learning. The output of the last layer is compared with the target output, and the error for each neuron in the output layer is calculated.
FEEDFORWARD ALGORITHM
The input layer of the neural network is fed with samples. These sample values are multiplied by the corresponding weights and added up. The bias is added to the resulting sum, and this sum is passed to the activation function. The output of the activation function is fed to the next layer of the neural network. This process is carried out until the last layer is reached.
Neurons of one layer are connected to the neurons of the previous layer by edges. Each edge is associated with a corresponding weight, and each neuron is associated with a threshold value called the bias. At each neuron, every input value is multiplied by the weight of the edge connecting that neuron to a neuron of the previous layer, and these products are added up to produce the result. This result is passed through the sigmoid function, which is one of the activation functions.
In mathematical terms, a neuron may be described as:

Ni = Σj (wij * xj)

Output = Ø(Ni – αi)

Where,
xj represents the input signal,
wij represents the weight associated with the edge connecting neuron i of the present layer to neuron j of the previous layer,
Ni represents the sum of all weighted inputs,
αi is the threshold or bias associated with the corresponding neuron, and
Ø represents the activation function (here it is the sigmoid function).
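A minimal sketch of this single-neuron computation in Python (the numeric values below are made-up placeholders, not values from the post):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, 0.1, 0.9])    # input signals xj from the previous layer
w = np.array([0.4, -0.2, 0.7])   # weights wij of the connecting edges
alpha = 0.3                      # threshold (bias) αi of this neuron

Ni = np.dot(w, x)                # Ni = Σj (wij * xj)
output = sigmoid(Ni - alpha)     # Output = Ø(Ni – αi)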
This process is carried out at each neuron of each layer until the last layer of the neural network is reached. The output we get during the training process is called the actual output; it is compared with the target output, and backpropagation is carried out.
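Stacking this neuron computation layer by layer gives the whole feedforward pass. A rough sketch, assuming the weights are stored as one matrix per layer (all names and shapes here are illustrative):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedforward(sample, weights, biases):
    # Propagate one sample through every layer, keeping each layer's
    # activation so that backpropagation can reuse them later.
    activations = [np.asarray(sample, dtype=float)]
    for W, b in zip(weights, biases):
        # Weighted sum plus bias, passed through the activation function.
        activations.append(sigmoid(W @ activations[-1] + b))
    return activations  # activations[-1] is the actual output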
2. BACKPROPAGATION IN NEURAL NETWORK
The backpropagation algorithm is applied after the feedforward algorithm in order to propagate the errors in the direction opposite to the feedforward pass and adjust the weights to reduce that error. After the feedforward pass, the output of the neural network is compared with the target output. The difference between the expected output of a neuron and the actual output of the same neuron gives the error of that neuron.
The error is calculated at each neuron of each layer. This error is used to update the weights of the edges connecting the present layer and the previous layer. This error propagation is carried out until the first layer of the neural network is reached.
This can be defined in mathematical terms as follows.

Error calculation for output layer neurons:

OUTPUT_ERROR = (TARGET – ACTUAL) * sigmoid_derivative(ACTUAL)

Error calculation for hidden layer neurons, where the errors of the following layer are propagated back through the connecting weights:

HIDDEN_ERRORj = (Σk Wjk * ERRORk) * sigmoid_derivative(ACTUALj)
Updating the weights during backpropagation:

Wij = Wij + (LEARNING RATE * ERROR * INPUT)
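Putting the two error formulas and the weight update together, a backpropagation step might look roughly like this. It reuses the layout of the feedforward sketch above, and relies on the fact that the sigmoid's derivative can be written as y * (1 – y) in terms of its output y (all names are illustrative):

import numpy as np

def sigmoid_derivative(y):
    # Derivative of the sigmoid, written in terms of its output y.
    return y * (1.0 - y)

def backpropagate(activations, weights, biases, target, learning_rate):
    # Output layer error: (TARGET – ACTUAL) * sigmoid_derivative(ACTUAL).
    error = (target - activations[-1]) * sigmoid_derivative(activations[-1])
    for i in reversed(range(len(weights))):
        inputs = activations[i]
        if i > 0:
            # Hidden layer error, computed with the weights before the
            # update: errors propagated back through the connecting edges.
            hidden_error = (weights[i].T @ error) * sigmoid_derivative(inputs)
        # Weight update: Wij = Wij + (LEARNING RATE * ERROR * INPUT).
        weights[i] += learning_rate * np.outer(error, inputs)
        biases[i] += learning_rate * error
        if i > 0:
            error = hidden_error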
During the training period, the feedforward and backpropagation algorithms are applied iteratively until the error becomes minimal. Each iteration is called an epoch. The learning rate specifies the rate at which the neural network learns to classify the objects into their respective classes.
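A sketch of this overall training loop, reusing the feedforward and backpropagate functions sketched above (the XOR data, network sizes, epoch count, and learning rate are placeholders, not values from the post):

import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [2, 3, 1]   # example: 2 inputs, one hidden layer of 3, 1 output
weights = [rng.standard_normal((m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

# Toy labelled data (XOR); any labelled dataset would do.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

learning_rate = 0.5
for epoch in range(10000):            # each full pass over the data is one epoch
    for sample, target in zip(X, Y):
        activations = feedforward(sample, weights, biases)
        backpropagate(activations, weights, biases, target, learning_rate)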
When trained with plain backpropagation, deeper neural networks often converge to solutions that perform worse than those obtained for networks with 1 or 2 hidden layers. As the architecture gets deeper, it becomes more difficult to obtain good generalization using a deep NN.
In 2006, Hinton discovered that much better results could be achieved in deeper architectures when each layer (an RBM) is pre-trained with an unsupervised learning algorithm (Contrastive Divergence). The network can then be trained in a supervised way using backpropagation in order to "fine-tune" the weights.
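For completeness, here is a rough sketch of one Contrastive Divergence (CD-1) update for a single RBM layer, the unsupervised pre-training step described above. This is a textbook CD-1 step with illustrative names, not Hinton's exact procedure:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_visible, b_hidden, v0, learning_rate, rng):
    # Positive phase: hidden probabilities given a data vector v0.
    h0_prob = sigmoid(v0 @ W + b_hidden)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction of the input.
    v1 = sigmoid(h0 @ W.T + b_visible)
    h1_prob = sigmoid(v1 @ W + b_hidden)
    # Move the weights toward the data statistics and away from the
    # reconstruction's statistics.
    W += learning_rate * (np.outer(v0, h0_prob) - np.outer(v1, h1_prob))
    b_visible += learning_rate * (v0 - v1)
    b_hidden += learning_rate * (h0_prob - h1_prob)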