
SHUTTLE LANDING CONTROL USING NEURAL NETWORKS - PART 2

An extension of an example of a multivariate data type classification problem using the Neuroph framework.

 

by Vukmirović Igor, Faculty of Organizational Sciences, University of Belgrade

An experiment for the Intelligent Systems course

Introduction
Here you will see how different training parameters and different numbers of hidden neurons in the network affect its final results.

Impact of learning rate on number of iterations.
 

We are going to try different values for the learning rate and see how they affect the number of iterations. For the other parameters we are going to use the values that have proven to be most consistent, except for momentum, which will be 0 for the purposes of this experiment. A minimal code sketch of this setup follows the parameter list below.

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2
Hidden neurons: 6

Training Parameters: 
Momentum: 0
Max. Error: 0.01
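
Each training attempt in the table below can be reproduced with a few lines of Neuroph code. The following is only a minimal sketch against the Neuroph 2.x API: the input count (6 here) and the training-set file name are placeholders, since the configuration above does not state them, and class names differ slightly across Neuroph releases.

import org.neuroph.core.data.DataSet;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.MomentumBackpropagation;
import org.neuroph.util.TransferFunctionType;

public class LearningRateExperiment {

    public static void main(String[] args) {
        // Training set prepared in part 1 of this tutorial;
        // the file name here is only a placeholder.
        DataSet trainingSet = DataSet.load("shuttleLanding.tset");

        double[] learningRates = {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0};

        for (double learningRate : learningRates) {
            // A new network for every attempt, so each run starts from
            // fresh random weights. 6 inputs is an assumption; 6 hidden
            // neurons and 2 outputs match the parameter list above.
            MultiLayerPerceptron network =
                    new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 6, 6, 2);

            MomentumBackpropagation rule = new MomentumBackpropagation();
            rule.setLearningRate(learningRate);
            rule.setMomentum(0.0);    // momentum is fixed at 0 in this experiment
            rule.setMaxError(0.01);   // training stops below this total error
            network.setLearningRule(rule);

            network.learn(trainingSet);  // blocks until maxError is reached

            System.out.println("learning rate " + learningRate + ": "
                    + rule.getCurrentIteration() + " iterations");
        }
    }
}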

Training attempt    Learning rate    Number of iterations
1                   0.1              132
2                   0.2              94
3                   0.3              96
4                   0.4              101
5                   0.5              105
6                   0.6              106
7                   0.7              104
8                   0.8              99
9                   0.9              96
10                  1.0              93

[Chart: Impact of learning rate on number of iterations]

In most cases, training took 95 to 105 iterations. For a learning rate of 0.1 there were 132 iterations, the greatest number recorded in this experiment. For a learning rate of 1 there were just 93 iterations.

 

Impact of learning rate on total mean square error.

We are going to try different values for the learning rate and see how they affect the total mean square error. For the other parameters we are going to use the values that have proven to be most consistent, except for momentum, which will be 0 for the purposes of this experiment.

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2
Hidden neurons: 6

Training Parameters: 
Momentum: 0
Max. Error: 0.01

Training attempt    Learning rate    Total mean square error
1                   0.1              0.06861395544837
2                   0.2              0.09033748770713
3                   0.3              0.08106019624750
4                   0.4              0.06386512316945
5                   0.5              0.05287338695675
6                   0.6              0.04860949375386
7                   0.7              0.04861549680713
8                   0.8              0.05080276034260
9                   0.9              0.05348953267341
10                  1.0              0.05590211292437

[Chart: Impact of learning rate on total mean square error]

For a learning rate of 0.2 the total mean square error is greatest, at approximately 0.09. From 0.2 to 0.6 the total mean square error drops to its smallest value, approximately 0.048.
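
Continuing the hypothetical sketch from the first experiment, the total mean square error reported in this table is available from the learning rule once training finishes:

// After network.learn(trainingSet) returns, read the final
// total mean square error from the learning rule (Neuroph 2.x):
double totalError = rule.getTotalNetworkError();
System.out.println("total mean square error: " + totalError);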

Impact of momentum on number of iterations.

 

We are going to try different values for momentum and see how they affect the number of iterations. For the other parameters we are going to use the values that have proven to be most consistent; a sketch of the training loop follows the parameter list below:

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2
Hidden neurons: 6

Training Parameters: 
Learning Rate: 0.2
Max. Error: 0.01
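
This experiment changes only which parameter varies, so the loop differs from the earlier sketch in one line. The same assumptions hold (Neuroph 2.x API, placeholder input count, trainingSet loaded as before):

for (int i = 0; i <= 9; i++) {
    double momentum = i / 10.0;  // 0.0, 0.1, ..., 0.9

    MultiLayerPerceptron network =
            new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 6, 6, 2);

    MomentumBackpropagation rule = new MomentumBackpropagation();
    rule.setLearningRate(0.2);   // fixed, as in the parameter list above
    rule.setMomentum(momentum);
    rule.setMaxError(0.01);
    network.setLearningRule(rule);

    network.learn(trainingSet);
    System.out.println("momentum " + momentum + ": "
            + rule.getCurrentIteration() + " iterations");
}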

Training attempt    Momentum    Number of iterations
1                   0.0         92
2                   0.1         85
3                   0.2         80
4                   0.3         74
5                   0.4         71
6                   0.5         71
7                   0.6         73
8                   0.7         51
9                   0.8         47
10                  0.9         129

[Chart: Impact of momentum on number of iterations]

As you can see, there is a declining tendency for momentum from 0 to 0.5. For 0.8 the number of iterations is smallest (47), and for 0.9 it is greatest (129). The experiment was repeated, and the results were similar.

Impact of momentum on total mean square error.
We are going to try different values for momentum and see how they affect the total mean square error. For the other parameters we are going to use the values that have proven to be most consistent:

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2
Hidden neurons: 6

Training Parameters: 
Learning Rate: 0.2
Max. Error: 0.01

Training attempt    Momentum    Total mean square error
1                   0.0         0.09041595936371
2                   0.1         0.09515410915804
3                   0.2         0.10007560849024
4                   0.3         0.10373917729273
5                   0.4         0.10181380011508
6                   0.5         0.08784543199670
7                   0.6         0.06157708865990
8                   0.7         0.05066793032616
9                   0.8         0.05743022371770
10                  0.9         0.09059142601683

[Chart: Impact of momentum on total mean square error]

Most values for the total mean square error lie in the interval from 0.05 to 0.10. For momentum values above 0.7 the total mean square error rises again, sharply so at 0.9. The smallest total mean square error, approximately 0.05, is achieved for a momentum of 0.7.

Impact of hidden neurons on number of iterations.

Now we are going to test how different numbers of neurons in the hidden layer influence the number of iterations. For the other parameters we are going to use the values that have proven to be most consistent; a sketch of the loop follows the parameter list below:

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2

Training Parameters: 
Momentum: 0.7
Learning Rate: 0.7
Max. Error: 0.01
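
Varying the hidden layer size means rebuilding the network itself rather than just reconfiguring the learning rule. A sketch under the same assumptions as before (6 inputs is a placeholder):

for (int hiddenNeurons = 1; hiddenNeurons <= 10; hiddenNeurons++) {
    // The middle argument sets the hidden layer size for this attempt.
    MultiLayerPerceptron network =
            new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 6, hiddenNeurons, 2);

    MomentumBackpropagation rule = new MomentumBackpropagation();
    rule.setLearningRate(0.7);
    rule.setMomentum(0.7);
    rule.setMaxError(0.01);
    network.setLearningRule(rule);

    network.learn(trainingSet);
    System.out.println(hiddenNeurons + " hidden neurons: "
            + rule.getCurrentIteration() + " iterations, total error "
            + rule.getTotalNetworkError());
}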

Training attempt    Hidden neurons    Number of iterations
1                   1                 114
2                   2                 68
3                   3                 95
4                   4                 77
5                   5                 56
6                   6                 63
7                   7                 52
8                   8                 51
9                   9                 52
10                  10                59

[Chart: Impact of hidden neurons on number of iterations]

With just one hidden neuron there are 114 iterations; that number falls to 51 for 8 neurons.

Impact of hidden neurons on total mean square error.

Now we are going to test how different numbers of neurons in the hidden layer influence the total mean square error. For the other parameters we are going to use the values that have proven to be most consistent:

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2

Training Parameters: 
Momentum: 0.7
Learning Rate: 0.7
Max. Error: 0.01

Training attempt    Hidden neurons    Total mean square error
1                   1                 0.11741442039490
2                   2                 0.06186297553699
3                   3                 0.05767838071473
4                   4                 0.05173917618650
5                   5                 0.04748564562858
6                   6                 0.03993498463889
7                   7                 0.04696816748057
8                   8                 0.04839351412351
9                   9                 0.05088351944208
10                  10                0.04858326213179

[Chart: Impact of hidden neurons on total mean square error]

The highest total mean square error, approximately 0.117, is achieved with one neuron in the hidden layer. Adding 5 more neurons to the hidden layer reduces the mean square error to approximately 0.039, which is its lowest value.

Impact of maximum error on number of iterations.

We are going to try different values for the maximum error and see how they affect the number of iterations. For the other parameters we are going to use the values that have proven to be most consistent; a sketch follows the parameter list below:

Network Type: Multi Layer Perceptron 
Training Algorithm: Backpropagation with Momentum 
Number of inputs: 
Number of outputs: 2
Hidden neurons: 6

Training Parameters: 
Momentum: 0.7
Learning Rate: 0.7
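
Here the stopping threshold itself is what varies. A sketch under the same assumptions as the earlier examples:

double[] maxErrors = {0.05, 0.04, 0.03, 0.025, 0.02, 0.015, 0.01};

for (double maxError : maxErrors) {
    MultiLayerPerceptron network =
            new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 6, 6, 2);

    MomentumBackpropagation rule = new MomentumBackpropagation();
    rule.setLearningRate(0.7);
    rule.setMomentum(0.7);
    rule.setMaxError(maxError);  // the varying stop criterion
    network.setLearningRule(rule);

    network.learn(trainingSet);
    System.out.println("max error " + maxError + ": "
            + rule.getCurrentIteration() + " iterations");
}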

Training attempt    Maximum error    Number of iterations
1                   0.05             6
2                   0.04             7
3                   0.03             20
4                   0.025            32
5                   0.02             40
6                   0.015            44
7                   0.01             70

[Chart: Impact of maximum error on number of iterations]

This experiment showed that as we reduce the maximum error, the number of iterations rises significantly.


See also:
Multi Layer Perceptron Tutorial
