
SHUTTLE LANDING CONTROL USING NEURAL NETWORKS

An example of a multivariate data type classification problem using Neuroph framework

by Nikola Kosanovic, Faculty of Organizational Sciences, University of Belgrade

an experiment for Intelligent Systems course

Introduction
Neural networks have seen an explosion of interest over the last few years, and are being successfully applied across an extraordinary range of problem domains, in areas as diverse as finance, medicine, engineering, geology and physics.
In this experiment we will show how neural networks and Neuroph Studio are used for classification problems. Several architectures will be tried out, and we will determine which ones represent a good solution to the problem and which ones do not. Classification is a task that is often encountered in everyday life. A classification process involves assigning objects to predefined groups or classes based on a number of observed attributes of those objects. Although there are more traditional tools for classification, such as certain statistical procedures, neural networks have proven to be an effective solution for this type of problem. There are a number of advantages to using neural networks: they are data driven, they are self-adaptive, and they can approximate any function, linear as well as non-linear (which is quite important in this case, because classes often cannot be separated by linear functions). Neural networks classify objects rather simply: they take data as input, derive rules from those data, and make decisions.
Introduction to the problem
We will use the Neuroph framework to train a neural network on the Shuttle Landing Control data set.

The main goal of this experiment is to train a neural network to determine the conditions under which autolanding would be preferable to manual control of the spacecraft.

Attribute Information:

  1. Class: non-auto , auto (0,1)
  2. Stability: stab , xstab (1,2)
  3. Error: XL , LX , MM , SS (1,2,3,4)
  4. Sign: pp , nn (1,2)
  5. Wind: head , tail (1,2)
  6. Magnitude: low , medium , strong , out of range (1,2,3,4)
  7. Visibility: yes , no (1,2)
Procedure of training a neural network
In order to train a neural network, there are six steps to be made:
  1. Normalize the data
  2. Create a Neuroph project
  3. Create a training set
  4. Create a neural network
  5. Train the network
  6. Test the network to make sure that it is trained properly
Step 1. Data Normalization
In order to train the neural network, this data set has to be normalized. Normalization means that all values from the data set are scaled into the range from 0 to 1.
For that purpose the following formula is used:

Xn = (X - Xmin) / (Xmax - Xmin)

Where:

X – value that should be normalized
Xn – normalized value
Xmin – minimum value of X
Xmax – maximum value of X

The last 2 values of each row represent the class: 1 0 stands for non-auto (advise using manual control), and 0 1 for auto (advise using automatic control).
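The normalization step above can be sketched in plain Java; the attribute ranges come from the attribute list, and the class and method names are only illustrative:

```java
// Min-max normalization: maps a raw coded attribute value into [0, 1].
class Normalize {
    static double normalize(double x, double xMin, double xMax) {
        return (x - xMin) / (xMax - xMin);
    }

    public static void main(String[] args) {
        // Example: the Error attribute is coded 1..4, so the code 3 (MM)
        // normalizes to (3 - 1) / (4 - 1), roughly 0.667.
        System.out.println(normalize(3.0, 1.0, 4.0));
    }
}
```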
Step 2. Creating a new Neuroph project

We create a new project in Neuroph Studio by clicking File > New Project; then we choose Neuroph project and click the 'Next' button.


In the next window we define the project name and location. After we click 'Finish', the new project is created and appears in the Projects window on the left side of Neuroph Studio.


Step 3. Creating a Training Set
To create a training set, we choose Training > New Training Set in the main menu to open the training set wizard. Then we enter the name of the training set and the number of inputs and outputs. In this case there will be 6 inputs and 2 outputs, and we will set the type of training to supervised, as the most common way of training a neural network.

After clicking 'Next' we need to insert data into the training set table. A training set can be created in two ways. You can either create it by entering elements as input and desired output values of neurons in the input and output fields,

or you can create training set by choosing an option load file.

The first method of data entry is time consuming, and there is also a risk of making a mistake while entering data. Since we already have the data in a file, we will choose the second way. Click 'Choose File' and find the file containing the Shuttle Landing Control data set. Then select tab as the values separator, since in our case the values are separated by tabs; in other data sets the values may be separated differently.
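As a sketch of what 'tab as values separator' means, a single row of the file can be split into 6 input values and 2 desired output values like this (the class and method names are illustrative, not part of Neuroph):

```java
import java.util.Arrays;

// Splits one tab-separated line of the data set into
// 6 input values and 2 desired output values.
class ParseRow {
    static double[][] parseRow(String line) {
        String[] parts = line.split("\t");
        double[] inputs = new double[6];
        double[] outputs = new double[2];
        for (int i = 0; i < 6; i++) inputs[i] = Double.parseDouble(parts[i]);
        for (int i = 0; i < 2; i++) outputs[i] = Double.parseDouble(parts[6 + i]);
        return new double[][]{inputs, outputs};
    }

    public static void main(String[] args) {
        double[][] row = parseRow("0.25\t0.75\t0.50\t0.50\t1.00\t0.25\t1.00\t0.00");
        System.out.println(Arrays.toString(row[0]) + " -> " + Arrays.toString(row[1]));
    }
}
```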

Then we click 'Load' and all the data is loaded into the table. We can see that this table has 8 columns: the first 6 represent inputs and the last 2 represent outputs from our data set. After clicking 'Finish', the new training set appears in our project.

To be able to decide which is the best solution for our problem, we will create several neural networks with different sets of parameters, most of them based on this training set.

Training attempt 1
Step 1.1 Creating a neural network
We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type. We will choose the 'Multi Layer Perceptron' type.

A multilayer perceptron is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. It consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron with a nonlinear activation function. A multilayer perceptron utilizes a supervised learning technique called backpropagation for training the network. It is a modification of the standard linear perceptron and can distinguish data that is not linearly separable.

In the new Multi Layer Perceptron dialog we enter the number of neurons. The number of input and output units is defined by the problem, so we enter 6 input neurons and 2 output neurons. The right number of hidden units is far less clear. If too few hidden neurons are used, the network will be unable to model complex data, resulting in a poor fit. If too many hidden neurons are used, training becomes excessively long and the network may overfit. How about the number of hidden layers? For most problems one hidden layer is sufficient, so we will choose one hidden layer. The goal is to quickly find the smallest network that converges and then refine the answer by working back from there. Because of that, we will start with 1 hidden neuron; if the network fails to converge after a reasonable period, we will restart training up to ten times, thus ensuring that it has not fallen into a local minimum. If the network still fails to converge, we will add another hidden neuron and repeat the procedure. Further, we check the option 'Use Bias Neuron'. Bias neurons are added to neural networks to help them learn patterns. A bias neuron is simply a neuron with a constant output of 1; because of this constant output, it has no connections from the previous layer. The bias activation can be set to values other than 1, but 1 is the most common choice. If the values in your data set lie in the interval between -1 and 1, choose the Tanh transfer function. In our data set the values lie between 0 and 1, so we use the Sigmoid transfer function. As the learning rule we choose Backpropagation With Momentum, which shows a much higher rate of convergence than the plain Backpropagation algorithm.
Choose the Dynamic Backpropagation algorithm if you have to train a dynamic neural network, which contains both feedforward and feedback connections between the neural layers.
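To make the chosen architecture concrete, here is a minimal plain-Java sketch of the forward pass of a 6-1-2 network with sigmoid activations and bias terms. The weights below are arbitrary placeholders, not values produced by Neuroph:

```java
class ForwardPass {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // 6 inputs -> 1 hidden sigmoid neuron -> 2 sigmoid outputs,
    // with a bias term feeding the hidden and output neurons.
    static double[] forward(double[] in, double[] wHidden, double biasH,
                            double[][] wOut, double[] biasO) {
        double hSum = biasH;
        for (int i = 0; i < in.length; i++) hSum += wHidden[i] * in[i];
        double h = sigmoid(hSum);
        double[] out = new double[2];
        for (int j = 0; j < 2; j++) out[j] = sigmoid(wOut[j][0] * h + biasO[j]);
        return out;
    }

    public static void main(String[] args) {
        double[] in = {0.25, 0.75, 0.50, 0.50, 1.00, 0.25};   // one normalized row
        double[] wH = {0.1, -0.2, 0.3, 0.1, -0.1, 0.2};       // placeholder weights
        double[][] wO = {{0.5}, {-0.5}};
        double[] out = forward(in, wH, 0.1, wO, new double[]{0.0, 0.0});
        System.out.printf("non-auto=%.4f auto=%.4f%n", out[0], out[1]);
    }
}
```

Because every activation is a sigmoid, both outputs always land strictly between 0 and 1, which is why the test results later in this experiment read as values like 0.5728 rather than exact 0s and 1s.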

Next, we click 'Finish' and the first neural network is created. In the picture below we can see the graph view of this neural network.

Step 1.2 Train the neural network

After we have created the training set and the neural network, we can train the network. First, we select the training set and click 'Train'; then we have to set the learning parameters for training. We will train the network with 70% of all data.

The next thing we should do is determine the values of the learning parameters: learning rate and momentum. The learning rate is one of the parameters that govern how fast a neural network learns and how effective the training is. Let us assume that the weight of some synapse in the partially trained network is 0.2. When the network is presented with a new training sample, the training algorithm demands that the synapse change its weight to, say, 0.7 so that it can learn the new sample appropriately. If we update the weight straightaway, the neural network will definitely learn the new sample, but it will tend to forget all the samples it had learnt previously, because the current weight (0.2) is the result of all the learning it has undergone so far. So we do not change the weight directly to 0.7; instead, we increase it by a fraction (say 20%) of the required change, so the weight of the synapse becomes 0.3, and we move on to the next training sample. Proceeding this way, all the training samples are presented in some random order. The learning rate is a value ranging from zero to one. Choosing a value very close to zero requires a large number of training cycles and makes the training process extremely slow. On the other hand, if the learning rate is very large, the weights diverge, the objective error function oscillates heavily, and the network reaches a state where no useful training takes place. The momentum parameter is used to prevent the system from converging to a local minimum or saddle point. A high momentum parameter can also help to increase the speed of convergence; however, setting it too high creates a risk of overshooting the minimum, which can make the system unstable. A momentum coefficient that is too low cannot reliably avoid local minima and can also slow down training.
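The weight-update behaviour described above can be sketched as the rule used by backpropagation with momentum. This is a minimal illustration of the rule, not the Neuroph implementation:

```java
class MomentumUpdate {
    // new weight = w + learningRate * requiredChange + momentum * previousChange
    static double update(double w, double requiredChange,
                         double learningRate, double momentum, double previousChange) {
        return w + learningRate * requiredChange + momentum * previousChange;
    }

    public static void main(String[] args) {
        // The example from the text: weight 0.2 should move towards 0.7
        // (required change 0.5); with learning rate 0.2 and no previous
        // change, the weight moves to 0.3 rather than jumping to 0.7.
        System.out.println(update(0.2, 0.5, 0.2, 0.2, 0.0));
    }
}
```

The momentum term adds a fraction of the previous change to each update, which smooths the trajectory of the weights and can carry them past shallow local minima.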

In this first case, the maximum error will be 0.05, the learning rate 0.2 and the momentum 0.2.

Then we click on the 'Train' button and the training process starts.

After 13 iterations the Total Net Error dropped below the specified level of 0.05, which means that the training process was successful and we can now test this neural network.

Step 1.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.37667493535689267. That is a poor result.
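The reported Total Mean Square Error can be understood as the average squared difference between the network outputs and the desired outputs over the test rows. The exact averaging Neuroph uses may differ slightly, so this is only a sketch:

```java
class TestError {
    // Mean squared error over all outputs of all test rows.
    static double totalMse(double[][] predicted, double[][] desired) {
        double sum = 0;
        int count = 0;
        for (int r = 0; r < predicted.length; r++) {
            for (int c = 0; c < predicted[r].length; c++) {
                double e = predicted[r][c] - desired[r][c];
                sum += e * e;
                count++;
            }
        }
        return sum / count;
    }

    public static void main(String[] args) {
        double[][] pred = {{0.5728, 0.4927}, {0.2158, 0.7632}};
        double[][] want = {{1.00, 0.00}, {1.00, 0.00}};
        System.out.println(totalMse(pred, want));
    }
}
```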

The final part of testing this network is testing it with several input values. To do that, we will select 4 random input rows from our data set. Those are:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.75 0.50 0.50 1.00 0.25 1.00 0.00
2 0.25 1.00 0.25 0.50 1.00 0.25 1.00 0.00
3 0.50 0.75 0.25 0.50 1.00 0.25 1.00 0.00
4 0.50 0.25 0.25 0.50 1.00 0.50 0.00 1.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.5728 0.4927
2 0.2158 0.7632
3 0.7449 0.2494
4 0.8643 0.1276

good guess with small error
good guess with big error
bad guess

This solution is not good: the network guessed correctly 3 of 4 times, but always with a big error. So we are going to try something else.
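The labels used above ('good guess with small error', 'good guess with big error', 'bad guess') can be made precise with a small rule: the predicted class is the larger of the two outputs, and a guess counts as having a big error when the winning output is far from its desired value. The 0.2 threshold below is an assumption for illustration, not a value taken from the experiment:

```java
class GuessQuality {
    // Returns "bad guess", "good guess with big error",
    // or "good guess with small error".
    // The 0.2 error threshold is an illustrative assumption.
    static String judge(double[] out, double[] desired) {
        int predicted = out[0] >= out[1] ? 0 : 1;
        int actual = desired[0] >= desired[1] ? 0 : 1;
        if (predicted != actual) return "bad guess";
        double err = Math.abs(out[actual] - desired[actual]);
        return err > 0.2 ? "good guess with big error" : "good guess with small error";
    }

    public static void main(String[] args) {
        // Rows 1 and 2 from the table above, desired class non-auto (1 0).
        System.out.println(judge(new double[]{0.5728, 0.4927}, new double[]{1, 0}));
        System.out.println(judge(new double[]{0.2158, 0.7632}, new double[]{1, 0}));
    }
}
```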

Training attempt 2
Step 2.1 Train the neural network
Let us try something else. The network stays the same, but we will tighten the stopping criterion by decreasing the max error to 0.025. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 47 iterations the Total Net Error dropped below the specified level of 0.025, which means that the training process was successful and we can now test this neural network.

Step 2.2 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.2454602805269903. That is a better result than in training attempt 1, but it is not good yet.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.75 0.25 0.50 0.50 0.25 0.00 1.00
2 0.25 0.50 0.25 0.25 0.50 0.50 0.00 1.00
3 0.25 1.00 0.25 0.25 1.00 0.25 1.00 0.00
4 0.50 1.00 0.25 0.50 0.25 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.2805 0.7192
2 0.0563 0.9435
3 0.1366 0.8631
4 0.7292 0.2709

good guess with small error
good guess with big error
bad guess

This solution is better: the network again guessed correctly 3 of 4 times, once with a small error, but it is still not good. So we are going to try something else.

Training attempt 3
Step 3.1 Train the neural network
Let us try something else. The network stays the same, but we will decrease the max error to 0.01, set the learning rate to 0.2 and the momentum to 0.5. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 75 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 3.2 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.10233255873760677. This is much better.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.25 0.50 0.25 1.00 0.25 1.00 0.00
2 0.25 0.25 0.25 0.25 0.25 0.50 0.00 1.00
3 0.25 1.00 0.50 0.25 1.00 0.25 1.00 0.00
4 0.50 0.25 0.50 0.25 0.75 0.50 0.00 1.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9883 0.0117
2 0.0055 0.9945
3 0.3448 0.6553
4 0.8249 0.1750

good guess with small error
good guess with big error
bad guess

This solution is better in one respect: the network guessed correctly only 2 of 4 times, but each time with a small error. So we are going to try something else to get a better result.

Training attempt 4
Step 4.1 Train the neural network
Let us try something else. The network stays the same, but we will set the learning rate to 0.5 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 74 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 4.2 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.129965506993632762. But when we look at all the results, we see that many individual errors are below 0.02.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.50 0.25 0.25 0.50 0.75 0.50 0.00 1.00
2 0.50 0.75 0.50 0.25 1.00 0.25 1.00 0.00
3 0.25 0.25 0.50 0.50 0.50 0.50 0.00 1.00
4 0.50 1.00 0.25 0.50 0.75 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9836 0.0164
2 0.9918 0.0082
3 0.016 0.984
4 0.9917 0.0083

good guess with small error
good guess with big error
bad guess

The network guessed correctly 3 of 4 times, each time with a small error, but there are still some big errors among the results, so we will try to get rid of them and obtain a better result.

Training attempt 5
Step 5.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 2 hidden neurons. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 2 hidden neurons.

Next, we click 'Finish' and the new neural network is created. In the picture below we can see the graph view of this neural network.

Step 5.2 Train the neural network
We will set the learning rate to 0.5 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 73 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 5.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.12006169644017126. It is slightly better.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 1.0 0.50 0.25 1.00 0.25 1.00 0.00
2 0.25 0.50 0.25 0.25 0.50 0.50 0.00 1.00
3 0.50 0.25 0.50 0.25 1.00 0.50 0.00 1.00
4 0.50 0.75 0.50 0.25 0.25 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.7587 0.2412
2 0.013 0.985
3 0.9745 0.0245
4 0.9956 0.0043

good guess with small error
good guess with big error
bad guess

There are still some big errors, but when we look at all the errors we see that many are below 0.01, so this solution could be used for this problem. However, some errors are above 0.5, so we will try to get rid of them and obtain a better result.

Training attempt 6
Step 6.1 Train the neural network
We will set the learning rate to 0.7 and the momentum to 0.5. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 82 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 6.2 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.09857284722137614. It is slightly better.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.25 0.25 0.50 1.00 0.25 1.00 0.00
2 0.25 0.25 0.25 0.25 0.25 0.50 0.00 1.00
3 0.50 1.00 0.25 0.25 0.25 0.50 0.00 1.00
4 0.50 0.50 0.50 0.50 0.50 0.50 0.00 1.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9956 0.0040
2 0.0022 0.9975
3 0.0023 0.9974
4 0.0050 0.9944

good guess with small error
good guess with big error
bad guess

The network guessed correctly 4 of 4 times, each time with a small error. When we look at all the errors we see that many are below 0.01, so this solution could be used for this problem, but there are still some errors above 0.5 that we would like to get rid of. So we will try something else.

Training attempt 7
Step 7.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 4 hidden neurons. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 4 hidden neurons.

Next, we click 'Finish' and the new neural network is created. In the picture below we can see the graph view of this neural network.

Step 7.2 Train the neural network
We will set the learning rate to 0.6 and the momentum to 0.5. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 55 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 7.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.04999265153032986. This is the best result so far.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.50 0.75 0.50 0.50 0.75 0.25 1.00 0.00
2 0.25 0.50 0.25 0.25 0.25 0.50 0.00 1.00
3 0.25 1.00 0.25 0.25 1.00 0.25 1.00 0.00
4 0.50 0.75 0.50 0.25 1.00 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9924 0.0080
2 0.0003 0.9997
3 0.9125 0.0874
4 0.9934 0.0062

good guess with small error
good guess with big error
bad guess

When we look at all the errors we can see that many are below 0.01 and only a few are above 0.1, so this solution could be used for this problem. But we are going to try to find a better one.

Training attempt 8
Step 8.1 Train the neural network
We will use the same network and set the learning rate to 0.2 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 102 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 8.2 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.05112283494859043.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.50 0.75 0.50 0.25 0.50 0.25 1.00 0.00
2 0.25 0.75 0.25 0.25 1.00 0.25 1.00 0.00
3 0.50 0.25 0.50 0.25 0.75 0.50 0.00 1.00
4 0.25 0.75 0.50 0.25 1.00 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9993 0.0008
2 0.9521 0.0543
3 0.3345 0.6654
4 0.9662 0.0337

good guess with small error
good guess with big error
bad guess

The result of this network is similar to that of the network from attempt 7, so this network can also be used.

Training attempt 9
Step 9.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 5 hidden neurons. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 5 hidden neurons.

Next, we click 'Finish' and the new neural network is created. In the picture below we can see the graph view of this neural network.

Step 9.2 Train the neural network
We will set the learning rate to 0.5 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 44 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 9.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.04517436605979505. This is the best result so far.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.25 0.25 0.25 0.25 0.25 1.00 0.00
2 0.50 0.50 0.50 0.50 0.50 0.50 0.00 1.00
3 0.25 1.00 0.25 0.50 1.00 0.50 0.00 1.00
4 0.50 0.25 0.50 0.25 0.75 0.25 1.00 0.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9698 0.0306
2 0.0097 0.9909
3 0.0016 0.0874
4 0.9603 0.0381

good guess with small error
good guess with big error
bad guess

When we look at all the errors we can see that many are below 0.01 and only a few are above 0.1, so this solution could be used for this problem. But we are going to try to find a better one.

Training attempt 10
Step 10.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 6 hidden neurons. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 6 hidden neurons.

Next, we click 'Finish' and the new neural network is created. In the picture below we can see the graph view of this neural network.

Step 10.2 Train the neural network
We will set the learning rate to 0.2 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 89 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 10.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.04644489094460792. This is almost as good as the previous attempt.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.25 0.75 0.50 0.25 1.00 0.25 1.00 0.00
2 0.25 1.00 0.25 0.25 1.00 0.25 1.00 0.00
3 0.50 0.75 0.25 0.50 1.00 0.25 1.00 0.00
4 0.25 0.25 0.50 0.50 0.50 0.50 0.00 1.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9245 0.0753
2 0.9855 0.0143
3 0.8958 0.1040
4 0.0058 0.9942

good guess with small error
good guess with big error
bad guess

When we look at all the errors we can see that many are below 0.01 and only a few are above 0.1, so this solution could be used for this problem. But we are going to try to find a better one.

Training attempt 11
Step 11.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 8 hidden neurons. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 8 hidden neurons.

Next, we click 'Finish' and the new neural network is created. In the picture below we can see the graph view of this neural network.

Step 11.2 Train the neural network
We will set the learning rate to 0.2 and the momentum to 0.7. In the network window we click the 'Randomize' button and then the 'Train' button. We will train the network with 70% of all data.

                

After 66 iterations the Total Net Error dropped below the specified level of 0.01, which means that the training process was successful and we can now test this neural network.

Step 11.3 Test the neural network

We will test the network with the remaining 30% of the data, the part of the data set that was not used for training, so we can see whether the network generalizes the problem. We test the neural network by clicking the 'Test' button, and then we can see the testing results. The Total Mean Square Error is 0.050471755388395885. This is similar to the best previous results.

Now we test the network with 4 random input values:
Test number Stability Error Sign Wind Magnitude Visibility Non-auto Auto
1 0.50 0.75 0.50 0.50 0.50 0.25 1.00 0.00
2 0.50 0.25 0.50 0.25 0.50 0.50 0.00 1.00
3 0.50 1.00 0.25 0.25 1.00 0.25 1.00 0.00
4 0.25 1.00 0.25 0.25 0.50 0.25 0.00 1.00

We test the network with each of these inputs. The result is:

Test number Non-auto Auto
1 0.9968 0.0030
2 0.2569 0.7632
3 0.9984 0.0015
4 0.0568 0.9433

good guess with small error
good guess with big error
bad guess

When we look at all the errors we can see that many are below 0.01 and only a few are above 0.1, so this solution could be used for this problem. But we are going to try to find a better one.

Training attempt 12
Step 12.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden layers. We will create a network with two hidden layers. We create a new neural network by right-clicking the project and choosing New > Neural Network. Then we define the neural network name and type, and choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog we enter the number of neurons: the problem defines 6 input neurons and 2 output neurons, and here we use 4 hidden neurons arranged in two layers.

Next, we click 'Finish' and the new neural network is created. The picture below shows the graph view of this neural network.
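One way to see how small this two-hidden-layer network is, is to count its trainable parameters (weights plus one bias per neuron) for the given layer sizes. A quick sketch:

```java
public class ParameterCount {

    /** Number of trainable parameters (weights + biases) in a fully
     *  connected feed-forward network with the given layer sizes. */
    static int parameters(int... layerSizes) {
        int total = 0;
        for (int i = 1; i < layerSizes.length; i++) {
            total += layerSizes[i - 1] * layerSizes[i]  // weights into layer i
                   + layerSizes[i];                     // one bias per neuron
        }
        return total;
    }

    public static void main(String[] args) {
        // 6 inputs, two hidden layers of 2 neurons each, 2 outputs
        System.out.println(parameters(6, 2, 2, 2));  // (6*2+2) + (2*2+2) + (2*2+2) = 26
    }
}
```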

Step 12.2 Train the neural network
We will set the learning rate to 0.4 and the momentum to 0.6. In the network window, click the Randomize button and then the Train button. We will train the network with 70% of all data.
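The 70/30 split means the data set rows are divided into a training part and a held-out testing part. A sketch of a simple shuffle-and-cut split (illustrative only; not how Neuroph Studio performs the split internally):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class TrainTestSplit {

    /** Shuffles the rows and returns {trainingRows, testRows},
     *  with the given fraction (0.7 here) used for training. */
    static List<List<double[]>> split(List<double[]> rows, double trainFraction, long seed) {
        List<double[]> shuffled = new ArrayList<>(rows);
        Collections.shuffle(shuffled, new Random(seed));   // avoid ordering bias
        int cut = (int) Math.round(shuffled.size() * trainFraction);
        List<List<double[]>> parts = new ArrayList<>();
        parts.add(shuffled.subList(0, cut));
        parts.add(shuffled.subList(cut, shuffled.size()));
        return parts;
    }

    public static void main(String[] args) {
        List<double[]> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(new double[]{i});  // 10 dummy rows
        List<List<double[]>> parts = split(rows, 0.7, 42L);
        System.out.println(parts.get(0).size() + " training rows, "
                + parts.get(1).size() + " test rows");  // 7 training rows, 3 test rows
    }
}
```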


After 62 iterations, the Total Net Error dropped below the specified level of 0.01, which means the training was successful and we can now test this neural network.

Step 12.3 Test the neural network

We will test the network with the remaining 30% of the data, which was not used in training, so we can see how well the network generalizes the problem. Test the neural network by clicking the 'Test' button to see the testing results. The Total Mean Square Error is 0.02299459345270722, which is the best result so far.

Now we test with random values. We use 4 random input combinations:
Test number  Stability  Error  Sign  Wind  Magnitude  Visibility  Non-auto  Auto
1 0.50 1.00 0.25 0.25 0.75 0.50 0.00 1.00
2 0.25 0.75 0.25 0.25 0.75 0.25 1.00 0.00
3 0.25 0.50 0.50 0.50 1.00 0.25 1.00 0.00
4 0.25 0.75 0.50 0.50 0.25 0.25 1.00 0.00

We test the network with each of these inputs. The results are:

Test number Non-auto Auto
1 0.0785 0.9314
2 0.8555 0.1564
3 0.9001 0.0997
4 0.9864 0.0154

Result legend: good guess with small error | good guess with big error | bad guess

Looking at all the errors, we can see that most are below 0.01 and only a few exceed 0.1, so this solution can be used for this problem. But we are going to try to find a better one.

Training attempt 13
Step 13.1 Creating a neural network

Now we are going to try a different architecture by changing the number of hidden neurons. We will create a network with 5 hidden neurons arranged in two layers. We create a new neural network by right-clicking the project and then New > Neural Network. Then we define the neural network name and type. We will choose the 'Multi Layer Perceptron' type.

In the new Multi Layer Perceptron dialog, enter the number of neurons. The number of input and output units is defined by the problem, so enter 6 input neurons, 2 output neurons, and 5 hidden neurons in two layers (3 + 2).

Next, we click 'Finish' and the new neural network is created. The picture below shows the graph view of this neural network.

Step 13.2 Train the neural network
We will set the learning rate to 0.4 and the momentum to 0.7. In the network window, click the Randomize button and then the Train button. We will train the network with 70% of all data.


After only 17 iterations, the Total Net Error dropped below the specified level of 0.01, which means the training was successful and we can now test this neural network.

Step 13.3 Test the neural network

We will test the network with the remaining 30% of the data, which was not used in training, so we can see how well the network generalizes the problem. Test the neural network by clicking the 'Test' button to see the testing results. The Total Mean Square Error is 0.023684957908283455, nearly as good as the previous attempt's result.

Now we test with random values. We use 4 random input combinations:
Test number  Stability  Error  Sign  Wind  Magnitude  Visibility  Non-auto  Auto
1 0.50 1.00 0.25 0.25 1.00 0.25 1.00 0.00
2 0.50 0.50 0.50 0.25 0.50 0.25 0.00 1.00
3 0.25 0.75 0.50 0.50 0.50 0.50 0.00 1.00
4 0.50 1.00 0.25 0.25 0.50 0.25 1.00 0.00

We test the network with each of these inputs. The results are:

Test number Non-auto Auto
1 0.9928 0.0067
2 0.1024 0.8985
3 0.0435 0.9564
4 0.9491 0.0517

Result legend: good guess with small error | good guess with big error | bad guess

Looking at all the errors, we can see that most are below 0.01 and only a few exceed 0.1, so this solution can be used for this problem.

Conclusion

During this experiment, we have created several different architectures of neural networks. We wanted to find out what is the most important thing to do during the neural network training in order to get the best results.

The thirteen solutions tested in this experiment have shown that the choice of the number of hidden neurons is crucial to the effectiveness of a neural network. We have concluded that one layer of hidden neurons is enough in this case. The experiment also showed that the success of a neural network is very sensitive to the parameters chosen in the training process: the learning rate must not be too high, and the maximum error must not be too low. Finally, the results have shown that the total mean square error does not directly reflect the success of a network's training - it can sometimes be misleading, and the individual errors made for every input must be examined as well.
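The point that the total mean square error can be misleading is easy to demonstrate: averaging lets one badly missed input hide behind many near-perfect ones, which is why the per-input errors must be inspected as well. A small sketch with made-up outputs:

```java
public class MseVsMaxError {

    /** Mean squared error over a set of single-output predictions. */
    static double mse(double[] desired, double[] actual) {
        double sum = 0.0;
        for (int i = 0; i < desired.length; i++) {
            double e = desired[i] - actual[i];
            sum += e * e;
        }
        return sum / desired.length;
    }

    /** Largest absolute error over the same predictions. */
    static double maxError(double[] desired, double[] actual) {
        double max = 0.0;
        for (int i = 0; i < desired.length; i++) {
            max = Math.max(max, Math.abs(desired[i] - actual[i]));
        }
        return max;
    }

    public static void main(String[] args) {
        // Nine near-perfect outputs and one badly missed one (made-up values)
        double[] desired = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
        double[] actual  = {0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.99, 0.2};
        System.out.printf("MSE = %.3f, max error = %.2f%n",
                mse(desired, actual), maxError(desired, actual));
        // The MSE of about 0.064 looks acceptable, yet one input is off by 0.8.
    }
}
```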

Final results of our experiment are given in the table below.

Training attempt | Neurons (input, hidden, output) | Maximum error | Learning rate | Momentum | Iterations | Total mean square error | 4 random inputs test - small-error guesses
1  | 6, 1, 2    | 0.05  | 0.2 | 0.2 | 13  | 0.377 | 0/4
2  | 6, 1, 2    | 0.025 | 0.2 | 0.2 | 47  | 0.245 | 1/4
3  | 6, 2, 1    | 0.01  | 0.2 | 0.5 | 75  | 0.102 | 2/4
4  | 6, 2, 1    | 0.01  | 0.5 | 0.7 | 74  | 0.129 | 3/4
5  | 6, 2, 2    | 0.01  | 0.5 | 0.7 | 73  | 0.120 | 2/4
6  | 6, 2, 2    | 0.01  | 0.7 | 0.5 | 82  | 0.098 | 4/4
7  | 6, 4, 2    | 0.01  | 0.6 | 0.5 | 55  | 0.049 | 4/4
8  | 6, 4, 2    | 0.01  | 0.2 | 0.7 | 102 | 0.051 | 3/4
9  | 6, 5, 2    | 0.01  | 0.5 | 0.7 | 44  | 0.045 | 4/4
10 | 6, 6, 2    | 0.01  | 0.2 | 0.7 | 89  | 0.046 | 4/4
11 | 6, 8, 2    | 0.01  | 0.2 | 0.7 | 66  | 0.050 | 4/4
12 | 6, 2, 2, 2 | 0.01  | 0.4 | 0.6 | 62  | 0.023 | 4/4
13 | 6, 3, 2, 2 | 0.01  | 0.4 | 0.7 | 17  | 0.023 | 4/4

Download
See also:
Multi Layer Perceptron Tutorial
