Wednesday, September 9, 2009

Activity 16 Neural Networks

Do you remember when I said machines are stupid? Well, I am not the first or the only one to believe so. Brilliant scientists and programmers who came before us also believed that machines and computers are sorely lacking when compared to the human brain. Just like in our pattern recognition activities (14 and 15), after creating our feature vectors we still needed to implement a formula so that the computer could classify the objects. So the scientists and programmers of the 20th century developed the “Neural Network”. A neural network works by mimicking how our own brain works: each neuron takes in multiple weighted inputs, processes the information, and then passes its output to succeeding neurons. A collection of many interconnected neurons forms a “Neural Network.” The main advantage of using a neural network over LDA and minimum distance classification is that we do not need to apply the various classification rules and formulas ourselves.

Although I said that we do not use formulas with a neural network, strictly speaking we still do. However, these calculations happen within the structure of the network, and the user never has to deal with them directly. We use a neural network by simply giving it examples and telling it which class each example belongs to. Over a given number of iterations, the neural network adjusts the input weights of each neuron to minimize its error (identifying an object as a member of the wrong class). After all the iterations, the neural network has “learned” and can identify an object as a member of any class it has been trained on.
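The weight-adjustment idea can be sketched with a single neuron. This is only an illustration, not the Scilab ANN toolbox used in the activity: the neuron, the AND task, and the learning rate of 0.1 are all made-up for this sketch. Each time the neuron misclassifies a sample, its weights are nudged in proportion to the error and the learning rate:

```python
# Minimal single-neuron sketch of "learning" by weight adjustment.
# Hypothetical example: the neuron learns the logical AND of two inputs.

def step(v):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1.0 if v >= 0.0 else 0.0

def train_neuron(samples, lr=0.1, epochs=50):
    """samples: list of ((x1, x2), target) pairs."""
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias weight
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out        # the error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Train on the four AND cases, then classify them with the learned weights.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
```

After training, `predictions` matches the targets for all four cases — the network was never given a formula for AND, only labelled examples.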

In this activity we use a neural network from the ANN toolbox of Scilab to do pattern recognition. Our input to the neural network is the feature vectors we obtained in activities 14 and 15. We train the neural network by giving it 9 examples of each of the 4 classes and telling it which class each object belongs to (Activity 14 Table 1). We then test the accuracy of the neural network by letting it classify 40 more objects, ten for each class, and counting how many it classifies correctly. We repeat this classification test for different learning rates from 0.01 to 0.99.
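The procedure above (train on labelled feature vectors, then count correct classifications over a range of learning rates) can be sketched as follows. This is a toy stand-in, not the Scilab ANN toolbox call and not the Activity 14 data: it uses a single perceptron and made-up 2D feature vectors for two classes.

```python
# Hypothetical sketch of a learning-rate sweep with a toy classifier.

def step(v):
    return 1 if v >= 0 else 0

def train(samples, lr, epochs=100):
    # Perceptron training: two input weights plus a bias weight.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            err = t - step(w[0] * x1 + w[1] * x2 + w[2])
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def accuracy(w, samples):
    # Percentage of test samples assigned to the correct class.
    hits = sum(step(w[0] * x1 + w[1] * x2 + w[2]) == t
               for (x1, x2), t in samples)
    return 100.0 * hits / len(samples)

# Made-up feature vectors: class 0 clustered near (0, 0), class 1 near (1, 1).
train_set = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]
test_set = [((0.0, 0.1), 0), ((0.2, 0.3), 0), ((0.8, 0.9), 1), ((1.1, 1.0), 1)]

# Sweep the learning rate, as in Table 1, and record the test accuracy.
results = {lr / 100: accuracy(train(train_set, lr / 100), test_set)
           for lr in range(1, 100, 10)}
```

In this toy the classes are so well separated that every learning rate reaches 100%; in the actual multilayer network, the learning rate affects how well training converges within the fixed number of iterations, which is what Table 1 measures.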

Table 1. Classification result and percent accuracy at different learning rates.
Table 1 shows the results and accuracy of classification of the neural network. As expected, increasing the learning rate increases the classification accuracy, which reaches a maximum of 100% over learning rates of 0.23 to 0.55. The red box over Table 1 highlights the area with 100% accuracy. However, as we increase the learning rate beyond 0.55, the accuracy slightly decreases to 97.5%. This is unexpected, since we assumed a higher learning rate should result in better classification. The classification accuracy versus learning rate is plotted in Figure 1, where this trend of increase and slight decrease is easier to see.
Figure 1. Classification accuracy versus neural network learning rate. The red line highlights the range with 100% accuracy.

It would have been better to do multiple trials at different seed values to determine the standard deviation of the classification accuracy at each learning rate. With this information we might find that the decrease in accuracy is not significant and falls within the standard deviation. It would also have been good to explore the effect of the number of learning iterations, but this was not done due to time constraints. I would like to thank Kaye for reminding me of what to do in this activity. I give myself a grade of 10.

Main Reference:
Maricor Soriano, A16 – Neural Networks, AP186 2008
Cole’s AP186 Blog
