A Feed Forward Artificial Neural Network from first principles.
A Feed Forward Artificial Neural Network built with numpy, supporting a variable number of hidden layers. It is tested on the MNIST dataset.
```
git clone https://github.com/ryan-tabar/An-Artificial-Neural-Network
pip install -r requirements.txt
python MyANN.py
```
I wanted to have a go at creating an Artificial Neural Network (ANN) from first principles to better understand how it works. Therefore, I am not using machine learning libraries such as TensorFlow or PyTorch; this way I can learn the underlying maths that's involved.
The class takes 4 `__init__` arguments:
```python
my_ANN = NeuralNetwork(input_nodes=784, hidden_nodes=38, output_nodes=10, hidden_layers=2)
```
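The repo's constructor isn't reproduced here, but as a rough sketch of what those four arguments typically have to set up, assuming one randomly initialised weight matrix per pair of consecutive layers (the actual `MyANN.py` may differ):

```python
import numpy as np

class NeuralNetwork:
    def __init__(self, input_nodes, hidden_nodes, output_nodes, hidden_layers):
        # Layer sizes from input to output, e.g. [784, 38, 38, 10] for the call above.
        layer_sizes = [input_nodes] + [hidden_nodes] * hidden_layers + [output_nodes]

        # One weight matrix per pair of consecutive layers, drawn from a normal
        # distribution scaled by the size of the incoming layer (a common choice).
        self.weights = [
            np.random.normal(0.0, size_in ** -0.5, (size_out, size_in))
            for size_in, size_out in zip(layer_sizes[:-1], layer_sizes[1:])
        ]
```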
To train the network, call the `.train` method:
```python
my_ANN.train(inputs, targets, learn_rate=0.1, epochs=1)
```
- `inputs` and `targets` must be numpy arrays
- `epochs` = the number of times to train on the same training set
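For context, `inputs` and `targets` are typically built from the raw MNIST data before the call above. A minimal sketch of that preparation is below; the CSV file name, the 0.01-0.99 scaling and the one-hot encoding are illustrative assumptions rather than what `MyANN.py` necessarily does, and whether `.train` takes the whole set at once or one example at a time depends on the implementation:

```python
import numpy as np

# Hypothetical file: one image per row, with the digit label in the first
# column followed by the 784 pixel values.
data = np.loadtxt("mnist_train.csv", delimiter=",")
labels = data[:, 0].astype(int)

# Scale pixel values from 0-255 into a small positive range that suits sigmoid units.
inputs = data[:, 1:] / 255.0 * 0.99 + 0.01

# One-hot encode each digit label across the 10 output nodes.
targets = np.zeros((labels.size, 10)) + 0.01
targets[np.arange(labels.size), labels] = 0.99

my_ANN.train(inputs, targets, learn_rate=0.1, epochs=1)
```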
To use the trained network, call the `.feed_forward` method:
```python
prediction = my_ANN.feed_forward(inputs)
```
`inputs` must be a numpy array.

With the parameters used in the examples above, the ANN achieves an accuracy of 92-93% on the 10,000 MNIST test images.
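A sketch of how a figure like that might be measured, assuming `test_inputs` and `test_labels` were prepared in the same way as the training data above, and that `.feed_forward` returns one row of 10 output values per image:

```python
import numpy as np

# Run the test images through the trained network.
outputs = my_ANN.feed_forward(test_inputs)

# Treat the highest-scoring output node as the predicted digit and compare
# against the true labels to get the overall accuracy.
predicted = np.argmax(outputs, axis=-1)
accuracy = np.mean(predicted == test_labels)
print(f"Test accuracy: {accuracy:.2%}")
```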
All the maths I've worked out can be found in the Word document in the Maths folder, or at the following link: https://github.com/ryan-tabar/An-Artificial-Neural-Network/blob/master/Maths/NeuralNetworkEquations.docx
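For a quick summary before opening that document: assuming sigmoid activations and a squared-error loss (a common setup for a from-scratch MNIST network, though the document above is the authoritative source for this repo), the forward pass, error propagation and gradient-descent weight update typically take this form, where $a^{(l)}$ is the activation vector of layer $l$, $t$ is the target vector and $\eta$ is the learning rate:

```math
\begin{aligned}
a^{(l)} &= \sigma\!\left(W^{(l)} a^{(l-1)}\right), \qquad \sigma(x) = \frac{1}{1 + e^{-x}} \\
e^{(L)} &= t - a^{(L)}, \qquad e^{(l-1)} = \left(W^{(l)}\right)^{\mathsf{T}} e^{(l)} \\
\Delta W^{(l)} &= \eta \left(e^{(l)} \odot a^{(l)} \odot \left(1 - a^{(l)}\right)\right) \left(a^{(l-1)}\right)^{\mathsf{T}}
\end{aligned}
```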
Here are the screenshots of that document: