Model neurons as Elixir processes in a feed-forward network
A non-traditional approach to modelling a neural network.
Here I use Elixir to model each neuron as a separate asynchronous process. Each neuron receives signals, computes an output, receives an error signal, and updates its weights, all as a completely self-contained process, as opposed to the more traditional approach of updating an entire layer at once as a matrix operation.
This is much slower than matrix operations performed on a GPU, but it's also (I hope) much easier to understand.
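To make the idea concrete, here is a minimal sketch (the module name and message shapes are mine for illustration, not the project's actual API): a neuron is just a process that waits for messages and recurses with updated state.

```elixir
# Illustrative sketch only: a "neuron" is just a process with private state.
defmodule NeuronProcessSketch do
  def start do
    spawn(fn -> loop(%{last_activity: nil}) end)
  end

  defp loop(state) do
    receive do
      {:activity, _from, value} ->
        # ...compute an output, forward it, later handle an :error message...
        loop(%{state | last_activity: value})
    end
  end
end

# Usage: sends are asynchronous; the neuron handles each message in its own time.
# neuron = NeuronProcessSketch.start()
# send(neuron, {:activity, self(), 0.7})
```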
Just use the mix task:
    mix learn
Each neuron process runs the following loop (sketched in code after the list):
1) Wait for all input neurons to send it their activities (remembering these connections)
1) Compute its own activity based on the received activities
1) “Weight” the activity for each outgoing connection (using a different weight per output connection)
1) Send the weighted activities to its outputs
1) Wait for all output responses (i.e. the “error” of each connected output neuron)
1) Compute the neuron’s own “error” based on the error of each received output, the corresponding weight, and the last activity that was sent
1) Send the “error” back along the connections remembered in step 1
1) Compute the error amount for each of the outgoing weights
1) Update the weights
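Here is a hedged sketch of that loop, assuming a sigmoid activation and message formats I've invented for illustration (the real code may differ). `weights` is a map from output pid to the weight on that connection.

```elixir
# Illustrative only: one iteration of the nine steps above.
defmodule NeuronLoopSketch do
  def loop(inputs, outputs, weights) do
    # 1-2) collect one activity per input connection, then compute our own
    activities =
      for _ <- inputs do
        receive do
          {:activity, _from, a} -> a
        end
      end

    activity = sigmoid(Enum.sum(activities))

    # 3-4) weight the activity per outgoing connection and send it on
    Enum.each(outputs, fn out ->
      send(out, {:activity, self(), weights[out] * activity})
    end)

    # 5-6) wait for an error from every output, then fold them into our own:
    #      each output's error scaled by the connecting weight, summed, then
    #      scaled by the derivative of the sigmoid
    errors =
      for out <- outputs, into: %{} do
        receive do
          {:error, ^out, e} -> {out, e}
        end
      end

    my_error =
      Enum.sum(for out <- outputs, do: errors[out] * weights[out]) *
        activity * (1 - activity)

    # 7) send our error back to the input connections remembered in step 1
    Enum.each(inputs, fn pid -> send(pid, {:error, self(), my_error}) end)

    # 8-9) nudge each outgoing weight against its share of the error
    rate = 0.1
    new_weights =
      Map.new(weights, fn {out, w} -> {out, w - rate * errors[out] * activity} end)

    loop(inputs, outputs, new_weights)
  end

  defp sigmoid(x), do: 1.0 / (1.0 + :math.exp(-x))
end
```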
https://www.youtube.com/watch?v=Z8jzCvb62e8&list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9&index=13
        \ | /
         \|/
          |      < axon
          |
          0      (neuron)
         /|\
        / | \
      /|\/|\/|\  < dendrites
A neuron receives activity from other neurons connected at its dendrites and sends that activity to more neurons connected along its axon. As such, activities travel UP the neuron.
The activity the neuron sends along its axon is different at each synapse (aka connection) on the axon.
The activities are calculated in the following way:
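A minimal sketch, assuming a logistic (sigmoid) unit: the incoming values arrive already weighted by the senders, so the receiving neuron only has to sum them and squash the result into (0, 1).

```elixir
# Assumption: a logistic (sigmoid) unit. Inputs arrive pre-weighted by the
# sending neurons, so this neuron only sums and squashes.
defmodule ActivitySketch do
  def compute(weighted_inputs) do
    x = Enum.sum(weighted_inputs)
    1.0 / (1.0 + :math.exp(-x))
  end
end

# ActivitySketch.compute([0.5, -0.2, 0.9])  #=> ~0.769
```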
The weights or connection strengths at the synapses on the axon start off random and then are “learned”.
Learning happens similarly to the forward (activity) pass, but in reverse:
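As a sketch of the update at a single synapse (the learning rate here is an assumed value, not necessarily what the project uses): the nudge for an outgoing weight is the error reported by the neuron it feeds, scaled by the activity that was sent across it.

```elixir
# Sketch: nudge one outgoing weight against the error reported by the
# output neuron it connects to. The learning rate is an assumed value.
defmodule WeightUpdateSketch do
  @rate 0.1

  def update(weight, output_error, activity) do
    weight - @rate * output_error * activity
  end
end
```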
Then, we continue to send the error backwards through the network:
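Concretely (again with invented names): the neuron combines the weighted errors from its outputs with the derivative of its own last activity, and sends that single number back to every connection remembered in step 1; those neurons then repeat the same procedure, layer by layer, back to the inputs.

```elixir
# Sketch: compute this neuron's own error and pass it further back.
defmodule SendErrorBackSketch do
  # output_errors_and_weights: [{error, weight}] for each outgoing connection
  def propagate(input_pids, output_errors_and_weights, activity) do
    upstream = Enum.sum(for {err, w} <- output_errors_and_weights, do: err * w)
    my_error = upstream * activity * (1 - activity)  # sigmoid derivative
    Enum.each(input_pids, fn pid -> send(pid, {:error, self(), my_error}) end)
  end
end
```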