Project author: kcoms555

Project description:
C++ implementation of machine learning, forward and backward propagation.
Language: C++
Project URL: git://github.com/kcoms555/ML-in-cpp.git
Created: 2020-09-02T16:29:16Z
Project community: https://github.com/kcoms555/ML-in-cpp

License: MIT License


ML-in-cpp

This is my toy C++ project implementing machine learning fundamentals such as forward and backward propagation.
It contains input and target batch files for training the AND, OR, and XOR operations.

Installation

1. Download or clone it

  git clone https://github.com/kcoms555/ML-in-cpp

2. Go to the ML-in-cpp directory

  cd ML-in-cpp

3. Compile the source files

  make

or

  g++ -o bin/runner source/runner.cpp
  g++ -o bin/trainer source/trainer.cpp

Example

Train the XOR data set

Image: XOR truth table (https://en.wikipedia.org/wiki/Exclusive_or)
1. Write an input batch for XOR
Open 'data/XOR.input' and write the following:

  2 1 4 <-- row, column, count: four 2-by-1 matrices follow
  0 0 <-- the first 2-by-1 matrix ((0), (0)); values fill column-first
  0 1
  1 0
  1 1

2. Write a target batch for XOR
Open 'data/XOR.target' and write the following:

  1 1 4
  0
  1
  1
  0

3. Set the configuration file
Open 'bin/conf', set input to ../data/XOR.input and target to ../data/XOR.target.
Set the weights, biases, activation functions, learning rate, and repetition count as you want.

  input = ../data/XOR.input
  target = ../data/XOR.target
  w = 1 2 7 <- sets a 2-by-7 weight matrix in the first layer
  w = 2 7 1
  b = 1 7 1 <- sets a 7-by-1 bias matrix in the first layer
  b = 2 1 1
  func = 1 tanh <- sets the activation function of the first layer to hyperbolic tangent
  func = 2 x3
  lr = 0.01 <- sets the learning rate to 0.01
  repeat = 10000 <- sets the number of training repetitions
  ls = 2 <- sets the layer size; for a single layer, set it to 1
  print_csv = false
  show_cost = true
  load = true
  save = true

The neural network will be built as shown in the picture below.
Image: diagram of the resulting neural network

4. Train and run
Go to ./bin and execute trainer, then runner

Performance benchmarking


  • TensorFlow 1.11.0 on a Raspberry Pi 3 B+, running sample code functionally identical to the XOR training above: 46.753s
  • ML-in-cpp on a Raspberry Pi 3 B+, running the XOR training above: 6.040s

An ML-in-cpp configuration file and the Python (TensorFlow) test code are in the test/ directory.

Because of random initialization, the two do not print the same costs. If you want them to print the same costs, initialize the weights with the same values: edit MATRIX::RAND in source/IO.cpp to a fixed value, and replace random_normal() in test/XOR.py with fill() using that same value.

In that case, ML-in-cpp is 7.74 times faster than TensorFlow 1.11.0.