Project author: kim-marcel

Project description: A very basic Java Neural Network Library.
Language: Java
Project address: git://github.com/kim-marcel/basic_neural_network.git
Created: 2018-03-04T13:28:46Z
Project page: https://github.com/kim-marcel/basic_neural_network

License: MIT License

Basic Neural Network Library

This is a very basic Java Neural Network library based on the one built by Daniel Shiffman in this playlist using the Efficient Java Matrix Library (EJML).

The library can also be used with Processing. Just download the jar-file (see below) and drag it into your sketch.

If you want to learn more about neural networks, check out these YouTube playlists:

Features

  • Neural network with configurable numbers of inputs, hidden nodes and outputs
  • Multiple hidden layers
  • Activation functions: Sigmoid, Tanh, ReLU
  • Adjustable learning rate
  • Fully connected
  • Support for genetic algorithms: copy-, mutate- and merge-functionality
  • Save the weights and biases of a NN to a JSON-file
  • Generate a NeuralNetwork-object from a JSON-file

Getting Started

This section describes how to set up a working copy of this project on your local machine for testing and development. If you just want to use the library, you can skip this part.

Prerequisites

Maven has to be installed.

Installing

  mvn install

All the dependencies specified in pom.xml will be installed.

Building

  mvn package

Two jar files will be generated in the project's /target directory: one with all dependencies included and one without.

Use the library

Constructors:

  // Neural network with 2 inputs, 1 hidden layer with 4 nodes and 1 output
  NeuralNetwork nn0 = new NeuralNetwork(2, 4, 1);
  // Neural network with 2 inputs, 2 hidden layers with 4 nodes each and 1 output
  NeuralNetwork nn1 = new NeuralNetwork(2, 2, 4, 1);

Train and guess:

  // Train the neural network with a training dataset (inputs and expected outputs)
  nn.train(trainingDataInputs, trainingDataTargets);
  // The guess for the given testing data is returned as an array (double[])
  nn.guess(testingData);
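Putting the constructor, train and guess calls together, a minimal XOR training loop might look like the sketch below. Only the NeuralNetwork constructor, train() and guess() come from the API shown above; the iteration count and random-sampling strategy are illustrative assumptions, and the class assumes the library jar is on the classpath.

```java
import java.util.Random;

// Sketch: training a 2-4-1 network on XOR with the API shown above.
// The 50000 iterations and random sampling are illustrative choices.
public class XorExample {
    public static void main(String[] args) {
        NeuralNetwork nn = new NeuralNetwork(2, 4, 1);

        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[][] targets = {{0}, {1}, {1}, {0}};

        Random random = new Random();
        for (int i = 0; i < 50000; i++) {
            int sample = random.nextInt(4);            // pick a random training pair
            nn.train(inputs[sample], targets[sample]); // one training step
        }

        for (int i = 0; i < 4; i++) {
            double[] guess = nn.guess(inputs[i]);      // returned as double[]
            System.out.println(inputs[i][0] + " XOR " + inputs[i][1] + " ~ " + guess[0]);
        }
    }
}
```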

Read and write from/to file:

  // Reads the network data from a (previously generated) JSON-file and returns a NeuralNetwork-object
  NeuralNetwork myNN = NeuralNetwork.readFromFile();
  // Load from a specific file (by default it will look for a file called "nn_data.json")
  NeuralNetwork myNN = NeuralNetwork.readFromFile("my_nn_data.json");
  // Writes a JSON-file with the current "state" (weights and biases) of the NN
  myNN.writeToFile();
  // Specify a custom file name (by default it will be saved as "nn_data.json")
  myNN.writeToFile("my_nn_data");

Adjust the learning rate:

  // Set the learning rate (initially the learning rate is 0.1)
  nn.setLearningRate(0.01);
  // Get the current learning rate
  nn.getLearningRate();

Use different activation functions:

  // Set the activation function (by default Sigmoid will be used)
  nn.setActivationFunction(ActivationFunction.TANH);
  // Get the name of the currently used activation function
  nn.getActivationFunctionName();

Use this library with genetic algorithms:

  // Make an exact and independent copy of a Neural Network
  NeuralNetwork nn2 = nn1.copy();
  // Merge the weights and biases of two Neural Networks with a ratio of 50:50
  NeuralNetwork merged = nnA.merge(nnB);
  // Merge the weights and biases of two Neural Networks with a custom ratio (here: 20:80)
  NeuralNetwork merged = nnA.merge(nnB, 0.2);
  // Mutate the weights and biases of a Neural Network with a custom probability
  nn.mutate(0.1);
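A typical evolutionary generation step built from these calls might look like the sketch below. Only copy(), merge() and mutate() come from the library's API; the population size, network dimensions, mutation rate, and the idea of sorting by a problem-specific fitness score are all assumptions for illustration.

```java
// Sketch of one genetic-algorithm generation using merge() and mutate().
// How fitness is computed is problem-specific and not part of the library.
NeuralNetwork[] population = new NeuralNetwork[100];
for (int i = 0; i < population.length; i++) {
    population[i] = new NeuralNetwork(4, 8, 2);
}

// ... evaluate each network on the task, then sort the array by fitness ...
NeuralNetwork parentA = population[0]; // assumed fittest after sorting
NeuralNetwork parentB = population[1]; // assumed second fittest

for (int i = 0; i < population.length; i++) {
    NeuralNetwork child = parentA.merge(parentB, 0.5); // 50:50 crossover
    child.mutate(0.05);                                // mutate with 5% probability
    population[i] = child;
}
```

Keeping an unmutated copy() of the best network each generation (elitism) is a common variation of this loop.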

More detailed examples can be found below.

Download

If you want to use this library you can download v0.5 here, or check the releases tab of this repository for other versions.

Examples

If you want, you can add your own projects that were built with this library to this list. Please send me a pull request.

TODO

  • Implement softmax
  • Add more functionality for genetic algorithms (e.g. different merge functions, …)
  • JUnit tests
  • Javadoc documentation
  • Normalize weights and biases
  • More examples

If you have any other suggestions on what should be done, feel free to open an issue or add it to this list.

If you want to contribute by implementing any of these features or your own ideas please do so and send me a pull request.

Libraries & Tools used in this project