A set of ML algorithms for low-end hardware, built around bit neural networks.

A package that combines bit operations with deep learning techniques to improve performance and reduce memory consumption, so that models can run on low-end hardware.
Check this simple example below:
```julia
using Flux
using TinyML

# A single binary layer with 2 inputs and 1 output
layers = (BitDense(2, 1),)
chain = Chain(layers...)

# Reward outputs close to 0.5 for the input [3, 2]
fitness(chain::Chain) = 1 / abs(chain([3, 2])[1] - 0.5)

set = Genetic.TrainingSet(chain, layers, fitness)
Genetic.train!(set, maxFitness=100.0)
```
With only a few lines you can create a model trained with reinforcement learning methods. It is simple and intuitive to get started!
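As a quick sanity check after training — a minimal sketch, assuming `Genetic.train!` updates the weights of the chain held by the set in place:

```julia
# The fitness 1 / |output - 0.5| reaches the 100.0 target only when
# |output - 0.5| <= 0.01, so the trained chain should map [3, 2]
# to a value near 0.5.
chain([3, 2])  # expected: a 1-element output close to 0.5
```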
Even if you are not seeking to run your models on low-end hardware, this package may still fit your purposes if you want faster convergence or binary weights in your model.

By using binary weights, the search space for each parameter is reduced to only two values, 0 and 1, whereas regular models search over continuous weights ranging from -Inf to +Inf.
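To see why bit weights are cheap to work with, here is a minimal sketch of the classic XNOR-popcount trick. This is an illustration only, not necessarily how TinyML implements BitDense internally; `pack` and `binary_match` are hypothetical helpers:

```julia
# Pack a vector of 32 booleans into a single UInt32 (hypothetical
# helper for illustration -- not TinyML's internal API).
pack(bits::Vector{Bool}) =
    foldl((acc, b) -> (acc << 1) | UInt32(b), bits; init = UInt32(0))

# For weights and inputs in {0, 1}, the "dot product" counts the
# positions where input and weight agree: the popcount of the XNOR.
binary_match(x::UInt32, w::UInt32) = count_ones(~xor(x, w))

x = pack(rand(Bool, 32))
w = pack(rand(Bool, 32))
binary_match(x, w)  # 32 weights handled with one xor + one popcount
```

Thirty-two weights fit in a single `UInt32` — one bit each instead of a 32-bit float — which is also where the up-to-32x memory saving below comes from.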
* **Performance:** Since BitDense replaces floating-point multiply-accumulates with bit operations, a forward pass runs considerably faster than an equivalent Dense layer. Check an example below, comparing mean forward-pass times on a 640-element input (a sketch for reproducing these measurements follows this list):

```
BitDense: mean time 841.554 ns

finput = rand(640)
Dense: mean time 9.662 μs
```
* **Memory consumption:** By using BitDense you can achieve up to 32x less memory usage compared to a regular Dense layer. Check an example below:
```
Dense:    164192 bytes
BitDense: 5228 bytes
```
* **Convergence:** When training a model using BitDense instead of Dense, convergence improves thanks to the reduced search space. Check an example below, showing the average number of generations needed to train a Snake-playing agent:
```
Float Snake: 16.05 generations
Bit Snake: 5.24 generations
```
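For reference, here is a hedged sketch of how measurements like the timing and memory figures above could be reproduced, assuming the `BitDense(in, out)` constructor from the first example; the 640x64 layer size is an assumption chosen for illustration:

```julia
using Flux, TinyML, BenchmarkTools

# Layer sizes here are an assumption, not taken from the package docs.
dense = Dense(640, 64)
bit   = BitDense(640, 64)

finput = rand(640)

# Mean forward-pass times (cf. the performance figures above);
# that BitDense accepts a plain Float64 vector is assumed from
# the benchmark shown earlier.
@btime $dense($finput)
@btime $bit($finput)

# Total size in bytes of each layer (cf. the memory figures above).
Base.summarysize(dense)
Base.summarysize(bit)
```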