Project author: timhealz

Project description: Neural Network Library
Language: Python
Repository: git://github.com/timhealz/soma.git
Created: 2018-03-27T13:04:16Z
Project page: https://github.com/timhealz/soma


Soma

This repository holds the code for neural networks I’ve implemented for my Neural Networks course at JHU. `FFBP.py` and `Boltzmann.py` contain the classes used to implement Feed Forward Back Propagation networks and Boltzmann Machines, respectively.

Feed Forward Back Propagation (FFBP) Multilayer Perceptron Network

A Feed Forward Back Propagation network can be created and trained to map inputs to outputs by learning from labeled data. The network is initialized with the number of inputs and a “structure” describing its layers. For example, FFBP.Network(2, [4, 4, 1]) would initialize a two-input, three-layer network comprised of 4 Perceptrons in the first layer, 4 Perceptrons in the second layer, and 1 Perceptron in the output layer.
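The internals of `FFBP.py` are not shown here, but a Perceptron in such a layer is typically a weighted sum of its inputs passed through a sigmoid activation. A minimal sketch of that forward computation (an assumption about the implementation, not the library's actual code, with hypothetical weights):

```python
import math

def sigmoid(z):
    # Standard logistic activation, squashing any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def perceptron_output(inputs, weights):
    # One perceptron: weighted sum of the inputs, then the sigmoid squashing function
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

# A layer is a list of perceptrons, each with its own weight vector;
# here, a 2-input layer with 2 perceptrons (hypothetical weight values)
layer_weights = [[0.5, -0.2], [0.1, 0.4]]
layer_outputs = [perceptron_output([1.0, 2.0], w) for w in layer_weights]
```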

As an example, this network will be implemented to detect households with a high Targeted Advertising Code Assignment (TACA) score, using Local Affluent Code (LAC) and Size of Wallet (SOW) metrics as inputs. A sample of 20 labeled input output pairs will be used to train and test the network:

| Input (LAC, SOW) | Output (TACA) |
| ---------------- | ------------- |
| [1.98, 10]       | 0             |
| [1.80, 10]       | 1             |
| [1.05, 160]      | 2             |
| [1.45, 180]      | 1             |
| [1.80, 80]       | 1             |
| [1.96, 110]      | 1             |
| [0.4, 40]        | 2             |
| [2.05, 130]      | 1             |
| [0.90, 10]       | 1             |
| [2.5, 60]        | 0             |
| [1.6, 105]       | 2             |
| [1.05, 196]      | 1             |
| [0.52, 105]      | 2             |
| [1.80, 32]       | 1             |
| [2.3, 106]       | 0             |
| [2.4, 151]       | 1             |
| [2.5, 170]       | 1             |
| [0.50, 150]      | 2             |
| [1.1, 35]        | 1             |
| [0.85, 70]       | 2             |

Data Preprocessing
Before feeding the data into the neural network, the SOW feature will be min-max normalized (scaled to the range [0, 1]) in an attempt to improve performance.

```python
data = [[[1.98, 10], 0],
        [[1.80, 10], 1],
        [[1.05, 160], 2],
        [[1.45, 180], 1],
        [[1.80, 80], 1],
        [[1.96, 110], 1],
        [[0.4, 40], 2],
        [[2.05, 130], 1],
        [[0.90, 10], 1],
        [[2.5, 60], 0],
        [[1.6, 105], 2],
        [[1.05, 196], 1],
        [[0.52, 105], 2],
        [[1.80, 32], 1],
        [[2.3, 106], 0],
        [[2.4, 151], 1],
        [[2.5, 170], 1],
        [[0.50, 150], 2],
        [[1.1, 35], 1],
        [[0.85, 70], 2]]

# Scale the SOW feature to [0, 1] with min-max normalization
SOW = [input_output[0][1] for input_output in data]
min_sow, max_sow = min(SOW), max(SOW)
SOW_normalized = [(x - min_sow) / (max_sow - min_sow) for x in SOW]
for i, input_output in enumerate(data):
    input_output[0][1] = SOW_normalized[i]
```
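For contrast, true mean normalization (centering by the mean and dividing by the range, so values straddle zero) would look like this instead; the snippet is shown only to illustrate the difference from the min-max scaling used above:

```python
# The 20 raw SOW values from the sample data set
SOW = [10, 10, 160, 180, 80, 110, 40, 130, 10, 60,
       105, 196, 105, 32, 106, 151, 170, 150, 35, 70]

mean_sow = sum(SOW) / len(SOW)
rng = max(SOW) - min(SOW)
# Mean-normalized values are centered on zero rather than scaled into [0, 1]
SOW_mean_normalized = [(x - mean_sow) / rng for x in SOW]
```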

Training
The network will be trained with stochastic gradient descent for 5000 iterations on the second half of the sample data set; the first half is held out to test the network.
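The library's update rule is not reproduced here, but the per-example weight update that stochastic gradient descent performs for a single sigmoid unit can be sketched as follows (a simplified one-neuron illustration of the delta rule, not `FFBP.py`'s actual code; the inputs and target are hypothetical):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(weights, x, target, eta):
    # Forward pass: weighted sum of the inputs through a sigmoid
    y = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
    # Backward pass: error times the sigmoid's derivative y * (1 - y)
    delta = (target - y) * y * (1 - y)
    # Move each weight in the direction that reduces the squared error
    return [w + eta * delta * xi for w, xi in zip(weights, x)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
# Repeatedly update on a single (input, target) example
for _ in range(5000):
    weights = sgd_step(weights, [1.0, 0.2], 1.0, eta=1.0)
```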

```python
import FFBP

x = FFBP.Network(2, [4, 2, 1])
test = data[0:10]
train = data[10:20]

iterations = 5000
eta = 1  # learning rate
logs = []
for i in range(iterations):
    outputs = []
    for input_output in train:
        x.FFBP(input_output[0], input_output[1], eta)
        outputs.append(int(round(x.network[2][0].activation_value, 0)))
    logs.append(outputs)

print('Desired Outputs:')
print([input_output[1] for input_output in train])
print('Network Output after 5000 iterations:')
print(logs[iterations - 1])
print('Network Weights:')
print(x.weights)
```

Output

```
Desired Outputs:
[2, 1, 2, 1, 0, 1, 1, 2, 1, 2]
Network Output after 5000 iterations:
[2, 1, 2, 1, 0, 1, 1, 2, 1, 2]
Network Weights:
[[array([ 4.54734725, 10.37713741]), array([ 3.40678089, -15.08760999]), array([ 14.31164215, 0.73187886]), array([-2.91337309, -1.38728593])], [array([-6.04151 , -5.53864698, -3.63134687, -4.14375287]), array([-3.57092122, -2.41132437, -5.88344684, 0.07598259])], [array([ 4.70773123, -0.06983132])]]
```
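Each array in the printout above is the weight vector of one Perceptron, grouped by layer. For the FFBP.Network(2, [4, 2, 1]) trained here, the expected vector lengths can be derived from the structure alone (an inference from the printout, not from `FFBP.py` itself):

```python
# Layer sizes of the trained network and its number of inputs
structure = [4, 2, 1]
n_inputs = 2

# Each perceptron has one weight per input it receives: the network inputs
# for layer 1, and the previous layer's outputs for every layer after that
fan_ins = [n_inputs] + structure[:-1]
expected_shapes = [[(fan_in,)] * size for size, fan_in in zip(structure, fan_ins)]
```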

Testing

```python
desired_outputs = [input_output[1] for input_output in test]

outs = []
for input_output in test:
    x.FFBP(input_output[0], input_output[1], eta)
    outs.append(int(round(x.network[2][0].activation_value, 0)))

print('Desired Outputs:')
print(desired_outputs)
print('Test Data Outputs:')
print(outs)

def count_matches(a, b):
    # Count the positions where the two output lists agree
    return sum(i == j for i, j in zip(a, b))

matches = count_matches(outs, desired_outputs)
print('Matches: ' + str(matches))
```

Output

```
Desired Outputs:
[0, 1, 2, 1, 1, 1, 2, 1, 1, 0]
Test Data Outputs:
[0, 0, 2, 1, 1, 1, 2, 1, 0, 0]
Matches: 8
```

Results
After 5000 iterations of training, the network classifies 100% of the training examples correctly. On the held-out test data, it determined the correct TACA value 8 times out of 10, an 80% success rate.

Boltzmann Machine with Simulated Annealing for the Traveling Salesman Problem

A Boltzmann Machine can be created and trained to solve the Traveling Salesman Problem (TSP): finding the shortest closed route that visits every location in a list. The Boltzmann constructor takes a list of location names and a distance matrix as its arguments. The row and column indices of the matrix represent the locations, so each entry gives the distance between a pair of locations. For example, in the matrix below, entry (0, 1) is the distance between New York and Los Angeles, 2451. The matrix is symmetric, since the distance from New York to Los Angeles is the same as the distance from Los Angeles to New York: (0, 1) = (1, 0).
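`Boltzmann.py`'s annealing schedule is not reproduced here, but the core mechanism — simulated annealing over candidate tours, always accepting a move that shortens the route and accepting a worsening move with Boltzmann probability exp(-Δ/T) while the temperature T decays — can be sketched generically (hypothetical parameters, not the library's implementation):

```python
import math
import random

def tour_length(tour, dist):
    # Total distance of the closed route, including the return leg to the start
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, iterations=5000, t0=1000.0, cooling=0.999):
    n = len(dist)
    tour = list(range(n))
    best = tour[:]
    t = t0
    for _ in range(iterations):
        # Propose a 2-opt move: reverse a random segment of the current tour
        i, j = sorted(random.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(candidate, dist) - tour_length(tour, dist)
        # Always accept improvements; accept worse tours with probability exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = candidate
        if tour_length(tour, dist) < tour_length(best, dist):
            best = tour[:]
        t *= cooling  # geometric cooling schedule
    return best

random.seed(42)
# A 4-city slice of the example below: New York, Los Angeles, Chicago, Minneapolis
dist = [[   0, 2451,  713, 1018],
        [2451,    0, 1745, 1524],
        [ 713, 1745,    0,  355],
        [1018, 1524,  355,    0]]
route = anneal(dist)
```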

```python
import Boltzmann as boltz
import numpy as np

distances = np.array(
    [[   0, 2451,  713, 1018, 1631, 1374, 2408,  213, 2571,  875, 1420, 2145, 1972],  # New York
     [2451,    0, 1745, 1524,  831, 1240,  959, 2596,  403, 1589, 1374,  357,  579],  # Los Angeles
     [ 713, 1745,    0,  355,  920,  803, 1737,  851, 1858,  262,  940, 1453, 1260],  # Chicago
     [1018, 1524,  355,    0,  700,  862, 1395, 1123, 1584,  466, 1056, 1280,  987],  # Minneapolis
     [1631,  831,  920,  700,    0,  663, 1021, 1769,  949,  796,  879,  586,  371],  # Denver
     [1374, 1240,  803,  862,  663,    0, 1681, 1551, 1765,  547,  225,  887,  999],  # Dallas
     [2408,  959, 1737, 1395, 1021, 1681,    0, 2493,  678, 1724, 1891, 1114,  701],  # Seattle
     [ 213, 2596,  851, 1123, 1769, 1551, 2493,    0, 2699, 1038, 1605, 2300, 2099],  # Boston
     [2571,  403, 1858, 1584,  949, 1765,  678, 2699,    0, 1744, 1645,  653,  600],  # San Francisco
     [ 875, 1589,  262,  466,  796,  547, 1724, 1038, 1744,    0,  679, 1272, 1162],  # St. Louis
     [1420, 1374,  940, 1056,  879,  225, 1891, 1605, 1645,  679,    0, 1017, 1200],  # Houston
     [2145,  357, 1453, 1280,  586,  887, 1114, 2300,  653, 1272, 1017,    0,  504],  # Phoenix
     [1972,  579, 1260,  987,  371,  999,  701, 2099,  600, 1162, 1200,  504,    0]]  # Salt Lake City
)
cities = ['New York', 'Los Angeles', 'Chicago', 'Minneapolis', 'Denver', 'Dallas', 'Seattle',
          'Boston', 'San Francisco', 'St. Louis', 'Houston', 'Phoenix', 'Salt Lake City']

x = boltz.Boltzmann(cities, distances)
x.train(500)
```

With output:

```
Optimized Route: Dallas -> St. Louis -> New York -> Boston -> Chicago -> Minneapolis -> Denver
-> Salt Lake City -> Seattle -> San Francisco -> Los Angeles -> Phoenix -> Houston -> Dallas
Distance: 7293
```
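The reported distance can be double-checked by summing the legs of the printed route directly against the distance matrix:

```python
distances = [
    [   0, 2451,  713, 1018, 1631, 1374, 2408,  213, 2571,  875, 1420, 2145, 1972],  # New York
    [2451,    0, 1745, 1524,  831, 1240,  959, 2596,  403, 1589, 1374,  357,  579],  # Los Angeles
    [ 713, 1745,    0,  355,  920,  803, 1737,  851, 1858,  262,  940, 1453, 1260],  # Chicago
    [1018, 1524,  355,    0,  700,  862, 1395, 1123, 1584,  466, 1056, 1280,  987],  # Minneapolis
    [1631,  831,  920,  700,    0,  663, 1021, 1769,  949,  796,  879,  586,  371],  # Denver
    [1374, 1240,  803,  862,  663,    0, 1681, 1551, 1765,  547,  225,  887,  999],  # Dallas
    [2408,  959, 1737, 1395, 1021, 1681,    0, 2493,  678, 1724, 1891, 1114,  701],  # Seattle
    [ 213, 2596,  851, 1123, 1769, 1551, 2493,    0, 2699, 1038, 1605, 2300, 2099],  # Boston
    [2571,  403, 1858, 1584,  949, 1765,  678, 2699,    0, 1744, 1645,  653,  600],  # San Francisco
    [ 875, 1589,  262,  466,  796,  547, 1724, 1038, 1744,    0,  679, 1272, 1162],  # St. Louis
    [1420, 1374,  940, 1056,  879,  225, 1891, 1605, 1645,  679,    0, 1017, 1200],  # Houston
    [2145,  357, 1453, 1280,  586,  887, 1114, 2300,  653, 1272, 1017,    0,  504],  # Phoenix
    [1972,  579, 1260,  987,  371,  999,  701, 2099,  600, 1162, 1200,  504,    0],  # Salt Lake City
]
cities = ['New York', 'Los Angeles', 'Chicago', 'Minneapolis', 'Denver', 'Dallas', 'Seattle',
          'Boston', 'San Francisco', 'St. Louis', 'Houston', 'Phoenix', 'Salt Lake City']
route = ['Dallas', 'St. Louis', 'New York', 'Boston', 'Chicago', 'Minneapolis', 'Denver',
         'Salt Lake City', 'Seattle', 'San Francisco', 'Los Angeles', 'Phoenix', 'Houston', 'Dallas']

idx = [cities.index(c) for c in route]
total = sum(distances[a][b] for a, b in zip(idx, idx[1:]))  # total == 7293, matching the reported distance
```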

This example is captured in the `examples/Boltzmann_example_2.py` script.