Project author: deep-diver

Project description:
TensorFlow implementations of state-of-the-art deep learning models since 2012
Primary language: Python
Repository: git://github.com/deep-diver/DeepModels.git
Created: 2018-08-23T11:47:03Z
Project homepage: https://github.com/deep-diver/DeepModels

License:


DeepModels

This repository implements and tests state-of-the-art deep learning models dating back to 2012, when AlexNet emerged. Pre-trained models for each dataset will be provided later.

Trying out state-of-the-art deep learning models also requires datasets to feed them and methods to train them. This repository therefore comes in three main parts, Dataset, Model, and Trainer, to ease the process.

A dataset and a model are provided to a trainer, which then knows how to run training, resume from where the last run left off, and perform transfer learning.
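The composition pattern can be sketched in a few lines. The simplified classes below are illustrative only, not the repository's actual API; they just show how a trainer receives a dataset and a model and owns the training logic:

```python
# Minimal sketch of the Dataset / Model / Trainer composition.
# These simplified classes are hypothetical, not the repository's API.

class Dataset:
    def __init__(self, name, num_classes):
        self.name = name
        self.num_classes = num_classes

class Model:
    def __init__(self, name):
        self.name = name

class Trainer:
    # The trainer is handed a model and a dataset, so it alone
    # decides how to train, resume, or transfer-learn.
    def __init__(self, model, dataset):
        self.model = model
        self.dataset = dataset

    def run_training(self, epochs, batch_size, learning_rate, ckpt_path):
        return (f"training {self.model.name} on {self.dataset.name} "
                f"for {epochs} epoch(s), saving to {ckpt_path}")

trainer = Trainer(Model("GoogLeNet"), Dataset("CIFAR-10", 10))
print(trainer.run_training(1, 64, 0.0001, "./model.ckpt"))
```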

Dependencies

  • numpy >= 1.14.5
  • scikit-image >= 0.12.3
  • tensorflow >= 1.6
  • tqdm >= 4.11.2
  • urllib3 >= 1.23
  # install all the requirements
  pip install -r requirements.txt

Testing Environment

  • macOS High Sierra (10.13.6) + eGPU enclosure (Akitio Node) + NVIDIA GTX 1080 Ti
  • FloydHub + NVIDIA Tesla K80 / V100
  • GCP Cloud ML Engine + NVIDIA Tesla K80 / P100 / V100

Pre-defined Classes

Datasets

  • MNIST
    • 10 classes of handwritten-digit images, 28x28 pixels
    • 60,000 training images, 10,000 testing images
  • CIFAR-10
    • 10 classes of 32x32 color images
    • 50,000 training images, 10,000 testing images
    • 6,000 images per class
  • CIFAR-100
    • 100 classes of 32x32 color images
    • 600 images per class
    • 500 training images and 100 testing images per class
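The per-class figures above follow directly from the split sizes; a quick sanity check:

```python
# Sanity-check the dataset figures quoted above.
datasets = {
    # name: (train images, test images, classes)
    "MNIST":     (60_000, 10_000, 10),
    "CIFAR-10":  (50_000, 10_000, 10),
    "CIFAR-100": (50_000, 10_000, 100),
}

for name, (train, test, classes) in datasets.items():
    per_class = (train + test) // classes
    print(f"{name}: {per_class} images per class in total")

# CIFAR-10: (50,000 + 10,000) / 10 = 6,000 per class.
# CIFAR-100: 60,000 / 100 = 600 per class (500 train + 100 test).
```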
  • Things to be added

Models

Trainers

  • ClfTrainer: trainer for image-classification tasks such as ILSVRC

Pre-trained accuracy (coming soon)

  • AlexNet
  • VGG
  • Inception V1 (GoogLeNet)

Example Usage Code Blocks

Define hyper-parameters

  learning_rate = 0.0001
  epochs = 1
  batch_size = 64

Train from nothing

  from dataset.cifar10_dataset import Cifar10
  from models.googlenet import GoogLeNet
  from trainers.clftrainer import ClfTrainer

  inceptionv1 = GoogLeNet()
  cifar10_dataset = Cifar10()
  trainer = ClfTrainer(inceptionv1, cifar10_dataset)
  trainer.run_training(epochs, batch_size, learning_rate,
                       './inceptionv1-cifar10.ckpt')

Train from where left off

  from dataset.cifar10_dataset import Cifar10
  from models.googlenet import GoogLeNet
  from trainers.clftrainer import ClfTrainer

  inceptionv1 = GoogLeNet()
  cifar10_dataset = Cifar10()
  trainer = ClfTrainer(inceptionv1, cifar10_dataset)
  trainer.resume_training_from_ckpt(epochs, batch_size, learning_rate,
                                    './inceptionv1-cifar10.ckpt-1', './new-inceptionv1-cifar10.ckpt')

Transfer Learning

  from dataset.cifar100_dataset import Cifar100
  from models.googlenet import GoogLeNet
  from trainers.clftrainer import ClfTrainer

  inceptionv1 = GoogLeNet()
  cifar100_dataset = Cifar100()
  trainer = ClfTrainer(inceptionv1, cifar100_dataset)
  trainer.run_transfer_learning(epochs, batch_size, learning_rate,
                                './new-inceptionv1-cifar10.ckpt-1', './inceptionv1-cifar100.ckpt')

Testing

  from dataset.cifar100_dataset import Cifar100
  from models.googlenet import GoogLeNet
  from trainers.clftrainer import ClfTrainer

  # prepare images to test
  images = ...

  inceptionv1 = GoogLeNet()
  cifar100_dataset = Cifar100()
  trainer = ClfTrainer(inceptionv1, cifar100_dataset)
  results = trainer.run_testing(images, './inceptionv1-cifar100.ckpt-1')
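The `images` placeholder above needs a batch of preprocessed arrays. A hedged sketch using NumPy (a listed dependency), assuming the trainer expects a batch of 32x32 RGB floats scaled to [0, 1] for CIFAR-sized models; the exact shape and scaling are assumptions, not documented here:

```python
import numpy as np

# Build a dummy batch of four 32x32 RGB images scaled to [0, 1].
# Batch shape (N, 32, 32, 3) and the [0, 1] range are assumptions
# about what run_testing expects, not the repository's documented API.
raw = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
images = raw.astype(np.float32) / 255.0

print(images.shape, images.dtype)
```

In practice, `raw` would come from decoding real image files (e.g. with scikit-image, also a listed dependency) rather than from random data.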

Basic Workflow

  1. Define/Instantiate a dataset
  2. Define/Instantiate a model
  3. Define/Instantiate a trainer with the dataset and the model
  4. Begin training/resuming/transfer learning

References