Project author: developmentseed

Project description:
Training and testing the SegNet neural network on satellite imagery

Primary language: JavaScript
Repository: git://github.com/developmentseed/skynet-train.git
Created: 2016-06-09T17:23:10Z
Project page: https://github.com/developmentseed/skynet-train

License: Other

SegNet training and testing scripts

These scripts are for use in training and testing the SegNet neural network,
particularly with OpenStreetMap + Satellite Imagery training data generated by
skynet-data.

Contributions are very welcome!

Quick start

The quickest and easiest way to use these scripts is via the
developmentseed/skynet-train docker image, but note that to make this work
with a GPU (necessary for reasonable training times) you will need a machine
set up to use nvidia-docker. (The
start_instance
script uses docker-machine to spin up an AWS EC2 g2 instance and set it up with
nvidia-docker. The start_spot_instance
script does the same thing but creates a spot
instance instead of an on-demand one.)

  1. Create a training dataset with skynet-data.
  2. Run:
     nvidia-docker run \
       -v /path/to/training/dataset:/data \
       -v /path/to/training/output:/output \
       -e AWS_ACCESS_KEY_ID=... \
       -e AWS_SECRET_ACCESS_KEY=... \
       developmentseed/skynet-train:gpu \
       --sync s3://your-bucket/training/blahbla

This will kick off a training run with the given data. Every 10000 iterations,
the model is snapshotted and run on the test data, the training “loss” is
plotted, and all of these results are uploaded to S3. (Omit the --sync argument
and the AWS creds to skip the upload.)

Each batch of test results includes a view.html file: a bare-bones viewer that
lets you browse the results on a map and compare model outputs to the ground
truth data.

Customize the training run with these params:

  --model MODEL                           # segnet or segnet_basic; defaults to segnet
  --output OUTPUT                         # directory in which to output training assets
  --data DATA                             # training dataset
  [--fetch-data FETCH_DATA]               # s3 uri from which to download training data into DATA
  [--snapshot SNAPSHOT]                   # snapshot frequency
  [--cpu]                                 # sets cpu mode
  [--gpu [GPU [GPU ...]]]                 # set gpu devices to use
  [--display-frequency DISPLAY_FREQUENCY] # frequency of logging output (affects granularity of plots)
  [--iterations ITERATIONS]               # total number of iterations to run
  [--crop CROP]                           # crop training images to CROPxCROP pixels
  [--batch-size BATCH_SIZE]               # batch size (adjust up or down based on GPU memory; defaults to 6 for segnet and 16 for segnet_basic)
  [--sync SYNC]                           # s3 uri to sync training output to

Monitoring

On an instance where training is happening, expose a simple monitoring page with:

  docker run --rm -it -v /mnt/training:/output -p 80:8080 developmentseed/skynet-monitor

Details

Prerequisites / Dependencies:

  • Node and Python
  • As of now, training SegNet requires building the caffe-segnet fork of Caffe.
  • Install node dependencies by running npm install in the root directory of this repo.

Set up model definition

After creating a dataset with the skynet-data
scripts, set up the model prototxt definition files by running:

  segnet/setup-model --data /path/to/dataset/ --output /path/to/training/workdir

Also copy segnet/templates/solver.prototxt to the training work directory, and
edit it to (a) point to the right paths, and (b) set up the learning
“hyperparameters”.

(NOTE: this is hard to get right at first; when we post links to a couple of
pre-trained models, we’ll also include a copy of the solver.prototxt we used as
a reference / starting point.)
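
Until those reference files are posted, a hypothetical solver.prototxt might look like the following. Every path and hyperparameter value here is illustrative only, chosen from common Caffe defaults rather than anything the authors used:

```
# Illustrative solver.prototxt -- adjust all paths and values
net: "/path/to/training/workdir/segnet_train.prototxt"
base_lr: 0.001          # initial learning rate
lr_policy: "step"
stepsize: 10000         # drop the learning rate every N iterations
gamma: 0.1              # multiply the learning rate by this at each step
momentum: 0.9
weight_decay: 0.0005
max_iter: 40000
snapshot: 10000         # matches the default snapshot frequency above
snapshot_prefix: "/path/to/training/workdir/snapshots/segnet"
solver_mode: GPU
```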

Train

Download the pre-trained VGG weights VGG_ILSVRC_16_layers.caffemodel from
http://www.robots.ox.ac.uk/~vgg/research/very_deep/

From your training work directory, run

  $CAFFE_ROOT/build/tools/caffe train -gpu 0 -solver solver.prototxt \
    -weights VGG_ILSVRC_16_layers.caffemodel \
    2>&1 | tee train.log

You can monitor the training with:

  segnet/util/plot_training_log.py train.log --watch

This will generate and continually update a plot of the “loss” (i.e., training
error) which should gradually decrease as training progresses.
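
The core of such a plot is just pulling iteration/loss pairs out of the log. As a minimal sketch (not the plotting script itself, and assuming the usual Caffe log format, which varies between versions):

```python
import re

# Caffe typically prints lines like:
#   "I0609 12:00:00.000 1234 solver.cpp:228] Iteration 100, loss = 0.534"
LOSS_RE = re.compile(r"Iteration (\d+), loss = ([0-9.eE+-]+)")

def parse_loss(log_lines):
    """Extract (iteration, loss) pairs from Caffe training log lines."""
    points = []
    for line in log_lines:
        m = LOSS_RE.search(line)
        if m:
            points.append((int(m.group(1)), float(m.group(2))))
    return points
```

Feeding the resulting pairs to any plotting library gives the loss curve; a healthy run trends downward.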

Testing the Trained Network

  segnet/run_test --output /path/for/test/results/ \
    --train /path/to/segnet_train.prototxt \
    --weights /path/to/snapshots/segnet_blahblah_iter_XXXXX.caffemodel \
    --classes /path/to/dataset/classes.json

This script essentially carries out the instructions outlined here:
http://mi.eng.cam.ac.uk/projects/segnet/tutorial.html
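
Evaluating a segmentation model usually comes down to per-class metrics such as intersection-over-union between predicted and ground-truth label maps. The sketch below shows that metric over flattened label lists; it is an illustration of the idea, not the output format of run_test:

```python
def class_iou(pred, truth, cls):
    """Intersection-over-union for one class over flat label sequences.

    pred and truth are equal-length sequences of integer class labels,
    e.g. flattened per-pixel predictions and ground truth.
    """
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else float("nan")
```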

Inference

After you have a trained and tested network, you’ll often want to use it to predict over a larger area. We’ve included scripts for running this process locally or on AWS.

Local Inference

To run predictions locally you’ll need:

  • Raster imagery (as either a GeoTIFF or a VRT)
  • A line-delimited list of XYZ tile indices to predict on (e.g. 49757-74085-17); these can be made with geodex
  • A skynet model, trained weights, and class definitions ( .prototxt, .caffemodel, .json)
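
If you want to generate a few tile indices by hand rather than with geodex, the standard slippy-map tile math suffices. This sketch assumes the `x-y-z` index format seen in the example above:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a lon/lat pair to standard XYZ (slippy map) tile coordinates."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_index(lon, lat, zoom):
    """Format a tile as 'x-y-z', matching the example '49757-74085-17'."""
    x, y = lonlat_to_tile(lon, lat, zoom)
    return "%d-%d-%d" % (x, y, zoom)
```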

To run:

  docker run -v /path/to/inputs:/inputs -v /path/to/model:/model -v /path/to/output/:/inference \
    developmentseed/skynet-run:local-gpu /inputs/raster.tif /inputs/tiles.txt \
    --model /model/segnet_deploy.prototxt \
    --weights /model/weights.caffemodel \
    --classes /model/classes.json \
    --output /inference

If you are running on a CPU, use the :local-cpu docker image and add --cpu-only as a final flag to the above command.

The predicted rasters and vectorized GeoJSON outputs will be located in /inference (and the corresponding mounted volume).

AWS Inference

TODO: for now, see the command-line instructions in segnet/queue.py and segnet/batch_inference.py.

GPU

These scripts were originally developed for use on an AWS g2.2xlarge instance. To support newer GPUs, you may need to:

  • use a newer NVIDIA driver
  • use a newer version of CUDA. To support CUDA 8+, you can use the docker images tagged with :cuda8, which are built from an updated caffe-segnet fork with cuDNN 5 support.