Project author: shicai

Project description: A Caffe implementation of Google's MobileNets (v1 and v2)
Language: Python
Project address: git://github.com/shicai/MobileNet-Caffe.git
Created: 2017-04-27T07:29:23Z
Project community: https://github.com/shicai/MobileNet-Caffe

License: BSD 3-Clause "New" or "Revised" License



MobileNet-Caffe

Introduction

This is a Caffe implementation of Google's MobileNets (v1 and v2). For details, please read the following papers:

  • MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (https://arxiv.org/abs/1704.04861)
  • MobileNetV2: Inverted Residuals and Linear Bottlenecks (https://arxiv.org/abs/1801.04381)

Pretrained Models on ImageNet

We provide pretrained MobileNet models on ImageNet, which achieve slightly better accuracy rates than the original ones reported in the paper.

Top-1/5 accuracy rates using a single center crop (crop size: 224x224, image size: 256xN):

Network      | Top-1 | Top-5 | sha256sum          | Architecture
MobileNet v1 | 70.81 | 89.85 | 8d6edcd3 (16.2 MB) | netscope, netron
MobileNet v2 | 71.90 | 90.49 | a3124ce7 (13.5 MB) | netscope, netron
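
To check that a downloaded .caffemodel is intact, compare its SHA-256 digest against the prefix listed above. Below is a minimal Python sketch; it assumes the weight files sit in the current directory under the same names used by the evaluation commands in the next section.

  import hashlib

  def sha256_prefix(path, length=8):
      # Stream the file in chunks so large caffemodels need not fit in memory.
      h = hashlib.sha256()
      with open(path, 'rb') as f:
          for chunk in iter(lambda: f.read(1 << 20), b''):
              h.update(chunk)
      return h.hexdigest()[:length]

  # Expected digest prefixes from the table above.
  expected = {'mobilenet.caffemodel': '8d6edcd3', 'mobilenet_v2.caffemodel': 'a3124ce7'}
  for name, prefix in expected.items():
      print(name, 'OK' if sha256_prefix(name) == prefix else 'MISMATCH')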

Evaluate Models with a single image

Evaluate MobileNet v1:

python eval_image.py --proto mobilenet_deploy.prototxt --model mobilenet.caffemodel --image ./cat.jpg

Expected Outputs:

  0.42 - 'n02123159 tiger cat'
  0.08 - 'n02119022 red fox, Vulpes vulpes'
  0.07 - 'n02119789 kit fox, Vulpes macrotis'
  0.06 - 'n02113023 Pembroke, Pembroke Welsh corgi'
  0.06 - 'n02123045 tabby, tabby cat'

Evaluate MobileNet v2:

python eval_image.py --proto mobilenet_v2_deploy.prototxt --model mobilenet_v2.caffemodel --image ./cat.jpg

Expected Outputs:

  0.26 - 'n02123159 tiger cat'
  0.22 - 'n02124075 Egyptian cat'
  0.15 - 'n02123045 tabby, tabby cat'
  0.04 - 'n02119022 red fox, Vulpes vulpes'
  0.02 - 'n02326432 hare'
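
If you are curious what such an evaluation script does internally, the sketch below shows a typical pycaffe flow. It is an illustrative approximation, not a copy of eval_image.py: it assumes OpenCV (cv2) for image I/O and reuses the preprocessing constants that appear in this repo's transform_param (BGR means 103.94/116.78/123.68, scale 0.017, 224x224 crop); the exact resizing strategy in eval_image.py may differ.

  import numpy as np
  import caffe
  import cv2

  def classify(proto, weights, image_path):
      # Load the deploy network in test mode.
      net = caffe.Net(proto, weights, caffe.TEST)
      # OpenCV reads images in BGR order, matching the mean values below.
      img = cv2.imread(image_path).astype(np.float32)
      img = cv2.resize(img, (256, 256))          # simple 256x256 resize (approximation)
      img = img[16:240, 16:240, :]               # 224x224 center crop
      img -= np.array([103.94, 116.78, 123.68], dtype=np.float32)
      img *= 0.017
      # HWC -> CHW, add batch dimension, run a forward pass.
      net.blobs['data'].data[...] = img.transpose(2, 0, 1)[np.newaxis, :]
      probs = net.forward()['prob'][0]
      return np.argsort(probs)[::-1][:5], probs

  top5, probs = classify('mobilenet_deploy.prototxt', 'mobilenet.caffemodel', 'cat.jpg')
  for idx in top5:
      print('%.2f - class index %d' % (probs[idx], idx))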

Finetuning on your own data

Modify deploy.prototxt and save it as your train.prototxt as follows:
Remove the first five input/input_dim lines, and add an ImageData layer at the beginning like this:

  layer {
    name: "data"
    type: "ImageData"
    top: "data"
    top: "label"
    include {
      phase: TRAIN
    }
    transform_param {
      scale: 0.017
      mirror: true
      crop_size: 224
      mean_value: [103.94, 116.78, 123.68]
    }
    image_data_param {
      source: "your_list_train_txt"
      batch_size: 32 # your batch size
      new_height: 256
      new_width: 256
      root_folder: "your_path_to_training_data_folder"
    }
  }

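The source file above is a plain-text list in the format Caffe's ImageData layer expects: one image per line, given as a path relative to root_folder followed by an integer class label. As a convenience, here is a small, hypothetical Python helper (not part of this repo) that builds such a list from a folder-per-class layout; the file and directory names are the placeholders from the snippet above.

  import os

  def write_image_list(root_folder, out_path):
      # Assumes root_folder contains one sub-folder per class, e.g. root/cat/*.jpg.
      classes = sorted(d for d in os.listdir(root_folder)
                       if os.path.isdir(os.path.join(root_folder, d)))
      with open(out_path, 'w') as f:
          for label, cls in enumerate(classes):
              for fname in sorted(os.listdir(os.path.join(root_folder, cls))):
                  # Each line: <path relative to root_folder> <integer label>
                  f.write('%s/%s %d\n' % (cls, fname, label))

  write_image_list('your_path_to_training_data_folder', 'your_list_train_txt')
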
Remove the last prob layer, and add Loss and Accuracy layers at the end like this:

  layer {
    name: "loss"
    type: "SoftmaxWithLoss"
    bottom: "fc7"
    bottom: "label"
    top: "loss"
  }
  layer {
    name: "top1/acc"
    type: "Accuracy"
    bottom: "fc7"
    bottom: "label"
    top: "top1/acc"
    include {
      phase: TEST
    }
  }
  layer {
    name: "top5/acc"
    type: "Accuracy"
    bottom: "fc7"
    bottom: "label"
    top: "top5/acc"
    include {
      phase: TEST
    }
    accuracy_param {
      top_k: 5
    }
  }
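
With the train prototxt in place, finetuning starts from the pretrained ImageNet weights rather than from scratch. If your dataset does not have 1000 classes, you would normally also change num_output of the final fc7 layer and rename it, so that Caffe reinitializes it instead of copying the ImageNet weights. The sketch below shows one way to launch finetuning from pycaffe; the solver file name and the solver settings in the comment are assumptions, not part of this repo.

  import caffe

  # Assumed solver.prototxt pointing at your train.prototxt, for example:
  #   net: "train.prototxt"
  #   base_lr: 0.001
  #   lr_policy: "step"
  #   stepsize: 10000
  #   max_iter: 40000
  #   momentum: 0.9
  #   weight_decay: 0.00004
  #   snapshot: 5000
  #   snapshot_prefix: "mobilenet_ft"
  #   solver_mode: GPU
  caffe.set_mode_gpu()
  solver = caffe.SGDSolver('solver.prototxt')
  # Copy weights for layers whose names match the pretrained model; renamed layers are skipped.
  solver.net.copy_from('mobilenet.caffemodel')
  solver.solve()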

The MobileNet models in this repo have been used in the following projects; we recommend you take a look:

Updates (Feb. 5, 2018)

  • Add pretrained MobileNet v2 models (including deploy.prototxt and weights)
  • Hold pretrained weights in this repo
  • Add sha256sum code for pretrained weights
  • Add some code snippets for single image evaluation
  • Uncomment engine: CAFFE used in mobilenet_deploy.prototxt
  • Add params (lr_mult and decay_mult) for Scale layers of mobilenet_deploy.prototxt
  • Add prob layer for mobilenet_deploy.prototxt