Video-friendly Caffe -- comes with the most recent version of Caffe (as of Jan 2019), a video reader, 3D(ND) convolution/pooling layers, and an example training script for the C3D network on UCF-101 data
This is a 3D convolution (C3D) and video reader implementation on top of the latest Caffe (Dec 2016). The original Facebook C3D implementation branched off Caffe on July 17, 2014 (git commit b80fc86) and has never been rebased onto the original Caffe, so it misses quite a few features added to Caffe since then. I therefore pulled the C3D concept and an accompanying video reader into the latest Caffe, and will try to rebase this repo on upstream whenever an important new feature lands. This repo was last rebased on commit 99bd997, on Aug 21, 2018.
Please reach out to me with any feedback or questions.
Check out the original Caffe readme for Caffe-specific information.
The `refactor` branch is a recent re-work based on the original Caffe and the "ND convolution and pooling with cuDNN" PR. It is a cleaner, less hacky implementation of 3D convolution/pooling than the `master` branch, and is supposed to be more stable than `master`, so feel free to try it. The one feature still missing from the `refactor` branch is the Python wrapper.
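Trying it out is the usual branch-switching workflow (a minimal sketch, assuming you have already cloned the repo as described below):

```bash
# switch an existing video-caffe clone to the refactor branch
git fetch origin
git checkout refactor
```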
In addition to the prerequisites for Caffe, video-caffe depends on cuDNN. It is known to work with cuDNN versions 4 and 5, but it may take some effort to build with v3.

If you build with make, make sure `Makefile.config` points to the right paths for CUDA and cuDNN. If you build with cmake, check that `CUDNN_INCLUDE` and `CUDNN_LIBRARY` are correct. If not, you may want something like `cmake -DCUDNN_INCLUDE="/your/path/to/include" -DCUDNN_LIBRARY="/your/path/to/lib" ${video-caffe-root}`.

Key steps to build video-caffe are:
```bash
git clone git@github.com:chuckcho/video-caffe.git
cd video-caffe
mkdir build && cd build
cmake ..
make all -j8
make install
make runtest
```
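If the build fails to find cuDNN, it helps to confirm what cmake actually detected before running make; one generic way to do this (a sketch, not specific to this repo) is to inspect the cmake cache:

```bash
# from the build directory: see which CUDA/cuDNN paths cmake picked up
grep -iE "cudnn|cuda" CMakeCache.txt
```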
Look at `${video-caffe-root}/examples/c3d_ucf101/c3d_ucf101_train_test.prototxt` to see how 3D convolution and pooling are used. In a nutshell, use an `NdConvolution` or `NdPooling` layer with `{kernel,stride,pad}_shape` parameters that specify 3D shapes in (L x H x W), where `L` is the temporal length (usually 16).
```
...
# ----- video/label input -----
layer {
  name: "data"
  type: "VideoData"
  top: "data"
  top: "label"
  video_data_param {
    source: "examples/c3d_ucf101/c3d_ucf101_train_split1.txt"
    batch_size: 50
    new_height: 128
    new_width: 171
    new_length: 16
    shuffle: true
  }
  include {
    phase: TRAIN
  }
  transform_param {
    crop_size: 112
    mirror: true
    mean_value: 90
    mean_value: 98
    mean_value: 102
  }
}
...
# ----- 1st group -----
layer {
  name: "conv1a"
  type: "NdConvolution"
  bottom: "data"
  top: "conv1a"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    kernel_shape { dim: 3 dim: 3 dim: 3 }
    stride_shape { dim: 1 dim: 1 dim: 1 }
    pad_shape { dim: 1 dim: 1 dim: 1 }
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
...
layer {
  name: "pool1"
  type: "NdPooling"
  bottom: "conv1a"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_shape { dim: 1 dim: 2 dim: 2 }
    stride_shape { dim: 1 dim: 2 dim: 2 }
  }
}
...
```
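To sanity-check the resulting architecture, Caffe's bundled `python/draw_net.py` can render a prototxt as a graph image; a sketch, assuming pycaffe and its `pydot` dependency are installed (note that the `refactor` branch lacks the Python wrapper):

```bash
# render the C3D network definition as an image for inspection
cd ${video-caffe-root}
python python/draw_net.py \
    examples/c3d_ucf101/c3d_ucf101_train_test.prototxt \
    c3d_ucf101_net.png
```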
Scripts and training files for C3D training on UCF-101 are located in `examples/c3d_ucf101/`. Steps to train C3D on UCF-101:

1. Download the UCF-101 dataset from the UCF-101 website and unpack it: `unrar x UCF101.rar`.
2. (Optional) Extract frames from the videos with `${video-caffe-root}/examples/c3d_ucf101/extract_UCF-101_frames.sh`; the video reader handles both video files and directories of extracted frames.
3. Edit `${video-caffe-root}/examples/c3d_ucf101/c3d_ucf101_{train,test}_split1.txt` to correctly point to the UCF-101 videos or the directories that contain extracted frames (see the path-rewriting sketch below).
4. Adjust `${video-caffe-root}/examples/c3d_ucf101/c3d_ucf101_train_test.prototxt` to your taste or hardware specification. In particular, `batch_size` may need to be lowered to fit GPU memory.
5. Run the training script: `cd ${video-caffe-root} && examples/c3d_ucf101/train_ucf101.sh` (optionally use `--gpu` to train on multiple GPUs).
6. (Optional) Use `${video-caffe-root}/tools/extra/plot_training_loss.sh` to plot training loss and validation accuracy (top-1/5). It's pretty hacky, so look at the file and adapt it to your needs.

A typical training run yields training-loss and top-1 accuracy curves like the plot shown in the repository.
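If the dataset lives somewhere other than the paths baked into the split files (step 3 above), one quick way to retarget them is a prefix rewrite; a sketch, where `/old/prefix` and `/new/prefix` are placeholders for your actual locations:

```bash
# rewrite the dataset path prefix in both split files (keeps *.bak backups)
cd ${video-caffe-root}/examples/c3d_ucf101
sed -i.bak 's|/old/prefix|/new/prefix|g' \
    c3d_ucf101_train_split1.txt c3d_ucf101_test_split1.txt
```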
A pre-trained model for UCF-101 (trained from scratch) is available (downloadable link); it achieves a top-1 accuracy of ~47%.
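The snapshot can also serve as an initialization for fine-tuning rather than training from scratch; a minimal sketch using Caffe's standard `--weights` mechanism (the solver path and snapshot filename below are placeholders, adjust them to your setup):

```bash
# fine-tune C3D from the pre-trained UCF-101 snapshot
cd ${video-caffe-root}
./build/tools/caffe train \
    --solver=examples/c3d_ucf101/c3d_ucf101_solver.prototxt \
    --weights=/path/to/c3d_ucf101_pretrained.caffemodel
```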
Caffe is released under the BSD 2-Clause license.