# AffordanceNet - Multiclass Instance Segmentation Framework - ICRA 2018
By Thanh-Toan Do*, Anh Nguyen*, Ian Reid (* equal contribution)

### Requirements
1. Caffe (with pycaffe)
2. Hardware: a CUDA-capable GPU for training and testing
3. [Optional] For the robotic demo: a depth camera (such as an Asus Xtion)
### Installation
1. Clone the AffordanceNet repository into your `$AffordanceNet_ROOT` folder.
2. Build Caffe and pycaffe:
```Shell
cd $AffordanceNet_ROOT/caffe-affordance-net
# Now follow the Caffe installation instructions: http://caffe.berkeleyvision.org/installation.html
# If you're experienced with Caffe and have all of the requirements installed and your Makefile.config in place, then simply do:
make -j8 && make pycaffe
```
3. Build the Cython modules:
```Shell
cd $AffordanceNet_ROOT/lib
make
```
4. Download the pretrained weights and put the file at `$AffordanceNet_ROOT/pretrained/AffordanceNet_200K.caffemodel`.
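To confirm the weights landed where the demo scripts expect them, a quick check like the one below can help. This is a minimal sketch: the path layout mirrors step 4, but reading the repo root from an `AffordanceNet_ROOT` environment variable is our convention for the sketch, not something the repo defines.

```python
# Sanity check: is the pretrained model where the demo scripts expect it?
# Reading AffordanceNet_ROOT from the environment is our assumption here;
# substitute your actual checkout path if you don't export that variable.
import os

def pretrained_weights_path(root):
    """Expected location of the pretrained model inside the repo tree."""
    return os.path.join(root, "pretrained", "AffordanceNet_200K.caffemodel")

if __name__ == "__main__":
    root = os.environ.get("AffordanceNet_ROOT", ".")
    path = pretrained_weights_path(root)
    print("%s exists: %s" % (path, os.path.isfile(path)))
```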
After successfully completing the installation, you'll be ready to run the demo.

### Demo
1. Export the pycaffe path:
```Shell
export PYTHONPATH=$AffordanceNet_ROOT/caffe-affordance-net/python:$PYTHONPATH
```
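After exporting the path, you can verify that Python can actually see the freshly built pycaffe. This is a minimal sketch that only probes module visibility without loading Caffe itself; note it uses Python 3's `importlib.util`, while the original codebase targets Python 2.

```python
# Check that the `caffe` module built in caffe-affordance-net/python is
# visible on the current PYTHONPATH, without importing it outright.
# (Python 3 idiom; the AffordanceNet codebase itself is Python 2 era.)
import importlib.util

def pycaffe_visible():
    """True if `import caffe` would find a module on the current path."""
    return importlib.util.find_spec("caffe") is not None

if __name__ == "__main__":
    if pycaffe_visible():
        print("pycaffe found on PYTHONPATH")
    else:
        print("pycaffe NOT found -- re-check the export above and `make pycaffe`")
```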
2. Demo on static images:
```Shell
cd $AffordanceNet_ROOT/tools
python demo_img.py
```
3. (Optional) Demo on a depth camera (such as the Asus Xtion):
```Shell
cd $AffordanceNet_ROOT/tools
python demo_asus.py
```
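For illustration, the kind of selection this demo makes (one object class plus one affordance, e.g. the bottle and its grasp region) can be sketched as a simple filter over detections. The dict layout below is our assumption for the sketch, not the actual data structure used by demo_asus.py:

```python
# Hypothetical detection filter: keep only detections whose object class
# and affordance label match the chosen pair (e.g. "bottle" + "grasp").
# The dict layout is assumed for this sketch, not taken from demo_asus.py.

def select_detections(detections, target_object, target_affordance):
    """Return the detections matching both the object class and affordance."""
    return [d for d in detections
            if d["object"] == target_object
            and d["affordance"] == target_affordance]

if __name__ == "__main__":
    demo = [
        {"object": "bottle", "affordance": "grasp",   "box": (10, 10, 50, 90)},
        {"object": "bottle", "affordance": "contain", "box": (10, 10, 50, 90)},
        {"object": "cup",    "affordance": "grasp",   "box": (60, 20, 100, 80)},
    ]
    picked = select_detections(demo, "bottle", "grasp")
    print(picked)  # only the bottle/grasp detection survives
```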
The target object and its affordance are set at lines 380 and 381 in demo_asus.py. Currently, we select the bottle and its grasp affordance.

### Training
1. We train AffordanceNet on the IIT-AFF dataset.
Download the formatted data and extract it into your `$AffordanceNet_ROOT` folder. You should then have three subfolders: `$AffordanceNet_ROOT/data/cache`, `$AffordanceNet_ROOT/data/imagenet_models`, and `$AffordanceNet_ROOT/data/VOCdevkit2012`.
2. Train AffordanceNet:
```Shell
cd $AffordanceNet_ROOT
./experiments/scripts/faster_rcnn_end2end.sh [GPU_ID] [NET] [--set ...]
# e.g.:
./experiments/scripts/faster_rcnn_end2end.sh 0 VGG16 pascal_voc
```
We keep the `pascal_voc` alias although we're training on the IIT-AFF dataset (which is formatted in Pascal-VOC style in the `$AffordanceNet_ROOT/data/VOCdevkit2012` folder). The mask annotations live in the `$AffordanceNet_ROOT/data/cache` folder: for each object in the image, we need to create a mask and save it as a `.sm` file (see `$AffordanceNet_ROOT/utils` for details).
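As a rough illustration of the per-object mask step, the snippet below round-trips a binary mask through a `.sm`-named file using `pickle`. The serialization format is our assumption for this sketch; consult `$AffordanceNet_ROOT/utils` for the repo's actual converter.

```python
# Illustrative mask round-trip: one binary mask per object, saved to a
# `.sm` file. Using pickle is our assumption for this sketch; the repo's
# utils define the actual format.
import os
import pickle
import tempfile

def save_mask(mask, path):
    """Serialize a binary mask (list of rows of 0/1) to `path`."""
    with open(path, "wb") as f:
        pickle.dump(mask, f, protocol=2)  # protocol 2 stays Python-2 readable

def load_mask(path):
    """Load a mask previously written by save_mask()."""
    with open(path, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    mask = [[0, 1, 1],
            [0, 1, 0]]
    path = os.path.join(tempfile.gettempdir(), "object_0.sm")
    save_mask(mask, path)
    assert load_mask(path) == mask
    print("round-trip OK:", path)
```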
### Citation
If you find AffordanceNet useful in your research, please consider citing:
@inproceedings{AffordanceNet18,
title={AffordanceNet: An End-to-End Deep Learning Approach for Object Affordance Detection},
author={Do, Thanh-Toan and Nguyen, Anh and Reid, Ian},
booktitle={International Conference on Robotics and Automation (ICRA)},
year={2018}
}
If you use the IIT-AFF dataset, please consider citing:
@inproceedings{Nguyen17,
title={Object-Based Affordances Detection with Convolutional Neural Networks and Dense Conditional Random Fields},
author={Nguyen, Anh and Kanoulas, Dimitrios and Caldwell, Darwin G and Tsagarakis, Nikos G},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2017}
}
### License
MIT License.

### Acknowledgment
This repo uses a lot of source code from Faster-RCNN.