Project author: shenyunhang

Project description:
A Part Power Set Model for Scale-Free Person Retrieval
Primary language: Python
Repository: git://github.com/shenyunhang/PPS.git
Created: 2019-05-28T08:29:38Z
Project page: https://github.com/shenyunhang/PPS

License: Apache License 2.0



A Part Power Set Model for Scale-Free Person Retrieval

By Yunhang Shen, Rongrong Ji, Xiaopeng Hong, Feng Zheng, Xiaowei Guo, Yongjian Wu, Feiyue Huang.

IJCAI 2019 Paper

This project is based on Detectron.

Introduction

PPS is an end-to-end part power set model with multi-scale features, which captures the discriminative parts of pedestrians from global to local, and from coarse to fine, enabling part-based scale-free person re-ID.
In particular, PPS first factorizes the visual appearance by enumerating the $k$-combinations, for every $k$, of $n$ body parts, exploiting rich global and partial information to learn discriminative feature maps.
Then, a combination ranking module is introduced to guide the model training with all combinations of body parts, which alternates between ranking combinations and estimating an appearance model.
To enable scale-free input, we further exploit the pyramid architecture of deep networks to construct multi-scale feature maps at a feasible extra cost in terms of memory and time.
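
The part power set idea can be illustrated with a small sketch. The part names below are hypothetical labels chosen for illustration; the actual model enumerates combinations of part feature maps, not strings:

```python
from itertools import combinations

def part_power_set(parts):
    """Enumerate all non-empty k-combinations of body parts, for every k."""
    subsets = []
    for k in range(1, len(parts) + 1):
        subsets.extend(combinations(parts, k))
    return subsets

parts = ["head", "torso", "legs"]  # hypothetical partition with n = 3
subsets = part_power_set(parts)
print(len(subsets))  # 2^n - 1 = 7 non-empty combinations
```

Singletons capture local parts, while larger combinations (up to the full set) capture progressively more global appearance, which is what enables matching from coarse to fine.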

License

PPS is released under the Apache 2.0 license. See the NOTICE file for additional details.

Citing PPS

If you find PPS useful in your research, please consider citing:

  @inproceedings{PPS_2019_IJCAI,
    author = {Shen, Yunhang and Ji, Rongrong and Hong, Xiaopeng and Zheng, Feng and Guo, Xiaowei and Wu, Yongjian and Huang, Feiyue},
    title = {A Part Power Set Model for Scale-Free Person Retrieval},
    booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
    year = {2019},
  }

Installation

Requirements:

  • NVIDIA GPU, Linux, Python 2
  • Caffe2 in pytorch v1.0.1, various standard Python packages, and the COCO API; instructions for installing these dependencies can be found below

Caffe2

Clone the pytorch repository:

  # pytorch=/path/to/clone/pytorch
  git clone https://github.com/pytorch/pytorch.git $pytorch
  cd $pytorch
  git checkout v1.0.1
  git submodule update --init --recursive

Install Python dependencies:

  pip install -r $pytorch/requirements.txt

Build caffe2:

  cd $pytorch && mkdir -p build && cd build
  cmake ..
  sudo make install

Other Dependencies

Install the COCO API:

  # COCOAPI=/path/to/clone/cocoapi
  git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
  cd $COCOAPI/PythonAPI
  # Install into global site-packages
  make install
  # Alternatively, if you do not have permissions or prefer
  # not to install the COCO API into global site-packages
  python setup.py install --user

Note that instructions like # COCOAPI=/path/to/clone/cocoapi indicate that you should pick a path where you’d like the software cloned and then set the corresponding environment variable (COCOAPI in this case) accordingly.
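
For example, a session that follows this convention might start like this (the directories under $HOME/src are examples only; pick any locations you like):

```shell
# Example only: choose directories that suit your machine.
pytorch="$HOME/src/pytorch"
COCOAPI="$HOME/src/cocoapi"
PPS="$HOME/src/PPS"
export pytorch COCOAPI PPS
# Later commands such as `cd $COCOAPI/PythonAPI` then resolve against these.
echo "COCOAPI is $COCOAPI"
```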

Install the pycococreator:

  pip install git+git://github.com/waspinator/pycococreator.git@0.2.0

PPS

Clone the PPS repository:

  # PPS=/path/to/clone/PPS
  git clone https://github.com/shenyunhang/PPS.git $PPS
  cd $PPS

Install Python dependencies:

  pip install -r requirements.txt

Set up Python modules:

  make

Build the custom operators library:

  mkdir -p build && cd build
  cmake .. -DCMAKE_CXX_FLAGS="-isystem $pytorch/third_party/eigen -isystem $pytorch/third_party/cub"
  make

Dataset Preparation

Please follow this to transform the original datasets (Market1501, DukeMTMC-reID and CUHK03) to PCB format.

After that, we assume that your dataset copy at ~/Dataset has the following directory structure:

  market1501
  |_ images
  | |_ <im-1-name>.jpg
  | |_ ...
  | |_ <im-N-name>.jpg
  |_ partitions.pkl
  |_ train_test_split.pkl
  |_ ...
  duke
  |_ images
  | |_ <im-1-name>.jpg
  | |_ ...
  | |_ <im-N-name>.jpg
  |_ partitions.pkl
  |_ train_test_split.pkl
  |_ ...
  cuhk03
  |_ detected
  | |_ images
  | | |_ <im-1-name>.jpg
  | | |_ ...
  | | |_ <im-N-name>.jpg
  | |_ partitions.pkl
  |_ labeled
  | |_ images
  | | |_ <im-1-name>.jpg
  | | |_ ...
  | | |_ <im-N-name>.jpg
  | |_ partitions.pkl
  |_ re_ranking_train_test_split.pkl
  |_ ...
  ...
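
A quick sanity check of this layout can be sketched as follows. This is a minimal helper written for this README, assuming the flat layout of market1501 and duke shown above (cuhk03 nests the same entries under detected/ and labeled/ instead):

```python
import os

# Expected top-level entries for the flat datasets (market1501, duke).
REQUIRED = ["images", "partitions.pkl", "train_test_split.pkl"]

def missing_entries(dataset_root):
    """Return the required entries that are absent under dataset_root."""
    return [name for name in REQUIRED
            if not os.path.exists(os.path.join(dataset_root, name))]

# Example: missing_entries(os.path.expanduser("~/Dataset/market1501"))
# An empty list means the layout matches the structure above.
```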

Generate the COCO JSON files, which are used in Detectron:

  cd $PPS
  python tools/bpm_to_coco.py

You may need to modify the dataset paths in tools/bpm_to_coco.py if you put the datasets in different locations.

After that, check that you have trainval.json and test.json for each dataset in its corresponding location.

Create symlinks:

  cd $PPS/detectron/datasets/data/
  ln -s ~/Dataset/market1501 market1501
  ln -s ~/Dataset/duke duke
  ln -s ~/Dataset/cuhk03 cuhk03

Model Preparation

Download the ResNet-50 model (ResNet-50-model.caffemodel and ResNet-50-deploy.prototxt) from this link.

  cd $PPS
  mkdir -p ~/Dataset/model
  python tools/pickle_caffe_blobs_keep_bn.py --prototxt /path/to/ResNet-50-deploy.prototxt --caffemodel /path/to/ResNet-50-model.caffemodel --output ~/Dataset/model/R-50_BN.pkl

Note that this requires installing caffe1 separately, as the caffe1-specific proto was removed in pytorch v1.0.1.
See this.

Alternatively, you can download the already-converted model for this project from this link.

You may also need to modify the config files below so that TRAINING.WEIGHTS points to R-50_BN.pkl.
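
For reference, the relevant fragment of a config such as configs/market1501/pps_crm_triplet_R-50_1x.yaml would look roughly like this. The key name is taken from the text above; the surrounding structure and the exact path are assumptions, and YAML does not expand ~, so an absolute path is safest:

```yaml
TRAINING:
  WEIGHTS: /home/your-user/Dataset/model/R-50_BN.pkl
```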

Quick Start: Using PPS

market1501

  CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/market1501/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_market1501_`date +'%Y-%m-%d_%H-%M-%S'`

duke

  CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/duke/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_duke_`date +'%Y-%m-%d_%H-%M-%S'`

cuhk03

  CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/cuhk03/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_cuhk03_`date +'%Y-%m-%d_%H-%M-%S'`
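
In each command above, the backtick expression appends a timestamp to OUTPUT_DIR so that every run writes to a fresh experiment directory instead of overwriting a previous one:

```shell
# The $(date ...) form is equivalent to the backticks used above.
OUTPUT_DIR="experiments/pps_crm_triplet_market1501_$(date +'%Y-%m-%d_%H-%M-%S')"
echo "$OUTPUT_DIR"
```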