
A Part Power Set Model for Scale-Free Person Retrieval

By Yunhang Shen, Rongrong Ji, Xiaopeng Hong, Feng Zheng, Xiaowei Guo, Yongjian Wu, Feiyue Huang.

IJCAI 2019 Paper

This project is based on Detectron.

Introduction

PPS is an end-to-end part power set model with multi-scale features. It captures the discriminative parts of pedestrians from global to local and from coarse to fine, enabling part-based, scale-free person re-ID. In particular, PPS first factorizes the visual appearance by enumerating the $k$-combinations, for all $k$, of $n$ body parts, exploiting rich global and partial information to learn discriminative feature maps. A combination ranking module then guides training with all combinations of body parts, alternating between ranking combinations and estimating an appearance model. To enable scale-free input, we further exploit the pyramid architecture of deep networks to construct multi-scale feature maps at a feasible extra cost in memory and time.
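
To make the enumeration concrete, here is a minimal Python sketch of the non-empty part power set; the function name and structure are illustrative only, not taken from the PPS code:

# Enumerate all non-empty k-combinations, for every k, of n body parts.
from itertools import combinations

def part_power_set(n_parts):
    parts = range(n_parts)
    return [c for k in range(1, n_parts + 1) for c in combinations(parts, k)]

# For n = 3 this yields 2^3 - 1 = 7 combinations:
# (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)
print(part_power_set(3))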

License

PPS is released under the Apache 2.0 license. See the NOTICE file for additional details.

Citing PPS

If you find PPS useful in your research, please consider citing:

@inproceedings{PPS_2019_IJCAI,
    author = {Shen, Yunhang and Ji, Rongrong and Hong, Xiaopeng and Zheng, Feng and Guo, Xiaowei and Wu, Yongjian and Huang, Feiyue},
    title = {A Part Power Set Model for Scale-Free Person Retrieval},
    booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
    year = {2019},
}   

Installation

Requirements:

  • NVIDIA GPU, Linux, Python 2
  • Caffe2 (bundled with PyTorch v1.0.1), various standard Python packages, and the COCO API; instructions for installing these dependencies are given below

Caffe2

Clone the pytorch repository:

# pytorch=/path/to/clone/pytorch
git clone https://github.com/pytorch/pytorch.git $pytorch
cd $pytorch
git checkout v1.0.1
git submodule update --init --recursive

Install Python dependencies:

pip install -r $pytorch/requirements.txt

Build caffe2:

cd $pytorch && mkdir -p build && cd build
cmake ..
sudo make install
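
To verify the build, you can run the standard Caffe2 sanity check from Python (this assumes a CUDA-capable GPU is visible; it is not specific to PPS):

# Prints the number of GPUs Caffe2 can see; an ImportError means the build failed.
from caffe2.python import workspace
print(workspace.NumCudaDevices())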

Other Dependencies

Install the COCO API:

# COCOAPI=/path/to/clone/cocoapi
git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
cd $COCOAPI/PythonAPI
# Install into global site-packages
make install
# Alternatively, if you do not have permissions or prefer
# not to install the COCO API into global site-packages
python setup.py install --user

Note that instructions like # COCOAPI=/path/to/clone/cocoapi indicate that you should pick a path where you'd like to have the software cloned and then set an environment variable (COCOAPI in this case) accordingly.
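
A quick sanity check that the COCO API is importable (it just confirms the install succeeded):

# Should import without error if the COCO API was installed correctly.
from pycocotools.coco import COCO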

Install the pycococreator:

pip install git+https://github.com/waspinator/pycococreator.git@0.2.0
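
Similarly, you can confirm pycococreator installed correctly (this assumes the module layout of the 0.2.0 release):

# Should import without error if pycococreator 0.2.0 was installed correctly.
from pycococreatortools import pycococreatortools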

PPS

Clone the PPS repository:

# PPS=/path/to/clone/PPS
git clone https://github.com/shenyunhang/PPS.git $PPS
cd $PPS

Install Python dependencies:

pip install -r requirements.txt

Set up Python modules:

make

Build the custom operators library:

mkdir -p build && cd build
cmake .. -DCMAKE_CXX_FLAGS="-isystem $pytorch/third_party/eigen -isystem $pytorch/third_party/cub"
make

Dataset Preparation

Please follow this guide to transform the original datasets (Market-1501, DukeMTMC-reID, and CUHK03) into the PCB format.

After that, we assume that your dataset copy at ~/Dataset has the following directory structure:

market1501
|_ images
|  |_ <im-1-name>.jpg
|  |_ ...
|  |_ <im-N-name>.jpg
|_ partitions.pkl
|_ train_test_split.pkl
|_ ...
duke
|_ images
|  |_ <im-1-name>.jpg
|  |_ ...
|  |_ <im-N-name>.jpg
|_ partitions.pkl
|_ train_test_split.pkl
|_ ...
cuhk03
|_ detected
|  |_ images
|     |_ <im-1-name>.jpg
|     |_ ...
|     |_ <im-N-name>.jpg
|  |_ partitions.pkl
|_ labeled
|  |_ images
|     |_ <im-1-name>.jpg
|     |_ ...
|     |_ <im-N-name>.jpg
|  |_ partitions.pkl
|_ re_ranking_train_test_split.pkl
|_ ...
...

Generate the COCO JSON files, which are used by Detectron:

cd $PPS
python tools/bpm_to_coco.py

You may need to modify the dataset paths in tools/bpm_to_coco.py if you keep the datasets in different locations.

After that, check that you have trainval.json and test.json for each dataset in its corresponding location.
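
If you want to script that check, here is a small sketch (it assumes the ~/Dataset layout above; adjust the paths if yours differs):

import os.path as osp

# Hypothetical check, assuming JSON files sit at the dataset root; cuhk03
# may keep its files under the detected/ and labeled/ subdirectories instead.
DATASET_ROOT = osp.expanduser('~/Dataset')
for name in ('market1501', 'duke', 'cuhk03'):
    for json_file in ('trainval.json', 'test.json'):
        path = osp.join(DATASET_ROOT, name, json_file)
        print('%s %s' % (path, 'OK' if osp.isfile(path) else 'MISSING'))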

Create symlinks:

cd $PPS/detectron/datasets/data/
ln -s ~/Dataset/market1501 market1501
ln -s ~/Dataset/duke duke
ln -s ~/Dataset/cuhk03 cuhk03

Model Preparation

Download the ResNet-50 model (ResNet-50-model.caffemodel and ResNet-50-deploy.prototxt) from this link:

cd $PPS
mkdir -p ~/Dataset/model
python tools/pickle_caffe_blobs_keep_bn.py --prototxt /path/to/ResNet-50-deploy.prototxt --caffemodel /path/to/ResNet-50-model.caffemodel --output ~/Dataset/model/R-50_BN.pkl

Note that this requires installing caffe1 separately, as the caffe1-specific proto definitions were removed in PyTorch v1.0.1. See this.

Alternatively, you can download the already-converted model for this project from this link.

You may also need to modify the config files below so that TRAINING.WEIGHTS points to R-50_BN.pkl.
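
To inspect the converted weights, you can load the pickle directly; Detectron-style weight files are usually a pickled dict whose 'blobs' entry maps blob names to numpy arrays (this assumes pickle_caffe_blobs_keep_bn.py follows that convention):

import os.path as osp
import pickle

# Load the converted weights and list a few blob names.
with open(osp.expanduser('~/Dataset/model/R-50_BN.pkl'), 'rb') as f:
    weights = pickle.load(f)
blobs = weights.get('blobs', weights)
print('%d blobs, e.g. %s' % (len(blobs), sorted(blobs)[:5]))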

Quick Start: Using PPS

market1501

CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/market1501/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_market1501_`date +'%Y-%m-%d_%H-%M-%S'`

duke

CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/duke/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_duke_`date +'%Y-%m-%d_%H-%M-%S'`

cuhk03

CUDA_VISIBLE_DEVICES=0 ./scripts/train_reid.sh --cfg configs/cuhk03/pps_crm_triplet_R-50_1x.yaml OUTPUT_DIR experiments/pps_crm_triplet_cuhk03_`date +'%Y-%m-%d_%H-%M-%S'`
