reid-mgn's Introduction

Multiple Granularity Network

Implementation of the paper: Learning Discriminative Features with Multiple Granularities for Person Re-Identification

Dependencies

  • Python >= 3.5
  • PyTorch >= 0.4.0
  • torchvision
  • scipy
  • numpy
  • scikit_learn

Current Result

Re-Ranking  backbone  mAP    rank1  rank3  rank5  rank10
yes         resnet50  94.33  95.58  97.54  97.92  98.46
no          resnet50  86.15  94.95  97.42  98.07  98.93

Data

The data structure would look like:

data/
    bounding_box_train/
    bounding_box_test/
    query/

Market1501

Download from here

DukeMTMC-reID

Download from here

CUHK03

  1. Download the CUHK03 dataset from http://www.ee.cuhk.edu.hk/~xgwang/CUHK_identification.html
  2. Unzip the file; you will get the cuhk03_release directory, which contains cuhk-03.mat
  3. Download cuhk03_new_protocol_config_detected.mat from https://github.com/zhunzhong07/person-re-ranking/tree/master/evaluation/data/CUHK03 and put it next to cuhk-03.mat. This new protocol is needed to split the dataset. Then convert the dataset with:
python utils/transform_cuhk03.py --src <path/to/cuhk03_release> --dst <path/to/save>

NOTICE: You need to change num_classes in the network according to how many identities are in your training set, e.g. 751 for Market-1501.
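If you are unsure of the count, a rough helper (hypothetical, not part of this repository) can derive it from Market-1501-style filenames, whose first underscore-separated token is the person ID:

    import os

    def count_identities(train_dir):
        # Market-1501-style names look like 0002_c1s1_000451_01.jpg;
        # the token before the first underscore is the person ID.
        ids = {name.split('_')[0]
               for name in os.listdir(train_dir)
               if name.lower().endswith(('.jpg', '.png'))}
        return len(ids)

    # Expected to print 751 for the Market-1501 training set
    print(count_identities('data/bounding_box_train'))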

Weights

Pretrained weights can be downloaded from Google Drive or Baidu Drive (password: mrl5).

Train

You can specify more parameters in opt.py

python main.py --mode train --data_path <path/to/Market-1501-v15.09.15> 

Evaluate

Use the pretrained weights or your own trained weights.

python main.py --mode evaluate --data_path <path/to/Market-1501-v15.09.15> --weight <path/to/weight_name.pt> 

Visualize

Visualize the top-10 ranked results for one query image (queried against bounding_box_test).

Extracting features will take a few minutes; alternatively, you can save the features as a .mat file for reuse (see the sketch after the command below).

python main.py --mode vis --query_image <path/to/query_image> --weight <path/to/weight_name.pt> 
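As a minimal, hypothetical sketch of saving and reloading extracted features (scipy is already a dependency; qf and gf stand for query and gallery feature arrays, names assumed here for illustration):

    import scipy.io as sio

    # Save query/gallery features once after extraction
    sio.savemat('features.mat', {'qf': qf, 'gf': gf})

    # Later, load them instead of re-extracting
    data = sio.loadmat('features.mat')
    qf, gf = data['qf'], data['gf']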

Citation

@ARTICLE{2018arXiv180401438W,
    author = {{Wang}, G. and {Yuan}, Y. and {Chen}, X. and {Li}, J. and {Zhou}, X.},
    title = "{Learning Discriminative Features with Multiple Granularities for Person Re-Identification}",
    journal = {ArXiv e-prints},
    year = 2018,
}

reid-mgn's Issues

Error?

Why does your model have only one reduction module? The reductions between branches in the paper are not shared.

    fg_p1 = self.reduction(zg_p1).squeeze(dim=3).squeeze(dim=2)
    fg_p2 = self.reduction(zg_p2).squeeze(dim=3).squeeze(dim=2)
    fg_p3 = self.reduction(zg_p3).squeeze(dim=3).squeeze(dim=2)
    f0_p2 = self.reduction(z0_p2).squeeze(dim=3).squeeze(dim=2)
    f1_p2 = self.reduction(z1_p2).squeeze(dim=3).squeeze(dim=2)
    f0_p3 = self.reduction(z0_p3).squeeze(dim=3).squeeze(dim=2)
    f1_p3 = self.reduction(z1_p3).squeeze(dim=3).squeeze(dim=2)
    f2_p3 = self.reduction(z2_p3).squeeze(dim=3).squeeze(dim=2)
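For reference, a minimal sketch of per-branch (non-shared) reduction modules, assuming a 1x1 Conv-BN-ReLU reduction block as is common in MGN re-implementations; the names below are illustrative rather than this repository's actual code:

    import torch.nn as nn

    def make_reduction(in_channels=2048, out_channels=256):
        # 1x1 conv + BN + ReLU, reducing a pooled 2048-d feature to 256-d
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    # One independent reduction per pooled feature, so no weights are shared
    reductions = nn.ModuleList([make_reduction() for _ in range(8)])

    # fg_p1 = reductions[0](zg_p1).squeeze(dim=3).squeeze(dim=2)
    # fg_p2 = reductions[1](zg_p2).squeeze(dim=3).squeeze(dim=2)
    # ... and so on for the remaining pooled features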

help me to resolve this error

TripletLoss is giving a dimension error. Please help me fix it.

File "main.py", line 50, in train
loss = self.loss(outputs, labels)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/STREID/loss.py", line 14, in forward
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/content/STREID/loss.py", line 14, in
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/STREID/loss.py", line 57, in forward
dist = torch.pow(inputs, 2).sum(dim=1, keepdim=True).expand(n, n)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Is the model structure different from the original paper?

In the paper, the global features from each branch used for the softmax loss are 2048-d. In your code, you reduce them to 256-d. In network.py, the bottleneck for classification is always 256, although the layers are named with '2048'. Is that a mistake, or did you do it for a reason?

RuntimeError: CUDA error: device-side assert triggered

main.train()
loss = self.loss(outputs, labels)
File "/toolscnn/env_pyt1.1.0_py3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/MGN/loss.py", line 18, in forward
CrossEntropy_Loss = sum(CrossEntropy_Loss) / len(CrossEntropy_Loss)
RuntimeError: CUDA error: device-side assert triggered

reduction branches share weights?

In the description of Figure 2, there is a sentence "Notice that the 1 × 1 convolutions for dimension reduction and fully connected layers for identity prediction in each branch DO NOT share weights with each other."

I think your code shares the weights.

Results from default settings

I have trained and evaluated with your default settings; the results below can serve as a reference for others.

start evaluate
[With    Re-Ranking] mAP: 0.9293 rank1: 0.9483 rank3: 0.9676 rank5: 0.9736 rank10: 0.9795
[Without Re-Ranking] mAP: 0.8544 rank1: 0.9376 rank3: 0.9700 rank5: 0.9783 rank10: 0.9863

pretrained weight

Hello, thanks for your contribution!
Because I can't download from Google, could you upload your weights to Baidu Yun or send them to me by email? My email is [email protected].
Thank you!

The performance on DukeMTMC-reID is much lower than you report.

[With Re-Ranking] mAP: 0.3625 rank1: 0.4654 rank3: 0.5274 rank5: 0.5597 rank10: 0.6154
[Without Re-Ranking] mAP: 0.2323 rank1: 0.3968 rank3: 0.5004 rank5: 0.5498 rank10: 0.6185

Above are the results on the Duke set, tested with the weights you shared.
However, the performance on Market-1501 is reproducible with the pretrained weights,
so I suspect this is caused by the absence of appropriate weights.
Anyway, could you share the weight file trained on DukeMTMC-reID?

The result is so terrible

Hi! Thanks for your contribution. I am getting a very poor result: mAP is 0.0048 after 350 epochs. Could you give me some advice?

RuntimeError: operation does not have an identity.

Hi, when the dataset is changed I'm getting the error below. Would you please help me fix this?

(base) mca@Rnd-Comp-05:~/Downloads/content/ReID-MGN-master$ python main.py --mode train --data_path /home/mca/Downloads/LTCC_ReID

epoch 1
/home/mca/anaconda3/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
/home/mca/Downloads/content/ReID-MGN-master/utils/TripletLoss.py:31: UserWarning: This overload of addmm_ is deprecated:
addmm_(Number beta, Number alpha, Tensor mat1, Tensor mat2)
Consider using one of the following signatures instead:
addmm_(Tensor mat1, Tensor mat2, *, Number beta, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
dist.addmm_(1, -2, inputs, inputs.t())
total loss:16.37 Triplet_Loss :2.75 CrossEntropy_Loss:6.81 Traceback (most recent call last):
File "main.py", line 155, in
main.train()
File "main.py", line 48, in train
loss = self.loss(outputs, labels)
File "/home/mca/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mca/Downloads/content/ReID-MGN-master/loss.py", line 14, in forward
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/home/mca/Downloads/content/ReID-MGN-master/loss.py", line 14, in
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/home/mca/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/mca/Downloads/content/ReID-MGN-master/utils/TripletLoss.py", line 38, in forward
dist_an.append(dist[i][mask[i] == 0].min().unsqueeze(0))
RuntimeError: operation does not have an identity.

when dataset is changed, im getting error

Hi,
I'm trying your method on a different dataset, but I'm getting the error below.
Please help me fix it.

/content/ReID-MGN-master/ReID-MGN-master/utils/TripletLoss.py:56: UserWarning: This overload of addmm_ is deprecated:
addmm_(Number beta, Number alpha, Tensor mat1, Tensor mat2)
Consider using one of the following signatures instead:
addmm_(Tensor mat1, Tensor mat2, *, Number beta, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:1005.)
dist.addmm_(1, -2, inputs, inputs.t())
total loss:14.65 Triplet_Loss:2.70 CrossEntropy_Loss:5.98 Traceback (most recent call last):
File "main.py", line 155, in
main.train()
File "main.py", line 48, in train
loss = self.loss(outputs, labels)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/ReID-MGN-master/ReID-MGN-master/loss.py", line 14, in forward
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/content/ReID-MGN-master/ReID-MGN-master/loss.py", line 14, in
Triplet_Loss = [triplet_loss(output, labels) for output in outputs[1:4]]
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/content/ReID-MGN-master/ReID-MGN-master/utils/TripletLoss.py", line 63, in forward
dist_an.append(dist[i][mask[i] == 0].min().unsqueeze(0))
RuntimeError: operation does not have an identity.

training epoch

Thank you for sharing. The total training process lasts 80 epochs in the paper. Why do you train the network for 400 epochs?

improve data.py

Please note that data.py needs work for out-of-the-box execution with the pretrained weights, due to the way path setting is done in it. I am grateful to you for making and sharing this :)

Hyperparameter setting

Hi,

Did you get the current best results with the default setting in the opt.py file?

Thanks!

The results

Hi, thanks for your excellent work.

I directly use your code on Market Dataset without any additional changes.

I got only rank1 93.46% / mAP 90.68% after 500 epochs, but your results are about 3% higher.

Could you please give me some suggestions on what I can do?

Thank you very much sincerely.

A question about the initial distances for re-ranking.

 q_g_dist = np.dot(qf, np.transpose(gf)) 
 q_q_dist = np.dot(qf, np.transpose(qf))  
 g_g_dist = np.dot(gf, np.transpose(gf)) 
 dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)

Thanks for sharing the code, but I have a question while reading it: why are the initial distance matrices for re-ranking the correlation (dot-product) matrices of the features, rather than the Mahalanobis distance?
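As a point of comparison only, a minimal sketch of feeding pairwise Euclidean distances instead of dot products, assuming qf and gf are NumPy matrices of query and gallery features; whether this matches what this repository's re_ranking implementation expects should be checked against its code:

    import numpy as np

    def euclidean_dist(x, y):
        # Pairwise squared Euclidean distances between rows of x and y
        xx = np.sum(np.square(x), axis=1, keepdims=True)    # (m, 1)
        yy = np.sum(np.square(y), axis=1, keepdims=True).T  # (1, n)
        return xx + yy - 2.0 * np.dot(x, y.T)               # (m, n)

    q_g_dist = euclidean_dist(qf, gf)
    q_q_dist = euclidean_dist(qf, qf)
    g_g_dist = euclidean_dist(gf, gf)
    # dist = re_ranking(q_g_dist, q_q_dist, g_g_dist)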

How to make the title of the result figure change color?

In main.py, lines 113 to 137.

I modified the code but still can't get it to work.

I modified this code:

    for i in range(10):
        img_path = gallery_path[index[i]]
        print(img_path)

        ax = plt.subplot(1, 11, i + 2)
        ax.axis('off')
        plt.imshow(plt.imread(img_path))

        if img_path == gallery_label:
            ax.set_title('%d' % (i + 1), color='green')
        else:
            ax.set_title('%d' % (i + 1), color='red')

    fig.savefig("show.png")
    print('result saved to show.png')
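One likely problem in the snippet above is that img_path (a file path) is compared directly against gallery_label, so the green branch may never trigger. Below is a hedged sketch that compares person IDs parsed from Market-1501-style filenames instead; query_path is a hypothetical variable for the query image path, and plt, fig, gallery_path, and index are taken from the snippet above:

    import os

    def person_id(path):
        # '0002_c1s1_000451_01.jpg' -> '0002' (Market-1501-style names)
        return os.path.basename(path).split('_')[0]

    query_id = person_id(query_path)
    for i in range(10):
        img_path = gallery_path[index[i]]
        ax = plt.subplot(1, 11, i + 2)
        ax.axis('off')
        plt.imshow(plt.imread(img_path))
        # green title when the retrieved ID matches the query ID, red otherwise
        color = 'green' if person_id(img_path) == query_id else 'red'
        ax.set_title('%d' % (i + 1), color=color)
    fig.savefig("show.png")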
