

Unsupervised Part Discovery from Contrastive Reconstruction

Subhabrata Choudhury, Iro Laina, Christian Rupprecht, Andrea Vedaldi

Project Page · Conference · arXiv

Setup

git clone https://github.com/subhc/unsup-parts.git
cd unsup-parts
conda env create --file environment.yml
conda activate unsup-parts
wget https://www.robots.ox.ac.uk/~vgg/research/unsup-parts/files/checkpoints.tar.gz
tar zxvf checkpoints.tar.gz

The project uses Weights & Biases for visualization; please update wandb_userid in train.py to your username.

Data Preparation:

CUB-200-2011

  1. Download CUB_200_2011.tgz and segmentations.tgz from the links provided on the CUB-200-2011 page.
  2. Download cachedir.tar.gz mentioned here.
  3. Create a directory named data with the following folder structure inside and extract the tars at the mentioned locations.
  4. Train a segmentation network to predict foreground masks for the test split, or download precalculated outputs: cub_supervisedlabels.tar.gz (17MB).
data
└── CUB  # extract CUB_200_2011.tgz, cub_supervisedlabels.tar.gz here
    ├── CUB_200_2011 # extract cachedir.tar.gz and segmentations.tgz here       
    │   ├── attributes
    │   ├── cachedir
    │   ├── images
    │   ├── parts
    │   └── segmentations
    └── supervisedlabels

Example

mkdir -p data/CUB/
cd data/CUB/
tar zxvf CUB_200_2011.tgz 
tar zxvf cub_supervisedlabels.tar.gz
cd CUB_200_2011
tar zxvf segmentations.tgz
tar zxvf cachedir.tar.gz
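After extracting everything, it can help to sanity-check the layout before training. The sketch below is not part of the repository; it simply walks the directory tree shown above and reports anything missing (the `missing_dirs` helper and the `data` default are assumptions for illustration):

```python
from pathlib import Path

# Expected sub-directories, taken from the tree shown in this README.
EXPECTED = [
    "CUB/CUB_200_2011/attributes",
    "CUB/CUB_200_2011/cachedir",
    "CUB/CUB_200_2011/images",
    "CUB/CUB_200_2011/parts",
    "CUB/CUB_200_2011/segmentations",
    "CUB/supervisedlabels",
]

def missing_dirs(root="data", expected=EXPECTED):
    """Return the expected sub-directories that are absent under root."""
    root = Path(root)
    return [d for d in expected if not (root / d).is_dir()]
```

Calling `missing_dirs()` from the repository root should return an empty list if the archives were extracted in the right places.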

DeepFashion

  1. Create a directory named data with the folder structure below.
  2. Download the segmentation folder from the links provided on the DeepFashion page.
  3. Extract img_highres_seg.zip inside the segmentation folder.
  4. Train a segmentation network to predict foreground masks for the test split, or download precalculated outputs: deepfashion_supervisedlabels.tar.gz (56MB).
data
└── DeepFashion
    └── In-shop Clothes Retrieval Benchmark  # extract deepfashion_supervisedlabels.tar.gz here
        ├── Anno  
        │   └── segmentation # extract img_highres_seg.zip here
        │       └── img_highres
        │           ├── MEN
        │           └── WOMEN
        └── supervisedlabels
            └── img_highres
                ├── MEN
                └── WOMEN

Example

mkdir -p data/DeepFashion/In-shop\ Clothes\ Retrieval\ Benchmark/Anno/
cd data/DeepFashion/In-shop\ Clothes\ Retrieval\ Benchmark/
wget https://www.robots.ox.ac.uk/~vgg/research/unsup-parts/files/deepfashion_supervisedlabels.tar.gz
tar zxvf deepfashion_supervisedlabels.tar.gz
cd Anno
# get the segmentation folder from the google drive link
cd segmentation
unzip img_highres_seg.zip
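Note that the benchmark directory name contains spaces, which is easy to get wrong when typing shell commands; pathlib needs no escaping. As a quick check that the unzip worked, the sketch below counts the extracted high-resolution masks (the `count_masks` helper and the .png extension are assumptions, not part of the repository):

```python
from pathlib import Path

def count_masks(root):
    """Count segmentation mask files under root, recursively.

    Assumes masks are stored as .png files; returns 0 if root is absent.
    """
    root = Path(root)
    return sum(1 for _ in root.rglob("*.png")) if root.is_dir() else 0
```

For example, `count_masks("data/DeepFashion/In-shop Clothes Retrieval Benchmark/Anno/segmentation/img_highres")` should be nonzero after extraction.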

Training:

To train CUB:

python train.py dataset_name=CUB

To train DeepFashion:

python train.py dataset_name=DF

Evaluation:

You can find evaluation code in the evaluation folder.

Pretrained weights:

Description Size Link
CUB-200-2011 (pth) 181MB here
DeepFashion (pth) 181MB here
Both (tar.gz) 351MB here

Please move the pth files to the checkpoints/CUB and checkpoints/DeepFashion folders, respectively.
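A minimal sketch of that move, assuming nothing beyond the folder names above (the `install_checkpoint` helper is hypothetical, and the actual .pth filenames are whatever you downloaded):

```python
import shutil
from pathlib import Path

def install_checkpoint(pth_file, dataset, root="checkpoints"):
    """Move a downloaded .pth file into checkpoints/<dataset>/.

    dataset is "CUB" or "DeepFashion"; the destination folder is
    created if it does not exist yet.
    """
    dest = Path(root) / dataset
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / Path(pth_file).name
    shutil.move(str(pth_file), str(target))
    return target
```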

Abstract:

The goal of self-supervised visual representation learning is to learn strong, transferable image representations, with the majority of research focusing on the object or scene level. On the other hand, representation learning at the part level has received significantly less attention. In this paper, we propose an unsupervised approach to object part discovery and segmentation and make three contributions. First, we construct a proxy task through a set of objectives that encourages the model to learn a meaningful decomposition of the image into its parts. Second, prior work argues for reconstructing or clustering pre-computed features as a proxy to parts; we show empirically that this alone is unlikely to find meaningful parts, mainly because of their low resolution and the tendency of classification networks to spatially smear out information. We suggest that image reconstruction at the level of pixels can alleviate this problem, acting as a complementary cue. Lastly, we show that the standard evaluation based on keypoint regression does not correlate well with segmentation quality and thus introduce different metrics, NMI and ARI, that better characterize the decomposition of objects into parts. Our method yields semantic parts which are consistent across fine-grained but visually distinct categories, outperforming the state of the art on three benchmark datasets. Code is available at the project page.
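The NMI and ARI metrics mentioned in the abstract are standard clustering-comparison scores applied to part assignments. The stdlib-only sketch below illustrates their definitions on flat label lists (the paper computes them over per-pixel part labels; in practice one would use sklearn.metrics, and the arithmetic-mean normalization for NMI is a choice made here for illustration):

```python
import math
from collections import Counter

def _contingency(true, pred):
    """Joint counts n_ij of (true label, predicted label) pairs."""
    return Counter(zip(true, pred))

def ari(true, pred):
    """Adjusted Rand Index between two labelings of the same items."""
    n = len(true)
    c = _contingency(true, pred)
    a, b = Counter(true), Counter(pred)          # row / column sums
    index = sum(math.comb(v, 2) for v in c.values())
    sum_a = sum(math.comb(v, 2) for v in a.values())
    sum_b = sum(math.comb(v, 2) for v in b.values())
    expected = sum_a * sum_b / math.comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

def nmi(true, pred):
    """Normalized Mutual Information, arithmetic-mean normalization."""
    n = len(true)
    c = _contingency(true, pred)
    a, b = Counter(true), Counter(pred)
    mi = sum((v / n) * math.log(n * v / (a[i] * b[j]))
             for (i, j), v in c.items())
    entropy = lambda cnt: -sum((v / n) * math.log(v / n) for v in cnt.values())
    denom = (entropy(a) + entropy(b)) / 2
    return mi / denom if denom > 0 else 1.0
```

Both scores are label-permutation invariant: a prediction that relabels the true parts still scores 1.0, which is exactly what is wanted for unsupervised part discovery.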

Citation

@inproceedings{choudhury21unsupervised,
 author = {Subhabrata Choudhury and Iro Laina and Christian Rupprecht and Andrea Vedaldi},
 booktitle = {Proceedings of Advances in Neural Information Processing Systems (NeurIPS)},
 title = {Unsupervised Part Discovery from Contrastive Reconstruction},
 year = {2021}
}

Acknowledgement

Code is largely based on SCOPS.

