
Zero-Shot Semantic Segmentation

Paper

Zero-Shot Semantic Segmentation
Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, Patrick Pérez
valeo.ai, France
Neural Information Processing Systems (NeurIPS) 2019

If you find this code useful for your research, please cite our paper:

@inproceedings{bucher2019zero,
  title={Zero-Shot Semantic Segmentation},
  author={Bucher, Maxime and Vu, Tuan-Hung and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={NeurIPS},
  year={2019}
}

Abstract

Semantic segmentation models are limited in their ability to scale to large numbers of object classes. In this paper, we introduce the new task of zero-shot semantic segmentation: learning pixel-wise classifiers for never-seen object categories with zero training examples. To this end, we present a novel architecture, ZS3Net, combining a deep visual segmentation model with an approach to generate visual representations from semantic word embeddings. In this way, ZS3Net addresses pixel classification tasks where both seen and unseen categories are faced at test time (so-called "generalized" zero-shot classification). Performance is further improved by a self-training step that relies on automatic pseudo-labeling of pixels from unseen classes. On the two standard segmentation datasets, Pascal-VOC and Pascal-Context, we propose zero-shot benchmarks and set competitive baselines. For complex scenes such as those in the Pascal-Context dataset, we extend our approach by using a graph-context encoding to fully leverage spatial context priors coming from class-wise segmentation maps.
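
For intuition, below is a minimal, illustrative sketch of the generative component described above: a generator maps (noise, word embedding) pairs to visual features and is trained with a maximum mean discrepancy (MMD) loss against real features of seen classes. This is not the paper's exact implementation; layer sizes, kernel bandwidths, and feature dimensions are placeholders.

import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Generates pixel-level visual features conditioned on a class word embedding."""
    def __init__(self, noise_dim=300, embed_dim=300, feat_dim=256, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, noise, class_embedding):
        # Concatenate random noise with the semantic word embedding of the class.
        return self.net(torch.cat([noise, class_embedding], dim=1))

def mmd_loss(x, y, sigmas=(2.0, 5.0, 10.0, 20.0, 40.0, 80.0)):
    """Multi-kernel (Gaussian) maximum mean discrepancy between two feature batches."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Toy usage: match generated features to (random stand-in) real features of one seen class.
gen = FeatureGenerator()
real_feats = torch.randn(64, 256)            # features pooled from the segmentation backbone
emb = torch.randn(1, 300).expand(64, -1)     # word embedding of the class (placeholder values)
fake_feats = gen(torch.randn(64, 300), emb)
loss = mmd_loss(fake_feats, real_feats)
loss.backward()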

Code

Pre-requisites

  • Python 3.6
  • PyTorch 1.0 or higher
  • CUDA 9.0 or higher

Installation

  1. Clone the repo:
$ git clone https://github.com/valeoai/ZS3
  2. Install this repository and the dependencies using pip:
$ pip install -e ZS3

With this, you can edit the ZS3 code on the fly and import functions and classes of ZS3 in other projects as well (see the import check after the installation steps).

  3. Optional. To uninstall this package, run:
$ pip uninstall ZS3

You can take a look at the Dockerfile if you are uncertain about the installation steps for this project.
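
After the editable install, the package should be importable from any Python session. The top-level module name zs3 below is an assumption based on the repository name; check the repository's setup.py if the package layout differs.

import zs3                 # assumed top-level module name
print(zs3.__file__)        # should point into your cloned ZS3/ directory, not into site-packages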

Datasets

Pascal-VOC 2012

  • Pascal-VOC 2012: Please follow the instructions here to download images and semantic segmentation annotations.

  • Semantic Boundaries Dataset: Please follow the instructions here to download images and semantic segmentation annotations. Use this train set, which excludes overlap with the Pascal-VOC validation set.

The Pascal-VOC and SBD dataset directories should have this structure:

ZS3/data/VOC2012/    % Pascal VOC and SBD datasets root
ZS3/data/VOC2012/ImageSets/Segmentation/     % Pascal VOC splits
ZS3/data/VOC2012/JPEGImages/     % Pascal VOC images
ZS3/data/VOC2012/SegmentationClass/      % Pascal VOC segmentation maps
ZS3/data/VOC2012/benchmark_RELEASE/dataset/img      % SBD images
ZS3/data/VOC2012/benchmark_RELEASE/dataset/cls      % SBD segmentation maps
ZS3/data/VOC2012/benchmark_RELEASE/dataset/train_noval.txt       % SBD train set
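
As an optional sanity check (not part of the repository), the expected files can be verified with a few lines of Python before launching training. The root path mirrors the layout listed above; adjust it if your data lives elsewhere.

from pathlib import Path

root = Path("ZS3/data/VOC2012")
expected = [
    "ImageSets/Segmentation",
    "JPEGImages",
    "SegmentationClass",
    "benchmark_RELEASE/dataset/img",
    "benchmark_RELEASE/dataset/cls",
    "benchmark_RELEASE/dataset/train_noval.txt",
]
for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:8s} {root / rel}")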

Pascal-Context

  • Pascal-VOC 2010: Please follow the instructions here to download images.

  • Pascal-Context: Please follow the instructions here to download segmentation annotations.

The Pascal-Context dataset directory should have this structure:

ZS3/data/context/    % Pascal context dataset root
ZS3/data/context/train.txt     % Pascal context train split
ZS3/data/context/val.txt     % Pascal context val split
ZS3/data/context/full_annotations/trainval/     % Pascal context segmentation maps
ZS3/data/context/full_annotations/labels.txt     % Pascal context 459 classes
ZS3/data/context/classes-59.txt     % Pascal context 59 classes
ZS3/data/context/VOCdevkit/VOC2010/JPEGImages     % Pascal VOC images
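
Similarly, a quick hypothetical check (not part of the repository) that the Pascal-Context split and class files are in place and readable, assuming one entry per line as is standard for such files:

from pathlib import Path

root = Path("ZS3/data/context")   # adjust if your data lives elsewhere
for name in ["train.txt", "val.txt", "classes-59.txt"]:
    path = root / name
    if path.exists():
        print(f"{name}: {len(path.read_text().splitlines())} lines")
    else:
        print(f"{name}: MISSING")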

Training

Pascal-VOC

Follow the steps below to train your model:

  1. Train DeepLabv3+ on the Pascal-VOC dataset with a ResNet backbone pretrained on ImageNet (weights here):
python train_pascal.py
  2. Train GMMN and fine-tune the last classification layer of the trained DeepLabv3+ model:
python train_pascal_GMMN.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to DeepLabv3+ weights.
    • exp_path: Path to the saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final DeepLabv3+ and GMMN weights
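
Whether these options are exposed as command-line flags or set inside the script depends on the code; assuming they are argparse flags (an assumption worth checking in the script itself), an invocation with placeholder paths could look like:

python train_pascal_GMMN.py \
    --imagenet_pretrained_path checkpoints/resnet_imagenet.pth \
    --resume checkpoints/deeplabv3plus_pascal.pth \
    --exp_path runs \
    --checkname gmmn_pascal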

Pascal-Context

Follow the steps below to train your model:

  1. Train DeepLabv3+ on the Pascal-Context dataset with a ResNet backbone pretrained on ImageNet (weights here):
python train_context.py
  2. Train GMMN and fine-tune the last classification layer of the trained DeepLabv3+ model:
python train_context_GMMN.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to DeepLabv3+ weights.
    • exp_path: Path to the saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final DeepLabv3+ and GMMN weights

(2 bis). Train GMMN with graph context and fine-tune the last classification layer of the trained DeepLabv3+ model:

python train_context_GMMN_GCNcontext.py
  • Main options

    • imagenet_pretrained_path: Path to ImageNet pretrained weights.
    • resume: Path to DeepLabv3+ weights.
    • exp_path: Path to the saved logs and weights folder.
    • checkname: Name of the saved logs and weights folder.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
  • Final DeepLabv3+ and GMMN with graph context weights
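
As a rough illustration of the graph-context idea (a sketch only, not the paper's exact encoder), a single graph-convolution step can mix the word embeddings of spatially adjacent segments before they condition the generator. The adjacency construction and layer sizes below are assumptions.

import torch
import torch.nn as nn

class GraphContextEncoder(nn.Module):
    """One graph-convolution step over a segment-adjacency graph."""
    def __init__(self, embed_dim=300, out_dim=300):
        super().__init__()
        self.proj = nn.Linear(embed_dim, out_dim)

    def forward(self, node_embeddings, adjacency):
        # Add self-loops, row-normalise, then aggregate neighbouring embeddings.
        adj = adjacency + torch.eye(adjacency.size(0))
        adj = adj / adj.sum(dim=1, keepdim=True)
        return torch.relu(self.proj(adj @ node_embeddings))

# Toy usage: 4 segments in an image, word embeddings of their class names,
# and an adjacency matrix saying which segments touch each other.
emb = torch.randn(4, 300)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
context_emb = GraphContextEncoder()(emb, adj)
print(context_emb.shape)  # torch.Size([4, 300])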

Testing

python eval_pascal.py
python eval_context.py
  • Main options
    • resume: Path to DeepLabv3+ and GMMN weights.
    • seen_classes_idx_metric: List of indices of seen classes.
    • unseen_classes_idx_metric: List of indices of unseen classes.
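
Assuming resume is exposed as a command-line flag (an assumption; check the evaluation scripts), a hypothetical call with a placeholder checkpoint path would be:

python eval_pascal.py --resume runs/gmmn_pascal/checkpoint.pth.tar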

Acknowledgements

License

ZS3Net is released under the Apache 2.0 license.
