
Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers

Code of CVPR 2022 paper: Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers.

[arXiv] [Project] [Poster]


AFA flowchart

Abstract

Weakly-supervised semantic segmentation (WSSS) with image-level labels is an important and challenging task. Due to the high training efficiency, end-to-end solutions for WSSS have received increasing attention from the community. However, current methods are mainly based on convolutional neural networks and fail to explore the global information properly, thus usually resulting in incomplete object regions. In this paper, to address the aforementioned problem, we introduce Transformers, which naturally integrate global information, to generate more integral initial pseudo labels for end-to-end WSSS. Motivated by the inherent consistency between the self-attention in Transformers and the semantic affinity, we propose an Affinity from Attention (AFA) module to learn semantic affinity from the multi-head self-attention (MHSA) in Transformers. The learned affinity is then leveraged to refine the initial pseudo labels for segmentation. In addition, to efficiently derive reliable affinity labels for supervising AFA and ensure the local consistency of pseudo labels, we devise a Pixel-Adaptive Refinement module that incorporates low-level image appearance information to refine the pseudo labels. We perform extensive experiments and our method achieves 66.0% and 38.9% mIoU on the PASCAL VOC 2012 and MS COCO 2014 datasets, respectively, significantly outperforming recent end-to-end methods and several multi-stage competitors. Code will be made publicly available.
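
As a rough sketch of the core idea (not the authors' implementation; the class name, the symmetrization step, and the 1x1 fusion head are assumptions based on this description), an AFA-style head that turns MHSA maps into pairwise affinities might look like:

import torch
import torch.nn as nn

class AffinityFromAttention(nn.Module):
    # Hypothetical sketch of an AFA-style head; the official code may differ.
    def __init__(self, num_heads: int):
        super().__init__()
        # Fuse the attention heads into one affinity score per token pair
        # with a learnable 1x1 convolution (assumption).
        self.fuse = nn.Conv2d(num_heads, 1, kernel_size=1)

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (B, num_heads, N, N) multi-head self-attention maps,
        # where N is the number of patch tokens.
        # Symmetrize: semantic affinity is an undirected relation.
        attn = attn + attn.transpose(-1, -2)
        # (B, 1, N, N) -> (B, N, N), squashed to [0, 1].
        return torch.sigmoid(self.fuse(attn).squeeze(1))

The predicted affinity can then be used to propagate and refine the initial CAM-based pseudo labels, supervised by reliable affinity labels derived from the refined pseudo labels.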

Preparations

VOC dataset

1. Download

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_11-May-2012.tar

2. Download the augmented annotations

The augmented annotations are from the SBD dataset. Here is a download link for the augmented annotations at DropBox. After downloading SegmentationClassAug.zip, unzip it and move it to VOCdevkit/VOC2012. The directory structure should then be:

VOCdevkit/
└── VOC2012
    ├── Annotations
    ├── ImageSets
    ├── JPEGImages
    ├── SegmentationClass
    ├── SegmentationClassAug
    └── SegmentationObject
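
Optionally, a quick sanity check that the layout matches the tree above (a minimal sketch):

from pathlib import Path

root = Path("VOCdevkit/VOC2012")
for d in ["Annotations", "ImageSets", "JPEGImages",
          "SegmentationClass", "SegmentationClassAug", "SegmentationObject"]:
    # Fail early if any of the expected subdirectories is missing.
    assert (root / d).is_dir(), f"missing directory: {root / d}"
print("VOC2012 layout looks correct.")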

COCO dataset

1. Download

wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip

After unzipping the downloaded files, I recommend organizing them in VOC style for convenience (a helper sketch follows the tree below).

MSCOCO/
├── JPEGImages
│    ├── train
│    └── val
└── SegmentationClass
     ├── train
     └── val
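
A small helper along these lines could do the reorganization (a sketch; it assumes the zips were extracted into train2014/ and val2014/ in the working directory):

import shutil
from pathlib import Path

# Assumed source folders from unzipping train2014.zip / val2014.zip.
for split, src_dir in [("train", Path("train2014")), ("val", Path("val2014"))]:
    dst_dir = Path("MSCOCO/JPEGImages") / split
    dst_dir.mkdir(parents=True, exist_ok=True)
    for img in src_dir.glob("*.jpg"):
        shutil.move(str(img), str(dst_dir / img.name))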

2. Generate VOC-style segmentation labels for COCO

To generate VOC-style segmentation labels for the COCO dataset, you can use the scripts provided at this repo. Alternatively, just download the generated masks from Google Drive.
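
If you would rather generate the masks yourself, a minimal pycocotools sketch could look like the following. The annotation and output paths are assumptions, and the linked repo handles details (e.g. crowd regions) more carefully:

import numpy as np
from PIL import Image
from pycocotools.coco import COCO

# Assumed paths; adjust to your setup.
coco = COCO("annotations/instances_train2014.json")
out_dir = "MSCOCO/SegmentationClass/train"

# COCO category ids are non-contiguous (1-90); remap to contiguous 1-80.
cat2label = {cid: i + 1 for i, cid in enumerate(sorted(coco.getCatIds()))}

for img_id in coco.getImgIds():
    info = coco.loadImgs(img_id)[0]
    mask = np.zeros((info["height"], info["width"]), dtype=np.uint8)
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id)):
        # Later instances simply overwrite earlier ones where they overlap.
        mask[coco.annToMask(ann) == 1] = cat2label[ann["category_id"]]
    Image.fromarray(mask).save(
        f"{out_dir}/{info['file_name'].replace('.jpg', '.png')}")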

Create and activate conda environment

conda create --name py36 python=3.6
conda activate py36
pip install -r requirements.txt

Clone this repo

git clone https://github.com/rulixiang/afa.git
cd afa

Download Pre-trained weights

Download the ImageNet-1k pre-trained weights from the official SegFormer implementation and move them to pretrained/.
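
Loading a checkpoint into the backbone is then straightforward. A sketch, assuming the MiT-B1 checkpoint mit_b1.pth and an already instantiated backbone model:

import torch
import torch.nn as nn

def load_backbone_weights(model: nn.Module,
                          ckpt: str = "pretrained/mit_b1.pth") -> nn.Module:
    state = torch.load(ckpt, map_location="cpu")
    # Some releases nest the weights under a 'state_dict' key.
    if "state_dict" in state:
        state = state["state_dict"]
    # strict=False: decoder/head parameters have no ImageNet counterpart.
    model.load_state_dict(state, strict=False)
    return model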

[Optional] Build python extension module

To use the regularized loss, you need to download and compile the python extension, which is provided here. This module is not necessary; according to the ablation study, it only brings a subtle improvement to the final performance on VOC.

Train

To start training, just run the scripts under launch/.

# train on voc
bash launch/run_sbatch_attn_reg.sh
# train on coco
bash launch/run_sbatch_attn_reg_coco.sh

Running the above commands will write the training logs. For reference, our training logs are also available under logs/.

Results

Below are the generated CAMs and semantic segmentation results on the DAVIS 2017 dataset; the model was trained on the PASCAL VOC 2012 dataset. For more results, please see the [Project page] or [Paper].

Visualization. Left: CAMs of the cls branch. Right: prediction of the seg branch.
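
The mIoU scores reported in the paper follow the standard protocol: accumulate a confusion matrix over the dataset and average the per-class IoU. A minimal sketch (21 classes for VOC, i.e. background plus 20 object classes; preds and gts are assumed to be lists of integer label maps):

import numpy as np

def mean_iou(preds, gts, num_classes=21, ignore_index=255):
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(preds, gts):
        keep = g != ignore_index  # VOC marks void pixels with 255
        conf += np.bincount(
            num_classes * g[keep].astype(np.int64) + p[keep].astype(np.int64),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    inter = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - inter
    return (inter / np.maximum(union, 1)).mean()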

Citation

Please kindly cite our paper if you find it helpful in your work.

@inproceedings{ru2022learning,
    title = {Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers},
    author = {Lixiang Ru and Yibing Zhan and Baosheng Yu and Bo Du},
    booktitle = {CVPR},
    year = {2022},
}

Acknowledgement

We use SegFormer and its pre-trained weights as the backbone, which is based on MMSegmentation. We heavily borrowed from 1-stage-wseg to construct our PAR. Also, we use the Regularized Loss and the random walk propagation in PSA. Many thanks for their brilliant work!
