Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection
Updates
- 06/2021: check out our domain adaptation for panoptic segmentation paper Cross-View Regularization for Domain Adaptive Panoptic Segmentation (accepted to CVPR 2021). We design a domain adaptive panoptic segmentation network that exploits inter-style consistency and inter-task regularization for optimal domain adaptation in panoptic segmentation. Code available.
- 06/2021: check out our domain generalization paper FSDR: Frequency Space Domain Randomization for Domain Generalization (accepted to CVPR 2021). Inspired by the idea of JPEG that converts spatial images into multiple frequency components (FCs), we propose Frequency Space Domain Randomization (FSDR) that randomizes images in frequency space by keeping domain-invariant FCs (DIFs) and randomizing domain-variant FCs (DVFs) only. Code available.
- 06/2021: check out our domain adaptation for semantic segmentation paper Scale Variance Minimization for Unsupervised Domain Adaptation in Image Segmentation (accepted to Pattern Recognition 2021). We design a scale variance minimization (SVMin) method by enforcing the intra-image semantic structure consistency in the target domain. Code available.
Paper
Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection
Dayan Guan1, Jiaxing Huang1, Aoran Xiao1, Shijian Lu1, Yanpeng Cao2
1School of Computer Science and Engineering, Nanyang Technological University, Singapore.
2School of Mechanical Engineering, Zhejiang University, Hangzhou, China.
IEEE Transactions on Multimedia, 2021.
If you find this code useful for your research, please cite our paper:
@article{guan2021uncertainty,
title={Uncertainty-aware unsupervised domain adaptation in object detection},
author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian and Cao, Yanpeng},
journal={IEEE Transactions on Multimedia},
year={2021},
publisher={IEEE}
}
Abstract
Unsupervised domain adaptive object detection aims to adapt detectors from a labelled source domain to an unlabelled target domain. Most existing works take a two-stage strategy that first generates region proposals and then detects objects of interest, where adversarial learning is widely adopted to mitigate the inter-domain discrepancy in both stages. However, adversarial learning may impair the alignment of well-aligned samples as it merely aligns the global distributions across domains. To address this issue, we design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately in different manners. Specifically, we design an uncertainty metric that assesses the alignment of each sample and adjusts the strength of adversarial learning for well-aligned and poorly-aligned samples adaptively. In addition, we exploit the uncertainty metric to achieve curriculum learning that first performs easier image-level alignment and then more difficult instance-level alignment progressively. Extensive experiments over four challenging domain adaptive object detection datasets show that UaDAN achieves superior performance as compared with state-of-the-art methods.
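The core idea above, an uncertainty metric that modulates the strength of adversarial alignment per sample, can be illustrated with a minimal PyTorch sketch. Note this is not the paper's exact formulation: the function names, the use of the domain discriminator's prediction entropy as the uncertainty measure, and the `1 - uncertainty` weighting are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_uncertainty(d_logits):
    """Normalised entropy of a binary domain discriminator, in [0, 1].

    High entropy means the discriminator cannot tell which domain the
    sample comes from, i.e. the sample is already well aligned.
    """
    p = torch.sigmoid(d_logits).clamp(1e-6, 1 - 1e-6)
    ent = -(p * p.log() + (1 - p) * (1 - p).log())
    return ent / torch.log(torch.tensor(2.0))  # normalise by max entropy log(2)

def conditional_adversarial_loss(d_logits, domain_labels):
    """Per-sample adversarial loss re-weighted by (1 - uncertainty).

    Well-aligned samples (high uncertainty) get weaker adversarial
    gradients, so alignment that is already good is not impaired.
    """
    per_sample = F.binary_cross_entropy_with_logits(
        d_logits, domain_labels, reduction="none")
    weights = 1.0 - alignment_uncertainty(d_logits)
    return (weights.detach() * per_sample).mean()
```

The same uncertainty score could also drive the curriculum described above, e.g. by enabling instance-level alignment only once the average image-level uncertainty is high enough, but that scheduling logic is omitted here.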
Installation
conda env create -f environment.yaml
conda activate uadan
python setup.py build develop
pip install torchvision==0.2.1
Prepare Dataset
- Pascal VOC: download the Pascal VOC dataset and place it in
UaDAN/dataset/voc
- Clipart1k: download the Clipart1k dataset, place it in
UaDAN/dataset/clipart
and unzip it. (The Clipart1k dataset contains 1,000 clipart images: 800 for training and 200 for validation.)
mv tools/dataset/clipart/ImageSets dataset/clipart
- Cityscapes: download the Cityscapes dataset and place it in
UaDAN/dataset/cityscapes
- Mapillary Vistas: download the Mapillary Vistas dataset and place it in
UaDAN/dataset/vistas
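Based only on the paths listed above, the dataset folder should end up looking roughly like this; the exact contents of each sub-folder depend on the respective dataset release and are not specified here.

```
UaDAN/dataset/
├── voc/
├── clipart/
│   └── ImageSets/
├── cityscapes/
└── vistas/
```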
Pre-trained models
Pre-trained models can be downloaded here and placed in UaDAN/pretrained_models
Evaluation
python tools/test_net.py --config-file "configs/UaDAN_Voc2Clipart.yaml" MODEL.WEIGHT "pretrained_models/UaDAN_Voc2Clipart.pth"
python tools/test_net.py --config-file "configs/UaDAN_City2Vistas.yaml" MODEL.WEIGHT "pretrained_models/UaDAN_City2Vistas.pth"
Training
python tools/train_net.py --config-file "configs/UaDAN_voc2clipart.yaml"
python tools/test_net_all.py --config-file "configs/UaDAN_voc2clipart.yaml"
Acknowledgements
This codebase builds heavily on maskrcnn-benchmark and Domain-Adaptive-Faster-RCNN-PyTorch.
Contact
If you have any questions, please contact: [email protected]