
VS-Net: Voting with Segmentation for Visual Localization

Zhaoyang Huang*, Han Zhou*, Yijin Li, Bangbang Yang, Yan Xu, Xiaowei Zhou, Hujun Bao, Guofeng Zhang, Hongsheng Li
CVPR 2021

Requirements

We recommend Ubuntu 18.04, CUDA 10.0, and Python 3.7.

conda env create -f environment.yaml
conda activate vsnet
cd scripts
sh environment.sh

Data

Given a 3D mesh for a scene, we first decompose the mesh into patches and select a landmark on each patch. Then, we compute ground-truth patch segmentation maps and pixel-voting maps from the camera poses of the training images. Please refer to our paper for more details. We evaluate our method on the Microsoft 7-Scenes dataset and the Cambridge Landmarks dataset. You can download the 7-Scenes dataset from here and the Cambridge Landmarks dataset from here. We provide our preprocessed 7-Scenes training segmentation images here and Cambridge training segmentation images here. The voting images are generated with the following scripts.

python scripts/7scenes_preprocess.py --scene scene_name --root_dir /path/to/data
python scripts/cambridge_preprocess.py --scene scene_name --root_dir /path/to/data
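Conceptually, the voting map stores, for every pixel inside a patch, the unit 2D direction from that pixel to the patch's projected landmark. The following NumPy sketch illustrates this idea; the function name and input layout are illustrative assumptions, not taken from the preprocessing scripts:

```python
import numpy as np

def voting_map(seg, landmarks_2d):
    """Compute a per-pixel voting map: each pixel inside a patch stores
    the unit 2D vector pointing from the pixel to its patch's landmark.

    seg          -- (H, W) int array of patch ids
    landmarks_2d -- dict mapping patch id -> (x, y) projected landmark
    """
    h, w = seg.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vote = np.zeros((h, w, 2), dtype=np.float32)
    for pid, (lx, ly) in landmarks_2d.items():
        mask = seg == pid
        dx, dy = lx - xs[mask], ly - ys[mask]
        norm = np.sqrt(dx * dx + dy * dy) + 1e-8  # avoid division by zero at the landmark pixel
        vote[mask] = np.stack([dx / norm, dy / norm], axis=-1)
    return vote
```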

In the end, you will get the following file tree.

7scenes_release
├── chess
│   ├── id2centers.json
│   ├── TrainSplit.txt
│   ├── TestSplit.txt
│   ├── seq-01
│   │   ├── frame-000000.color.png
│   │   ├── frame-000000.depth.png
│   │   ├── frame-000000.pose.txt
│   │   ├── frame-000000.seg.png
│   │   ├── frame-000000.vertex_2d.npy
│   │   ├── ...
│   ├── seq-02
│   ├── ...
├── heads
├── ...
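Once preprocessing is done, one training frame can be read with a few lines of NumPy/Pillow. This is a minimal sketch, not the repo's actual data loader; the pose.txt layout follows the standard 7-Scenes format (a 4x4 camera-to-world matrix), and the helper name is hypothetical:

```python
import numpy as np
from PIL import Image

def load_frame(frame_prefix):
    """Load one preprocessed training frame, e.g.
    frame_prefix = '7scenes_release/chess/seq-01/frame-000000'."""
    color = np.array(Image.open(frame_prefix + ".color.png"))
    seg = np.array(Image.open(frame_prefix + ".seg.png"))        # per-pixel patch ids
    vote = np.load(frame_prefix + ".vertex_2d.npy")              # per-pixel voting map
    pose = np.loadtxt(frame_prefix + ".pose.txt").reshape(4, 4)  # 4x4 camera pose
    return color, seg, vote, pose
```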

Evaluation

We provide the training parameters used in our paper in collected_configs. The pretrained models can be found here. All results are measured on an RTX 2070 and an i7-9700K.

python eval.py --dataset {dataset}_loc --resume true --config /path/to/config.json
# Example
# python eval.py --dataset 7scenes_loc --resume true --config collected_configs/7scenes/r.json
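Localization accuracy on these benchmarks is typically reported as the median translation and rotation error per scene. A minimal sketch of those two metrics for a pair of 4x4 poses (illustrative only, not the repo's evaluation code):

```python
import numpy as np

def pose_errors(T_est, T_gt):
    """Translation error (in the poses' units, e.g. metres) and
    rotation error (degrees) between two 4x4 camera poses."""
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    # Rotation angle from the trace of the relative rotation matrix
    cos = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos))
    return t_err, r_err
```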

Training

python train.py --dataset {dataset}_loc --scene {scene_name} --use-aug true --gpu-id gpu_device_id
# Example
# python train.py --dataset 7scenes_loc --scene heads --use-aug true --gpu-id 1

Acknowledgements

Thanks to Hanqing Jiang and Liyang Zhou for their assistance in building the corresponding 3D meshes. Our voting intersection code is built upon PVNet.

Citation

@inproceedings{huang2021vs,
  title={{VS-Net}: Voting with Segmentation for Visual Localization},
  author={Huang, Zhaoyang and Zhou, Han and Li, Yijin and Yang, Bangbang and Xu, Yan and Zhou, Xiaowei and Bao, Hujun and Zhang, Guofeng and Li, Hongsheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6101--6111},
  year={2021}
}

Copyright

This work is affiliated with ZJU-SenseTime Joint Lab of 3D Vision and CUHK-SenseTime Joint Lab. Its intellectual property belongs to SenseTime Group Ltd.

Copyright SenseTime. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


vs-net's Issues

How to get the training segmentation images

Hi, thank you for your excellent work. I would like to know how to obtain the training segmentation images: how is the ground-truth patch segmentation map computed from the camera pose of each training image? Could you provide the scripts? Thank you in advance.

Question about 7-Scenes data

Hi, I noticed that the original poses of the 7-Scenes data are in the camera-to-world convention, so during training you convert them to world-to-camera with the script "_pose_target = np.linalg.inv(_pose_target)".
But why do you also apply "_pose_target[0, 3] = _pose_target[0, 3] + 0.0245"?
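For context, a rigid camera-to-world pose can be inverted to world-to-camera in closed form; this small sketch is equivalent to the np.linalg.inv call quoted above (it does not address the 0.0245 offset, which the question is about):

```python
import numpy as np

def c2w_to_w2c(pose_c2w):
    """Invert a 4x4 rigid camera-to-world pose to world-to-camera,
    using the closed form [R|t]^-1 = [R^T | -R^T t]."""
    R, t = pose_c2w[:3, :3], pose_c2w[:3, 3]
    w2c = np.eye(4)
    w2c[:3, :3] = R.T
    w2c[:3, 3] = -R.T @ t
    return w2c
```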

Cambridge dataset

The link to the Cambridge Landmarks dataset is invalid. Could you share your copy of the dataset with me?

How can we generate ".seg.png" and "id2centers.json"?

Hey,

This is really nice work! Our group is trying to test your method on our own datasets. I was wondering if you could provide the script that generates the ".seg.png" files and the "id2centers.json" file for each dataset. Also, do you have pre-trained models for the Cambridge Landmarks dataset?

Best,
Shiming

Pre-trained models for Cambridge

Hi,

Thank you very much for this inspiring work and for releasing the implementation code.

I noticed that you give a link to the pre-trained models on 7Scenes, but not on the Cambridge dataset. Would it also be possible to get access to the pre-trained models on Cambridge? Thank you in advance!

test on robotcar

How can I test this algorithm on the RobotCar dataset? Do I need to modify the data somehow?
