SensatUrban-BEV-Seg3D

This is the official implementation of our BEV-Seg3D-Net, an efficient 3D semantic segmentation framework for urban-scale point clouds such as SensatUrban, Campus3D, etc.

Features of our framework/model:

  • leverages various proven 2D segmentation methods for 3D tasks
  • achieves competitive performance on the SensatUrban benchmark
  • fast inference: roughly 1 km^2 of area per minute on an RTX 3090

To be done:

  • add more complex/efficient fusion models
  • add more backbones such as ResNeXt, HRNet, DenseNet, etc.
  • add more novel projection methods such as PointPillars

For technical details, please refer to:

Efficient Urban-scale Point Clouds Segmentation with BEV Projection
Zhenhong Zou, Yizhe Li, Xinyu Zhang

Please cite as:

@article{Zou2021EfficientUP,
  title={Efficient Urban-scale Point Clouds Segmentation with BEV Projection},
  author={Zhenhong Zou and Yizhe Li and Xinyu Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.09074}
}

(1) Setup

This code has been tested with Python 3.7, PyTorch 1.8, and CUDA 11.0 on Ubuntu 16.04. Earlier versions of PyTorch should also be supported.

  • Clone the repository
git clone https://github.com/zouzhenhong98/SensatUrban-BEV-Seg3D.git && cd SensatUrban-BEV-Seg3D
  • Setup python environment
conda create -n bevseg python=3.7
source activate bevseg
pip install -r requirements.txt

(2) Preprocess

We provide various data analysis and preprocessing methods for the SensatUrban dataset. (Some of the following steps are optional.)

  • Before data generation, set path_to_your_dataset in preprocess/point_EDA_31.py:
Sensat = SensatUrbanEDA()
Sensat.root_dir = 'path_to_your_dataset'
Sensat.split = 'train' # change to 'test' for inference
  • Initialize the BEV projection arguments. We provide our optimal settings below, but you can try other values for analysis:
Sensat.grids_scale = 0.05 # grid resolution (presumably meters per BEV pixel)
Sensat.grids_size = 25 # sliding-window tile size (presumably meters), i.e. 25 / 0.05 = 500 pixels per side, matching the 500-pixel crop used in training
Sensat.grids_step = 25 # window stride; equal to grids_size, so adjacent tiles do not overlap
  • (Optional) If you want to test the sliding-window point generator (see also the end-to-end sketch after this list):
data_dir = os.path.join(self.root_dir, self.split)
ply_name = sorted(os.listdir(data_dir))[0] # pick the first ply file
ply_path = os.path.join(data_dir, ply_name)
ply_data = self.load_points(ply_path, reformat=True)
grids_data = self.grid_generator(ply_data, self.grids_size, self.grids_step, False) # returns an iterator over tiles
  • Calculating spatial overlap ratio in BEV projection:
Sensat.single_ply_analysis(Sensat.exp_point_overlay_count) # randomly select one ply file
Sensat.batch_ply_analysis(Sensat.exp_point_overlay_count) # for all ply files in the path
  • Calculating the class overlap ratio in the BEV projection, i.e., ignoring overlapping points that belong to the same category:
Sensat.single_ply_analysis(Sensat.exp_class_overlay_count) # randomly select one ply file
Sensat.batch_ply_analysis(Sensat.exp_class_overlay_count) # for all ply files in the path
  • Test the BEV projection and 3D remapping with an IoU consistency check (reflecting how well the BEV segmentation task preserves the 3D segmentation labels):
Sensat.evaluate('offline', Sensat.map_offline_img2pts)
  • BEV data generation:
Sensat.batch_ply_analysis(Sensat.exp_gen_bev_projection)
  • Point Spatial Overlap Ratio Statistics at different projection scales

  • For more BEV projection test results, see our sample images: completion tests in imgs/completion_test, edge detection with different CV operators in imgs/edge_detection, and RGB and label projection samples in imgs/projection_sample
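
Putting the steps above together, a minimal end-to-end preprocessing driver might look like the sketch below. The class, attribute, and method names are taken from the snippets above; the import path and the exact semantics of each argument are our assumptions, so treat this as a sketch rather than the official script:

import os
from preprocess.point_EDA_31 import SensatUrbanEDA  # assumes the repository root is on PYTHONPATH

Sensat = SensatUrbanEDA()
Sensat.root_dir = 'path_to_your_dataset'
Sensat.split = 'train'  # change to 'test' for inference data

# BEV projection settings (our recommended values)
Sensat.grids_scale = 0.05
Sensat.grids_size = 25
Sensat.grids_step = 25

# (Optional) peek at one tile produced by the sliding-window generator
data_dir = os.path.join(Sensat.root_dir, Sensat.split)
ply_name = sorted(os.listdir(data_dir))[0]
ply_data = Sensat.load_points(os.path.join(data_dir, ply_name), reformat=True)
for tile in Sensat.grid_generator(ply_data, Sensat.grids_size, Sensat.grids_step, False):
    print(type(tile))  # assumption: each item holds the points of one BEV tile
    break

# Overlap statistics and BEV image generation over all ply files
Sensat.batch_ply_analysis(Sensat.exp_point_overlay_count)
Sensat.batch_ply_analysis(Sensat.exp_class_overlay_count)
Sensat.batch_ply_analysis(Sensat.exp_gen_bev_projection)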

(3) Training & Inference

We provide two basic multimodal fusion networks developed from U-Net in the modeling folder: unet.py implements basic feature fusion, and uneteca.py implements attention fusion.
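
For intuition, the sketch below contrasts the two fusion styles on a pair of encoder feature maps (RGB branch and altitude branch): plain channel concatenation versus an ECA-style channel-attention gate. This is our illustration of the idea only, not the actual unet.py / uneteca.py code:

import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Basic feature fusion: concatenate the two branches, then mix with a 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, alt_feat):
        return self.mix(torch.cat([rgb_feat, alt_feat], dim=1))

class ECAFusion(nn.Module):
    """Attention fusion: reweight the concatenated channels with an ECA-style 1D-conv gate."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, rgb_feat, alt_feat):
        x = torch.cat([rgb_feat, alt_feat], dim=1)  # (B, 2C, H, W)
        w = self.pool(x).squeeze(-1).transpose(1, 2)  # (B, 1, 2C)
        w = torch.sigmoid(self.gate(w)).transpose(1, 2).unsqueeze(-1)  # (B, 2C, 1, 1)
        return self.mix(x * w)  # channel-weighted fusion

# usage: ECAFusion(64)(rgb_feat, alt_feat), with both inputs shaped (B, 64, H, W)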

  • Change path_to_your_dataset in mypath.py and dataloaders/__init__.py (under the 'cityscapes' dataset entry)

  • Train from scratch (a sketch of the focal loss selected by --loss-type focal is given after this list)

python train.py --use-balanced-weights --batch-size 8 --base-size 500 --crop-size 500 --loss-type focal --epochs 200 --eval-interval 1
  • Change the save_dir in inference.py

  • Inference on test data

python inference.py --batch-size 8
  • Prediction Results Visualization (RGB, altitude, label, prediction)
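
As noted above, --loss-type focal selects a focal loss. Below is a generic sketch of a class-weighted focal loss for segmentation using the standard definition; it is not necessarily identical to the repository's implementation, and the weights produced by --use-balanced-weights would be passed as class_weights (ignore_index=255 is a common convention and an assumption here):

import torch
import torch.nn.functional as F

def focal_loss(logits, target, class_weights=None, gamma=2.0, ignore_index=255):
    """Focal loss: scale per-pixel cross-entropy by (1 - p_t)^gamma to focus on hard pixels.
    logits: (B, C, H, W) raw scores; target: (B, H, W) class indices."""
    ce = F.cross_entropy(logits, target, weight=class_weights,
                         ignore_index=ignore_index, reduction='none')  # per-pixel CE, shape (B, H, W)
    pt = torch.exp(-ce)  # probability of the true class (absorbs the class weight, as in many implementations)
    return ((1.0 - pt) ** gamma * ce).mean()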

(4) Evaluation

  • Remap your BEV predictions to 3D and evaluate them on the 3D benchmark in preprocess/point_EDA_31.py (after the initialization steps above):
Sensat.evaluate_batch(Sensat.evaluate_batch_nn(Sensat.eval_offline_img2pts))
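
Conceptually, the remapping step assigns each 3D point the class predicted for the BEV cell it was projected into. The numpy sketch below illustrates that idea under our assumptions about the grid convention (row/column order, origin, and scale); it is not the repository's eval_offline_img2pts implementation:

import numpy as np

def remap_bev_to_points(points_xy, bev_pred, origin_xy, grids_scale=0.05):
    """Assign each 3D point the class predicted for its BEV cell.
    points_xy:   (N, 2) x/y coordinates of one tile's points
    bev_pred:    (H, W) per-pixel class prediction for the same tile
    origin_xy:   (2,) minimum x/y used when the tile was rasterized
    grids_scale: assumed meters per BEV pixel, matching the preprocessing setting"""
    cols = ((points_xy[:, 0] - origin_xy[0]) / grids_scale).astype(int)
    rows = ((points_xy[:, 1] - origin_xy[1]) / grids_scale).astype(int)
    rows = np.clip(rows, 0, bev_pred.shape[0] - 1)
    cols = np.clip(cols, 0, bev_pred.shape[1] - 1)
    return bev_pred[rows, cols]  # (N,) per-point labels

The per-point labels recovered this way are then scored against the 3D ground truth with the standard per-class IoU metrics.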

(5) Citation

If you find our work useful in your research, please consider citing it. (Citation information is coming soon; we are checking the open-access terms of the conference.)

(6) Acknowledgment

  • Part of our data processing code (read_ply and metrics) is developed based on https://github.com/QingyongHu/SensatUrban
  • Our neural network code is developed based on a U-Net repository from GitHub, but unfortunately we have been unable to identify the original repository. Please let us know if you can help.

(7) Related Work

To learn more about our fusion segmentation methods, please refer to our previous work:

@article{Zhang2021ChannelAI,
    title={Channel Attention in LiDAR-camera Fusion for Lane Line Segmentation},
    author={Xinyu Zhang and Zhiwei Li and Xin Gao and Dafeng Jin and Jun Li},
    journal={Pattern Recognit.},
    year={2021},
    volume={118},
    pages={108020}
}

@article{Zou2021ANM,
    title={A novel multimodal fusion network based on a joint coding model for lane line segmentation},
    author={Zhenhong Zou and Xinyu Zhang and Huaping Liu and Zhiwei Li and A. Hussain and Jun Li},
    journal={ArXiv},
    year={2021},
    volume={abs/2103.11114}
}


sensaturban-bev-seg3d's Issues

Processing the SensatUrban dataset with KPConv

Hello, could you provide code for processing the SensatUrban dataset with the KPConv network model? I am in a hurry on this, because I need to compare the performance of different networks on this dataset, and there are still very few network models that handle it.

May I know where to find your paper?

I only found the arXiv version. May I know whether the paper appears in a conference? I need a non-arXiv version for citation.

Thanks,

Eric

About reproducing results

Thank you so much for your great work.
When I train following the README process, the model never reaches the same curve as img/single_scale_training.png.
My experimental configuration is:
2 x 2080 Ti with batch size 4
mIoU only reaches 41% on the validation set


In addition, could you also provide pre-trained weights?
Thank you very much.

A problem about data

The SensatUrban dataset is split into train, val, and test. The train and val data are N×7, while the test data are N×6 (no class column). However, when the BEV for the test split is generated in point_EDA_31.py, the data are processed as N×7 rather than N×6.

Confusion about the training data

Hello! After preprocessing I obtained alt, rgb, cls, and vis, but the training seems to use four-channel RGB-D data? I'm a bit confused; could you please explain?

Seeking Code for Converting to Cityscapes Format

Hello, I'm searching for the code that can help me convert data into the Cityscapes format. Could someone please direct me to the repository or source where I can find this code? Thank you!
