
R2DM

R2DM is a denoising diffusion probabilistic model (DDPM) for LiDAR range/reflectance generation based on the equirectangular representation.

(Figure: LiDAR data sampled in 256 denoising steps)

LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models
Kazuto Nakashima, Ryo Kurazume
ICRA 2024
project | arxiv | online demo

Quick demo:

Install the dependencies:

pip install torch torchvision einops tqdm pydantic

Then, in Python:

import torch

# Set up the pre-trained model & sampling
r2dm, lidar_utils, cfg = torch.hub.load("kazuto1011/r2dm", "pretrained_r2dm", device="cuda")
lidar_image = r2dm.sample(batch_size=1, num_steps=256)  # (batch size, 2, height, width)

# Postprocessing
lidar_image = lidar_utils.denormalize(lidar_image.clamp(-1, 1))  # [-1,1] -> [0,1]
range_image = lidar_utils.revert_depth(lidar_image[:, [0]])  # Range
rflct_image = lidar_image[:, [1]]  # Reflectance
point_cloud = lidar_utils.to_xyz(range_image)  # Point cloud
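
As a follow-up (not part of the demo above), one way to dump the result for offline inspection; the (batch, 3, height, width) layout of point_cloud is an assumption here, so adjust the reshape if the actual shape differs:

import numpy as np

# Assumed layout: point_cloud is (batch, 3, H, W). Take the first sample and
# flatten it into an (H*W, 3) array of XYZ coordinates.
xyz = point_cloud[0].flatten(1).T.cpu().numpy()
np.save("sample_point_cloud.npy", xyz)  # reload later with np.load("sample_point_cloud.npy")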

Setup

Python & CUDA

With the conda framework:

conda env create -f environment.yaml
conda activate r2dm

If the environment creation hangs, try switching conda to the libmamba solver.
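
A minimal sketch of enabling it (assumes conda >= 22.11, where the libmamba solver plugin is available):

conda install -n base conda-libmamba-solver
conda config --set solver libmamba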

Dataset

For training & evaluation, please download the KITTI-360 dataset (163 GB) and make a symlink:

ln -sf $PATH_TO_KITTI360_ROOT data/kitti_360/dataset

Please set the environment variable $HF_DATASETS_CACHE to cache the processed dataset (default: ~/.cache/huggingface/datasets).
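
For example, to keep the cache on a larger disk (the path below is only a placeholder):

export HF_DATASETS_CACHE=/path/to/large/disk/hf_datasets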

Training

To start training DDPMs:

accelerate launch train.py
  • The initial run takes about 15 min to preprocess & cache the whole dataset.
  • The default configuration is config H (R2DM) in our paper.
  • Distributed training and mixed precision are enabled by default (see the launch sketch after this list).
  • Run with --help to list the available options.
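
As a sketch, these defaults can also be set or overridden through accelerate itself; the flags below are standard accelerate launcher options, not options defined by train.py:

accelerate config                                               # interactively set the default launch configuration
accelerate launch --multi_gpu --mixed_precision fp16 train.py   # or override per run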

To monitor the training progress:

tensorboard --logdir logs/

To generate samples with a training checkpoint (*.pth) at $CHECKPOINT_PATH:

python generate.py --ckpt $CHECKPOINT_PATH

Evaluation

To generate, save, and evaluate samples:

accelerate launch sample_and_save.py --ckpt $CHECKPOINT_PATH --output_dir $OUTPUT_DIR
python evaluate.py --ckpt $CHECKPOINT_PATH --sample_dir $OUTPUT_DIR

The generated samples are saved in $OUTPUT_DIR.

Completion demo

python completion_demo.py --ckpt $CHECKPOINT_PATH


Citation

If you find this code useful for your research, please cite our paper:

@article{nakashima2023lidar,
    title   = {LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models},
    author  = {Kazuto Nakashima and Ryo Kurazume},
    year    = 2023,
    journal = {arXiv:2309.09256}
}

Acknowledgements
