
Self-Supervised Scene Flow Estimation with 4-D Automotive Radar


This repository is the official implementation of RaFlow (IEEE RA-L & IROS'22), a robust method for scene flow estimation on 4-D radar point clouds with self-supervised learning. [Paper] [Video]

News

[2022-10] We ran our method on the publicly available View-of-Delft (VoD) dataset. A video demo can be found at Video Demo. Please see Running for how to experiment with the VoD dataset.

[2023-03] Our latest work "Hidden Gems: 4D Radar Scene Flow Learning Using Cross-Modal Supervision" has been accepted by CVPR 2023. Please see CMFlow for more details and to find out how to run RaFlow on the VoD dataset.

Abstract

Scene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is the key to long-term mobile autonomy. While estimating scene flow from LiDAR has progressed recently, it remains largely unknown how to estimate scene flow from a 4-D radar, an increasingly popular automotive sensor owing to its robustness against adverse weather and lighting conditions. Compared with LiDAR point clouds, radar data are drastically sparser, noisier and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work aims to address the above challenges and estimate scene flow from 4-D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are bespoke designed to cope with intractable radar data. Real-world experimental results validate that our method is able to robustly estimate the radar scene flow in the wild and effectively supports the downstream task of motion segmentation.

Citation

If you find our work useful for your research, please consider citing:

@article{ding2022raflow,
  author={Ding, Fangqiang and Pan, Zhijun and Deng, Yimin and Deng, Jianning and Lu, Chris Xiaoxuan},
  journal={IEEE Robotics and Automation Letters},
  title={Self-Supervised Scene Flow Estimation With 4-D Automotive Radar},
  year={2022},
  pages={1-8},
  doi={10.1109/LRA.2022.3187248}
}

Video Demo

A short video demo showing our qualitative results on the View-of-Delft dataset (click the figure below to watch the video):

Visualization

a. Scene Flow

More qualitative results can be found in Results Visualization.

b. Motion Segmentation

Installation

Note: the code in this repo has been tested on Ubuntu 16.04/18.04 with Python 3.7, CUDA 11.1, PyTorch 1.7. It may work for other setups, but has not been tested.

Please follow the steps below to set up your environment. Make sure that you have correctly installed the GPU driver and CUDA before setting up.

a. Clone the repository to local

git clone https://github.com/Toytiny/RaFlow

b. Set up a new environment with Anaconda

conda create -n YOUR_ENV_NAME python=3.7
source activate YOUR_ENV_NAME

c. Install common dependencies

conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
pip install -r requirements.txt
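
To quickly verify that PyTorch and the CUDA toolkit were installed correctly before moving on, you can run a short check (a minimal sketch; the exact versions printed will depend on your setup):

# check_env.py -- sanity check for the PyTorch + CUDA installation (illustrative helper, not part of the repo)
import torch
print("PyTorch version:", torch.__version__)          # expected: 1.7.0
print("CUDA available:", torch.cuda.is_available())   # should print True on a GPU machine
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))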

d. Install PointNet++ library for basic point cloud operation

cd lib
python setup.py install
cd ..

Running

a. In-house data

The main experiments are conducted on our in-house dataset. Our trained model can be found at ./checkpoints/raflow/models. We also provide a few training, validation and testing samples under ./demo_data/ for users to run.
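
If you would like to inspect the released weights outside of the evaluation script, they can be loaded directly with PyTorch. Below is a minimal sketch; the checkpoint filename is a placeholder, so check ./checkpoints/raflow/models/ for the actual file name.

# inspect_ckpt.py -- peek at the released RaFlow weights (the filename below is a placeholder)
import torch
state = torch.load("./checkpoints/raflow/models/model.best.t7", map_location="cpu")
# print the parameter names and shapes to confirm the checkpoint loads correctly
for name, value in state.items():
    print(name, getattr(value, "shape", type(value)))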

For evaluation on the in-house test data, please run

python main.py --eval --vis --dataset_path ./demo_data/ --exp_name raflow

The result visualizations in bird's eye view (BEV) will be saved under ./checkpoints/raflow/test_vis_2d/. The experiment configuration can be further modified in ./configs.yaml.
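
Instead of editing ./configs.yaml by hand before every run, you can also load and tweak it programmatically (a minimal sketch; the key shown in the comment is only illustrative, so adjust only keys that actually exist in the file):

# tune_config.py -- load, inspect and rewrite the experiment configuration
import yaml
with open("./configs.yaml", "r") as f:
    cfg = yaml.safe_load(f)
print(cfg)                      # inspect the available options first
# cfg["batch_size"] = 8         # hypothetical key -- only change keys present in the file
with open("./configs.yaml", "w") as f:
    yaml.safe_dump(cfg, f)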

For training a new model, please run

python main.py --dataset_path ./demo_data/ --exp_name raflow_new

Since only limited in-house data is provided in this repository, we recommend that users collect their own data or use recent public datasets for large-scale training and testing.
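
If you collect your own recordings, the essential input is pairs of consecutive radar point-cloud frames. The sketch below shows one way to build such pairs; the directory layout, field names and pickle format are assumptions for illustration only, so please use the samples under ./demo_data/ as the reference for the actual format.

# make_pairs.py -- pair consecutive radar frames into scene flow samples
# (directory layout and output format are assumptions for illustration)
import glob, os, pickle
import numpy as np

os.makedirs("./my_dataset/train", exist_ok=True)
frames = sorted(glob.glob("./my_radar_frames/*.npy"))        # one N x C point array per frame
for i, (f1, f2) in enumerate(zip(frames[:-1], frames[1:])):
    sample = {"pc1": np.load(f1), "pc2": np.load(f2)}        # two consecutive point clouds
    with open(f"./my_dataset/train/sample_{i:06d}.pkl", "wb") as fp:
        pickle.dump(sample, fp)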

b. VoD data

We also run our method on the public View-of-Delft (VoD) dataset. To start, please first request access from their official website and download the data and annotations. Before running experiments, please put your preprocessed scene flow samples under ./vod_data/ and split them into training, validation and testing sets.
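
One simple way to create this split is to shuffle the preprocessed sample files and move them into per-split subfolders, as sketched below (the file extension and folder layout are assumptions; adapt them to your preprocessing output). Splitting by recorded sequence rather than by individual frame may be preferable so that near-identical frames do not end up in different splits.

# split_vod.py -- split preprocessed VoD scene flow samples into train/val/test
# (file extension and folder layout are assumptions for illustration)
import glob, os, random, shutil

samples = sorted(glob.glob("./vod_data/*.pkl"))
random.seed(0)
random.shuffle(samples)
n = len(samples)
splits = {"train": samples[:int(0.8 * n)],
          "val":   samples[int(0.8 * n):int(0.9 * n)],
          "test":  samples[int(0.9 * n):]}
for split, files in splits.items():
    os.makedirs(os.path.join("./vod_data", split), exist_ok=True)
    for f in files:
        shutil.move(f, os.path.join("./vod_data", split))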

We provide our trained model for the VoD dataset under ./checkpoints/raflow_vod/models. To evaluate this model on VoD, please run:

python main.py --eval --vis --dataset_path ./vod_data/ --model raflow_vod --exp_name raflow_vod --dataset vodDataset

For training your own model, please run:

python main.py --dataset_path ./vod_data/ --model raflow_vod --exp_name raflow_vod_new --dataset vodDataset

We provide instructions on how to run RaFlow on the VoD dataset in the GETTING_STARTED guide of our CVPR'23 work. Please follow the steps there to run RaFlow or our new method, CMFlow.

Acknowledgments

This repository is based on the following codebases.


raflow's Issues

Question about baselines reported in the paper

Thank you very much for open-sourcing such excellent work; I have recommended it to many of my peers. I see the results of SLIM presented in your paper, but I ran into a problem with TensorFlow's automatic gradient computation when reproducing that baseline. Did you reimplement it in PyTorch or use their code directly, and did you encounter the same problem?

Question about how to annotate radar's labels

Thank you for open-sourcing such excellent work. I would like to ask how to label the ground truth when a point in the previous frame has no corresponding point in the next frame, due to the sparsity and lack of semantic information of millimeter-wave radar. Also, could you provide a visualization program for motion segmentation?

Flow encoder or decoder?

In your paper, Section III.B, you call the third layer a flow encoder, but in the figure and the code it is referred to as a flow decoder.
