
point-cloud-prediction's Introduction

Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks

This is a PyTorch Lightning implementation of the paper "Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks".

Given a sequence of P past point clouds (left in red) at time T, the goal is to predict the F future scans (right in blue).

Table of Contents

  1. Publication
  2. Data
  3. Installation
  4. Training
  5. Testing
  6. Visualization
  7. Download
  8. License

Overview of our architecture

Publication

If you use our code in your academic work, please cite the corresponding paper:

@inproceedings{mersch2021corl,
  author = {B. Mersch and X. Chen and J. Behley and C. Stachniss},
  title = {{Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks}},
  booktitle = {Proc.~of the Conf.~on Robot Learning (CoRL)},
  year = {2021},
}

Data

Download the KITTI odometry data from the official website.

Installation

Source Code

Clone this repository and run

cd point-cloud-prediction
git submodule update --init

to install the Chamfer distance submodule. It was originally taken from here, with some modifications to make it usable as a submodule. All parameters are stored in config/parameters.yaml.
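For intuition, one common formulation of the Chamfer distance takes, for each point in one cloud, the squared distance to its nearest neighbor in the other cloud, and averages in both directions. A minimal PyTorch sketch of this idea (illustrative only, not the submodule's optimized CUDA kernel):

import torch

def chamfer_distance(pc1: torch.Tensor, pc2: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    # Pairwise squared Euclidean distances, shape (N, M)
    dist = torch.cdist(pc1, pc2) ** 2
    # Nearest-neighbor distances in both directions, averaged
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()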

Dependencies

In this project, we use CUDA 10.2. All other dependencies are managed with Python Poetry and can be found in the poetry.lock file. If you want to use Python Poetry (recommended), install it with:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -

Install Python dependencies with Python Poetry

poetry install

and activate the virtual environment in the shell with

poetry shell

Export Environment Variables for the Dataset

We process the data in advance to speed up training. The preprocessing is automatically done if GENERATE_FILES is set to true in config/parameters.yaml. The environment variable PCF_DATA_RAW points to the directory containing the train/val/test sequences specified in the config file. It can be set with

export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences

and the destination of the processed files, PCF_DATA_PROCESSED, is set with

export PCF_DATA_PROCESSED=/desired/path/to/processed/data/
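Inside the code, these paths are then read from the environment. A minimal sketch of how such a lookup might look (the error message is an assumption, not the repository's exact code):

import os

for var in ("PCF_DATA_RAW", "PCF_DATA_PROCESSED"):
    if var not in os.environ:
        raise RuntimeError(f"Please export {var} before running the pipeline.")

data_raw = os.environ["PCF_DATA_RAW"]              # raw KITTI sequences
data_processed = os.environ["PCF_DATA_PROCESSED"]  # destination for processed files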

Training

Note: If you have not pre-processed the data yet, you need to set GENERATE_FILES: True in config/parameters.yaml. Afterwards, you can set GENERATE_FILES: False to skip this step.

The training script can be run by

python pcf/train.py

using the parameters defined in config/parameters.yaml. Pass the flag --help to see more options, such as resuming from a checkpoint or initializing the weights from a pre-trained model. A directory will be created in pcf/runs, which makes it easier to distinguish between different runs and avoids overwriting existing logs. The script saves everything (the used config, logs, and checkpoints) to a path pcf/runs/COMMIT/EXPERIMENT_DATE_TIME consisting of the current git commit ID (this allows you to check out the last git commit used for training), the specified experiment ID (pcf by default), and the date and time.

Example: pcf/runs/7f1f6d4/pcf_20211106_140014

7f1f6d4: Git commit ID

pcf_20211106_140014: Experiment ID, date and time
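As a rough sketch of how such a run directory name can be assembled (not the repository's exact code; the git invocation and time format are assumptions):

import subprocess
from datetime import datetime
from pathlib import Path

# Short hash of the current commit, e.g. "7f1f6d4"
commit = subprocess.check_output(["git", "rev-parse", "--short", "HEAD"]).decode().strip()
experiment_id = "pcf"  # default experiment ID
run_dir = Path("pcf/runs") / commit / f"{experiment_id}_{datetime.now():%Y%m%d_%H%M%S}"
run_dir.mkdir(parents=True, exist_ok=True)  # e.g. pcf/runs/7f1f6d4/pcf_20211106_140014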

Testing

Test your model by running

python pcf/test.py -m COMMIT/EXPERIMENT_DATE_TIME

where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: Use the flag -s if you want to save the predicted point clouds for visualization and -l if you want to test the model on a smaller amount of data.

Example

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014

or

python pcf/test.py -m 7f1f6d4/pcf_20211106_140014 -l 5 -s

if you want to test the model on 5 batches and save the resulting point clouds.

Visualization

After passing the -s flag to the testing script, the predicted range images will be saved as .svg files in pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/range_view_predictions. The predicted point clouds are saved to pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds. You can visualize them by running

python pcf/visualize.py -p pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds

Five past and five future ground truth range images, and our five predicted future range images.

Last received point cloud at time T and the predicted next 5 future point clouds. Ground truth points are shown in red and predicted points in blue.
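To inspect a single saved cloud yourself, Open3D can load and render the .ply files directly. A minimal sketch (the concrete sequence and frame names are hypothetical examples following the layout above):

import open3d as o3d

# Hypothetical path to one ground truth frame inside a run directory
pcd = o3d.io.read_point_cloud("pcf/runs/COMMIT/EXPERIMENT_DATE_TIME/test/point_clouds/0/gt/000005.ply")
o3d.visualization.draw_geometries([pcd])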

Download

You can download our best-performing model from the paper here. Just extract the zip file into pcf/runs.

License

This project is free software made available under the MIT License. For details see the LICENSE file.


point-cloud-prediction's Issues

Got an error when using my own dataset

Hi Benedikt,
Thank you so much for this work. I have achieved training and prediction with the help of your documentation.
But I encountered difficulties when I wanted to use it on non-LiDAR point cloud data, which comes from my depth camera (Bumblebee XB3). [Screenshots: the data opened in CloudCompare, and the raw txt format]
The data contained in it is x, y, z, intensity. It may not be obvious, but it shows a landslide from an indoor experiment, and I hope to predict the landslide with this work.
My problem is that an error was reported when I converted the txt file into a .bin file. [Screenshot of the error]
I use numpy to convert the data:

import numpy
point_cloud = numpy.loadtxt("try.txt")
point_cloud.tofile("try.bin")

When I use CloudCompare to convert the txt file into .pcd, add the intensity header, and then convert the .pcd into .bin through another script, the preprocessing step above runs through, but the Dataloader does not read the data. [Screenshots of the Dataloader output]
You can also see that the calculated averages are all 0.
Could you please give me some suggestions or a solution?
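Regarding the txt-to-.bin conversion above: KITTI-style .bin scans are flat float32 arrays with four values (x, y, z, intensity) per point, while numpy.loadtxt returns float64 by default, so a dtype mismatch is one plausible cause of such errors. A minimal conversion-plus-read-back sketch under that assumption (file names taken from the issue):

import numpy as np

# KITTI-style scans: flat float32, 4 values per point
points = np.loadtxt("try.txt").astype(np.float32)
points.tofile("try.bin")

# Read-back check: should reproduce an (N, 4) array
restored = np.fromfile("try.bin", dtype=np.float32).reshape(-1, 4)
assert restored.shape == points.reshape(-1, 4).shape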

The environment requirements of this code

Hi Benedikt,
I have read your paper; your work is great, and thanks for releasing your code!
When I use your code, there seem to be some conflicts between it and my environment, especially around the Chamfer distance part.

Would you mind sharing the exact requirements of your code?
I'm using an RTX 3090 GPU, CUDA 11.3, and PyTorch 1.10.

Thanks

Learning rate decay schedule

Hello authors,

Thanks for your nice work!
Could you please share the details of the learning rate decay schedule used for training? I cannot reproduce the results in your paper.

Regarding N_FUTURE_STEPS in config

Hi,

I have a question regarding N_FUTURE_STEPS in the config file:
N_PAST_STEPS: 5 # Number of input range images
N_FUTURE_STEPS: 5 # Number of predicted future range images

I want to use this network as a simple generative model that reconstructs the input LiDAR scan.

As far as I can tell, setting both of the above config variables to 1 would turn the network into a simple generative model whose only aim is to reconstruct the input LiDAR frame. Is that correct?

Eagerly awaiting the response of the authors.

Parameters passed do not match required

Thank you for this great work.
I ran pcf/train.py, but PyCharm threw errors about passed parameters not matching the required ones.
Specifically, it seems that the Trainer constructor in trainer.py does not accept the parameter names passed by the trainer object in train.py.
The error messages are:
"TypeError: __init__() got an unexpected keyword argument 'gpus'"
"TypeError: __init__() got an unexpected keyword argument 'resume_from_checkpoint'"
I think "num_nodes" should be changed to "gpus". If I am correct, which parameter does resume_from_checkpoint correspond to?

Looking forward to your response, thanks!
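For readers hitting the same errors: in PyTorch Lightning 2.x, the gpus and resume_from_checkpoint arguments were removed from the Trainer. A hedged sketch of the newer equivalents (assuming a Lightning 2.x install; model and datamodule are placeholders):

import pytorch_lightning as pl

trainer = pl.Trainer(accelerator="gpu", devices=1)  # replaces Trainer(gpus=1)
# Resuming moved from the Trainer constructor to the fit call:
# trainer.fit(model, datamodule=datamodule, ckpt_path="path/to/checkpoint.ckpt")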

MEANs and STDs in parameters.yaml

Hi Benedikt,

Thank you for providing this elegant and powerful repo; I really appreciate your brilliant work!

I noticed that there are 5 means and 5 stds declared in parameters.yaml.

Are they the means and stds of the 5 channels (range, x, y, z, intensity) over the whole training set?

Thank you in advance!
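For context, per-channel normalization statistics like these are typically computed once over the stacked training range images. A minimal sketch (the (num_scans, 5, H, W) array layout is an assumption, not the repository's exact pipeline):

import numpy as np

def channel_stats(range_images: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-channel mean and std over a stack of range images of shape (num_scans, 5, H, W)."""
    return range_images.mean(axis=(0, 2, 3)), range_images.std(axis=(0, 2, 3))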

Error when running test.py

Hi author,
When I run test.py, I get:
TypeError: on_test_epoch_end() missing 1 required positional argument: 'outputs'
How can I deal with it?
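This signature changed in newer PyTorch Lightning releases: the epoch-end hooks no longer receive an outputs argument, so results have to be collected manually. A hedged sketch of the usual migration pattern (class and attribute names, and compute_loss, are hypothetical):

import pytorch_lightning as pl

class Model(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.test_outputs = []  # collect per-batch results ourselves

    def test_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # hypothetical loss computation
        self.test_outputs.append(loss.detach())
        return loss

    def on_test_epoch_end(self):  # no 'outputs' argument anymore
        # aggregate what test_step collected, then reset for the next run
        self.test_outputs.clear()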

Problem with visualization

Hi Benedikt,

I've done the training and testing about the network, and I appreciate your help.

However, I met another problem with visualization after that.

I run the command as

python pcf/test.py -m cc56eea/pcf_20211209_172403 -l 5 -s

and

python pcf/visualize.py -p /pcf/runs/cc56eea/pcf_20211209_172403/test/point_clouds

Then I got nothing but an empty Open3D window. Also, there are warnings about some missing .ply files, like this: [Screenshot of the warnings]

I noticed that in visualize.py, the saved .ply data should be structured as

├── sequence
│   ├── gt
│   │   └── frame.ply
│   └── pred
│       └── frame
│           └── (frame+1).ply
But instead, for every frame_id I got 5 future prediction .ply files, such as: [Screenshot of the directory contents]

So I can't visualize the results.

The train/val/test_iter in datasets.py

Hi Benemer,

I have a question about the effect of train/val/test_iter in datasets.py (lines 56, 67, and 78): what are they used for?

I'm running a network based on your code on multiple GPUs, but I met some errors related to multiprocessing. If I comment out train/val/test_iter, the code runs without errors, but I'm not sure whether commenting them out is correct.

Thanks!
