
Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer (TPAMI 2023)

Peng Xiang*, Xin Wen*, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han

[Intro figure]

[NEWS]

  • 2023-02 [NEW:tada:] The Jittor implementations of SPD are released in the SPD_jittor repo.

  • 2023-01 [NEW:tada:] This repository now contains the code of the ICCV paper, as well as the additional content of the extended (TPAMI) version, including:

    • Point cloud completion on the ShapeNet-34/21 dataset for unseen class completion.
    • Point cloud completion on the PCN dataset evaluated under EMD metric.
    • Point cloud auto-encoding and novel shape generation, see the generation folder.
    • Single view reconstruction, see the svr folder.
    • Point cloud upsampling, see the PU folder.
  • 2022-10 [NEW:tada:] SPD, the journal extension of SnowflakeNet, has been accepted to TPAMI. We extend snowflake point deconvolution to generative tasks beyond point cloud completion, including point cloud auto-encoding, generation, single view reconstruction (SVR), and point cloud upsampling (PU).

  • 2021-10 SnowflakeNet is published at ICCV 2021, and the code is released!

[SPD]

1. Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer (TPAMI 2023)

2. SnowflakeNet: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer (ICCV 2021, Oral)

Most existing point cloud completion methods suffer from the discrete nature of point clouds and the unstructured prediction of points in local regions, which makes it difficult to reveal fine local geometric details. To resolve this issue, we propose SnowflakeNet with snowflake point deconvolution (SPD) to generate complete point clouds. SPD models the generation of point clouds as the snowflake-like growth of points, where child points are generated progressively by splitting their parent points after each SPD. Our insight for revealing detailed geometry is to introduce a skip-transformer in SPD to learn the point splitting patterns that best fit the local regions. The skip-transformer leverages the attention mechanism to summarize the splitting patterns used in the previous SPD layer and produce the splitting in the current layer. The locally compact and structured point clouds generated by SPD precisely reveal the structural characteristics of the 3D shape in local patches, which enables us to predict highly detailed geometries. Moreover, since SPD is a general operation that is not limited to completion, we also explore its applications in other generative tasks, including point cloud auto-encoding, generation, single image reconstruction, and upsampling. Experimental results show that our method outperforms state-of-the-art methods on widely used benchmarks.
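
For intuition, the following is a minimal, self-contained sketch of the point-splitting idea described above. It is a toy illustration, not the repository's SPD layer: each parent point is duplicated up_factor times and displaced by a small learned offset, so the point cloud grows at every step. The layer sizes, the 0.1 offset scale, and the plain shared MLP are assumptions; the actual SPD additionally conditions the offsets on a skip-transformer over the previous step's splitting features.

# Toy snowflake-style point splitting (illustration only, NOT the official SPD layer).
import torch
import torch.nn as nn

class ToyPointSplit(nn.Module):
    def __init__(self, feat_dim=128, up_factor=2):
        super().__init__()
        self.up_factor = up_factor
        # Shared MLP that maps a duplicated parent feature to a small xyz offset.
        self.offset_mlp = nn.Sequential(
            nn.Conv1d(feat_dim, 64, 1),
            nn.ReLU(),
            nn.Conv1d(64, 3, 1),
            nn.Tanh(),
        )

    def forward(self, xyz, feat):
        # xyz:  (B, 3, N) parent coordinates; feat: (B, C, N) parent features.
        xyz_dup = xyz.repeat_interleave(self.up_factor, dim=2)    # (B, 3, N * up_factor)
        feat_dup = feat.repeat_interleave(self.up_factor, dim=2)  # (B, C, N * up_factor)
        offsets = 0.1 * self.offset_mlp(feat_dup)                 # bounded child offsets
        return xyz_dup + offsets                                  # children = split parents

# Example: 512 parent points grow into 1024 child points.
split = ToyPointSplit(feat_dim=128, up_factor=2)
children = split(torch.randn(1, 3, 512), torch.randn(1, 128, 512))
print(children.shape)  # torch.Size([1, 3, 1024])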

[Cite this work]

@ARTICLE{xiang2023SPD,
  author={Xiang, Peng and Wen, Xin and Liu, Yu-Shen and Cao, Yan-Pei and Wan, Pengfei and Zheng, Wen and Han, Zhizhong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Snowflake Point Deconvolution for Point Cloud Completion and Generation With Skip-Transformer}, 
  year={2023},
  volume={45},
  number={5},
  pages={6320-6338},
  doi={10.1109/TPAMI.2022.3217161}}

@inproceedings{xiang2021snowflakenet,
  title={{SnowflakeNet}: Point Cloud Completion by Snowflake Point Deconvolution with Skip-Transformer},
  author={Xiang, Peng and Wen, Xin and Liu, Yu-Shen and Cao, Yan-Pei and Wan, Pengfei and Zheng, Wen and Han, Zhizhong},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year={2021}
}

[Getting Started]

Build Environment

# python environment
$ cd SnowflakeNet
$ conda create -n spd python=3.7
$ conda activate spd
$ pip3 install -r requirements.txt

# pytorch
$ pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
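
Optionally, you can verify that the CUDA 11.0 build of PyTorch is active before compiling the extensions; a minimal Python check (run inside the spd environment):

# Sanity check for the PyTorch install.
import torch
print(torch.__version__)          # expected: 1.7.1+cu110
print(torch.version.cuda)         # expected: 11.0
print(torch.cuda.is_available())  # True only with a compatible GPU and driver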

Build PyTorch Extensions

cd models/pointnet2_ops_lib
python setup.py install

cd ../..

cd loss_functions/Chamfer3D
python setup.py install

cd ../emd
python setup.py install

Pre-trained models

We provide pretrained models for the different tasks:

Backup Links:
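
Once a checkpoint is downloaded, it can be inspected and restored with plain PyTorch. The snippet below is a hypothetical sketch: the file name and the 'model' key are assumptions, so please refer to the training/testing scripts of the corresponding task for the actual model class and checkpoint layout.

# Hypothetical sketch of loading a downloaded checkpoint (not the repository's exact API).
import torch

ckpt = torch.load('ckpt-best.pth', map_location='cpu')  # file name is an assumption
print(list(ckpt.keys()))                                # inspect what the checkpoint stores
# model = <task-specific model from this repo>          # see the corresponding folder
# model.load_state_dict(ckpt['model'])                  # key name is an assumption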

Visualization of point splitting paths

We provide visualization code for point splitting paths in the visualization folder.
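
For a quick standalone impression of what such splitting paths look like, the snippet below (an illustrative sketch, unrelated to the repository's visualization code) draws parent-to-child segments for a toy splitting of random points; the ordering convention that children[i * up_factor : (i + 1) * up_factor] belong to parents[i] is an assumption of this sketch.

# Illustrative only: draw parent->child "splitting paths" as 3D line segments.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

def plot_split_paths(parents, children, up_factor):
    # parents: (N, 3); children: (N * up_factor, 3), grouped per parent.
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(*children.T, s=2, c='tab:blue', label='children')
    ax.scatter(*parents.T, s=8, c='tab:red', label='parents')
    for i, p in enumerate(parents):
        for c in children[i * up_factor:(i + 1) * up_factor]:
            ax.plot([p[0], c[0]], [p[1], c[1]], [p[2], c[2]], c='gray', linewidth=0.5)
    ax.legend()
    plt.show()

# Example with random data: each parent spawns two nearby children.
parents = np.random.rand(64, 3)
children = np.repeat(parents, 2, axis=0) + 0.02 * np.random.randn(128, 3)
plot_split_paths(parents, children, up_factor=2)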

Acknowledgements

Some of the code in this repo is borrowed from:

We thank the authors for their great work!

License

This project is open-sourced under the MIT license.

Contributors

  • allenxiangx
