Yang You, Wenhai Liu, Yanjie Ze, Yong-Lu Li, Weiming Wang, Cewu Lu
CVPR 2022
UKPGAN is a self-supervised 3D keypoint detector that works on both rigid and non-rigid objects, as well as real scenes. Note that our keypoint detector depends solely on local features and is invariant to both translation and rotation.
- Overview
- Installation
- Train on ShapeNet Models
- Test on ShapeNet Models
- Train on SMPL Models
- Test on SMPL Models
- Test on Real-world Scenes
- Pretrained Models
- Related Projects
- Citation
## Overview

This repo is a TensorFlow implementation of our work UKPGAN.
## Installation

### Create Conda Environments

```shell
conda env create -f environment.yml
```
### Compile Smoothed Density Value (SDV) Source Files

First install the Pybind11 and PCL C++ dependencies. Then run the following commands to build the SDV feature extractor:

```shell
cd sdv_src
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
cd ../..
```
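The compiled extractor computes smoothed density value descriptors around each point. As a rough intuition only (this is a toy numpy sketch, not the compiled C++ extractor, and the grid size, support radius, and Gaussian width here are made-up illustration values), an SDV-style descriptor bins a point's neighbors into a local voxel grid where each neighbor contributes a Gaussian weight instead of a hard count:

```python
import numpy as np

def sdv_feature(points, center, grid_size=4, radius=0.15, sigma=0.05):
    """Toy smoothed-density-value descriptor (illustration only).

    Neighbors of `center` contribute Gaussian weights to a grid_size^3
    voxel grid instead of hard counts -- the "smoothed density" idea.
    """
    # voxel centers of a cubic grid spanning [-radius, radius]^3
    lin = np.linspace(-radius, radius, grid_size)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

    # neighbors of the center point within the support radius
    local = points - center
    local = local[np.linalg.norm(local, axis=1) < radius]

    # Gaussian-smoothed contribution of every neighbor to every voxel
    d2 = ((voxels[:, None, :] - local[None, :, :]) ** 2).sum(-1)
    feat = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

    # normalize so the descriptor is robust to point density
    return feat / (feat.sum() + 1e-8)
```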
If you want to visualize keypoint results, you will need to install `open3d`.
## Train on ShapeNet Models

### Prepare Data

Download the ShapeNet point clouds from KeypointNet and unzip the `pcds` folder into the repo root.
### Category Configuration

Change the category name `cat_name` in `config/config.yaml` to the category you want to train on.
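The exact keys depend on the shipped `config/config.yaml`; as a hypothetical fragment (other keys omitted), the relevant entry looks like:

```yaml
# config/config.yaml -- illustrative fragment only
cat_name: chair   # ShapeNet category to train on, e.g. chair, airplane
```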
### Start Training

Open a separate terminal to monitor the training process:

```shell
visdom -port 1080
```
Then run (e.g., for the chair category):

```shell
python train.py cat_name=chair
```
## Test on ShapeNet Models

### Evaluate IoU

Once trained, to evaluate the IoU against human annotations, first download the KeypointNet data (you may download only the categories you wish to evaluate), then run:

```shell
python eval_iou.py --kpnet_root /your/kpnet/root
```
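For intuition, a keypoint IoU of this kind can be defined by matching predictions to annotations within a distance threshold. The sketch below is a minimal illustration of that definition, not the actual `eval_iou.py` logic (which may use different distances or thresholds):

```python
import numpy as np

def keypoint_iou(pred, gt, dist_thresh=0.1):
    """Toy IoU between predicted and annotated 3D keypoints.

    A prediction is a true positive if it lies within `dist_thresh` of
    some annotated keypoint; IoU = TP / (TP + FP + FN).
    """
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    # pairwise Euclidean distances between predictions and annotations
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    pred_hit = d.min(axis=1) < dist_thresh  # each prediction near some gt?
    gt_hit = d.min(axis=0) < dist_thresh    # each gt covered by some pred?
    tp = gt_hit.sum()
    fp = (~pred_hit).sum()
    fn = (~gt_hit).sum()
    return tp / max(tp + fp + fn, 1)
```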
### Visualization

To test and visualize on ShapeNet models, run:

```shell
python visualize.py --type shapenet --nms --nms_radius 0.1
```
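The `--nms`/`--nms_radius` flags apply non-maximum suppression so detected keypoints do not cluster together. A minimal sketch of how greedy NMS over per-point saliency scores works (the actual `visualize.py` implementation may differ):

```python
import numpy as np

def nms_keypoints(points, scores, radius=0.1, max_kp=None):
    """Greedy non-maximum suppression over per-point saliency scores.

    Repeatedly keeps the highest-scoring remaining point and suppresses
    every point within `radius` of it.
    """
    points = np.asarray(points, float)
    order = np.argsort(-np.asarray(scores))  # best score first
    keep = []
    suppressed = np.zeros(len(points), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        keep.append(int(i))
        if max_kp is not None and len(keep) >= max_kp:
            break
        # suppress every point inside the NMS ball around the kept point
        dist = np.linalg.norm(points - points[i], axis=1)
        suppressed |= dist < radius
    return keep
```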
## Train on SMPL Models

### Prepare Data

Register on the SMPL website and download the model. We follow this repo to pre-process the model and generate `model.pkl`. Place `model.pkl` at `data/model.pkl`.
### Start Training

The following command starts training on SMPL models generated on the fly:

```shell
python train.py cat_name=smpl symmetry_factor=0
```
## Test on SMPL Models

### Visualization

To test and visualize on SMPL models, run:

```shell
python visualize.py --type smpl --nms --nms_radius 0.2 --kp_num 10
```
## Test on Real-world Scenes

For this task, we use a model trained on a large collection of ShapeNet models (across 10+ categories), called `universal`.

### Prepare Data

You will need to download data from 3DMatch. We also provide a demo scene for visualization.
### Visualization

To test and visualize on 3DMatch, run:

```shell
python visualize.py --type 3dmatch --nms --nms_radius 0.05
```
## Pretrained Models

We provide pretrained models on Google Drive.
## Related Projects

- KeypointNet: A Large-scale 3D Keypoint Dataset Aggregated from Numerous Human Annotations
- TopNet: Structural Point Cloud Decoder
- The Perfect Match: 3D Point Cloud Matching with Smoothed Densities
## Citation

If you find our algorithm useful in your research, please consider citing:

```bibtex
@inproceedings{you2022ukpgan,
  title={UKPGAN: A General Self-Supervised Keypoint Detector},
  author={You, Yang and Liu, Wenhai and Ze, Yanjie and Li, Yong-Lu and Wang, Weiming and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
```