hmgoforth / pointnetlk
License: MIT License
I add Gaussian noise of a certain standard deviation to src, and I use generation_rotations.py to generate perturbations with deg=90.0, but the result is bad. Did you set the degree to 0.0 and change the SD of the noise to get the results of Figure 4?
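For anyone reproducing this setup, the noise step itself is simple; a minimal numpy sketch, where the point count and sigma are placeholder values rather than the paper's settings:

```python
import numpy as np

# Sketch of the Gaussian-noise perturbation described above.
# Point count and sigma are placeholder values, not the paper's settings.
rng = np.random.default_rng(0)
src = rng.random((1024, 3))                        # stand-in source cloud
sigma = 0.02                                       # noise standard deviation
noisy_src = src + rng.normal(0.0, sigma, size=src.shape)
```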
Hi authors, thanks for your code. I trained the model following your instructions, but the error of PointNetLK is very large. I have tried many times and the errors are still large. Do you know what the problem is? The errors are shown below:
test, 0/1202, 2.841065
test, 1/1202, 3.167926
test, 2/1202, 3.015713
test, 3/1202, 3.097108
test, 4/1202, 3.093638
test, 5/1202, 2.960435
test, 6/1202, 0.006093
test, 7/1202, 1.114216
test, 8/1202, 3.233206
test, 9/1202, 3.199666
test, 10/1202, 3.116020
test, 11/1202, 3.075513
test, 12/1202, 3.103952
test, 13/1202, 2.838798
test, 14/1202, 3.170758
test, 15/1202, 3.019938
test, 16/1202, 2.854872
test, 17/1202, 2.970612
test, 18/1202, 3.127913
test, 19/1202, 1.735299
The errors of ICP are shown below:
test, 9/1202, 0.011994
test, 10/1202, 0.000000
test, 11/1202, 0.633587
test, 12/1202, 0.469648
test, 12/1202, 0.139527
test, 13/1202, 0.017881
test, 13/1202, 0.264104
test, 14/1202, 0.349899
test, 14/1202, 0.226185
test, 15/1202, 0.026434
test, 15/1202, 0.039178
test, 16/1202, 0.010611
test, 17/1202, 0.000000
test, 18/1202, 0.017347
test, 16/1202, 0.010419
test, 17/1202, 0.313736
test, 19/1202, 0.060660
Hi,
I am reading your paper and trying to understand your code. I have a question regarding the backpropagation stage.
From Figure 2 in your paper, it seems that this is a purely iterative approach: based on a fixed Jacobian over PointNet features, you iterate over small perturbations in SE(3) until the features from the template and source point clouds are similar enough.
Once that iteration is done for one batch, you follow a standard optimization routine (zero_grad, backward, and optimize). My question is: what exactly is being optimized here? The PointNet weights? Or is there some other operation with learnable weights that I am missing?
Thanks
Raul
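For other readers of this thread, here is a toy numpy sketch of the fixed-Jacobian iterative structure the question describes, with a stand-in scalar "feature" function instead of PointNet. In the full method, that feature extractor is where the learnable weights would live, which is presumably what the backward/optimize step updates:

```python
import numpy as np

# Toy 1-D illustration of the fixed-Jacobian LK structure asked about:
# the template features are differentiated ONCE w.r.t. the warp parameter,
# and that same Jacobian is reused on every iteration.
def features(points):
    # stand-in "PointNet": a fixed nonlinear global descriptor
    return np.array([points.mean(), (points ** 2).mean(), points.max()])

template = np.array([0.0, 1.0, 2.0, 4.0])
source = template + 0.7            # unknown shift we want to recover

# numerical Jacobian of template features w.r.t. the shift (computed once)
eps = 1e-4
J = (features(template + eps) - features(template - eps)) / (2 * eps)
J_pinv = np.linalg.pinv(J.reshape(-1, 1))

shift = 0.0
for _ in range(20):
    residual = features(source - shift) - features(template)
    shift += (J_pinv @ residual).item()

print(round(shift, 4))             # -> 0.7
```

The inner loop has no parameters of its own; only the feature function does, which is why the outer optimizer step touches the PointNet weights.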
Hello,
Great work.
Thank you for sharing the code.
I am able to train and test with my own data, but the pipeline perturbs a single point cloud and registers it against itself to compute the error.
How can I use a different source and template for point cloud registration?
Hello,
Can I ask why the G matrix is a [4, 4] matrix instead of [3, 3]? I thought it was going to transform the coordinates from source to template?
Thanks.
Thank you.
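The extra row and column come from homogeneous coordinates: a 4x4 matrix encodes rotation and translation in one linear map. A minimal illustration in plain numpy (not the repo's own code):

```python
import numpy as np

# A rigid transform as a 4x4 homogeneous matrix:
# top-left 3x3 block = rotation, last column = translation.
G = np.eye(4)
G[:3, :3] = np.array([[0, -1, 0],      # 90-degree rotation about z
                      [1,  0, 0],
                      [0,  0, 1]])
G[:3, 3] = [10, 0, 0]                  # translation

p = np.array([1, 2, 3, 1])             # point in homogeneous coordinates
print(G @ p)                           # rotated then translated: [8, 1, 3, 1]
```

A 3x3 matrix alone could only rotate; appending the homogeneous 1 lets the same product also apply the translation.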
We have been searching for a sophisticated registration method for point clouds, and we came across your solution. We need some help implementing it for our case. Here is how we do registration and how we applied your solution.
We register point clouds taken with synchronized Azure Kinect cameras using a custom solution in which a pattern seen by camera pairs is used. Since this solution is time-consuming and requires a second person to adjust the pattern position during registration, we needed a better and more sophisticated solution. After looking into your study and GitHub code, I got the feeling that your solution could be applicable to our case. For training, we collected approximately 100 pairs of point clouds taken by two cameras in 100 different positions. For each pair of point clouds, we also calculated the transformation matrix between them using our custom solution. Every point cloud contains only an isolated person, so for each data point we have one transformation matrix and a pair of point clouds. I use only these data with train_pointlk.py after making some necessary changes, but the result is not positive; the loss does not converge to 0.
What I would like to learn from you is whether the approach I am following is correct. Or do you have any other suggestions?
Cheers,
Hamit
Hello,
It is awesome work. Thanks for sharing the code.
As reported in the paper, PointNetLK has been validated on partially visible data, but I didn't find this experiment in the code. Did you release this experiment?
Thanks!
Weiwei.
Hi, I read your paper, but I can't understand the benefit of using the LK algorithm. How about directly using PointNet with the source and target point clouds as input to predict the transformation, applying that transformation to the source point cloud, and then minimizing the Chamfer distance loss between the transformed source and the target?
Thanks
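For reference, the Chamfer distance the question proposes is easy to write down; a plain numpy sketch (not the repo's code, and a brute-force O(NM) version rather than an efficient one):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical clouds give 0, and the value grows as the nearest-neighbour gaps between the two sets grow.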
Hi Yasuhiro,
Thanks for your work. I want to ask a question: where is ex1_pointlk_0915_model_best.pth? Is it open source?
Hi, first of all thanks for the good work and for releasing the source code. I would like to better understand the protocol for training/testing on partially visible data.
Thank you.
There is no software license applied to any of the code I've looked at. Could you please add your desired license/copyright either to the README or as comments at the top of each file? We use open-source code to develop applications for industrial robots using ROS, and if there is no license, we can't use it. We are looking for a better alternative to ICP for an application where we locate aircraft components for a Fanuc on a gantry sanding operation. Point clouds are obtained from a 3D camera mounted on the EOAT; pre-planned sanding paths are then transformed, path-planned, and executed. Note, we prefer the Apache license; GPL has major issues. However, it's completely up to you. Thanks
Hi!
Thanks for sharing your code with us! :-)
I'm trying to train the model using ex1_train.sh with ModelNet40.
After a long time of training, the script quit with the following error:
.../ptlk/pointlk.py", line 58, in do_forward
a0[:, 0:3, 3] = p0_m
RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.
Can you help me find out what's going wrong here?
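This error typically means the written-to tensor was produced by an expand()-style broadcast, so several batch elements alias the same memory location; the usual fix is exactly what the message suggests, i.e. `a0 = a0.clone()` before the in-place write. A numpy analogue of the same aliasing problem (numpy refuses the write outright instead of raising at runtime):

```python
import numpy as np

# numpy analogue of the torch error above: a broadcast view shares memory
# across all batch elements, so an in-place write is refused.
# The torch fix is the same idea: a0 = a0.clone() before writing.
base = np.zeros((4, 4))
a0 = np.broadcast_to(base, (8, 4, 4))   # all 8 "copies" share base's memory
try:
    a0[:, 0:3, 3] = 1.0                 # refused: aliased destination
except ValueError as err:
    print("write refused:", err)

a0 = a0.copy()                          # give each batch element its own memory
a0[:, 0:3, 3] = 1.0                     # now succeeds
```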
Hello, I am using your code to train PointLK, but my video card only has 8 GB of memory, so I keep running into memory-allocation problems. Do you have a solution? Or could you provide me with a pretrained PointNetLK model?
I have two point clouds and I want to merge them correctly.
I can train and I can test, but I cannot use the model on my own point cloud data.
Hello all,
I am new to point cloud registration and want to ask a silly question. Why is the number of twist parameters 6? Is it because a rigid transformation has 6 degrees of freedom (X, Y, Z, roll, yaw, pitch)?
If I train a model on tables for registration, can I still use this model to predict a chair's translation and orientation and get a good result with PointNetLK?
hi, @hmgoforth
I get a .csv file from test_pointlk.py, but I do not know how to use it. Could you tell me how to use this file, and how I can get visual results like those presented in your paper?
Hoping for your reply. Thanks.
The log of a rotation matrix obtained from so3.log is incorrect for the following matrix:
import torch
import ptlk
mat = torch.Tensor([
[-1.00000396e+00, -9.55433245e-07, 1.04267154e-06],
[ 1.04267254e-06, -9.99052394e-01, 4.36201482e-02],
[ 9.55432245e-07, 4.36191482e-02, 9.99051394e-01]])
print(ptlk.so3.log(mat))
# prints:
# tensor([0., 0., 0.])
print(ptlk.so3.exp(ptlk.so3.log(mat)))
# prints:
# tensor([[1., 0., 0.],
# [0., 1., 0.],
# [0., 0., 1.]])
print(torch.allclose(mat, ptlk.so3.exp(ptlk.so3.log(mat)), atol=1e-2))
# prints:
# False
I tried to change eps in the so3.log function, but it does not seem to help.
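The matrix above is a rotation by almost exactly 180 degrees, where the usual log formula degenerates because R - R^T vanishes. A numpy sketch of one standard workaround (recovering the axis from the diagonal near pi), offered as a reference implementation, not a patch to ptlk itself:

```python
import numpy as np

def so3_exp(v):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.eye(3)
    k = v / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def so3_log_robust(R, eps=1e-6):
    """Rotation log that stays stable when the angle is near pi.

    The usual formula v = theta / (2 sin theta) * vee(R - R^T) degenerates
    as theta -> pi because R - R^T -> 0; near pi the axis is recovered from
    the diagonal instead (for a pi-rotation, R = 2 n n^T - I).
    """
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < eps:                           # near identity
        return np.zeros(3)
    if np.pi - theta > eps:                   # generic angle
        W = (R - R.T) * theta / (2.0 * np.sin(theta))
        return np.array([W[2, 1], W[0, 2], W[1, 0]])
    n = np.sqrt(np.maximum(np.diagonal(R) + 1.0, 0.0) / 2.0)
    k = int(np.argmax(n))                     # fix signs off the largest entry
    for i in range(3):
        if i != k and R[k, i] + R[i, k] < 0:
            n[i] = -n[i]
    return theta * n

mat = np.array([[-1.00000396e+00, -9.55433245e-07, 1.04267154e-06],
                [1.04267254e-06, -9.99052394e-01, 4.36201482e-02],
                [9.55432245e-07, 4.36191482e-02, 9.99051394e-01]])
v = so3_log_robust(mat)
print(np.allclose(mat, so3_exp(v), atol=1e-2))   # True
```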
Hello,
I wanted to ask how the rotation error and translation error (shown in Figures 3 and 4 of the paper) are calculated.
In the code I have seen a calculation of the mean error in the twist parameters (the norm of the whole twist vector, dm), and I have also seen in result_stat.py a separate calculation of the error in the rotational part of the twist vector and in the translational part (ep and ev for translation, and ew for the rotational twist error). But it is not clear to me which specific metric is used in the paper, especially how to obtain the rotation error in degrees.
Also, why do you transform the twist parameters to rotation matrices and then back again to twist parameters?
Thank you in advance.
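For comparison, one common way to report a rotation error in degrees is the geodesic angle of the residual rotation; a numpy sketch of that metric (not necessarily the exact metric used in the paper's figures):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Geodesic angle between two rotation matrices, in degrees."""
    R = R_est.T @ R_gt                                  # residual rotation
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```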
I tried to generate datasets with your code, but they cannot be used. Could you tell me how to run your code step by step?