rpmnet's Issues

Shuffle Problem

Hi, when I test the model with my own dataset, I found that the random seed for the shuffle transformation strongly influences the performance of the model. Can you tell me more about how to set the random seed? I saw that in your dataset code you simply use the index of the data item in the batch as the random seed when applying the random sample transformation.
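A minimal sketch of what such index-seeded sampling might look like (the class name and arguments are hypothetical, based only on the behaviour described above; this is not the repo's actual transform):

```python
import numpy as np

class SeededRandomSample:
    """Hypothetical per-item seeded subsampler: the same item index always
    yields the same subset of points, regardless of the global random seed."""
    def __init__(self, num_points: int = 1024):
        self.num_points = num_points

    def __call__(self, points: np.ndarray, item_idx: int) -> np.ndarray:
        rng = np.random.RandomState(item_idx)
        choice = rng.choice(points.shape[0], self.num_points, replace=False)
        return points[choice]
```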

Evaluation Difference to DCP

Hi yewzijian,

I noticed that your evaluation results are totally different from the ones reported by DCP paper (https://arxiv.org/pdf/1905.03304.pdf).

In particular, DCP reports that it is much better than FGR and PointNetLK, whereas in your comparison those methods achieve up to 100x lower errors and outperform DCP. So I was wondering if you use the same evaluation setup as DCP or if there is some important difference that I missed?

Best regards,
Felix

EOFError: Ran out of input

I'm running this project on Windows, so I don't expect ModelNetHdf._download_dataset() to work properly. Instead, I downloaded the dataset manually into RPMNet/datasets/modelnet40_ply_hdf5_2048\, but I keep getting this error from `for train_data in train_loader` in train.py:

EOFError: Ran out of input

Google suggests the target file may be empty, so I checked it by adding print(len(train_data), len(val_data)) in get_train_datasets() in datasets.py. But the dataset seems to load successfully, since the printed lengths are 5112 and 1202.

Question about the evaluation

Hi yewzijian:

Thank you for sharing! I have a question from reading the code. In the evaluation function, why do you evaluate the transformation matrices computed at the different iterations separately? In my understanding, the final transformation between the source and reference is the product of the transformation matrices computed across iterations, right? So I am confused about the evaluation process.

```python
def evaluate(pred_transforms, data_loader):
    _logger.info('Evaluating transforms...')
    num_processed, num_total = 0, len(pred_transforms)

    if pred_transforms.ndim == 4:
        pred_transforms = torch.from_numpy(pred_transforms).to(_device)
    else:
        assert pred_transforms.ndim == 3 and \
               (pred_transforms.shape[1:] == (4, 4) or pred_transforms.shape[1:] == (3, 4))
        pred_transforms = torch.from_numpy(pred_transforms[:, None, :, :]).to(_device)

    metrics_for_iter = [defaultdict(list) for _ in range(pred_transforms.shape[1])]

    for data in tqdm(data_loader, leave=False):
        dict_all_to_device(data, _device)

        batch_size = 0
        for i_iter in range(pred_transforms.shape[1]):
            batch_size = data['points_src'].shape[0]

            cur_pred_transforms = pred_transforms[num_processed:num_processed+batch_size, i_iter, :, :]
            metrics = compute_metrics(data, cur_pred_transforms)
            for k in metrics:
                metrics_for_iter[i_iter][k].append(metrics[k])
        num_processed += batch_size

    for i_iter in range(len(metrics_for_iter)):
        metrics_for_iter[i_iter] = {k: np.concatenate(metrics_for_iter[i_iter][k], axis=0)
                                    for k in metrics_for_iter[i_iter]}
        summary_metrics = summarize_metrics(metrics_for_iter[i_iter])
        print_metrics(_logger, summary_metrics, title='Evaluation result (iter {})'.format(i_iter))

    return metrics_for_iter, summary_metrics
```

eval.py error: "'scipy.spatial.transform.rotation.Rotation' object has no attribute 'from_dcm'"

Hello all,
Running eval.py resulted in the following error:
'scipy.spatial.transform.rotation.Rotation' object has no attribute 'from_dcm'

It seems that the scipy.spatial.transform.Rotation method 'from_dcm' was replaced with 'from_matrix'.
Changing that in the code solves the problem.

More info: https://stackoverflow.com/questions/65628149/ratcave-scipy-spatial-transform-rotation-rotation-object-has-no-attribute-a
My config: Python 3.8 on Mac OS X 11.2.3
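For reference, the rename in newer SciPy versions looks like this (a minimal before/after, not RPMNet-specific code):

```python
import numpy as np
from scipy.spatial.transform import Rotation

mat = np.eye(3)                           # any 3x3 rotation matrix
# Old API (removed in newer SciPy):  rot = Rotation.from_dcm(mat)
rot = Rotation.from_matrix(mat)           # new name since SciPy 1.4
angles = rot.as_euler('xyz', degrees=True)
# Likewise, rot.as_dcm() becomes rot.as_matrix()
```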

Training on full data encounters "svd did not converge"

Hi,
I encountered an error when I changed the categories to be trained from the first 20 categories to all categories. The training process seems fine, but when it reaches the 10th epoch it reports this error:

File "train.py", line 291, in
main()
File "train.py", line 43, in main
run(train_set, val_set)
File "train.py", line 260, in run
pred_transforms, endpoints = model(train_data, _args.num_train_reg_iter) # Use less iter during training
File "../rpmnet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "../RPMNet/src/models/rpmnet.py", line 199, in forward
transform = compute_rigid_transform(xyz_src, weighted_ref, weights=torch.sum(perm_matrix, dim=2))
File "../RPMNet/src/models/rpmnet.py", line 131, in compute_rigid_transform
u, s, v = torch.svd(cov, some=False, compute_uv=True)
RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 2)

How can I fix it?
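One hedged workaround (not from the repo): wrap the SVD so that a non-converging svd_cuda call is retried with a tiny diagonal jitter added to the covariance, for example:

```python
import torch

def safe_svd(cov: torch.Tensor, eps: float = 1e-7):
    """Retry a failed batched SVD with a small diagonal jitter (sketch only)."""
    try:
        return torch.svd(cov, some=False, compute_uv=True)
    except RuntimeError:
        jitter = eps * torch.eye(cov.shape[-1], device=cov.device, dtype=cov.dtype)
        return torch.svd(cov + jitter, some=False, compute_uv=True)
```

Non-convergence here often indicates a degenerate batch item (e.g. match weights collapsing to zero), so checking cov for NaNs before the SVD may also help.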

Questions about the inlier loss

Hello! I have a question about the inlier loss as defined in Equation 11 of the paper.

In my understanding, the inlier loss in the paper averages the total sum of the confidence matrix entries m_jk over J and K, to encourage more entries to be labeled as inliers:
[screenshot of Eq. 11 from the paper]

But in the code, the actual computation of the inlier loss is a little bit different:
[screenshot of the inlier-loss computation in the code]
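Written out from the compute_losses snippet quoted in a later issue (my reading of that code, with w denoting wt_inliers; not an authoritative restatement of Eq. 11):

```latex
L_{\mathrm{inlier}} \;=\; \frac{w}{J}\sum_{j=1}^{J}\Bigl(1 - \sum_{k=1}^{K} m_{jk}\Bigr)
                 \;+\; \frac{w}{K}\sum_{k=1}^{K}\Bigl(1 - \sum_{j=1}^{J} m_{jk}\Bigr)
```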

In the code you subtract the sum of each row and each column from an additional scalar one. Why add this one? Is it related to the performance of the training process?

It would be very helpful if you could give me some hints about this.

Thanks for your help.

eval_results

Hello, I would like to ask you
How did you obtain the results under the clean condition in the paper? I used eval.py to evaluate clean-trained.pth, but the results are much worse than those in the paper. Where did I go wrong? Here are the results I obtained:
[screenshots of the evaluation output]

Confusion of the logic in forward pipeline.

Hi, I have another question about the details of your network design.

In each iteration you update the source point cloud before feeding it into the feature extraction network. In detail, you extract features from the transformed source point cloud and the reference point cloud, and then compute this iteration's transformation using the extracted features.

Afterwards, you apply the computed transformation to the original source point cloud to obtain the input source point cloud for the next iteration.

In my opinion this is fine for the first iteration, since the features are extracted from the original source and reference point clouds at the beginning, so it is natural to compute the transformation between the original source and reference clouds from these features.

However, for the remaining iterations the features are extracted from the transformed source point cloud and the original reference point cloud. The transformation computed from these features is therefore the alignment between the transformed source cloud and the reference cloud, so the new transformed source cloud should be computed from the previously transformed cloud and the currently computed transformation, rather than from the original cloud and the current transformation.

I am a little confused by this and would like to know why you did not implement the forward pipeline this way. Are there other considerations?
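In symbols, the two conventions contrasted above are (with X_0 the original source cloud and T_i the transform estimated at iteration i):

```latex
\text{(a)}\quad X_{i+1} = T_i\,X_0
\qquad\text{vs.}\qquad
\text{(b)}\quad X_{i+1} = T_i\,X_i,\;\; T_{\mathrm{total}} = T_N \cdots T_2\,T_1
```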

ICP and FGR

Hello, yewzijian. Your work is excellent, but I have not been able to reproduce the ICP and FGR results in your paper. Would it be convenient to provide the parameters you used for ICP and FGR, or the code used to evaluate them?

Evaluated metrics are not meeting expectations

I trained the model many times using the same settings as proposed in the RPMNet paper. However, I am not getting promising evaluation results.
Here is one of the evaluation results.

# this model is trained on --noise_type clean --num_points 717

{
	"r_rmse": 30.818792915605933, 
	"r_mae": 23.835809988770535, 
	"t_rmse": 0.3206479847431183, 
	"t_mae": 0.25871723890304565, 
	"err_r_deg_mean": 46.72205352783203, 
	"err_r_deg_rmse": 51.79454803466797, 
	"err_t_mean": 0.5199803709983826, 
	"err_t_rmse": 0.5553786158561707, 
	"chamfer_dist": 0.12541979551315308
}

All the metrics are far from those in the paper:
[screenshot of the results table from the paper]

I don't think I've changed the code that calculates the metrics, but however I tried, the results were just not good enough.

code release

hi,

When will you open source your code for this nice CVPR work?

Input single fragment for real-time registration

Hello,
Thank you for the great work. The results in the paper look amazing!
It seems like RPMNet could be used for real-time processing with good registration accuracy.
Could you give me some idea of which parts of the code I need to modify so that registration can be done for a single noisy fragment (.ply, numpy array, or Open3D point cloud) against a clean reference? I can already see that I have to change "inference" in eval.py (if I just ignore "evaluate"), but I'm not quite sure how to change it.
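A minimal sketch of a single-pair call, assuming (as the training call in train.py suggests) that the model's forward takes a data dict with 'points_src'/'points_ref' tensors of shape (B, N, 6) (xyz + normals) plus an iteration count; the function name and return-value indexing here are my assumptions:

```python
import numpy as np
import torch

def register_pair(model, src_xyz_normals, ref_xyz_normals, num_iter=5, device='cuda'):
    """Sketch: register one noisy source fragment against a clean reference."""
    data = {
        'points_src': torch.from_numpy(src_xyz_normals[None]).float().to(device),  # (1, N, 6)
        'points_ref': torch.from_numpy(ref_xyz_normals[None]).float().to(device),  # (1, M, 6)
    }
    model.eval()
    with torch.no_grad():
        pred_transforms, _ = model(data, num_iter)
    return pred_transforms[-1][0].cpu().numpy()  # transform from the final iteration
```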

Registration and matching?

Thank you for your wonderful work. I'm looking for a point cloud matching algorithm like this, but is that different from point cloud registration?

A question about the isotropic metric

Hello!

My question is: are the isotropic metrics defined originally by you, or taken from somewhere else? It would be great if you could explain how this metric is formulated mathematically.

Thanks so much for your help!
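For reference, the isotropic errors are commonly defined as below; whether this matches the repo's compute_metrics exactly is an assumption on my part:

```latex
\mathrm{Err}_R = \arccos\!\left(\frac{\operatorname{tr}\!\left(R_{\mathrm{pred}}^{\top} R_{\mathrm{gt}}\right) - 1}{2}\right),
\qquad
\mathrm{Err}_t = \bigl\lVert t_{\mathrm{pred}} - t_{\mathrm{gt}} \bigr\rVert_2
```

That is, a single rotation angle and a single Euclidean distance, independent of the coordinate axes, whereas the anisotropic metrics report per-axis Euler-angle and translation errors.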

Does transfer learning exist for RPMNet?

Hello again,

Can we do transfer learning with RPMNet to quickly add new objects? Currently, the pre-trained weights do not work well for my own object:
[screenshot of the misaligned registration result]

This may be because my source (orange) is missing almost 60% of the geometry compared to the target (blue) (pretty harsh conditions due to sensor noise and occlusion). However, I want to verify whether the pre-trained weights are the reason this network is not showing good performance.

shape_names.txt

Hello, I would like to ask: where can I get the shape_names.txt file referenced in this code?

    with open(os.path.join(dataset_path, 'shape_names.txt')) as fid:
        self._classes = [l.strip() for l in fid]
        self._category2idx = {e[1]: e[0] for e in enumerate(self._classes)}
        self._idx2category = self._classes

Using model for user point clouds

Hi!
I would like to compare your pretrained model, a model trained on my own data, and other point cloud registration methods.
I am using my own point clouds obtained with an Intel RealSense.
What is the easiest way to feed my point clouds into your model and get the output?
Thanks!

About isotropic and anisotropic Metrics in Experiments.

Hi, yewzijian!
Thanks for your excellent work and concise explanation. It really inspires me a lot.
But I am a little confused about the experiments section. In the paper you mention that the Euler angle and translation errors are anisotropic, while the other metrics are isotropic.
Sorry, I cannot understand anisotropy of a metric very well. Could you please give a more specific explanation?
Thank you in advance.

Results analysis on personal dataset

Hi, I made some progress with training on my own data. It works pretty well, thanks for the awesome repository. I have extended dataset.py to accept other formats as well.
However, I was wondering whether you already have a script to visualise the eval results? I know this is straightforward, but if it already exists that would be great.

Moreover, I have found that the network does not converge very well for smaller features. I have already trained the network for 4000 epochs; the translation error does not go below 2 and the rotation error stays around 3 degrees. I used just one capacitor .ply file for training. I am not sure whether I should include more data in the dataset (different types of capacitors?).

Any suggestions or hint would be appreciated:)

How does the defined feature contribute to regression?

I'm here... again.
My work generally follows your backbone with a few modifications, including replacing the PPF feature and the loss function.
I defined a "pmd" feature which works pretty well on clean data, but fails on jittered and cropped data within the first 200 epochs.
I guess my pmd feature is not robust enough to partial or noisy data, but I'm totally confused about how to improve the robustness of a local feature.

Here's a brief definition of pmd. Given a point S, select its k nearest neighbours K_i and mark A = K_1, B = K_2, C = K_3. Then S, A, B, C form a triangular cone S-ABC with S as the apex, and the three faces SAB, SAC, SBC intersect at S.
Thus,

# faces SAB, SAC, SBC intersect at S

angle_SA = angle(SAB, SAC)
angle_SB = angle(SAB, SBC)
angle_SC = angle(SAC, SBC)

# ni, nr, d_norm are the same as in RPMNet
angle_ni = angle(ni, nr)
d_norm = torch.norm(d, dim=-1)

# SOME_CODES_HERE
# angle_SA, angle_SB, angle_SC are repeated here to keep the same shape as angle_ni and d_norm

pmd = torch.stack([angle_SA, angle_SB, angle_SC, angle_ni, d_norm], dim=-1)
return {'xyz': new_xyz, 'dxyz': xyz_feat, 'pmd': pmd}

I'm not asking for a detailed implementation... that would be quite rude.
It would be nice if you could tell me:
Did you try other features before PPF, especially any features you designed yourself?
Is it possible to improve the robustness of handcrafted features in their definition, rather than during training afterwards?

TypeError: can't pickle _thread.RLock objects

I don't know why the error is happening.
Please help me.

Traceback (most recent call last):
File "train.py", line 291, in
main()
File "train.py", line 43, in main
run(train_set, val_set)
File "train.py", line 253, in run
for train_data in train_loader:
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\site-packages\torch\utils\data\dataloader.py", line 278, in iter
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\site-packages\torch\utils\data\dataloader.py", line 682, in init
w.start()
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\reduction.py", line 62, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects

Additionally, the following error occurred:

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\for\anaconda3\envs\rpmnet_test\lib\multiprocessing\spawn.py", line 116, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
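A common Windows-specific workaround (an assumption on my part, not from the repo): the spawn-based worker startup used by the DataLoader on Windows has to pickle the dataset and everything it references, and objects holding a _thread.RLock (loggers, open file handles, etc.) cannot be pickled. Loading in the main process avoids this:

```python
from torch.utils.data import DataLoader

# num_workers=0 disables worker processes; the batch size shown is just an example value.
train_loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=0)
```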

DCP_v2 pretrained model required && Multi-GPU implementations

Hello! Thank you for open-sourcing this impressive work. I have several questions:

  1. Could you provide the DCP_v2 checkpoints? I trained DCP for 250 epochs (noise-free, 5112/1266 pairs for training/testing) in order to reproduce the results in Table 1, but the results are far from good.
  2. I can reproduce the results in Tables 1 to 3 using your provided RPMNet checkpoints, but training for 1k epochs on a single GPU is time-consuming. Have you considered a multi-GPU implementation? I tried simply adding

model = nn.DataParallel(model)

and it reported a lot of errors.

Thanks very much for your help!

FileNotFoundError

Hi, I got an error when trying to run the code:
FileNotFoundError: [Errno 2] No such file or directory: '../datasets/modelnet40_ply_hdf5_2048/shape_names.txt'
Do I need to download this file separately?
Thanks for your reply

RuntimeError: invalid argument 2: A should be non-empty 2 dimensional

When I run

python eval.py --noise_type crop --resume ./pretrained/partial-trained.pth --val_batch_size 8

I got:

RPMNet/src/models/rpmnet.py", line 133, in compute_rigid_transform
u, s, v = torch.svd(cov, some=False, compute_uv=True)
RuntimeError: invalid argument 2: A should be non-empty 2 dimensional at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/generic/THCTensorMathMagma.cu:264

License?

Hi, I would like to know under what license your code is released. I couldn't find it anywhere.
Thank you.

The model outputs 'nan' when training on the 3rd dataset.

Hi yewzijian:

I am trying to train the model on my own datasets, but the model outputs 'nan' after training for several epochs. I find that the 'nan' values always come from this line:

new_feat = self.prepool(new_feat)

The 0.weight of the self.prepool module becomes a 'nan' tensor after several epochs. I clipped the gradients, but it seems to make no difference. I would appreciate any advice you can give.
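One hedged debugging aid (not from the repo): enabling PyTorch's anomaly detection reports the first backward operation that produces NaN/Inf, which helps confirm whether the NaNs really originate around self.prepool:

```python
import torch

# Enable once before the training loop; it slows training noticeably, so use only while debugging.
torch.autograd.set_detect_anomaly(True)
```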

Some questions about the experiments

Hello, this is really nice work, but I have some questions about the experimental results. I would really like to know how you obtained the test results of the other papers you compare against, such as DCP-v2. I am looking forward to your reply, thank you.

Normal Problem

Hi, after testing your great work I have some questions about the normals, since I could not find any description of them on the official ModelNet40 website. I have three questions:

  1. Is the normal of each vertex computed from its 64 neighbours inside a sphere of radius 0.3?
  2. Are all the normals normalized?
  3. Are the orientations of all the normals consistent? If so, are they oriented towards the mass centre of the mesh?

Also, if I want to test the network with my own dataset, what kind of preprocessing should I do for the normals? Or can I just use Open3D's estimate_normals function to compute the input normals?
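As a rough sketch of the Open3D route (the radius/max_nn values simply mirror the 0.3 / 64 numbers asked about above and are assumptions, not the dataset's actual settings):

```python
import numpy as np
import open3d as o3d

xyz = np.random.rand(2048, 3)                        # replace with your own points
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.3, max_nn=64))
pcd.orient_normals_towards_camera_location(pcd.get_center())  # one way to make orientations consistent
points_with_normals = np.concatenate([xyz, np.asarray(pcd.normals)], axis=1)  # (N, 6) xyz + normal
```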

Something about the code!

Thank you for your wonderful work. I have recently been studying your RPMNet code, but one thing confuses me: how do I visualize the transformed source point cloud and the reference point cloud? My idea is to save the transformed source point cloud and the reference point cloud during testing, then use MeshLab or other software to visualize them.
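A minimal sketch of that idea, assuming src_xyz/ref_xyz are (N, 3) numpy arrays and transform is a 3x4 [R|t] matrix taken from the model output at test time:

```python
import numpy as np
import open3d as o3d

def save_pair_for_meshlab(src_xyz, ref_xyz, transform, prefix='pair'):
    """Write the aligned source and the reference as colored .ply files."""
    r, t = transform[:3, :3], transform[:3, 3]
    src_aligned = src_xyz @ r.T + t                              # apply the predicted transform
    for name, xyz, color in [('src', src_aligned, [1.0, 0.7, 0.0]),
                             ('ref', ref_xyz, [0.0, 0.4, 1.0])]:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(xyz)
        pcd.paint_uniform_color(color)
        o3d.io.write_point_cloud('{}_{}.ply'.format(prefix, name), pcd)
```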

How to evaluate other point clouds?

Hi,
Thanks for your masterpiece. I am trying to use this network to obtain the rigid transformation between two related point clouds, but I don't know how to do this. Could you please give me some hints?

Are there any requirements for the point clouds? It looks like the points should be of type pcl::PointXYZINormal, and the two clouds should be the same size, e.g. (16, 717, 3).

`L_total = L_reg + λL_inlier`? A little confused about this formula

  1. In your paper, L_total = L_reg + λ·L_inlier and you use λ = 0.5^(N_i - i), but in the function src/train.py > compute_losses you seem to put λ outside the formula, making it L_total = λ(L_reg + L_inlier) (written out after this list).
    Here is your code:
    for i in range(num_iter):
        ref_outliers_strength = (1.0 - torch.sum(endpoints['perm_matrices'][i], dim=1)) * _args.wt_inliers
        src_outliers_strength = (1.0 - torch.sum(endpoints['perm_matrices'][i], dim=2)) * _args.wt_inliers
        if reduction.lower() == 'mean':
            losses['outlier_{}'.format(i)] = torch.mean(ref_outliers_strength) + torch.mean(src_outliers_strength)
        elif reduction.lower() == 'none':
            losses['outlier_{}'.format(i)] = torch.mean(ref_outliers_strength, dim=1) + \
                                             torch.mean(src_outliers_strength, dim=1)

    discount_factor = 0.5  # Early iterations will be discounted
    total_losses = []
    for k in losses:
        discount = discount_factor ** (num_iter - int(k[k.rfind('_')+1:]) - 1)
        total_losses.append(losses[k] * discount)
    losses['total'] = torch.sum(torch.stack(total_losses), dim=0)
    return losses
  2. I can't understand why L_inlier can alleviate the outlier problem. Do you have some theoretical basis for this?
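Written out, what the quoted code appears to compute is (my reading of the snippet, assuming the registration losses are stored in the same losses dict and therefore receive the same discount):

```latex
L_{\mathrm{total}} \;=\; \sum_{i=0}^{N-1} 0.5^{\,N-1-i}
    \left( L_{\mathrm{reg}}^{(i)} + w_{\mathrm{inliers}} \, L_{\mathrm{inlier}}^{(i)} \right)
```

That is, the discount multiplies both loss terms of iteration i, which is what item 1 points out.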

Corresponding normals for the points

Hi,

Thanks for sharing your work.
I see that you use xyz (the XYZ coordinates of the points) and normals (the corresponding normals for the points) as input. May I ask what the "normals" are? I want to use your model on my dataset, which only has xyz coordinates; how can I create the normals for my dataset?
Thank you,

Find the nearest neighbor and SVD to solve for the transformation

Hi yewzijian:
Thanks for your work. I want to ask two questions:

  1. Regarding finding the nearest neighbours:
    group_idx[sqrdists > radius ** 2] = N
    group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]
    In this process, you do not take the 64 closest points, but just take the first 64 points that satisfy the distance requirement?
  2. Regarding using SVD to solve for the transformation, in this function:
    transform = compute_rigid_transform(xyz_src, weighted_ref, weights=torch.sum(perm_matrix, dim=2))
    cov = a_centered.transpose(-2, -1) @ (b_centered * weights_normalized)
    Shouldn't the covariance be cov = a_centered.transpose(-2, -1) @ b_centered? (A standard weighted formulation is written out below this list.)
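For reference, a standard weighted Kabsch/Procrustes covariance (which the weighted call above suggests, though only the author can confirm the intent) is

```latex
C \;=\; \sum_{i} \tilde{w}_i \,(a_i - \bar{a})\,(b_i - \bar{b})^{\top},
\qquad \tilde{w}_i = \frac{w_i}{\sum_j w_j},
```

so the unweighted cov = a_centered.transpose(-2, -1) @ b_centered would be the special case of uniform weights.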

How to color the src and ref point clouds with learned features?

The input train_data for the model contains train_data['points_src'] of shape 8x717x6 and train_data['points_ref'] of shape 8x717x6.
I extracted the following features for each sample, as defined by the author, during learning:

# feature returned by sample_and_group_multi()
feature_before_dim = {
    'xyz':  ...,   # an array of shape (717, 1, 3)
    'dxyz': ...,   # an array of shape (717, 64, 3)
    'ppf':  ...,   # an array of shape (717, 64, 4)
}

# feature returned by FeatExtractionEarlyFusion.forward()
feature_after_dim = ...    # an array of shape (717, 96)

Here's the problem: how can either of the above two features help me color the 717 points in the source cloud? The expectation is that points with similar features share the same color. In other words, given n colors (blue, red, etc.), how do I design a hash function f such that f(feat) -> {0, 1, 2, ..., n-1} maps an input feature to a particular color?
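One simple sketch of such an f: cluster the per-point features with k-means so that similar features fall into the same color bucket (scikit-learn here is my choice for illustration, not a dependency of the repo):

```python
import numpy as np
from sklearn.cluster import KMeans

def features_to_color_ids(feat: np.ndarray, n_colors: int = 8) -> np.ndarray:
    """Map a (num_points, feat_dim) array, e.g. the (717, 96) features above,
    to integer color ids in {0, ..., n_colors - 1}."""
    feat = feat / (np.linalg.norm(feat, axis=1, keepdims=True) + 1e-8)  # compare feature directions
    return KMeans(n_clusters=n_colors, n_init=10).fit_predict(feat)
```

Alternatively, projecting the features to three dimensions with PCA and using the result directly as RGB gives a continuous coloring.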

demo in the video

Is there a demo.py here? I am a beginner and want to test your excellent work as shown in your video.
