
lidar-mos's Issues

Inferring the test dataset using the pretrained SalsaNext model

Hi, thanks for your great work!

I'm getting errors when trying to run inference on the test dataset with the pretrained SalsaNext model. It runs fine on the train and valid splits, but on the test split I only get the prediction results for sequence 11, and then I get the following error:

Traceback (most recent call last):
  File "infer.py", line 145, in <module>
    user.infer()
  File "../../tasks/semantic/modules/user.py", line 113, in infer
    to_orig_fn=self.parser.to_original, cnn=cnn, knn=knn)
  File "../../tasks/semantic/modules/user.py", line 134, in infer_subset
    for i, (proj_in, proj_mask, _, _, path_seq, path_name, p_x, p_y, proj_range, unproj_range, _, _, _, _, npoints) in enumerate(loader):
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "../..//tasks/semantic/dataset/kitti/parser.py", line 244, in __getitem__
    exec("residual_file_" + str(i+1) + " = " + "self.residual_files_" + str(i+1) + "[seq][index]")
  File "<string>", line 1, in <module>
IndexError: list index out of range
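One hypothetical way to narrow this down (folder names are assumptions based on the traceback, which fails while indexing self.residual_files_i[seq][index]): check that every test sequence has as many residual files as scans.

    import os

    # Assumed SemanticKITTI-style layout; adjust folder names to your setup.
    seq_dir = "sequences/11"
    n_scans = len(os.listdir(os.path.join(seq_dir, "velodyne")))
    n_res = len(os.listdir(os.path.join(seq_dir, "residual_images_1")))
    assert n_res == n_scans, f"{n_res} residual files vs {n_scans} scans"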

Error during training with salsanext_mos without a pretrained model

Thanks for the author's remarkable work!

When I start training this network without a pretrained model, the error shown below occurs.
Can someone help me? Thanks a lot!

/home/lijianguo/anaconda3/envs/LiDAR-MOS-1.1/bin/python /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/train/tasks/semantic/train.py -d /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data -ac /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml -l /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs
Opening arch config file /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml
Opening data config file config/labels/semantic-kitti-mos.yaml

INTERFACE:
dataset /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data
arch_cfg /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml
data_cfg config/labels/semantic-kitti-mos.yaml
uncertainty True
Total of Trainable Parameters: 6.71M
log /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs/logs/2022-4-04-22:06
pretrained None

Traceback (most recent call last):
File "/media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/train/tasks/semantic/train.py", line 136, in
os.makedirs(FLAGS.log)
File "/home/lijianguo/anaconda3/envs/LiDAR-MOS-1.1/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 22] Invalid argument: '/media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs/logs/2022-4-04-22:06'
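A plausible cause (an assumption, since Errno 22 on mkdir is filesystem-specific): the log directory name contains a colon (22:06), which filesystems such as NTFS or exFAT, common for drives mounted under /media, do not allow. A minimal sketch of a workaround is to build the timestamp without colons:

    import datetime

    # Hedged workaround: avoid ':' in the log directory name, since some
    # filesystems mounted under /media reject it with Errno 22.
    log_name = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M")  # e.g. '2022-04-04-22-06'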

Issue with trying to train on multiple GPUs

Hello!
I was trying to train on multiple GPUs and ran into an issue. When I ran the script, I got this error message:

RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([4]) and output[0] has a shape of torch.Size([1]).

This happens in the trainer.py script on line 389:
loss_m.backward(idx)

I was training on 4 GPUs, so I assumed that is why grad_output[0] has a shape of torch.Size([4]), but the loss_m output from line 384 has a shape of torch.Size([1]):
loss_m = criterion(torch.log(output.clamp(min=1e-8)), proj_labels) + self.ls(output, proj_labels.long())

Do you have an idea of what might be causing the issue? Thank you in advance for your help!
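A minimal sketch of one possible fix (an assumption based on how nn.DataParallel gathers outputs, not the authors' solution): when the criterion returns one loss per GPU, reduce to a scalar before backpropagating instead of passing a gradient tensor into backward():

    # Variables (criterion, output, proj_labels, self.ls) are those quoted above.
    loss_m = criterion(torch.log(output.clamp(min=1e-8)), proj_labels) \
             + self.ls(output, proj_labels.long())
    loss_m = loss_m.mean()  # collapse the per-replica losses (shape [4]) to a scalar
    loss_m.backward()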

TF/Tensorboard version mismatch

Hi there, thanks for releasing the code, the paper was a very interesting read!

I'm getting errors when trying to run MOS with SalsaNext, related to the TF/Tensorboard versions. As per the config file, the TF and Tensorboard versions are 1.13.1, but in LiDAR-MOS/mos_SalsaNext/train/common/logger.py line 18, you use self.writer = tf.summary.create_file_writer(log_dir), which seems to be the TF 2.0 way of creating a Tensorboard writer (mingyuliutw/UNIT#56 (comment), https://www.tensorflow.org/tensorboard/migrate).

This is the exact error I get:

No pretrained directory found.
Copying files to /host-machine/ml/LiDAR-MOS/logs/logs/2021-7-16-18:10salsanext for further reference.
Sequences folder exists! Using sequences from /host-machine/semKITTI/Data/SemanticKitti/dataset/sequences
parsing seq 00
parsing seq 01
parsing seq 02
parsing seq 03
parsing seq 04
parsing seq 05
parsing seq 06
parsing seq 07
parsing seq 09
parsing seq 10
Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
Sequences folder exists! Using sequences from /host-machine/semKITTI/Data/SemanticKitti/dataset/sequences
parsing seq 08
Using 4071 scans from sequences [8]
Loss weights from content:  tensor([  0.0000,   1.0210, 296.4371])
Depth of backbone input =  6
Traceback (most recent call last):
  File "./train.py", line 177, in <module>
    trainer = Trainer(ARCH, DATA, FLAGS.dataset, FLAGS.log, FLAGS.pretrained,FLAGS.uncertainty)
  File "../../tasks/semantic/modules/trainer.py", line 130, in __init__
    self.tb_logger = Logger(self.log + "/tb")
  File "../../common/logger.py", line 18, in __init__
    self.writer = tf.summary.create_file_writer(log_dir)
AttributeError: module 'tensorflow._api.v1.summary' has no attribute 'create_file_writer'
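A minimal compatibility shim, assuming the intent is to keep the pinned TF 1.13.1 environment (tf.summary.FileWriter is the TF 1.x API; create_file_writer only exists in TF 2.x):

    import tensorflow as tf

    def make_writer(log_dir):
        # Pick whichever summary-writer API the installed TF version provides.
        if hasattr(tf.summary, "create_file_writer"):   # TF 2.x
            return tf.summary.create_file_writer(log_dir)
        return tf.summary.FileWriter(log_dir)           # TF 1.x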

Question about usage with SLAM

Hi ! Thanks for your great work~

I've just read your paper and have some questions about it.
The paper directly uses the pose estimated by the SLAM algorithm and transforms the point cloud sequence into the current pose before calculating the residual image, right? So the pose has to be accurate, or the residual image will be wrong. Am I right?
If so, when a moving object causes large drift in the odometry, your algorithm might not improve the odometry accuracy, since the moving object could not be identified accurately.

Thanks for your reply in advance!!

Question about residual images

Hi, where are the residual images computed? In the code I only see them being loaded from files. Or can they be left out, and the remissions from the LiDAR used instead?

Dims of multi-residual images

Dear author,

If n_input_scans = 2,
are the dims of proj_full 12 (x, y, z, r, e, x, y, z, r, e, residual1, residual2)?

Is that right?

I'm sorry to bother you; this really confuses me.
[screenshot attached]

Thanks.

Understanding the Labels Visualization

Hi,
I'm trying to understand the output of "visualize_mos.py".
I get the following image:
[screenshot attached]

  1. Are the pixels shown in red in the bottom figure considered dynamic?
  2. What does the top figure represent? Is it the residual image of the current frame? (I'm working with 1 residual image.)

Thanks!

Question about testing on my own dataset

Hello, can I use your pretrained model to test my own dataset? Do I need to provide manually annotated labels and train on my own dataset?

Question about loading the pretrained salsanext model

Hi!

Thanks so much for the code! I have a question about loading the pretrained SalsaNext model. When I followed the steps outlined in the "How to use" section and tested the toy example, I ran into an issue when running infer.py on the toy dataset (python3 infer.py -d ../../../../data -m ../../../../data/model_salsanext_residual_1 -l ../../../../data/predictions_salsanext_residual_1_new -s valid) and got this error:

RuntimeError: ../../../../data/model_salsanext_residual_1/SalsaNext_valid_best is a zip archive (did you mean to use torch.jit.load()?)

I tried switching from torch.load() to torch.jit.load() in user.py as the message suggested, but that leads to other errors. What did I do wrong, or did I miss something along the way? I set up the environment according to the instructions linked on the GitHub page (using PyTorch 1.1).

Thank you in advance for your help!
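A hedged workaround, assuming the checkpoint was saved with PyTorch >= 1.6 (which introduced the zip-based serialization that torch.load() in PyTorch 1.1 cannot read): re-save the file once from an environment with a newer PyTorch so the pinned 1.1 environment can load it.

    import torch

    # Run this once with PyTorch >= 1.6 installed; the resulting file uses the
    # legacy format that older torch.load() understands.
    w = torch.load("SalsaNext_valid_best", map_location="cpu")
    torch.save(w, "SalsaNext_valid_best_legacy", _use_new_zipfile_serialization=False)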

How to use SalsaNext with my own dataset?

Hi, I have read the paper and built and run your LiDAR-MOS.

Thanks for sharing your awesome projects here.


I have a question.

How can I use the code with my own dataset?

I'll use the pretrained model you uploaded, so I think all I need to do is convert my data into the appropriate format for LiDAR-MOS.

The data I have consists of .bag and .pcd files.

I'd appreciate any advice you can give.

Best regards.

Migrating the model to Livox Horizon

Thanks for your excellent work. I downloaded and ran inference on the toy dataset you provide, and the performance is good. Now I want to use the model with my Livox Horizon, which has an 80x25 FoV and a point density similar to a 64-beam LiDAR. When I use your tools to generate range and residual images, I just change pose.txt and calib.txt to my own and everything is fine: I can generate correct range and residual images. But when I try to run inference on my data with the model, I hit the error below in the range-image generation function in SalsaNext. Are there any possible reasons?
[screenshot attached]

question about ROS/bag API

Thanks for your excellent work! Can this algorithm operate directly on a single bag file and then produce a bag with the dynamic objects removed? That would make it much more widely applicable.

How can I train SalsaNext successfully? (A problem occurred while training SalsaNext)

Hi, thanks for sharing your great code.
I'm trying to go through the whole process of your work,
but I can't train SalsaNext.

I tried:

./train.sh -d ../../../../dataset/KITTI_dataset/velodyne_laser/dataset/ -a salsanext_mos.yml -l logs/ -c 0

Training got as far as:

Lr: 5.944e-03 | Update: 2.381e-04 mean,3.611e-04 std | Epoch: [0][11370/19130] | Time 0.203 (0.204) | Data 0.030 (0.031) | Loss 0.3839 (0.2800) | acc 0.962 (0.980) | IoU 0.685 (0.517) | [7 days, 18:10:40]

and the error messages I got are:

proj_full = torch.cat([proj_full, torch.unsqueeze(eval("proj_residuals_" + str(i+1)), 0)])
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2048 and 2058 in dimension 2 at ../aten/src/TH/generic/THTensor.cpp:711

Is the conda environment set up wrong? I'm currently using:

tensorboard               1.13.1                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.0                    pypi_0    pypi
tensorflow                1.13.1                   pypi_0    pypi
tensorflow-estimator      1.13.0                   pypi_0    pypi

Also, could you check the links in the Collection of downloads again? They seem to be inaccessible now.
Thank you!
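A hypothetical check (the file layout is an assumption): the mismatch of 2048 vs 2058 in dimension 2 suggests a residual image whose width differs from the range-image width configured in salsanext_mos.yml, which would trigger exactly this torch.cat failure partway through an epoch. Inspecting the generated files should confirm it:

    import numpy as np

    # Assumed path layout; adjust to where the residual images were generated.
    res = np.load("sequences/00/residual_images_1/000000.npy")
    print(res.shape)  # expected to match the arch config width, e.g. (64, 2048)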

Training setups (tested with different GPUs)

Dear author,

Thanks for sharing the code.

I'm trying to reproduce the metrics from the paper, but haven't been successful yet.
Could you share the training parameters and the hardware used for the experiments?
Also, regarding metrics such as IoU in the paper, do you mean mIoU or just the IoU of the moving class?

Thanks!

Error when evaluating IoU

Hello, I tried to evaluate IoU after running inference, using sequence 08 as an example.
I got this error:

evaluating label  /mnt/dataset/sequences/08/labels/000000.label with /mnt/result/infer_0108/sequences/08/predictions/000000.label
Traceback (most recent call last):
  File "evaluate_iou.py", line 176, in <module>
    label.open_scan(scan_file)
TypeError: open_scan() missing 2 required positional arguments: 'from_pose' and 'to_pose'

Can you help me? Thank you so much.
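A minimal sketch of a workaround, not the authors' fix: the MOS version of laserscan.py extends open_scan() with pose arguments used to transform scans, so for per-scan IoU evaluation, passing identity poses (an assumption that leaves the scan untransformed) should satisfy the signature:

    import numpy as np

    # label and scan_file are the objects from evaluate_iou.py quoted above.
    identity = np.eye(4)
    label.open_scan(scan_file, identity, identity)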

Questions about LiDAR-MOS visualization

Hello, your LiDAR-MOS code is very good, but I have a problem: I cannot get the visualization to work when reproducing your code, as shown in the figure:
[screenshot attached]
After running this command, the program seems to be stuck, and I don't know why. I want to get your visualized results, as shown below:
[screenshot attached]
PS: the author replied that, judging from the program output, it seems to be a tkinter problem, but the tkinter module does not appear to be missing.
If anyone knows how to solve this, I hope you can help me. Thanks.

Generating Residual Images for Inference

Hi,
Say I'm interested in using the pretrained model with my own dataset, only for inference in the first stage.
Do I have to prepare the residual images beforehand, or does the model generate them automatically?
(Assuming I want to use residual images, i.e., residual: True.)

Thanks :)

Multiple input frames

Thank you for your great code!

I notice in rangenet_mos.yaml and salsanext_mos.yaml you only use one input frame. If I want to use multiple scans for training, how can I do it?

As far as I know, I need to change n_input_scans in both backbone and dataset, and transform in the dataset. What else do I need?

Another question is about the pose transform. When using a sequence of scans to train the model, why do you transform the poses to the last scan rather than the first scan? here

Training bugs

Thanks for your quick response!
@Chen-Xieyuanli

I trained with salsanext_mos on SemanticKITTI. When it got to

Lr: 3.977e-03 | Update: 3.258e-04 mean,5.209e-04 std | Epoch: [0][950/2391] | Time 0.641 (0.623) | Data 0.081 (0.067) | Loss 0.6863 (0.9777) | acc 0.830 (0.855) | IoU 0.417 (0.434) | [1 day, 3:19:41]

I first saw this warning:

LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py:166: RuntimeWarning: invalid value encountered in true_divide
  pitch = np.arcsin(scan_z / depth)

and then got the error:

  File "conda/envs/salsa/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
    raise exception
IndexError: Caught IndexError in DataLoader worker process 1.

and

  File "LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py", line 201, in do_range_projection
    self.proj_range[proj_y, proj_x] = depth
IndexError: index -2147483648 is out of bounds for axis 0 with size 64
The conda env is:

- python=3.9.12=h12debd9_0
- python-flatbuffers=1.12=pyhd3eb1b0_0
- pytorch=1.11.0=py3.9_cuda11.3_cudnn8.2.0_0
- pytorch-mutex=1.0=cuda
- tensorflow-base=2.6.0=mkl_py39h3d85931_0
- tensorflow-estimator=2.6.0=pyh7b7c402_0

Is there anything wrong with this env? Is it too new?
Or is there something wrong with the data in the residual images?
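A hedged guard against the NaN path (assuming laserscan.py-style variables; the warning and the -2147483648 index together suggest zero-range returns, since scan_z / depth is NaN when depth is 0, and casting NaN to int yields INT_MIN):

    import numpy as np

    points = np.zeros((4, 3), dtype=np.float32)  # example scan containing bad returns
    scan_z = points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    valid = depth > 0                  # mask out zero-range returns
    pitch = np.zeros_like(depth)
    pitch[valid] = np.arcsin(scan_z[valid] / depth[valid])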

Tweaking the model for partial azimuth FOV Lidar

Hi,
My LiDAR's azimuth FOV is only ~100 degrees.
What would be the best way to tweak the model or the configuration so it works?
Currently the range images (and also the residual images) are very sparse at the left and right sides, and
I think that is one of the reasons for the poor performance I get.
Thanks
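One hedged idea, not an option provided by the repo: the standard range projection spreads yaw over the full 360 degrees, so a ~100 degree sensor fills only about 28% of the image width. Normalizing by the sensor's actual horizontal FOV would use the full width:

    import numpy as np

    proj_W = 2048                                    # image width from the arch config
    fov_h = np.deg2rad(100.0)                        # sensor azimuth FOV (assumed)
    yaw = np.deg2rad(np.array([-50.0, 0.0, 50.0]))   # example per-point yaw angles
    proj_x = 0.5 * (yaw / (fov_h / 2.0) + 1.0)       # map [-fov_h/2, fov_h/2] -> [0, 1]
    proj_x = np.minimum(proj_W - 1, (proj_x * proj_W).astype(np.int32))

Note that a model pretrained on full-360 KITTI projections would likely still need fine-tuning on images produced this way.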

About FlowNet3D in the paper

Hi, thanks for your great work!
I have a question about FlowNet3D as used in your paper. FlowNet3D estimates the motion of a 3D point cloud, which is generated by both object motion and **sensor motion**, so why can you identify dynamic objects with a threshold? As you describe in the paper: "We set a threshold on the estimated translation of each point to decide the label for each point, i.e., points with translations larger than the threshold are labeled as moving".
In my opinion, a large scene-flow translation doesn't necessarily mean a dynamic object; it may just be the LiDAR sensor moving. Do you first transform the two frames into the same coordinate frame and then estimate the scene flow?

Labels

Can we predict moving objects without labels?

combining lidar-mos and removert

Hi Chen, as always, I'm so grateful for the code and work you've shared.
This issue is not a question or error report; I just want to share a recent mini-project (done for fun, but it makes practical sense) with other readers.

I think lidar-mos is good at proactively removing a bunch of points in the wild, particularly in highly dynamic urban environments such as KITTI 01.
Thus, I used the scan_cleaner you shared and fed its output (cleaned scans from lidar-mos) into Removert as its input.

This is the tutorial video https://youtu.be/zWuoqtDofsE
and an example result can be seen here https://github.com/irapkaist/removert#further-improvements

thanks for reading 😋

How to change "n_input_scans"?

I only changed "arch_cfg.yaml" in the model, but I get this error: size mismatch for module.downCntx.conv1.weight: copying a param with shape torch.Size([32, 6, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 13, 1, 1]). So how do I change "n_input_scans"? Thank you :)
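A hedged workaround, assuming the goal is to reuse a checkpoint whose first convolution has a different input depth (6 vs 13 channels): load only the tensors whose shapes still match, and let the first layer train from scratch. This is a sketch, not the authors' recommended procedure; model is the already constructed network, and the checkpoint file name is assumed.

    import torch

    ckpt = torch.load("SalsaNext_valid_best", map_location="cpu")
    model_state = model.state_dict()
    # Keep only the checkpoint tensors that fit the current architecture.
    compatible = {k: v for k, v in ckpt["state_dict"].items()
                  if k in model_state and v.shape == model_state[k].shape}
    model_state.update(compatible)
    model.load_state_dict(model_state)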

May I ask how to fix this error? (When using my own trained model to infer)

Traceback (most recent call last):
  File "infer.py", line 144, in <module>
    user = User(ARCH, DATA, FLAGS.dataset, FLAGS.log, FLAGS.model,FLAGS.split,FLAGS.uncertainty,FLAGS.monte_carlo)
  File "../../tasks/semantic/modules/user.py", line 70, in __init__
    self.model.load_state_dict(w_dict['state_dict'], strict=True)
  File "/root/miniconda3/envs/salsanext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1605, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.downCntx.conv1.weight", "module.downCntx.conv1.bias", "module.downCntx.conv2.weight", "module.downCntx.conv2.bias", "module.downCntx.bn1.weight", "module.downCntx.bn1.bias", "module.downCntx.bn1.running_mean", "module.downCntx.bn1.running_var", "module.downCntx.conv3.weight", "module.downCntx.conv3.bias", "module.downCntx.bn2.weight", "module.downCntx.bn2.bias", "module.downCntx.bn2.running_mean", "module.downCntx.bn2.running_var", "module.downCntx2.conv1.weight", "module.downCntx2.conv1.bias", "module.downCntx2.conv2.weight", "module.downCntx2.conv2.bias", "module.downCntx2.bn1.weight", "module.downCntx2.bn1.bias", "module.downCntx2.bn1.running_mean", "module.downCntx2.bn1.running_var", "module.downCntx2.conv3.weight", "module.downCntx2.conv3.bias", "module.downCntx2.bn2.weight", "module.downCntx2.bn2.bias", "module.downCntx2.bn2.running_mean", "module.downCntx2.bn2.running_var", "module.downCntx3.conv1.weight", "module.downCntx3.conv1.bias", "module.downCntx3.conv2.weight", "module.downCntx3.conv2.bias", "module.downCntx3.bn1.weight", "module.downCntx3.bn1.bias", "module.downCntx3.bn1.running_mean", "module.downCntx3.bn1.running_var", "module.downCntx3.conv3.weight", "module.downCntx3.conv3.bias", "module.downCntx3.bn2.weight", "module.downCntx3.bn2.bias", "module.downCntx3.bn2.running_mean", "module.downCntx3.bn2.running_var", "module.resBlock1.conv1.weight", "module.resBlock1.conv1.bias", "module.resBlock1.conv2.weight", "module.resBlock1.conv2.bias", "module.resBlock1.bn1.weight", "module.resBlock1.bn1.bias", "module.resBlock1.bn1.running_mean", "module.resBlock1.bn1.running_var", "module.resBlock1.conv3.weight", "module.resBlock1.conv3.bias", "module.resBlock1.bn2.weight", "module.resBlock1.bn2.bias", "module.resBlock1.bn2.running_mean", "module.resBlock1.bn2.running_var", "module.resBlock1.conv4.weight", "module.resBlock1.conv4.bias", "module.resBlock1.bn3.weight", "module.resBlock1.bn3.bias", "module.resBlock1.bn3.running_mean", "module.resBlock1.bn3.running_var", "module.resBlock1.conv5.weight", "module.resBlock1.conv5.bias", "module.resBlock1.bn4.weight", "module.resBlock1.bn4.bias", "module.resBlock1.bn4.running_mean", "module.resBlock1.bn4.running_var", "module.resBlock2.conv1.weight", "module.resBlock2.conv1.bias", "module.resBlock2.conv2.weight", "module.resBlock2.conv2.bias", "module.resBlock2.bn1.weight", "module.resBlock2.bn1.bias", "module.resBlock2.bn1.running_mean", "module.resBlock2.bn1.running_var", "module.resBlock2.conv3.weight", "module.resBlock2.conv3.bias", "module.resBlock2.bn2.weight", "module.resBlock2.bn2.bias", "module.resBlock2.bn2.running_mean", "module.resBlock2.bn2.running_var", "module.resBlock2.conv4.weight", "module.resBlock2.conv4.bias", "module.resBlock2.bn3.weight", "module.resBlock2.bn3.bias", "module.resBlock2.bn3.running_mean", "module.resBlock2.bn3.running_var", "module.resBlock2.conv5.weight", "module.resBlock2.conv5.bias", "module.resBlock2.bn4.weight", "module.resBlock2.bn4.bias", "module.resBlock2.bn4.running_mean", "module.resBlock2.bn4.running_var", "module.resBlock3.conv1.weight", "module.resBlock3.conv1.bias", "module.resBlock3.conv2.weight", "module.resBlock3.conv2.bias", "module.resBlock3.bn1.weight", "module.resBlock3.bn1.bias", "module.resBlock3.bn1.running_mean", "module.resBlock3.bn1.running_var", "module.resBlock3.conv3.weight", "module.resBlock3.conv3.bias", "module.resBlock3.bn2.weight", "module.resBlock3.bn2.bias", "module.resBlock3.bn2.running_mean", "module.resBlock3.bn2.running_var", 
"module.resBlock3.conv4.weight", "module.resBlock3.conv4.bias", "module.resBlock3.bn3.weight", "module.resBlock3.bn3.bias", "module.resBlock3.bn3.running_mean", "module.resBlock3.bn3.running_var", "module.resBlock3.conv5.weight", "module.resBlock3.conv5.bias", "module.resBlock3.bn4.weight", "module.resBlock3.bn4.bias", "module.resBlock3.bn4.running_mean", "module.resBlock3.bn4.running_var", "module.resBlock4.conv1.weight", "module.resBlock4.conv1.bias", "module.resBlock4.conv2.weight", "module.resBlock4.conv2.bias", "module.resBlock4.bn1.weight", "module.resBlock4.bn1.bias", "module.resBlock4.bn1.running_mean", "module.resBlock4.bn1.running_var", "module.resBlock4.conv3.weight", "module.resBlock4.conv3.bias", "module.resBlock4.bn2.weight", "module.resBlock4.bn2.bias", "module.resBlock4.bn2.running_mean", "module.resBlock4.bn2.running_var", "module.resBlock4.conv4.weight", "module.resBlock4.conv4.bias", "module.resBlock4.bn3.weight", "module.resBlock4.bn3.bias", "module.resBlock4.bn3.running_mean", "module.resBlock4.bn3.running_var", "module.resBlock4.conv5.weight", "module.resBlock4.conv5.bias", "module.resBlock4.bn4.weight", "module.resBlock4.bn4.bias", "module.resBlock4.bn4.running_mean", "module.resBlock4.bn4.running_var", "module.resBlock5.conv1.weight", "module.resBlock5.conv1.bias", "module.resBlock5.conv2.weight", "module.resBlock5.conv2.bias", "module.resBlock5.bn1.weight", "module.resBlock5.bn1.bias", "module.resBlock5.bn1.running_mean", "module.resBlock5.bn1.running_var", "module.resBlock5.conv3.weight", "module.resBlock5.conv3.bias", "module.resBlock5.bn2.weight", "module.resBlock5.bn2.bias", "module.resBlock5.bn2.running_mean", "module.resBlock5.bn2.running_var", "module.resBlock5.conv4.weight", "module.resBlock5.conv4.bias", "module.resBlock5.bn3.weight", "module.resBlock5.bn3.bias", "module.resBlock5.bn3.running_mean", "module.resBlock5.bn3.running_var", "module.resBlock5.conv5.weight", "module.resBlock5.conv5.bias", "module.resBlock5.bn4.weight", "module.resBlock5.bn4.bias", "module.resBlock5.bn4.running_mean", "module.resBlock5.bn4.running_var", "module.upBlock1.conv1.weight", "module.upBlock1.conv1.bias", "module.upBlock1.bn1.weight", "module.upBlock1.bn1.bias", "module.upBlock1.bn1.running_mean", "module.upBlock1.bn1.running_var", "module.upBlock1.conv2.weight", "module.upBlock1.conv2.bias", "module.upBlock1.bn2.weight", "module.upBlock1.bn2.bias", "module.upBlock1.bn2.running_mean", "module.upBlock1.bn2.running_var", "module.upBlock1.conv3.weight", "module.upBlock1.conv3.bias", "module.upBlock1.bn3.weight", "module.upBlock1.bn3.bias", "module.upBlock1.bn3.running_mean", "module.upBlock1.bn3.running_var", "module.upBlock1.conv4.weight", "module.upBlock1.conv4.bias", "module.upBlock1.bn4.weight", "module.upBlock1.bn4.bias", "module.upBlock1.bn4.running_mean", "module.upBlock1.bn4.running_var", "module.upBlock2.conv1.weight", "module.upBlock2.conv1.bias", "module.upBlock2.bn1.weight", "module.upBlock2.bn1.bias", "module.upBlock2.bn1.running_mean", "module.upBlock2.bn1.running_var", "module.upBlock2.conv2.weight", "module.upBlock2.conv2.bias", "module.upBlock2.bn2.weight", "module.upBlock2.bn2.bias", "module.upBlock2.bn2.running_mean", "module.upBlock2.bn2.running_var", "module.upBlock2.conv3.weight", "module.upBlock2.conv3.bias", "module.upBlock2.bn3.weight", "module.upBlock2.bn3.bias", "module.upBlock2.bn3.running_mean", "module.upBlock2.bn3.running_var", "module.upBlock2.conv4.weight", "module.upBlock2.conv4.bias", "module.upBlock2.bn4.weight", "module.upBlock2.bn4.bias", 
"module.upBlock2.bn4.running_mean", "module.upBlock2.bn4.running_var", "module.upBlock3.conv1.weight", "module.upBlock3.conv1.bias", "module.upBlock3.bn1.weight", "module.upBlock3.bn1.bias", "module.upBlock3.bn1.running_mean", "module.upBlock3.bn1.running_var", "module.upBlock3.conv2.weight", "module.upBlock3.conv2.bias", "module.upBlock3.bn2.weight", "module.upBlock3.bn2.bias", "module.upBlock3.bn2.running_mean", "module.upBlock3.bn2.running_var", "module.upBlock3.conv3.weight", "module.upBlock3.conv3.bias", "module.upBlock3.bn3.weight", "module.upBlock3.bn3.bias", "module.upBlock3.bn3.running_mean", "module.upBlock3.bn3.running_var", "module.upBlock3.conv4.weight", "module.upBlock3.conv4.bias", "module.upBlock3.bn4.weight", "module.upBlock3.bn4.bias", "module.upBlock3.bn4.running_mean", "module.upBlock3.bn4.running_var", "module.upBlock4.conv1.weight", "module.upBlock4.conv1.bias", "module.upBlock4.bn1.weight", "module.upBlock4.bn1.bias", "module.upBlock4.bn1.running_mean", "module.upBlock4.bn1.running_var", "module.upBlock4.conv2.weight", "module.upBlock4.conv2.bias", "module.upBlock4.bn2.weight", "module.upBlock4.bn2.bias", "module.upBlock4.bn2.running_mean", "module.upBlock4.bn2.running_var", "module.upBlock4.conv3.weight", "module.upBlock4.conv3.bias", "module.upBlock4.bn3.weight", "module.upBlock4.bn3.bias", "module.upBlock4.bn3.running_mean", "module.upBlock4.bn3.running_var", "module.upBlock4.conv4.weight", "module.upBlock4.conv4.bias", "module.upBlock4.bn4.weight", "module.upBlock4.bn4.bias", "module.upBlock4.bn4.running_mean", "module.upBlock4.bn4.running_var", "module.logits.weight", "module.logits.bias".
Unexpected key(s) in state_dict: "downCntx.conv1.weight", "downCntx.conv1.bias", "downCntx.conv2.weight", "downCntx.conv2.bias", "downCntx.bn1.weight", "downCntx.bn1.bias", "downCntx.bn1.running_mean", "downCntx.bn1.running_var", "downCntx.bn1.num_batches_tracked", "downCntx.conv3.weight", "downCntx.conv3.bias", "downCntx.bn2.weight", "downCntx.bn2.bias", "downCntx.bn2.running_mean", "downCntx.bn2.running_var", "downCntx.bn2.num_batches_tracked", "downCntx2.conv1.weight", "downCntx2.conv1.bias", "downCntx2.conv2.weight", "downCntx2.conv2.bias", "downCntx2.bn1.weight", "downCntx2.bn1.bias", "downCntx2.bn1.running_mean", "downCntx2.bn1.running_var", "downCntx2.bn1.num_batches_tracked", "downCntx2.conv3.weight", "downCntx2.conv3.bias", "downCntx2.bn2.weight", "downCntx2.bn2.bias", "downCntx2.bn2.running_mean", "downCntx2.bn2.running_var", "downCntx2.bn2.num_batches_tracked", "downCntx3.conv1.weight", "downCntx3.conv1.bias", "downCntx3.conv2.weight", "downCntx3.conv2.bias", "downCntx3.bn1.weight", "downCntx3.bn1.bias", "downCntx3.bn1.running_mean", "downCntx3.bn1.running_var", "downCntx3.bn1.num_batches_tracked", "downCntx3.conv3.weight", "downCntx3.conv3.bias", "downCntx3.bn2.weight", "downCntx3.bn2.bias", "downCntx3.bn2.running_mean", "downCntx3.bn2.running_var", "downCntx3.bn2.num_batches_tracked", "resBlock1.conv1.weight", "resBlock1.conv1.bias", "resBlock1.conv2.weight", "resBlock1.conv2.bias", "resBlock1.bn1.weight", "resBlock1.bn1.bias", "resBlock1.bn1.running_mean", "resBlock1.bn1.running_var", "resBlock1.bn1.num_batches_tracked", "resBlock1.conv3.weight", "resBlock1.conv3.bias", "resBlock1.bn2.weight", "resBlock1.bn2.bias", "resBlock1.bn2.running_mean", "resBlock1.bn2.running_var", "resBlock1.bn2.num_batches_tracked", "resBlock1.conv4.weight", "resBlock1.conv4.bias", "resBlock1.bn3.weight", "resBlock1.bn3.bias", "resBlock1.bn3.running_mean", "resBlock1.bn3.running_var", "resBlock1.bn3.num_batches_tracked", "resBlock1.conv5.weight", "resBlock1.conv5.bias", "resBlock1.bn4.weight", "resBlock1.bn4.bias", "resBlock1.bn4.running_mean", "resBlock1.bn4.running_var", "resBlock1.bn4.num_batches_tracked", "resBlock2.conv1.weight", "resBlock2.conv1.bias", "resBlock2.conv2.weight", "resBlock2.conv2.bias", "resBlock2.bn1.weight", "resBlock2.bn1.bias", "resBlock2.bn1.running_mean", "resBlock2.bn1.running_var", "resBlock2.bn1.num_batches_tracked", "resBlock2.conv3.weight", "resBlock2.conv3.bias", "resBlock2.bn2.weight", "resBlock2.bn2.bias", "resBlock2.bn2.running_mean", "resBlock2.bn2.running_var", "resBlock2.bn2.num_batches_tracked", "resBlock2.conv4.weight", "resBlock2.conv4.bias", "resBlock2.bn3.weight", "resBlock2.bn3.bias", "resBlock2.bn3.running_mean", "resBlock2.bn3.running_var", "resBlock2.bn3.num_batches_tracked", "resBlock2.conv5.weight", "resBlock2.conv5.bias", "resBlock2.bn4.weight", "resBlock2.bn4.bias", "resBlock2.bn4.running_mean", "resBlock2.bn4.running_var", "resBlock2.bn4.num_batches_tracked", "resBlock3.conv1.weight", "resBlock3.conv1.bias", "resBlock3.conv2.weight", "resBlock3.conv2.bias", "resBlock3.bn1.weight", "resBlock3.bn1.bias", "resBlock3.bn1.running_mean", "resBlock3.bn1.running_var", "resBlock3.bn1.num_batches_tracked", "resBlock3.conv3.weight", "resBlock3.conv3.bias", "resBlock3.bn2.weight", "resBlock3.bn2.bias", "resBlock3.bn2.running_mean", "resBlock3.bn2.running_var", "resBlock3.bn2.num_batches_tracked", "resBlock3.conv4.weight", "resBlock3.conv4.bias", "resBlock3.bn3.weight", "resBlock3.bn3.bias", "resBlock3.bn3.running_mean", "resBlock3.bn3.running_var", 
"resBlock3.bn3.num_batches_tracked", "resBlock3.conv5.weight", "resBlock3.conv5.bias", "resBlock3.bn4.weight", "resBlock3.bn4.bias", "resBlock3.bn4.running_mean", "resBlock3.bn4.running_var", "resBlock3.bn4.num_batches_tracked", "resBlock4.conv1.weight", "resBlock4.conv1.bias", "resBlock4.conv2.weight", "resBlock4.conv2.bias", "resBlock4.bn1.weight", "resBlock4.bn1.bias", "resBlock4.bn1.running_mean", "resBlock4.bn1.running_var", "resBlock4.bn1.num_batches_tracked", "resBlock4.conv3.weight", "resBlock4.conv3.bias", "resBlock4.bn2.weight", "resBlock4.bn2.bias", "resBlock4.bn2.running_mean", "resBlock4.bn2.running_var", "resBlock4.bn2.num_batches_tracked", "resBlock4.conv4.weight", "resBlock4.conv4.bias", "resBlock4.bn3.weight", "resBlock4.bn3.bias", "resBlock4.bn3.running_mean", "resBlock4.bn3.running_var", "resBlock4.bn3.num_batches_tracked", "resBlock4.conv5.weight", "resBlock4.conv5.bias", "resBlock4.bn4.weight", "resBlock4.bn4.bias", "resBlock4.bn4.running_mean", "resBlock4.bn4.running_var", "resBlock4.bn4.num_batches_tracked", "resBlock5.conv1.weight", "resBlock5.conv1.bias", "resBlock5.conv2.weight", "resBlock5.conv2.bias", "resBlock5.bn1.weight", "resBlock5.bn1.bias", "resBlock5.bn1.running_mean", "resBlock5.bn1.running_var", "resBlock5.bn1.num_batches_tracked", "resBlock5.conv3.weight", "resBlock5.conv3.bias", "resBlock5.bn2.weight", "resBlock5.bn2.bias", "resBlock5.bn2.running_mean", "resBlock5.bn2.running_var", "resBlock5.bn2.num_batches_tracked", "resBlock5.conv4.weight", "resBlock5.conv4.bias", "resBlock5.bn3.weight", "resBlock5.bn3.bias", "resBlock5.bn3.running_mean", "resBlock5.bn3.running_var", "resBlock5.bn3.num_batches_tracked", "resBlock5.conv5.weight", "resBlock5.conv5.bias", "resBlock5.bn4.weight", "resBlock5.bn4.bias", "resBlock5.bn4.running_mean", "resBlock5.bn4.running_var", "resBlock5.bn4.num_batches_tracked", "upBlock1.conv1.weight", "upBlock1.conv1.bias", "upBlock1.bn1.weight", "upBlock1.bn1.bias", "upBlock1.bn1.running_mean", "upBlock1.bn1.running_var", "upBlock1.bn1.num_batches_tracked", "upBlock1.conv2.weight", "upBlock1.conv2.bias", "upBlock1.bn2.weight", "upBlock1.bn2.bias", "upBlock1.bn2.running_mean", "upBlock1.bn2.running_var", "upBlock1.bn2.num_batches_tracked", "upBlock1.conv3.weight", "upBlock1.conv3.bias", "upBlock1.bn3.weight", "upBlock1.bn3.bias", "upBlock1.bn3.running_mean", "upBlock1.bn3.running_var", "upBlock1.bn3.num_batches_tracked", "upBlock1.conv4.weight", "upBlock1.conv4.bias", "upBlock1.bn4.weight", "upBlock1.bn4.bias", "upBlock1.bn4.running_mean", "upBlock1.bn4.running_var", "upBlock1.bn4.num_batches_tracked", "upBlock2.conv1.weight", "upBlock2.conv1.bias", "upBlock2.bn1.weight", "upBlock2.bn1.bias", "upBlock2.bn1.running_mean", "upBlock2.bn1.running_var", "upBlock2.bn1.num_batches_tracked", "upBlock2.conv2.weight", "upBlock2.conv2.bias", "upBlock2.bn2.weight", "upBlock2.bn2.bias", "upBlock2.bn2.running_mean", "upBlock2.bn2.running_var", "upBlock2.bn2.num_batches_tracked", "upBlock2.conv3.weight", "upBlock2.conv3.bias", "upBlock2.bn3.weight", "upBlock2.bn3.bias", "upBlock2.bn3.running_mean", "upBlock2.bn3.running_var", "upBlock2.bn3.num_batches_tracked", "upBlock2.conv4.weight", "upBlock2.conv4.bias", "upBlock2.bn4.weight", "upBlock2.bn4.bias", "upBlock2.bn4.running_mean", "upBlock2.bn4.running_var", "upBlock2.bn4.num_batches_tracked", "upBlock3.conv1.weight", "upBlock3.conv1.bias", "upBlock3.bn1.weight", "upBlock3.bn1.bias", "upBlock3.bn1.running_mean", "upBlock3.bn1.running_var", "upBlock3.bn1.num_batches_tracked", "upBlock3.conv2.weight", 
"upBlock3.conv2.bias", "upBlock3.bn2.weight", "upBlock3.bn2.bias", "upBlock3.bn2.running_mean", "upBlock3.bn2.running_var", "upBlock3.bn2.num_batches_tracked", "upBlock3.conv3.weight", "upBlock3.conv3.bias", "upBlock3.bn3.weight", "upBlock3.bn3.bias", "upBlock3.bn3.running_mean", "upBlock3.bn3.running_var", "upBlock3.bn3.num_batches_tracked", "upBlock3.conv4.weight", "upBlock3.conv4.bias", "upBlock3.bn4.weight", "upBlock3.bn4.bias", "upBlock3.bn4.running_mean", "upBlock3.bn4.running_var", "upBlock3.bn4.num_batches_tracked", "upBlock4.conv1.weight", "upBlock4.conv1.bias", "upBlock4.bn1.weight", "upBlock4.bn1.bias", "upBlock4.bn1.running_mean", "upBlock4.bn1.running_var", "upBlock4.bn1.num_batches_tracked", "upBlock4.conv2.weight", "upBlock4.conv2.bias", "upBlock4.bn2.weight", "upBlock4.bn2.bias", "upBlock4.bn2.running_mean", "upBlock4.bn2.running_var", "upBlock4.bn2.num_batches_tracked", "upBlock4.conv3.weight", "upBlock4.conv3.bias", "upBlock4.bn3.weight", "upBlock4.bn3.bias", "upBlock4.bn3.running_mean", "upBlock4.bn3.running_var", "upBlock4.bn3.num_batches_tracked", "upBlock4.conv4.weight", "upBlock4.conv4.bias", "upBlock4.bn4.weight", "upBlock4.bn4.bias", "upBlock4.bn4.running_mean", "upBlock4.bn4.running_var", "upBlock4.bn4.num_batches_tracked", "logits.weight", "logits.bias".

Label Files while Inferring

Hi,
While trying to run inference on my dataset, the code fails at the start with this error:

  File "../..//tasks/semantic/dataset/kitti/parser.py", line 197, in __init__
    assert(len(scan_files) == len(label_files))

I think there is something basic that I'm missing here.
If the purpose of inference is to create the label files (predictions), why does it try to locate them at the beginning?
Thanks

prediction labels in toy dataset

Hi,

I see there are some segmentation results already present in the toy dataset:
[screenshot attached]

Does the one that says salsanext use one residual image?

Best Regards
Sambit

Question about generating multi-frame residual images

Hello, a question about this work, specifically about gen_residual_images.py:

  # fetch the pose and scan from num_last_n frames before the current one
  last_pose = poses[frame_idx - num_last_n]
  last_scan = load_vertex(scan_paths[frame_idx - num_last_n])
  # transform that scan into the current frame's coordinate system
  last_scan_transformed = np.linalg.inv(current_pose).dot(last_pose).dot(last_scan.T).T
  # project the transformed scan into a range image and keep the range channel
  last_range_transformed = range_projection(last_scan_transformed.astype(np.float32),
                                            range_image_params['height'], range_image_params['width'],
                                            range_image_params['fov_up'], range_image_params['fov_down'],
                                            range_image_params['max_range'], range_image_params['min_range'])[:, :, 3]

For each value of num_last_n, do you only take the single frame that is num_last_n before the current frame, rather than accumulating consecutive frames?
