
lidar-mos's Introduction

LMNet: Moving Object Segmentation in 3D LiDAR Data

This repo contains the code for our paper: Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data (PDF).

Our approach accurately segments the scene into moving and static objects, e.g., distinguishing between moving and parked cars. This task is also called 3D motion detection or motion segmentation. Our method runs faster than the frame rate of the sensor and can be used to improve 3D LiDAR-based odometry/SLAM and mapping results, as shown below.

Additionally, we created a new benchmark for LiDAR-based moving object segmentation based on SemanticKITTI here.

A complete demo video can be found on YouTube here. LiDAR-MOS in action:

Table of Contents

  1. Introduction of the repo and benchmark
  2. Publication
  3. Log
  4. Dependencies
  5. How to use
  6. Applications
  7. Collection of downloads
  8. License

Publication

If you use our code and benchmark in your academic work, please cite the corresponding paper:

@article{chen2021ral,
	title={{Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data}},
	author={X. Chen and S. Li and B. Mersch and L. Wiesmann and J. Gall and J. Behley and C. Stachniss},
	year={2021},
	volume={6},
	number={4},
	pages={6529-6536},
	journal={IEEE Robotics and Automation Letters (RA-L)},
	url = {http://www.ipb.uni-bonn.de/pdfs/chen2021ral-iros.pdf},
	doi = {10.1109/LRA.2021.3093567},
	issn = {2377-3766},
}

Log

News 20220907

The old CodaLab server has stopped its service.

Please use the new link here to submit your results to the benchmark. You can still find the old results here.

News 20220706

Our MotionSeg3D is open-source here.

It uses a dual-branch, dual-head structure to fuse spatio-temporal information for LiDAR moving object segmentation.

News 20220615

Our 4DMOS is open-source here.

It uses a sparse CNN on 4D point clouds for LiDAR moving object segmentation.

v1.1

Thanks to Jiadai Sun for testing and fixing some bugs in SalsaNext-MOS.

More setups can also be found here: #47

v1.0

Open-source version

Dependencies

We built and tested our work based on SalsaNext, RangeNet++ and MINet. We thank the original authors for their nice work and implementation. If you are interested in fast LiDAR-based semantic segmentation, we strongly recommend having a look at the original repositories.

Note that, in this repo, we show how easily LiDAR-based moving object segmentation can be achieved by exploiting sequential information with existing segmentation networks. We did not change the original pipeline of the segmentation networks; we only changed the data loader and the input of the network, as shown in the figure below. Therefore, our method can be used with any range-image-based LiDAR segmentation network.
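
As a minimal sketch of that idea (assumptions: a 5-channel range-image tensor with channels range/x/y/z/remission and pre-computed residual images; the name proj_full mirrors the data-loader variable discussed in the issues below, not the exact repo code), the residual images are simply appended as extra input channels while the network itself stays untouched:

  import torch

  def stack_mos_input(proj_base, residual_images):
      """Append residual images as extra channels to the range-image input.

      proj_base:       tensor [5, H, W] with range, x, y, z, remission
      residual_images: list of tensors, each [H, W]
      returns:         tensor [5 + N, H, W] fed to the unchanged network
      """
      proj_full = proj_base
      for residual in residual_images:
          proj_full = torch.cat([proj_full, residual.unsqueeze(0)], dim=0)
      return proj_full

  # Example: one residual image gives a 6-channel input (hypothetical sizes).
  proj_full = stack_mos_input(torch.zeros(5, 64, 2048), [torch.zeros(64, 2048)])
  print(proj_full.shape)  # torch.Size([6, 64, 2048])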

Our method is based on range images. To use the range projection with a fast C++ library, please find the usage doc here.
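
For illustration only, a simplified spherical range projection in NumPy looks roughly like the following (a sketch assuming KITTI HDL-64-style field-of-view parameters; the repo's own range_projection utility and the linked C++ library are what should actually be used):

  import numpy as np

  def range_projection_simple(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
      """Project a point cloud [N, >=3] into a [H, W] range image (simplified)."""
      fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
      fov = abs(fov_up_rad) + abs(fov_down_rad)

      depth = np.linalg.norm(points[:, :3], axis=1)
      keep = depth > 0
      x, y, z, depth = points[keep, 0], points[keep, 1], points[keep, 2], depth[keep]

      yaw = -np.arctan2(y, x)        # horizontal angle
      pitch = np.arcsin(z / depth)   # vertical angle

      # map angles to pixel coordinates
      proj_x = np.clip(np.floor(0.5 * (yaw / np.pi + 1.0) * W), 0, W - 1).astype(np.int32)
      proj_y = np.clip(np.floor((1.0 - (pitch + abs(fov_down_rad)) / fov) * H), 0, H - 1).astype(np.int32)

      # write farthest points first so closer points overwrite them
      order = np.argsort(depth)[::-1]
      range_image = np.full((H, W), -1.0, dtype=np.float32)
      range_image[proj_y[order], proj_x[order]] = depth[order]
      return range_image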

How to use

For a quick test of all the steps below, one can download a toy dataset here and decompress it into the data/ folder following the data structure described in data/README.md.

Prepare training data

To use our method, one needs to generate the residual images. Here is a quick demo:

  $ python3 utils/gen_residual_images.py

More settings for the data preparation can be found in the yaml file config/data_preparing.yaml. To prepare the training data for the whole KITTI-Odometry dataset, please download it from the original website.
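
Conceptually, each residual image is obtained by transforming a past scan into the current scan's coordinate frame using the estimated poses, projecting both scans into range images, and taking the normalized absolute range difference at pixels valid in both. Below is a hedged sketch of that last step (the actual logic lives in utils/gen_residual_images.py and is configured via config/data_preparing.yaml; the two range images could come from a projection such as the one sketched above):

  import numpy as np

  def residual_image(curr_range, last_range_transformed, eps=1e-6):
      """Normalized range difference between the current range image and the
      range image of a past scan re-projected into the current frame.

      curr_range, last_range_transformed: [H, W] arrays, -1 where no point fell
      returns: [H, W] residual image, 0 at invalid pixels
      """
      residual = np.zeros_like(curr_range)
      valid = (curr_range > 0) & (last_range_transformed > 0)
      residual[valid] = np.abs(curr_range[valid] - last_range_transformed[valid]) / (curr_range[valid] + eps)
      return residual

  # The past scan is first moved into the current frame (poses are 4x4 matrices
  # in a common world frame, the scan is homogeneous [N, 4]):
  #   last_scan_in_curr = (np.linalg.inv(curr_pose) @ last_pose @ last_scan.T).T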

Using SalsaNext as the baseline

To use SalsaNext as the baseline segmentation network for LiDAR-MOS, one should follow the mos_SalsaNext/README.md to set it up.

Note that we use PyTorch v1.5.1+cu101, which is different from the original setup. More information about the related issue can be found here.

Inferring

To generate the LiDAR-MOS predictions with the pretrained model using one residual image (download; please unzip before use), run a quick test on the toy dataset:

  $ cd mos_SalsaNext/train/tasks/semantic
  $ python3 infer.py -d ../../../../data -m ../../../../data/model_salsanext_residual_1 -l ../../../../data/predictions_salsanext_residual_1_new -s valid

To run inference on the whole dataset, please download the KITTI-Odometry dataset from the original website and change the corresponding paths.

  $ cd mos_SalsaNext/train/tasks/semantic
  $ python3 infer.py -d path/to/kitti/dataset -m path/to/pretrained_model -l path/to/log -s train/valid/test # depending on the desired split to evaluate

Training

To train a LiDAR-MOS network with SalsaNext from scratch, one has to download the KITTI-Odometry dataset and the SemanticKITTI dataset, change the corresponding paths, and run:

  $ cd mos_SalsaNext/train/tasks/semantic
  $ ./train.sh -d path/to/kitti/dataset -a salsanext_mos.yml -l path/to/log -c 0  # the number of used gpu cores

Using RangeNet++ as the baseline

To use RangeNet++ as the baseline segmentation network for LiDAR-MOS, one should follow the mos_RangeNet/README.md to set it up.

Inferring

To run inference on the whole dataset, please download the KITTI-Odometry dataset from the original website and the pretrained model, and change the corresponding paths.

  $ cd mos_RangeNet/tasks/semantic
  $ python3 infer.py -d path/to/kitti/dataset -m path/to/pretrained_model -l path/to/log -s train/valid/test # depending on the desired split to evaluate

Training

To train a LiDAR-MOS network with RangeNet++ from scratch, one has to download the KITTI-Odometry dataset and the SemanticKITTI dataset, change the corresponding paths, and run:

  $ cd mos_RangeNet/tasks/semantic
  $ python3 train.py -d path/to/kitti/dataset -ac rangenet_mos.yaml -l path/to/log

More pretrained models and LiDAR-MOS predictions can be found in the collection of downloads.

Evaluation and visualization

How to evaluate

Evaluation metrics. Let's denote the moving (dynamic) status as D and the static status as S.

Since we ignore the unlabelled and invalid status, there are only two classes in MOS.

GT \ Prediction   dynamic   static
dynamic           TD        FS
static            FD        TS
  • $$ IoU_{MOS} = \frac{TD}{TD+FD+FS} $$
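
A minimal sketch of this metric on per-point labels (assuming moving points are encoded as 1 and static points as 0; for benchmark numbers, use the official utils/evaluate_mos.py and the semantic-kitti-api):

  import numpy as np

  def iou_mos(gt, pred):
      """IoU of the moving class: TD / (TD + FD + FS)."""
      gt_moving, pred_moving = (gt == 1), (pred == 1)
      td = np.sum(gt_moving & pred_moving)     # true dynamic
      fd = np.sum(~gt_moving & pred_moving)    # false dynamic
      fs = np.sum(gt_moving & ~pred_moving)    # false static
      return td / max(td + fd + fs, 1)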

To evaluate the MOS results on the toy dataset just run:

  $ python3 utils/evaluate_mos.py -d data -p data/predictions_salsanext_residual_1_valid -s valid

To evaluate the MOS results on our LiDAR-MOS benchmark please have a look at our semantic-kitti-api and benchmark website.

How to visualize the predictions

To visualize the MOS results on the toy dataset just run:

  $ python3 utils/visualize_mos.py -d data -p data/predictions_salsanext_residual_1_valid -s 8  # here we use a specific sequence number

where:

  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.

Navigation:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.

Applications

LiDAR-MOS is very important for building consistent maps, making future state predictions, avoiding collisions, and planning. It can also improve and robustify pose estimation, sensor data registration, and SLAM. Here we show two direct applications of LiDAR-MOS: LiDAR-based odometry/SLAM and 3D mapping. Before that, we show two simple examples of how to combine our method with semantics and how to clean the scans. After cleaning the scans, we get better odometry/SLAM and 3D mapping results.

Note that here we show two direct use cases of our MOS approach without any further optimization.

Enhanced with semantics

To show a simple way of combining our LiDAR-MOS with semantics, we provide a quick demo with the toy dataset:

  $ python3 utils/combine_semantics.py

It simply checks whether the predicted moving objects belong to movable classes; if not, they are re-assigned as static.
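
A hedged sketch of that check (assuming per-point MOS predictions with moving = 1, per-point semantic class IDs, and a set of movable class IDs; the real script is utils/combine_semantics.py and the actual class IDs come from the SemanticKITTI label config):

  import numpy as np

  def combine_semantics(mos_pred, semantic_pred, movable_classes):
      """Keep 'moving' only where the semantic class is movable, otherwise static.

      mos_pred:        [N] array, 1 = moving, 0 = static
      semantic_pred:   [N] array of semantic class IDs
      movable_classes: iterable of class IDs considered movable (e.g. car, person)
      """
      movable = np.isin(semantic_pred, list(movable_classes))
      return np.where((mos_pred == 1) & movable, 1, 0)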

Clean the scans

To clean the LiDAR scans with our LiDAR-MOS as masks, we also provide a quick demo on the toy dataset:

  $ python3 utils/scan_cleaner.py
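
Conceptually, cleaning a scan just means dropping the points predicted as moving. Below is a minimal sketch under the assumption of a KITTI-style [N, 4] scan and a matching per-point prediction array (the provided utils/scan_cleaner.py handles the actual SemanticKITTI label format; load_mos_labels is a hypothetical helper):

  import numpy as np

  def clean_scan(scan, mos_pred):
      """Remove points predicted as moving (label 1) from an [N, 4] scan."""
      return scan[mos_pred == 0]

  # Usage sketch with hypothetical paths:
  # scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
  # pred = load_mos_labels("000000.label")   # hypothetical helper returning [N]
  # clean_scan(scan, pred).tofile("000000_clean.bin")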

Odometry/SLAM

Using the cleaned LiDAR scans, we see that simply applying our MOS predictions as a preprocessing mask improves the odometry results on both the KITTI training and test data, and is even slightly better than the carefully designed, full-class semantic-enhanced SuMa++.

The test results of our method can also be found on the KITTI-Odometry benchmark.

Mapping

We compare the aggregated point cloud maps built (left) directly from the raw LiDAR scans and (right) from the LiDAR scans cleaned by applying our MOS predictions as masks. As can be seen, moving objects pollute the raw map, which might have adverse effects when the map is used for localization or path planning. By using our MOS predictions as masks, we can effectively remove these artifacts and obtain a clean map.

Map cleaning

For offline map cleaning, Giseop Kim combined his Removert with LiDAR-MOS and got very good results. More information can be found in #28.

Collection of downloads

License

This project is free software made available under the MIT License. For details see the LICENSE file.

lidar-mos's People

Contributors

chen-xieyuanli, dependabot[bot], joelosw, l-reichardt, maxchanger


lidar-mos's Issues

Training setups (tested with different GPUs)

Dear author,

Thanks for sharing the code.

I'm trying to reproduce the metrics from the paper, but haven't been successful yet.
I would like to ask about the training parameters and hardware used for the experiments.
Regarding metrics such as IoU in the paper, do you mean mIoU or just the IoU of the moving class?

Thanks!

Question about residual images

Hello, where are the residual images computed? In the code, I only see them being loaded from files. Or can we skip them and use the LiDAR remissions instead?

Issue with trying to train on multiple gpus

Hello!
I was trying to train on multiple GPUs and was facing an issue. When I ran the script, I got this error message:

RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([4]) and output[0] has a shape of torch.Size([1]).

This happens in the trainer.py script on line 389:
loss_m.backward(idx)

I was trying to train on 4 GPUs, so I assumed that is why grad_output[0] has a shape of torch.Size([4]), but the loss_m output from line 384 has a shape of torch.Size([1]):
loss_m = criterion(torch.log(output.clamp(min=1e-8)), proj_labels) + self.ls(output, proj_labels.long())

Do you have an idea of what might be causing the issue? Thank you in advance for your help!

Dimensions of multi-residual images, thanks!

Dear author,

If n_input_scans = 2, are the dimensions of proj_full 12 (x, y, z, r, e, x, y, z, r, e, residual_1, residual_2)?

Is that right?

I'm sorry to bother you; this really confuses me.

Thanks.

Migrating the model to Livox Horizon

Thanks for your excellent work. I downloaded the toy dataset you provide and ran inference on it, and the performance is good. Now I want to use the model with my Livox Horizon, which has an 80x25 degree FoV and a point density similar to a 64-line LiDAR. When I use your tools to generate the range image and residual image, I just change the pose.txt and calib.txt to my own, and everything works: I can generate correct range and residual images. But when I try to run inference on my data using the model, I get an error in the range image generation function in SalsaNext. Are there any possible reasons?

Tweaking the model for partial azimuth FOV Lidar

Hi,
My LiDAR's azimuth FOV is only ~100 degrees.
What would be the best way to tweak the model or some configuration so it will work?
Currently the range images (and also the residual images) are very sparse on the right and left sides, and
I think that is one of the reasons for the bad performance I get.
Thanks

Question about testing on my own dataset

Hello, can I use your pretrained model to test on my own dataset? Do I need to provide manually annotated labels and train on my own dataset?

How can I train 'SalsaNext' successfully? (A problem occurred while training 'SalsaNext')

Hi, thanks for sharing your great code.
I'm trying to go through the whole process of your work, but I can't train SalsaNext.

I tried:

./train.sh -d ../../../../dataset/KITTI_dataset/velodyne_laser/dataset/ -a salsanext_mos.yml -l logs/ -c 0

The training process got to:

Lr: 5.944e-03 | Update: 2.381e-04 mean,3.611e-04 std | Epoch: [0][11370/19130] | Time 0.203 (0.204) | Data 0.030 (0.031) | Loss 0.3839 (0.2800) | acc 0.962 (0.980) | IoU 0.685 (0.517) | [7 days, 18:10:40]

and the error messages I got:

proj_full = torch.cat([proj_full, torch.unsqueeze(eval("proj_residuals_" + str(i+1)), 0)])
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2048 and 2058 in dimension 2 at ../aten/src/TH/generic/THTensor.cpp:711

Is the conda environment setup wrong? I'm currently using:

tensorboard               1.13.1                   pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.8.0                    pypi_0    pypi
tensorflow                1.13.1                   pypi_0    pypi
tensorflow-estimator      1.13.0                   pypi_0    pypi

Also, could you check the links in the collection of downloads again? They seem to be inaccessible now.
Thank you!

error when evaluate iou

Hello, I tried to evaluate the IoU after running inference. For this example, I tried sequence 08 and got this error:

evaluating label  /mnt/dataset/sequences/08/labels/000000.label with /mnt/result/infer_0108/sequences/08/predictions/000000.label
Traceback (most recent call last):
  File "evaluate_iou.py", line 176, in <module>
    label.open_scan(scan_file)
TypeError: open_scan() missing 2 required positional arguments: 'from_pose' and 'to_pose'

Can you help me? Thank you so much.

Question about usage with SLAM

Hi! Thanks for your great work!

I've just read your paper and have some questions about it.
The paper directly uses the pose estimated by the SLAM algorithm and transforms the point cloud sequence into the current pose before calculating the residual image, right? So the pose needs to be accurate, or the residual image will be wrong. Am I right?
If so, when a moving object causes large drift in the odometry, your algorithm might not improve the odometry accuracy, since the moving object could not be identified accurately.

Thanks in advance for your reply!

How to use SalsaNet with my own dataset?

Hi, I have read the paper and built and run your LiDAR-MOS.

Thanks for sharing your awesome projects here.


I have a question.

How can I use the code with my own dataset?

I'll use the pretrained model you uploaded, so I think all I need to do is convert my data into the appropriate format for LiDAR-MOS.

The data I have consists of .bag files and .pcd files.

I'd appreciate any advice you could give me.

Best regards.

Inferring the test dataset using the SalsaNext pretrained model

Hi, thanks for your great work!

I'm getting errors when trying to run inference on the test dataset with the SalsaNext pretrained model. It runs fine on the train and valid splits, but when I run it on the test dataset, I only get the prediction results for sequence 11, and then I get the following error:

Traceback (most recent call last):
  File "infer.py", line 145, in <module>
    user.infer()
  File "../../tasks/semantic/modules/user.py", line 113, in infer
    to_orig_fn=self.parser.to_original, cnn=cnn, knn=knn)
  File "../../tasks/semantic/modules/user.py", line 134, in infer_subset
    for i, (proj_in, proj_mask, _, _, path_seq, path_name, p_x, p_y, proj_range, unproj_range, _, _, _, _, npoints) in enumerate(loader):
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 1.
Original Traceback (most recent call last):
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ws/anaconda3/envs/py37/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "../..//tasks/semantic/dataset/kitti/parser.py", line 244, in __getitem__
    exec("residual_file_" + str(i+1) + " = " + "self.residual_files_" + str(i+1) + "[seq][index]")
  File "<string>", line 1, in <module>
IndexError: list index out of range

labels

Can we predict the moving objects without labels?


About FlowNet3D in the paper

Hi, thanks for your great work!
I have a question about FlowNet3D as used in your paper. FlowNet3D estimates the motion of a 3D point cloud, which is generated by both object motion and **sensor motion**, so why can you identify dynamic objects with a threshold? As you describe in the paper: "We set a threshold on the estimated translation of each point to decide the label for each point, i.e., points with translations larger than the threshold are labeled as moving".
In my opinion, a large scene-flow translation doesn't necessarily mean a dynamic object; maybe the LiDAR sensor is just moving. Do you first transform the two frames into the same coordinate frame and then estimate the scene flow?

Labels Files while Inferring

Hi,
While trying to run inference on my dataset, the code fails at the start with this error:
"File "../..//tasks/semantic/dataset/kitti/parser.py", line 197, in init
assert(len(scan_files) == len(label_files))"

I think there is something basic that I'm missing here.
If the purpose of the inference is to create the label files (predictions), then why does it try to locate them at the beginning?
Thanks

Question about loading the pretrained salsanext model

Hi!

Thanks so much for the code! I have a question about loading the pretrained SalsaNext model. When I followed the steps outlined in the "How to use" section and tested on the toy example, I ran into an issue when trying to run infer.py on the toy dataset (python3 infer.py -d ../../../../data -m ../../../../data/model_salsanext_residual_1 -l ../../../../data/predictions_salsanext_residual_1_new -s valid) and got this error:

RuntimeError: ../../../../data/model_salsanext_residual_1/SalsaNext_valid_best is a zip archive (did you mean to use torch.jit.load()?)

I tried switching from torch.load() to torch.jit.load() in user.py as suggested, but that leads to other errors. What did I do wrong, or did I miss something along the way? I set up the environment according to the instructions linked on GitHub (using PyTorch 1.1).

Thank you in advance for your help!

Question about multi-frame residuals (Question about generating residual images)

Hello, regarding this work, in gen_residual_images.py:

  # pose and scan of the frame that is num_last_n scans before the current one
  last_pose = poses[frame_idx - num_last_n]
  last_scan = load_vertex(scan_paths[frame_idx - num_last_n])
  # transform that scan into the current scan's coordinate frame
  last_scan_transformed = np.linalg.inv(current_pose).dot(last_pose).dot(last_scan.T).T
  # project the transformed scan into a range image and keep only the range channel
  last_range_transformed = range_projection(last_scan_transformed.astype(np.float32),
                                            range_image_params['height'], range_image_params['width'],
                                            range_image_params['fov_up'], range_image_params['fov_down'],
                                            range_image_params['max_range'], range_image_params['min_range'])[:, :, 3]

For different values of num_last_n, is only the single frame that is num_last_n scans before the current frame used, rather than accumulating the consecutive frames in between?

Understanding the Labels Visualization

Hi,
I'm trying to understand the output of the "visualize_mos.py".
I get the following image:
[image]

  1. Are the pixels shown in red in the bottom figure considered dynamic?
  2. What does the top figure represent? Is it the residual image for the current frame? (I'm working with 1 residual image.)

Thanks!

Error occurs during training with salsanext_mos without a pretrained model

Thanks for the author's remarkable work!

When I start to train this network without a pretrained model, the error shown below occurs.
Can someone help me? Thanks a lot!

/home/lijianguo/anaconda3/envs/LiDAR-MOS-1.1/bin/python /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/train/tasks/semantic/train.py -d /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data -ac /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml -l /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs
Opening arch config file /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml
Opening data config file config/labels/semantic-kitti-mos.yaml

INTERFACE:
dataset /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data
arch_cfg /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/salsanext_mos.yml
data_cfg config/labels/semantic-kitti-mos.yaml
uncertainty True
Total of Trainable Parameters: 6.71M
log /media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs/logs/2022-4-04-22:06
pretrained None

Traceback (most recent call last):
File "/media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/mos_SalsaNext/train/tasks/semantic/train.py", line 136, in
os.makedirs(FLAGS.log)
File "/home/lijianguo/anaconda3/envs/LiDAR-MOS-1.1/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 22] Invalid argument: '/media/lijianguo/data_ssd/coding_test_platfrom/LiDAR-MOS-1.1/data/train_logs/logs/2022-4-04-22:06'

May I ask how to fix this error? (It occurs when using my own trained model for inference.)

Traceback (most recent call last):
File "infer.py", line 144, in
user = User(ARCH, DATA, FLAGS.dataset, FLAGS.log, FLAGS.model,FLAGS.split,FLAGS.uncertainty,FLAGS.monte_carlo)
File "../../tasks/semantic/modules/user.py", line 70, in init
self.model.load_state_dict(w_dict['state_dict'], strict=True)
File "/root/miniconda3/envs/salsanext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1605, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.downCntx.conv1.weight", "module.downCntx.conv1.bias", "module.downCntx.conv2.weight", "module.downCntx.conv2.bias", "module.downCntx.bn1.weight", "module.downCntx.bn1.bias", "module.downCntx.bn1.running_mean", "module.downCntx.bn1.running_var", "module.downCntx.conv3.weight", "module.downCntx.conv3.bias", "module.downCntx.bn2.weight", "module.downCntx.bn2.bias", "module.downCntx.bn2.running_mean", "module.downCntx.bn2.running_var", "module.downCntx2.conv1.weight", "module.downCntx2.conv1.bias", "module.downCntx2.conv2.weight", "module.downCntx2.conv2.bias", "module.downCntx2.bn1.weight", "module.downCntx2.bn1.bias", "module.downCntx2.bn1.running_mean", "module.downCntx2.bn1.running_var", "module.downCntx2.conv3.weight", "module.downCntx2.conv3.bias", "module.downCntx2.bn2.weight", "module.downCntx2.bn2.bias", "module.downCntx2.bn2.running_mean", "module.downCntx2.bn2.running_var", "module.downCntx3.conv1.weight", "module.downCntx3.conv1.bias", "module.downCntx3.conv2.weight", "module.downCntx3.conv2.bias", "module.downCntx3.bn1.weight", "module.downCntx3.bn1.bias", "module.downCntx3.bn1.running_mean", "module.downCntx3.bn1.running_var", "module.downCntx3.conv3.weight", "module.downCntx3.conv3.bias", "module.downCntx3.bn2.weight", "module.downCntx3.bn2.bias", "module.downCntx3.bn2.running_mean", "module.downCntx3.bn2.running_var", "module.resBlock1.conv1.weight", "module.resBlock1.conv1.bias", "module.resBlock1.conv2.weight", "module.resBlock1.conv2.bias", "module.resBlock1.bn1.weight", "module.resBlock1.bn1.bias", "module.resBlock1.bn1.running_mean", "module.resBlock1.bn1.running_var", "module.resBlock1.conv3.weight", "module.resBlock1.conv3.bias", "module.resBlock1.bn2.weight", "module.resBlock1.bn2.bias", "module.resBlock1.bn2.running_mean", "module.resBlock1.bn2.running_var", "module.resBlock1.conv4.weight", "module.resBlock1.conv4.bias", "module.resBlock1.bn3.weight", "module.resBlock1.bn3.bias", "module.resBlock1.bn3.running_mean", "module.resBlock1.bn3.running_var", "module.resBlock1.conv5.weight", "module.resBlock1.conv5.bias", "module.resBlock1.bn4.weight", "module.resBlock1.bn4.bias", "module.resBlock1.bn4.running_mean", "module.resBlock1.bn4.running_var", "module.resBlock2.conv1.weight", "module.resBlock2.conv1.bias", "module.resBlock2.conv2.weight", "module.resBlock2.conv2.bias", "module.resBlock2.bn1.weight", "module.resBlock2.bn1.bias", "module.resBlock2.bn1.running_mean", "module.resBlock2.bn1.running_var", "module.resBlock2.conv3.weight", "module.resBlock2.conv3.bias", "module.resBlock2.bn2.weight", "module.resBlock2.bn2.bias", "module.resBlock2.bn2.running_mean", "module.resBlock2.bn2.running_var", "module.resBlock2.conv4.weight", "module.resBlock2.conv4.bias", "module.resBlock2.bn3.weight", "module.resBlock2.bn3.bias", "module.resBlock2.bn3.running_mean", "module.resBlock2.bn3.running_var", "module.resBlock2.conv5.weight", "module.resBlock2.conv5.bias", "module.resBlock2.bn4.weight", "module.resBlock2.bn4.bias", "module.resBlock2.bn4.running_mean", "module.resBlock2.bn4.running_var", "module.resBlock3.conv1.weight", "module.resBlock3.conv1.bias", "module.resBlock3.conv2.weight", "module.resBlock3.conv2.bias", "module.resBlock3.bn1.weight", "module.resBlock3.bn1.bias", "module.resBlock3.bn1.running_mean", "module.resBlock3.bn1.running_var", "module.resBlock3.conv3.weight", "module.resBlock3.conv3.bias", "module.resBlock3.bn2.weight", "module.resBlock3.bn2.bias", "module.resBlock3.bn2.running_mean", "module.resBlock3.bn2.running_var", 
"module.resBlock3.conv4.weight", "module.resBlock3.conv4.bias", "module.resBlock3.bn3.weight", "module.resBlock3.bn3.bias", "module.resBlock3.bn3.running_mean", "module.resBlock3.bn3.running_var", "module.resBlock3.conv5.weight", "module.resBlock3.conv5.bias", "module.resBlock3.bn4.weight", "module.resBlock3.bn4.bias", "module.resBlock3.bn4.running_mean", "module.resBlock3.bn4.running_var", "module.resBlock4.conv1.weight", "module.resBlock4.conv1.bias", "module.resBlock4.conv2.weight", "module.resBlock4.conv2.bias", "module.resBlock4.bn1.weight", "module.resBlock4.bn1.bias", "module.resBlock4.bn1.running_mean", "module.resBlock4.bn1.running_var", "module.resBlock4.conv3.weight", "module.resBlock4.conv3.bias", "module.resBlock4.bn2.weight", "module.resBlock4.bn2.bias", "module.resBlock4.bn2.running_mean", "module.resBlock4.bn2.running_var", "module.resBlock4.conv4.weight", "module.resBlock4.conv4.bias", "module.resBlock4.bn3.weight", "module.resBlock4.bn3.bias", "module.resBlock4.bn3.running_mean", "module.resBlock4.bn3.running_var", "module.resBlock4.conv5.weight", "module.resBlock4.conv5.bias", "module.resBlock4.bn4.weight", "module.resBlock4.bn4.bias", "module.resBlock4.bn4.running_mean", "module.resBlock4.bn4.running_var", "module.resBlock5.conv1.weight", "module.resBlock5.conv1.bias", "module.resBlock5.conv2.weight", "module.resBlock5.conv2.bias", "module.resBlock5.bn1.weight", "module.resBlock5.bn1.bias", "module.resBlock5.bn1.running_mean", "module.resBlock5.bn1.running_var", "module.resBlock5.conv3.weight", "module.resBlock5.conv3.bias", "module.resBlock5.bn2.weight", "module.resBlock5.bn2.bias", "module.resBlock5.bn2.running_mean", "module.resBlock5.bn2.running_var", "module.resBlock5.conv4.weight", "module.resBlock5.conv4.bias", "module.resBlock5.bn3.weight", "module.resBlock5.bn3.bias", "module.resBlock5.bn3.running_mean", "module.resBlock5.bn3.running_var", "module.resBlock5.conv5.weight", "module.resBlock5.conv5.bias", "module.resBlock5.bn4.weight", "module.resBlock5.bn4.bias", "module.resBlock5.bn4.running_mean", "module.resBlock5.bn4.running_var", "module.upBlock1.conv1.weight", "module.upBlock1.conv1.bias", "module.upBlock1.bn1.weight", "module.upBlock1.bn1.bias", "module.upBlock1.bn1.running_mean", "module.upBlock1.bn1.running_var", "module.upBlock1.conv2.weight", "module.upBlock1.conv2.bias", "module.upBlock1.bn2.weight", "module.upBlock1.bn2.bias", "module.upBlock1.bn2.running_mean", "module.upBlock1.bn2.running_var", "module.upBlock1.conv3.weight", "module.upBlock1.conv3.bias", "module.upBlock1.bn3.weight", "module.upBlock1.bn3.bias", "module.upBlock1.bn3.running_mean", "module.upBlock1.bn3.running_var", "module.upBlock1.conv4.weight", "module.upBlock1.conv4.bias", "module.upBlock1.bn4.weight", "module.upBlock1.bn4.bias", "module.upBlock1.bn4.running_mean", "module.upBlock1.bn4.running_var", "module.upBlock2.conv1.weight", "module.upBlock2.conv1.bias", "module.upBlock2.bn1.weight", "module.upBlock2.bn1.bias", "module.upBlock2.bn1.running_mean", "module.upBlock2.bn1.running_var", "module.upBlock2.conv2.weight", "module.upBlock2.conv2.bias", "module.upBlock2.bn2.weight", "module.upBlock2.bn2.bias", "module.upBlock2.bn2.running_mean", "module.upBlock2.bn2.running_var", "module.upBlock2.conv3.weight", "module.upBlock2.conv3.bias", "module.upBlock2.bn3.weight", "module.upBlock2.bn3.bias", "module.upBlock2.bn3.running_mean", "module.upBlock2.bn3.running_var", "module.upBlock2.conv4.weight", "module.upBlock2.conv4.bias", "module.upBlock2.bn4.weight", "module.upBlock2.bn4.bias", 
"module.upBlock2.bn4.running_mean", "module.upBlock2.bn4.running_var", "module.upBlock3.conv1.weight", "module.upBlock3.conv1.bias", "module.upBlock3.bn1.weight", "module.upBlock3.bn1.bias", "module.upBlock3.bn1.running_mean", "module.upBlock3.bn1.running_var", "module.upBlock3.conv2.weight", "module.upBlock3.conv2.bias", "module.upBlock3.bn2.weight", "module.upBlock3.bn2.bias", "module.upBlock3.bn2.running_mean", "module.upBlock3.bn2.running_var", "module.upBlock3.conv3.weight", "module.upBlock3.conv3.bias", "module.upBlock3.bn3.weight", "module.upBlock3.bn3.bias", "module.upBlock3.bn3.running_mean", "module.upBlock3.bn3.running_var", "module.upBlock3.conv4.weight", "module.upBlock3.conv4.bias", "module.upBlock3.bn4.weight", "module.upBlock3.bn4.bias", "module.upBlock3.bn4.running_mean", "module.upBlock3.bn4.running_var", "module.upBlock4.conv1.weight", "module.upBlock4.conv1.bias", "module.upBlock4.bn1.weight", "module.upBlock4.bn1.bias", "module.upBlock4.bn1.running_mean", "module.upBlock4.bn1.running_var", "module.upBlock4.conv2.weight", "module.upBlock4.conv2.bias", "module.upBlock4.bn2.weight", "module.upBlock4.bn2.bias", "module.upBlock4.bn2.running_mean", "module.upBlock4.bn2.running_var", "module.upBlock4.conv3.weight", "module.upBlock4.conv3.bias", "module.upBlock4.bn3.weight", "module.upBlock4.bn3.bias", "module.upBlock4.bn3.running_mean", "module.upBlock4.bn3.running_var", "module.upBlock4.conv4.weight", "module.upBlock4.conv4.bias", "module.upBlock4.bn4.weight", "module.upBlock4.bn4.bias", "module.upBlock4.bn4.running_mean", "module.upBlock4.bn4.running_var", "module.logits.weight", "module.logits.bias".
Unexpected key(s) in state_dict: "downCntx.conv1.weight", "downCntx.conv1.bias", "downCntx.conv2.weight", "downCntx.conv2.bias", "downCntx.bn1.weight", "downCntx.bn1.bias", "downCntx.bn1.running_mean", "downCntx.bn1.running_var", "downCntx.bn1.num_batches_tracked", "downCntx.conv3.weight", "downCntx.conv3.bias", "downCntx.bn2.weight", "downCntx.bn2.bias", "downCntx.bn2.running_mean", "downCntx.bn2.running_var", "downCntx.bn2.num_batches_tracked", "downCntx2.conv1.weight", "downCntx2.conv1.bias", "downCntx2.conv2.weight", "downCntx2.conv2.bias", "downCntx2.bn1.weight", "downCntx2.bn1.bias", "downCntx2.bn1.running_mean", "downCntx2.bn1.running_var", "downCntx2.bn1.num_batches_tracked", "downCntx2.conv3.weight", "downCntx2.conv3.bias", "downCntx2.bn2.weight", "downCntx2.bn2.bias", "downCntx2.bn2.running_mean", "downCntx2.bn2.running_var", "downCntx2.bn2.num_batches_tracked", "downCntx3.conv1.weight", "downCntx3.conv1.bias", "downCntx3.conv2.weight", "downCntx3.conv2.bias", "downCntx3.bn1.weight", "downCntx3.bn1.bias", "downCntx3.bn1.running_mean", "downCntx3.bn1.running_var", "downCntx3.bn1.num_batches_tracked", "downCntx3.conv3.weight", "downCntx3.conv3.bias", "downCntx3.bn2.weight", "downCntx3.bn2.bias", "downCntx3.bn2.running_mean", "downCntx3.bn2.running_var", "downCntx3.bn2.num_batches_tracked", "resBlock1.conv1.weight", "resBlock1.conv1.bias", "resBlock1.conv2.weight", "resBlock1.conv2.bias", "resBlock1.bn1.weight", "resBlock1.bn1.bias", "resBlock1.bn1.running_mean", "resBlock1.bn1.running_var", "resBlock1.bn1.num_batches_tracked", "resBlock1.conv3.weight", "resBlock1.conv3.bias", "resBlock1.bn2.weight", "resBlock1.bn2.bias", "resBlock1.bn2.running_mean", "resBlock1.bn2.running_var", "resBlock1.bn2.num_batches_tracked", "resBlock1.conv4.weight", "resBlock1.conv4.bias", "resBlock1.bn3.weight", "resBlock1.bn3.bias", "resBlock1.bn3.running_mean", "resBlock1.bn3.running_var", "resBlock1.bn3.num_batches_tracked", "resBlock1.conv5.weight", "resBlock1.conv5.bias", "resBlock1.bn4.weight", "resBlock1.bn4.bias", "resBlock1.bn4.running_mean", "resBlock1.bn4.running_var", "resBlock1.bn4.num_batches_tracked", "resBlock2.conv1.weight", "resBlock2.conv1.bias", "resBlock2.conv2.weight", "resBlock2.conv2.bias", "resBlock2.bn1.weight", "resBlock2.bn1.bias", "resBlock2.bn1.running_mean", "resBlock2.bn1.running_var", "resBlock2.bn1.num_batches_tracked", "resBlock2.conv3.weight", "resBlock2.conv3.bias", "resBlock2.bn2.weight", "resBlock2.bn2.bias", "resBlock2.bn2.running_mean", "resBlock2.bn2.running_var", "resBlock2.bn2.num_batches_tracked", "resBlock2.conv4.weight", "resBlock2.conv4.bias", "resBlock2.bn3.weight", "resBlock2.bn3.bias", "resBlock2.bn3.running_mean", "resBlock2.bn3.running_var", "resBlock2.bn3.num_batches_tracked", "resBlock2.conv5.weight", "resBlock2.conv5.bias", "resBlock2.bn4.weight", "resBlock2.bn4.bias", "resBlock2.bn4.running_mean", "resBlock2.bn4.running_var", "resBlock2.bn4.num_batches_tracked", "resBlock3.conv1.weight", "resBlock3.conv1.bias", "resBlock3.conv2.weight", "resBlock3.conv2.bias", "resBlock3.bn1.weight", "resBlock3.bn1.bias", "resBlock3.bn1.running_mean", "resBlock3.bn1.running_var", "resBlock3.bn1.num_batches_tracked", "resBlock3.conv3.weight", "resBlock3.conv3.bias", "resBlock3.bn2.weight", "resBlock3.bn2.bias", "resBlock3.bn2.running_mean", "resBlock3.bn2.running_var", "resBlock3.bn2.num_batches_tracked", "resBlock3.conv4.weight", "resBlock3.conv4.bias", "resBlock3.bn3.weight", "resBlock3.bn3.bias", "resBlock3.bn3.running_mean", "resBlock3.bn3.running_var", 
"resBlock3.bn3.num_batches_tracked", "resBlock3.conv5.weight", "resBlock3.conv5.bias", "resBlock3.bn4.weight", "resBlock3.bn4.bias", "resBlock3.bn4.running_mean", "resBlock3.bn4.running_var", "resBlock3.bn4.num_batches_tracked", "resBlock4.conv1.weight", "resBlock4.conv1.bias", "resBlock4.conv2.weight", "resBlock4.conv2.bias", "resBlock4.bn1.weight", "resBlock4.bn1.bias", "resBlock4.bn1.running_mean", "resBlock4.bn1.running_var", "resBlock4.bn1.num_batches_tracked", "resBlock4.conv3.weight", "resBlock4.conv3.bias", "resBlock4.bn2.weight", "resBlock4.bn2.bias", "resBlock4.bn2.running_mean", "resBlock4.bn2.running_var", "resBlock4.bn2.num_batches_tracked", "resBlock4.conv4.weight", "resBlock4.conv4.bias", "resBlock4.bn3.weight", "resBlock4.bn3.bias", "resBlock4.bn3.running_mean", "resBlock4.bn3.running_var", "resBlock4.bn3.num_batches_tracked", "resBlock4.conv5.weight", "resBlock4.conv5.bias", "resBlock4.bn4.weight", "resBlock4.bn4.bias", "resBlock4.bn4.running_mean", "resBlock4.bn4.running_var", "resBlock4.bn4.num_batches_tracked", "resBlock5.conv1.weight", "resBlock5.conv1.bias", "resBlock5.conv2.weight", "resBlock5.conv2.bias", "resBlock5.bn1.weight", "resBlock5.bn1.bias", "resBlock5.bn1.running_mean", "resBlock5.bn1.running_var", "resBlock5.bn1.num_batches_tracked", "resBlock5.conv3.weight", "resBlock5.conv3.bias", "resBlock5.bn2.weight", "resBlock5.bn2.bias", "resBlock5.bn2.running_mean", "resBlock5.bn2.running_var", "resBlock5.bn2.num_batches_tracked", "resBlock5.conv4.weight", "resBlock5.conv4.bias", "resBlock5.bn3.weight", "resBlock5.bn3.bias", "resBlock5.bn3.running_mean", "resBlock5.bn3.running_var", "resBlock5.bn3.num_batches_tracked", "resBlock5.conv5.weight", "resBlock5.conv5.bias", "resBlock5.bn4.weight", "resBlock5.bn4.bias", "resBlock5.bn4.running_mean", "resBlock5.bn4.running_var", "resBlock5.bn4.num_batches_tracked", "upBlock1.conv1.weight", "upBlock1.conv1.bias", "upBlock1.bn1.weight", "upBlock1.bn1.bias", "upBlock1.bn1.running_mean", "upBlock1.bn1.running_var", "upBlock1.bn1.num_batches_tracked", "upBlock1.conv2.weight", "upBlock1.conv2.bias", "upBlock1.bn2.weight", "upBlock1.bn2.bias", "upBlock1.bn2.running_mean", "upBlock1.bn2.running_var", "upBlock1.bn2.num_batches_tracked", "upBlock1.conv3.weight", "upBlock1.conv3.bias", "upBlock1.bn3.weight", "upBlock1.bn3.bias", "upBlock1.bn3.running_mean", "upBlock1.bn3.running_var", "upBlock1.bn3.num_batches_tracked", "upBlock1.conv4.weight", "upBlock1.conv4.bias", "upBlock1.bn4.weight", "upBlock1.bn4.bias", "upBlock1.bn4.running_mean", "upBlock1.bn4.running_var", "upBlock1.bn4.num_batches_tracked", "upBlock2.conv1.weight", "upBlock2.conv1.bias", "upBlock2.bn1.weight", "upBlock2.bn1.bias", "upBlock2.bn1.running_mean", "upBlock2.bn1.running_var", "upBlock2.bn1.num_batches_tracked", "upBlock2.conv2.weight", "upBlock2.conv2.bias", "upBlock2.bn2.weight", "upBlock2.bn2.bias", "upBlock2.bn2.running_mean", "upBlock2.bn2.running_var", "upBlock2.bn2.num_batches_tracked", "upBlock2.conv3.weight", "upBlock2.conv3.bias", "upBlock2.bn3.weight", "upBlock2.bn3.bias", "upBlock2.bn3.running_mean", "upBlock2.bn3.running_var", "upBlock2.bn3.num_batches_tracked", "upBlock2.conv4.weight", "upBlock2.conv4.bias", "upBlock2.bn4.weight", "upBlock2.bn4.bias", "upBlock2.bn4.running_mean", "upBlock2.bn4.running_var", "upBlock2.bn4.num_batches_tracked", "upBlock3.conv1.weight", "upBlock3.conv1.bias", "upBlock3.bn1.weight", "upBlock3.bn1.bias", "upBlock3.bn1.running_mean", "upBlock3.bn1.running_var", "upBlock3.bn1.num_batches_tracked", "upBlock3.conv2.weight", 
"upBlock3.conv2.bias", "upBlock3.bn2.weight", "upBlock3.bn2.bias", "upBlock3.bn2.running_mean", "upBlock3.bn2.running_var", "upBlock3.bn2.num_batches_tracked", "upBlock3.conv3.weight", "upBlock3.conv3.bias", "upBlock3.bn3.weight", "upBlock3.bn3.bias", "upBlock3.bn3.running_mean", "upBlock3.bn3.running_var", "upBlock3.bn3.num_batches_tracked", "upBlock3.conv4.weight", "upBlock3.conv4.bias", "upBlock3.bn4.weight", "upBlock3.bn4.bias", "upBlock3.bn4.running_mean", "upBlock3.bn4.running_var", "upBlock3.bn4.num_batches_tracked", "upBlock4.conv1.weight", "upBlock4.conv1.bias", "upBlock4.bn1.weight", "upBlock4.bn1.bias", "upBlock4.bn1.running_mean", "upBlock4.bn1.running_var", "upBlock4.bn1.num_batches_tracked", "upBlock4.conv2.weight", "upBlock4.conv2.bias", "upBlock4.bn2.weight", "upBlock4.bn2.bias", "upBlock4.bn2.running_mean", "upBlock4.bn2.running_var", "upBlock4.bn2.num_batches_tracked", "upBlock4.conv3.weight", "upBlock4.conv3.bias", "upBlock4.bn3.weight", "upBlock4.bn3.bias", "upBlock4.bn3.running_mean", "upBlock4.bn3.running_var", "upBlock4.bn3.num_batches_tracked", "upBlock4.conv4.weight", "upBlock4.conv4.bias", "upBlock4.bn4.weight", "upBlock4.bn4.bias", "upBlock4.bn4.running_mean", "upBlock4.bn4.running_var", "upBlock4.bn4.num_batches_tracked", "logits.weight", "logits.bias".

Questions about LiDAR-MOS visualization

Hello, your LiDAR-MOS code is very good, but I have a problem: I cannot visualize the results when reproducing your code, as shown in the figure:
[image]
After running this command, the program seems to be stuck. I don't know why. I want to get your visualized results, as shown below:
[image]
P.S. Author's reply: From the output, it seems to be a tkinter problem. But the tkinter module does not seem to be missing.
If anyone knows how to solve it, I hope you can help me, thanks.

Multiple input frames

Thank you for your great code!

I notice that in rangenet_mos.yaml and salsanext_mos.yaml you only use one input frame. If I want to use multiple scans for training, how can I do it?

As far as I know, I need to change n_input_scans in both backbone and dataset, and transform in the dataset. What else do I need?

Another question is about the pose transform. When using a sequence of scans to train the model, why do you transform the poses to the last scan rather than the first scan? here

How to change "n_input_scans"?

I only changed "arch_cfg.yaml" in the model, but I get an error: size mismatch for module.downCntx.conv1.weight: copying a param with shape torch.Size([32, 6, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 13, 1, 1]). So how do I change "n_input_scans"? Thank you :)

TRAIN BUGS

Thanks for your quick response!
@Chen-Xieyuanli

I trained with salsa_mos on SemanticKITTI. When it ran to this point:
Lr: 3.977e-03 | Update: 3.258e-04 mean,5.209e-04 std | Epoch: [0][950/2391] | Time 0.641 (0.623) | Data 0.081 (0.067) | Loss 0.6863 (0.9777) | acc 0.830 (0.855) | IoU 0.417 (0.434) | [1 day, 3:19:41]
LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py:166: RuntimeWarning: invalid value encountered in true_divide pitch = np.arcsin(scan_z / depth)
I got the error:
File "conda/envs/salsa/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise raise exception IndexError: Caught IndexError in DataLoader worker process 1.
and
File "LiDAR-MOS/mos_SalsaNext/train/tasks/semantic/../../common/laserscan.py", line 201, in do_range_projection self.proj_range[proj_y, proj_x] = depth IndexError: index -2147483648 is out of bounds for axis 0 with size 64
The conda env is:

  python=3.9.12=h12debd9_0
  python-flatbuffers=1.12=pyhd3eb1b0_0
  pytorch=1.11.0=py3.9_cuda11.3_cudnn8.2.0_0
  pytorch-mutex=1.0=cuda
  tensorflow-base=2.6.0=mkl_py39h3d85931_0
  tensorflow-estimator=2.6.0=pyh7b7c402_0

Is there anything wrong with this env? Or is it too new? Or is there something wrong with the data in the residual images?

Generating Residual Images for Inference

Hi,
Say that I'm interested in using the pretrained model with my own dataset, only for inference in the first stage.
Do I have to prepare the residual images beforehand, or does the model generate them automatically?
(Assuming I want to use residual images, i.e., residual: True.)

Thanks :)

combining lidar-mos and removert

Hi Chen, as always I'm very grateful for the code and work you have shared.
This issue is not a question or an error report, but I want to share my recent mini-project (for fun, but practically useful) with other readers.

I think lidar-mos is good at proactively removing a bunch of points in the wild, particularly in highly dynamic urban environments such as KITTI 01.
Thus, I've used the scan_cleaner you shared and fed the output (cleaned scans from lidar-mos) into Removert as its input.

This is the tutorial video https://youtu.be/zWuoqtDofsE
and an example result can be seen here https://github.com/irapkaist/removert#further-improvements

thanks for reading 😋

prediction labels in toy dataset

Hi,

I see there are some segmentation results already present in the toy dataset.
[image]

Does the one that says salsanext use one residual image?

Best Regards
Sambit

Question about a ROS/bag API

Thank you for such excellent work! May I ask whether the algorithm can operate directly on a single bag file and produce a bag with the dynamic objects removed? That would make it applicable to more use cases.

TF/Tensorboard version mismatch

Hi there, thanks for releasing the code, the paper was a very interesting read!

I'm getting errors when trying to run MOS with SalsaNext, related to the TF/TensorBoard versions. As per the config file, the TF and TensorBoard versions should be 1.13.1, but in LiDAR-MOS/mos_SalsaNext/train/common/logger.py line 18, you use self.writer = tf.summary.create_file_writer(log_dir), which seems to be the TF 2.0 way of creating a TensorBoard writer (mingyuliutw/UNIT#56 (comment), https://www.tensorflow.org/tensorboard/migrate).

This is the exact error I get:

No pretrained directory found.
Copying files to /host-machine/ml/LiDAR-MOS/logs/logs/2021-7-16-18:10salsanext for further reference.
Sequences folder exists! Using sequences from /host-machine/semKITTI/Data/SemanticKitti/dataset/sequences
parsing seq 00
parsing seq 01
parsing seq 02
parsing seq 03
parsing seq 04
parsing seq 05
parsing seq 06
parsing seq 07
parsing seq 09
parsing seq 10
Using 19130 scans from sequences [0, 1, 2, 3, 4, 5, 6, 7, 9, 10]
Sequences folder exists! Using sequences from /host-machine/semKITTI/Data/SemanticKitti/dataset/sequences
parsing seq 08
Using 4071 scans from sequences [8]
Loss weights from content:  tensor([  0.0000,   1.0210, 296.4371])
Depth of backbone input =  6
Traceback (most recent call last):
  File "./train.py", line 177, in <module>
    trainer = Trainer(ARCH, DATA, FLAGS.dataset, FLAGS.log, FLAGS.pretrained,FLAGS.uncertainty)
  File "../../tasks/semantic/modules/trainer.py", line 130, in __init__
    self.tb_logger = Logger(self.log + "/tb")
  File "../../common/logger.py", line 18, in __init__
    self.writer = tf.summary.create_file_writer(log_dir)
AttributeError: module 'tensorflow._api.v1.summary' has no attribute 'create_file_writer'
