

frnet's People

Contributors

ldkong1205, xiangxu-0103


frnet's Issues

param and fps

Hello, could you please tell me how to obtain the parameter count and FPS?
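
A minimal sketch of what the asker is after, not the authors' script: the parameter count is model-agnostic, and FPS can be timed with synchronized CUDA timers. `model` and `example_inputs` below are placeholders for a built FRNet model and one preprocessed sample.

import time

import torch


def count_params(model: torch.nn.Module) -> int:
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


@torch.no_grad()
def measure_fps(model, example_inputs, n_warmup=10, n_iters=100):
    """Average frames per second over n_iters forward passes."""
    model.eval()
    for _ in range(n_warmup):  # warm up CUDA kernels and caches
        model(example_inputs)
    torch.cuda.synchronize()  # CUDA is asynchronous; sync before timing
    start = time.perf_counter()
    for _ in range(n_iters):
        model(example_inputs)
    torch.cuda.synchronize()
    return n_iters / (time.perf_counter() - start)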

About frustum mix

Hi, thanks for the nice work~

I'm a little confused about the difference between frustum_vertical_mix_transform in your code and LaserMix. Also, what is the difference between frustum_horizental_mix_transform and the ablation study on laser-beam partitions in LaserMix (shown in Table 4)?

About export prediction results

Could you please guide me on how to export prediction results using the code? I want to submit them to the KITTI leaderboard. Thanks!
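
For orientation, a minimal sketch of the usual submission format, not the repository's own export code: SemanticKITTI expects one int32 .label file per scan containing the raw label ids, so train ids in [0, 18] must be mapped back through learning_map_inv from semantic-kitti.yaml. `pred` and the truncated mapping below are placeholders.

import numpy as np

# placeholder: per-point train ids predicted by the model for one scan
pred = np.zeros(100000, dtype=np.int64)
# truncated example; take the full table from semantic-kitti.yaml
learning_map_inv = {0: 10, 1: 11, 2: 15}
raw = np.vectorize(learning_map_inv.__getitem__)(pred).astype(np.int32)
# one file per scan, mirroring the layout of the input test sequences
raw.tofile('sequences/11/predictions/000000.label')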

Inference on different dataset

Hi, I would like to use FRNet on the UTCampus dataset; however, I don't get the expected results. The code runs without any issues, but the results are terrible, see below. I use the provided config and checkpoint files corresponding to the KITTI dataset within the mmdetection3D framework.

In my opinion, the poor results could be caused by the difference in LiDAR sensors, the setup of the config files, or an incompatibility in general. I have already tried a couple of things to solve the issue; unfortunately, none of them worked:

  1. Tried the checkpoint and config file corresponding to the nuScenes dataset
  2. Changed the fov_up and fov_down in both config files
  3. Downsampled my point cloud to better match the resolution of KITTI and nuScenes

I hope that I gave you enough information, otherwise feel free to ask.

Kind regards,

Guido

[screenshot of the poor segmentation results]
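
For readers debugging the same setup: the sensor-geometry fields to adapt for a different LiDAR live in two places in the configs. The values below are the SemanticKITTI ones from the config dump later on this page, shown for orientation only; a mismatched fov_up/fov_down/H/W for a new sensor is one plausible cause of results like these.

data_preprocessor = dict(
    type='FrustumRangePreprocessor',
    H=64,
    W=512,
    fov_up=3.0,
    fov_down=-25.0,
    ignore_index=19)

# and, inside the test pipeline:
dict(
    type='RangeInterpolation',
    H=64,
    W=2048,
    fov_up=3.0,
    fov_down=-25.0,
    ignore_index=19)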

About Test 11-21 sequences

Hello, I want to run the test set (sequences 11-21) of SemanticKITTI and then submit it to CodaLab. I used test.py with TTA added, and found that it ran on sequence 08 of the validation set instead. Can you show me how to change the code to run sequences 11-21, including with TTA? I would also like to ask what tricks you used when submitting to CodaLab: I reproduced several codebases and submitted their results, but none of them achieved the numbers from their papers, and I don't know what tricks were used.

Question about .onnx model

Hi,
Thank you for generously open-sourcing this code!
I wonder how to convert my training results into a .onnx model.
I am looking forward to your reply.

Best wishes!
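
A generic sketch of ONNX export, not an FRNet-specific recipe: torch.onnx.export traces one forward pass, so the model usually has to be wrapped to accept plain tensors. The wrapper, the (N, 4) input shape, and the loaded `trained_model` are all assumptions to adapt.

import torch


class ExportWrapper(torch.nn.Module):
    """Adapts a dict-based model interface to a plain tensor input."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, points):
        # adapt this call to the real input contract of the model
        return self.model(points)


model = ExportWrapper(trained_model).eval()  # trained_model: placeholder
dummy = torch.randn(120000, 4)  # x, y, z, intensity
torch.onnx.export(
    model, (dummy, ), 'frnet.onnx',
    input_names=['points'],
    output_names=['logits'],
    dynamic_axes={'points': {0: 'num_points'}},  # variable point count
    opset_version=13)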

freeze model

Hello, could you please advise on how to freeze network parameters? For example, if I only want to train frnet_head.
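
A sketch of the usual PyTorch recipe, with the attribute name as an assumption: mm-style models expose sub-modules such as backbone and decode_head, so check model.named_children() for the actual name of frnet_head in this codebase.

import torch

# freeze everything except the head (the attribute name is an assumption)
for name, param in model.named_parameters():
    param.requires_grad = 'decode_head' in name

# BatchNorm running stats still update in train mode; re-apply this after
# every model.train() call to keep the frozen parts truly frozen
for name, module in model.named_modules():
    if 'decode_head' not in name and isinstance(
            module, torch.nn.modules.batchnorm._BatchNorm):
        module.eval()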

CONFIG_FILE

Sorry, I can't seem to find a config yaml file for the corresponding model.

test time augmentation

Happy New Year and Happy Spring Festival! Apologies for any inconvenience. May I ask whether the test-time augmentation technique was used during validation and testing?

about infer time

I tested the inference time on a Titan RTX: about 300 ms per frame. After converting the model to ONNX, it costs about 200 ms.

I tried to use fp16 via 'inferencer.model.half()', but it reports an error. Can you give some advice on how to accelerate inference?
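
A sketch of the safer fp16 route: calling .half() converts every layer, including normalization layers, which is a common source of errors like the one above. Mixed precision via autocast runs convolutions and matmuls in fp16 while keeping norms in fp32. `model` and `inputs` are placeholders.

import torch

model = model.eval().cuda()
with torch.no_grad(), torch.cuda.amp.autocast():
    out = model(inputs)  # convs/matmuls in fp16, norms stay in fp32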

About the result of test

Hi @Xiangxu-0103, great work! I'm a beginner, and I have some questions about the results. I downloaded the checkpoint provided in the repo and used it to test with "python test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --work-dir xxx".
But I got 67.55% mIoU, which is far below the 73.3% mIoU reported in the paper.
[screenshot of the test log]
I don't know where the problem is. I hope you can tell me; I look forward to your reply! Thanks!

Some questions

Thank you very much for your excellent work. May I inquire about how to resume training if it was interrupted midway? Additionally, could you please let me know where I can find the code for calculating the Frustum loss? Thank you!
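
On the resume part, a sketch based on standard mmengine behavior (not an FRNet-specific feature): setting resume=True makes the Runner pick up the latest checkpoint in work_dir.

from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/frnet/frnet-semantickitti_seg.py')
cfg.work_dir = './work_dirs/frnet-semantickitti_seg'
cfg.resume = True  # auto-resume from the latest checkpoint in work_dir
runner = Runner.from_cfg(cfg)
runner.train()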

About visualization on val

Hi, I'm trying to reproduce your code, and it runs successfully with:
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth"
but when I try to visualize the results with the following command:
python test.py "configs/frnet/frnet-semantickitti_seg.py" "pretrained/frnet-semantickitti_seg.pth" --show --show-dir "show_dirs" --task "lidar_seg"
it fails with:
AssertionError: 'data_sample' must contain 'img_path' or 'lidar_path'
How do you produce the visualizations shown on the project page? Thank you! I am not familiar with mmcv and just tried the command from its documentation.

How to visualize

Hello, thank you very much for your work. I would like to visualize the test results on the SemanticKITTI dataset and record the FPS. How should I proceed? Thank you very much for your help.
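
Where the built-in --show path fails (see the previous issue), a sketch of offline visualization that sidesteps it, assuming Open3D is installed: color each point by its predicted class and write a .ply that any viewer opens. `points` (N, 3+) and `pred` (N,) are placeholders for one scan and its per-point predictions.

import numpy as np
import open3d as o3d  # assumption: pip install open3d

palette = np.random.default_rng(0).random((20, 3))  # one RGB color per class
pc = o3d.geometry.PointCloud()
pc.points = o3d.utility.Vector3dVector(points[:, :3])
pc.colors = o3d.utility.Vector3dVector(palette[pred])
o3d.io.write_point_cloud('scan_pred.ply', pc)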

About your paper.

[screenshot of the projection equation from the paper]
Here, following the representation in your code:
[screenshot of the corresponding code]
should f_up here actually be |f_down|?

adding points changes the original points' predicted labels

From the original point cloud p0, p1, p2, p3, ..., pn I obtain inference results L0, L1, L2, L3, ..., Ln. If I append some points taken from the original cloud, forming a new point cloud p0, p1, p2, p3, ..., pn, p2, p3, the inferred categories of the original points change.

Is this my mistake, or is this simply how the algorithm behaves?

Label-Efficient LiDAR Segmentation

Thank you for sharing your work and congratulations on the impressive results. I would like to inquire about the semi-supervised task in more detail. In the paper, you mentioned 'following the lasermix paradigm'. Does that mean you followed the same training pipeline and configurations, but only changed the backbone to FRNet and the type of augmentation to frustumMix instead of lasermix? Do you have any code related to this experiment? Did you utilize the lasermix repository and simply modify the backbone and augmentation type?

About FPS

Thank you for your patience with each response. How can I obtain the FPS mentioned in the paper? Do I need additional scripts for that?

KeyError: 'pts_semantic_mask_path' while testing

Hi, thank you so much for your awesome work!

I am testing with your pre-trained model on the nuScenes test set and am facing this error when launching the test program. Could you please give me a hint about which part could be wrong?

Thank you in advance!

Traceback (most recent call last):
  File "test.py", line 146, in <module>
    main()
  File "test.py", line 142, in main
    runner.test()
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1823, in test
    metrics = self.test_loop.run()  # type: ignore
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmengine/runner/loops.py", line 442, in run
    for idx, data_batch in enumerate(self.dataloader):
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/envs/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 403, in __getitem__
    data = self.prepare_data(idx)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmdet3d/datasets/seg3d_dataset.py", line 305, in prepare_data
    return super().prepare_data(idx)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 793, in prepare_data
    return self.pipeline(data_info)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 60, in __call__
    data = t(data)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmdet3d/datasets/transforms/loading.py", line 1070, in transform
    results = self._load_semantic_seg_3d(results)
  File "/root/miniconda3/envs/lib/python3.8/site-packages/mmdet3d/datasets/transforms/loading.py", line 955, in _load_semantic_seg_3d
    pts_semantic_mask_path = results['pts_semantic_mask_path']
KeyError: 'pts_semantic_mask_path'
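
The trace points at the pipeline requesting semantic masks that the nuScenes test split does not ship. A sketch of a label-free test pipeline, in the same config style used later on this page; the RangeInterpolation values below are the SemanticKITTI ones from that dump and are placeholders for the nuScenes settings:

test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=None),
    # no LoadAnnotations3D / PointSegClassMapping here: the test split
    # has no 'pts_semantic_mask_path' to load
    dict(
        type='RangeInterpolation',
        H=64,
        W=2048,
        fov_up=3.0,
        fov_down=-25.0,
        ignore_index=19),
    dict(
        type='Pack3DDetInputs',
        keys=['points'],
        meta_keys=['num_points']),
]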

About the result of test set

Nice work! But I have some questions about the test set results.
I tried to create a new test_evaluator instead of SegMetric, and I uploaded the test set results (with TTA) to CodaLab; it got a score of about 66.7.
I wonder if the pretrained model provided by the repository was trained on the train + val split.
Or maybe the method I'm using to reproduce it is not accurate enough; I am a beginner with the mm3d framework.
Here is my test_evaluator:

# Copyright (c) OpenMMLab. All rights reserved.
from typing import Dict, Optional, Sequence

import numpy as np
import yaml
from mmengine.evaluator import BaseMetric

from mmdet3d.registry import METRICS


@METRICS.register_module()
class SemantickInferMertric(BaseMetric):

    def __init__(self,
                 collect_device: str = 'cpu',
                 prefix: Optional[str] = None,
                 pklfile_prefix: Optional[str] = None,
                 submission_prefix: Optional[str] = None,
                 result_path: Optional[str] = None,
                 result_start_index: int = 0,
                 conf: Optional[str] = None,
                 **kwargs):
        self.pklfile_prefix = pklfile_prefix
        self.submission_prefix = submission_prefix
        self.result_path = result_path
        self.result_start_index = result_start_index
        self.current_start_index = self.result_start_index
        # number of scans in each test sequence (11-21)
        self.limit = [921, 1061, 3281, 631, 1901, 1731, 491, 1801, 4981, 831, 2721]
        self.limit_id = 0  # scan counter within the current sequence
        self.scene_id = 0  # sequence index
        super(SemantickInferMertric, self).__init__(
            prefix=prefix, collect_device=collect_device)
        self.conf = conf
        if self.conf:
            with open(self.conf) as f:
                self.conf = yaml.safe_load(f)
    def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
        """Process one batch of data samples and predictions.

        The processed results should be stored in ``self.results``,
        which will be used to compute the metrics when all batches
        have been processed.

        Args:
            data_batch (dict): A batch of data from the dataloader.
            data_samples (Sequence[dict]): A batch of outputs from
                the model.
        """
        self.results.append((0, 0))

        # Map the predicted train ids back to raw SemanticKITTI label ids.
        pred = data_samples[0]['pred_pts_seg']['pts_semantic_mask']  # labels
        map_inv = self.dataset_meta['learning_map_inv']  # inverse mapping
        pred[pred == 19] += 99  # unlabeled / ignore class
        pred += 1  # [0, 18] -> [1, 19]
        for i in map_inv:
            # offset by 1000 so a value already in [1, 19] is not mapped twice
            pred[pred == i] = map_inv[i] + 1000
        pred[pred != 119] -= 1000
        pred[pred == 119] = 0  # unlabeled
        pred.cpu().numpy().astype(np.int32).tofile(
            f'/mnt/storage/dataset/semanticKitti/dataset/FRNet/sequences/{self.scene_id + 11}/predictions/{self.limit_id:06}.label')
        print(f'finished {self.scene_id + 11}/predictions/{self.limit_id:06}.label')
        self.limit_id += 1
        if self.limit_id == self.limit[self.scene_id]:  # move to next sequence
            self.scene_id += 1
            self.limit_id = 0

    def format_results(self, results):
        r"""Format the results to txt file. Refer to `ScanNet documentation
        <http://kaldir.vc.in.tum.de/scannet_benchmark/documentation>`_.

        Args:
            outputs (list[dict]): Testing results of the dataset.

        Returns:
            tuple: (outputs, tmp_dir), outputs is the detection results,
                tmp_dir is the temporal directory created for saving submission
                files when ``submission_prefix`` is not specified.
        """

       

    def compute_metrics(self, results: list) -> Dict[str, float]:
        """Compute the metrics from processed results.

        Args:
            results (list): The processed results of each batch.

        Returns:
            Dict[str, float]: The computed metrics. The keys are the names of
            the metrics, and the values are corresponding results.
        """
        ret_dict = dict()

        return ret_dict
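
For context, a metric like this plugs into the config through the test_evaluator field, roughly as follows (a sketch; the conf path is an assumption pointing at the dataset's semantic-kitti.yaml):

test_evaluator = dict(
    type='SemantickInferMertric',
    conf='semantic-kitti.yaml')  # assumed path to the label-mapping YAML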

and the result on the test set:
[screenshot of the CodaLab score, about 66.7]

Question About Replacing SemanticKITTI Dataset with Custom Dataset

Hello, thank you very much for your work.
I tried to replace the semanticKITTI dataset with my own dataset, but I encountered the following error during training. I have tried many methods, but I still couldn't solve this issue. Do you know how to fix it? Thank you very much for your help.

05/31 20:22:55 - mmengine - INFO - Iter(train) [ 950/50000] lr: 6.1179e-04 eta: 20:29:54 time: 1.5111 data_time: 0.0034 memory: 5897 loss: 5.7215 decode.loss_ce: 0.1238 aux_0.loss_ce: 0.0053 aux_0.loss_lovasz: 0.4535 aux_0.loss_boundary: 0.8194 aux_1.loss_ce: 0.0063 aux_1.loss_lovasz: 0.4696 aux_1.loss_boundary: 0.8386 aux_2.loss_ce: 0.0077 aux_2.loss_lovasz: 0.5261 aux_2.loss_boundary: 0.8952 aux_3.loss_ce: 0.0099 aux_3.loss_lovasz: 0.6142 aux_3.loss_boundary: 0.9518
05/31 20:24:10 - mmengine - INFO - Exp name: frnet-2024_20240531_195901
05/31 20:24:10 - mmengine - INFO - Iter(train) [ 1000/50000] lr: 6.3451e-04 eta: 20:28:07 time: 1.4915 data_time: 0.0033 memory: 6054 loss: 5.6490 decode.loss_ce: 0.1204 aux_0.loss_ce: 0.0046 aux_0.loss_lovasz: 0.4391 aux_0.loss_boundary: 0.8167 aux_1.loss_ce: 0.0058 aux_1.loss_lovasz: 0.4600 aux_1.loss_boundary: 0.8354 aux_2.loss_ce: 0.0078 aux_2.loss_lovasz: 0.5091 aux_2.loss_boundary: 0.8953 aux_3.loss_ce: 0.0102 aux_3.loss_lovasz: 0.5944 aux_3.loss_boundary: 0.9502
/home/xhy/code/FRNet-master1/frnet/datasets/transforms/transforms_3d.py:188: RuntimeWarning: invalid value encountered in divide
pitch = np.arcsin(points_numpy[:, 2] / depth)
/home/xhy/code/FRNet-master1/frnet/datasets/transforms/transforms_3d.py:205: RuntimeWarning: invalid value encountered in cast
proj_y = np.maximum(0, proj_y).astype(np.int64)
Traceback (most recent call last):
  File "train.py", line 133, in <module>
    main()
  File "train.py", line 129, in main
    runner.train()
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1777, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/runner/loops.py", line 284, in run
    self.runner.val_loop.run()
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/runner/loops.py", line 362, in run
    for idx, data_batch in enumerate(self.dataloader):
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 403, in __getitem__
    data = self.prepare_data(idx)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmdet3d/datasets/seg3d_dataset.py", line 305, in prepare_data
    return super().prepare_data(idx)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 793, in prepare_data
    return self.pipeline(data_info)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 60, in __call__
    data = t(data)
  File "/home/xhy/miniconda3/envs/frnet1/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/home/xhy/code/FRNet-master1/frnet/datasets/transforms/transforms_3d.py", line 210, in transform
    proj_idx[proj_y[order], proj_x[order]] = indices[order]
IndexError: index -9223372036854775808 is out of bounds for axis 0 with size 64
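
The two RuntimeWarnings above explain the crash: a point at the sensor origin gives depth == 0, arcsin(z / 0) produces NaN, and casting NaN to int64 yields -9223372036854775808, which then indexes out of bounds. A sketch of the kind of guard that avoids it, assuming the projection code resembles the warned lines:

import numpy as np

xyz = points_numpy[:, :3]
depth = np.linalg.norm(xyz, axis=1)
valid = depth > 1e-6  # drop zero-range returns before projecting
xyz, depth = xyz[valid], depth[valid]
# clip guards against |z / depth| drifting past 1.0 from rounding
pitch = np.arcsin(np.clip(xyz[:, 2] / depth, -1.0, 1.0))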

About infer

May I ask how I can use FRNet for inference to obtain visualized results? Currently, I can get the mIoU, but I also want to obtain the corresponding point cloud inference results. Thank you!

About how to test

Hello,

I'm a beginner and it's my first time debugging code. When running the test code python test.py ${CONFIG_FILE} ${CHECKPOINT_FILE}, what file path should be provided for ${CONFIG_FILE}? Do I need to generate it myself? If so, how can I generate it? And for ${CHECKPOINT_FILE}, is it correct to input the path of the pre-trained checkpoints you provided? Thank you very much for your help.

About update

I'm so glad you've updated your nice work. Could you please provide details on what specific updates have been made to enhance stability? Thanks!

Predicted test set results

Thanks for this excellent work. May I ask how to get predictions on the SemanticKITTI test sequences (11-21)?

Testing issues:OSError: Caught OSError in DataLoader worker process 0. And OSError: [Errno 5] Input/output error

Thank you very much for your contribution. I encountered an interruption while testing SemanticKITTI using your pre-trained checkpoints with the command:
python test.py /home/xhy/code/FRNet-master/configs/frnet/frnet-semantickitti_seg.py /home/xhy/code/FRNet-master/frnet-semantickitti_seg.pth

The generated error is as follows:

python test.py /home/xhy/code/FRNet-master/configs/frnet/frnet-semantickitti_seg.py /home/xhy/code/FRNet-master/frnet-semantickitti_seg.pth
03/29 13:59:11 - mmengine - INFO -

System environment:
sys.platform: linux
Python: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0]
CUDA available: True
numpy_random_seed: 746820188
GPU 0,1: NVIDIA GeForce RTX 3060
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.58
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.8.1+cu111
PyTorch compiling details: PyTorch built with:

  • GCC 7.3

  • C++ Version: 201402

  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v1.7.0 (Git Hash 7aed236906b1f7a05c0917e5257a1af05e9ff683)

  • OpenMP 201511 (a.k.a. OpenMP 4.5)

  • NNPACK is enabled

  • CPU capability usage: AVX2

  • CUDA Runtime 11.1

  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86

  • CuDNN 8.0.5

  • Magma 2.5.2

  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.8.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.9.1+cu111
    OpenCV: 4.9.0
    MMEngine: 0.9.0

Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 746820188
Distributed launcher: none
Distributed training: False
GPU number: 1

03/29 13:59:13 - mmengine - INFO - Config:
auto_scale_lr = dict(base_batch_size=4, enable=False)
backend_args = None
class_names = [
'car',
'bicycle',
'motorcycle',
'truck',
'bus',
'person',
'bicyclist',
'motorcyclist',
'road',
'parking',
'sidewalk',
'other-ground',
'building',
'fence',
'vegetation',
'trunck',
'terrian',
'pole',
'traffic-sign',
]
custom_imports = dict(
allow_failed_imports=False,
imports=[
'frnet.datasets',
'frnet.datasets.transforms',
'frnet.models',
])
data_root = '/data/SemanticKITTI_FRNet/'
dataset_type = 'SemanticKittiDataset'
default_hooks = dict(
checkpoint=dict(interval=-1, type='CheckpointHook'),
logger=dict(interval=50, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='Det3DVisualizationHook'))
default_scope = 'mmdet3d'
env_cfg = dict(
cudnn_benchmark=False,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
input_modality = dict(use_camera=False, use_lidar=True)
labels_map = dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19
})
launcher = 'none'
load_from = '/home/xhy/code/FRNet-master/frnet-semantickitti_seg.pth'
log_level = 'INFO'
log_processor = dict(by_epoch=True, type='LogProcessor', window_size=50)
lr = 0.01
metainfo = dict(
classes=[
'car',
'bicycle',
'motorcycle',
'truck',
'bus',
'person',
'bicyclist',
'motorcyclist',
'road',
'parking',
'sidewalk',
'other-ground',
'building',
'fence',
'vegetation',
'trunck',
'terrian',
'pole',
'traffic-sign',
],
max_label=259,
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19
}))
model = dict(
auxiliary_head=[
dict(
channels=128,
conv_seg_kernel_size=1,
dropout_ratio=0,
ignore_index=19,
loss_boundary=dict(loss_weight=1.0, type='BoundaryLoss'),
loss_ce=dict(
class_weight=None,
loss_weight=1.0,
type='mmdet.CrossEntropyLoss',
use_sigmoid=False),
loss_lovasz=dict(
loss_weight=1.5, reduction='none', type='LovaszLoss'),
num_classes=20,
type='FrustumHead'),
dict(
channels=128,
conv_seg_kernel_size=1,
dropout_ratio=0,
ignore_index=19,
indices=2,
loss_boundary=dict(loss_weight=1.0, type='BoundaryLoss'),
loss_ce=dict(
class_weight=None,
loss_weight=1.0,
type='mmdet.CrossEntropyLoss',
use_sigmoid=False),
loss_lovasz=dict(
loss_weight=1.5, reduction='none', type='LovaszLoss'),
num_classes=20,
type='FrustumHead'),
dict(
channels=128,
conv_seg_kernel_size=1,
dropout_ratio=0,
ignore_index=19,
indices=3,
loss_boundary=dict(loss_weight=1.0, type='BoundaryLoss'),
loss_ce=dict(
class_weight=None,
loss_weight=1.0,
type='mmdet.CrossEntropyLoss',
use_sigmoid=False),
loss_lovasz=dict(
loss_weight=1.5, reduction='none', type='LovaszLoss'),
num_classes=20,
type='FrustumHead'),
dict(
channels=128,
conv_seg_kernel_size=1,
dropout_ratio=0,
ignore_index=19,
indices=4,
loss_boundary=dict(loss_weight=1.0, type='BoundaryLoss'),
loss_ce=dict(
class_weight=None,
loss_weight=1.0,
type='mmdet.CrossEntropyLoss',
use_sigmoid=False),
loss_lovasz=dict(
loss_weight=1.5, reduction='none', type='LovaszLoss'),
num_classes=20,
type='FrustumHead'),
],
backbone=dict(
act_cfg=dict(inplace=True, type='HSwish'),
depth=34,
dilations=(
1,
1,
1,
1,
),
fuse_channels=(
256,
128,
),
in_channels=16,
norm_cfg=dict(eps=0.001, momentum=0.01, type='naiveSyncBN2d'),
num_stages=4,
out_channels=(
128,
128,
128,
128,
),
output_shape=(
64,
512,
),
point_in_channels=384,
point_norm_cfg=dict(eps=0.001, momentum=0.01, type='naiveSyncBN1d'),
stem_channels=128,
strides=(
1,
2,
2,
2,
),
type='FRNetBackbone'),
data_preprocessor=dict(
H=64,
W=512,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='FrustumRangePreprocessor'),
decode_head=dict(
channels=64,
conv_seg_kernel_size=1,
dropout_ratio=0,
ignore_index=19,
in_channels=128,
loss_ce=dict(
class_weight=None,
loss_weight=1.0,
type='mmdet.CrossEntropyLoss',
use_sigmoid=False),
middle_channels=(
128,
256,
128,
64,
),
norm_cfg=dict(eps=0.001, momentum=0.01, type='naiveSyncBN1d'),
num_classes=20,
type='FRHead'),
type='FRNet',
voxel_encoder=dict(
feat_channels=(
64,
128,
256,
256,
),
feat_compression=16,
in_channels=4,
norm_cfg=dict(eps=0.001, momentum=0.01, type='naiveSyncBN1d'),
type='FrustumFeatureEncoder',
with_cluster_center=True,
with_distance=True,
with_pre_norm=True))
optim_wrapper = dict(
optimizer=dict(
betas=(
0.9,
0.999,
),
eps=1e-06,
lr=0.01,
type='AdamW',
weight_decay=0.01),
type='OptimWrapper')
param_scheduler = [
dict(
by_epoch=True,
convert_to_iter_based=True,
div_factor=25.0,
eta_max=0.01,
final_div_factor=100.0,
pct_start=0.2,
total_steps=50,
type='OneCycleLR'),
]
pre_transform = [
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
]
resume = False
test_cfg = dict()
test_dataloader = dict(
batch_size=1,
dataset=dict(
ann_file='semantickitti_infos_val.pkl',
backend_args=None,
data_root='/data/SemanticKITTI_FRNet/',
ignore_index=19,
metainfo=dict(
classes=[
'car',
'bicycle',
'motorcycle',
'truck',
'bus',
'person',
'bicyclist',
'motorcyclist',
'road',
'parking',
'sidewalk',
'other-ground',
'building',
'fence',
'vegetation',
'trunck',
'terrian',
'pole',
'traffic-sign',
],
max_label=259,
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19
})),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(
keys=[
'points',
],
meta_keys=[
'num_points',
],
type='Pack3DDetInputs'),
],
test_mode=True,
type='SemanticKittiDataset'),
drop_last=False,
num_workers=1,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(type='SegMetric')
test_pipeline = [
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(
keys=[
'points',
], meta_keys=[
'num_points',
], type='Pack3DDetInputs'),
]
train_cfg = dict(by_epoch=True, max_epochs=50, val_interval=1)
train_dataloader = dict(
batch_size=1,
dataset=dict(
ann_file='semantickitti_infos_train.pkl',
backend_args=None,
data_root='/data/SemanticKITTI_FRNet/',
ignore_index=19,
metainfo=dict(
classes=[
'car',
'bicycle',
'motorcycle',
'truck',
'bus',
'person',
'bicyclist',
'motorcyclist',
'road',
'parking',
'sidewalk',
'other-ground',
'building',
'fence',
'vegetation',
'trunck',
'terrian',
'pole',
'traffic-sign',
],
max_label=259,
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19
})),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
dict(
H=64,
W=512,
fov_down=-25.0,
fov_up=3.0,
num_areas=[
3,
4,
5,
6,
],
pre_transform=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
],
prob=1.0,
type='FrustumMix'),
dict(
instance_classes=[
1,
2,
3,
4,
5,
6,
7,
11,
15,
17,
18,
],
pre_transform=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
],
prob=1.0,
type='InstanceCopy'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(
keys=[
'points',
'pts_semantic_mask',
], type='Pack3DDetInputs'),
],
type='SemanticKittiDataset'),
num_workers=4,
persistent_workers=True,
sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
dict(
H=64,
W=512,
fov_down=-25.0,
fov_up=3.0,
num_areas=[
3,
4,
5,
6,
],
pre_transform=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
],
prob=1.0,
type='FrustumMix'),
dict(
instance_classes=[
1,
2,
3,
4,
5,
6,
7,
11,
15,
17,
18,
],
pre_transform=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(num_points=0.9, type='PointSample'),
dict(
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5,
sync_2d=False,
type='RandomFlip3D'),
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
],
prob=1.0,
type='InstanceCopy'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(keys=[
'points',
'pts_semantic_mask',
], type='Pack3DDetInputs'),
]
tta_model = dict(type='Seg3DTTAModel')
tta_pipeline = [
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(
transforms=[
[
dict(
flip_ratio_bev_horizontal=0.0,
flip_ratio_bev_vertical=0.0,
sync_2d=False,
type='RandomFlip3D'),
dict(
flip_ratio_bev_horizontal=0.0,
flip_ratio_bev_vertical=1.0,
sync_2d=False,
type='RandomFlip3D'),
dict(
flip_ratio_bev_horizontal=1.0,
flip_ratio_bev_vertical=0.0,
sync_2d=False,
type='RandomFlip3D'),
dict(
flip_ratio_bev_horizontal=1.0,
flip_ratio_bev_vertical=1.0,
sync_2d=False,
type='RandomFlip3D'),
],
[
dict(
rot_range=[
-3.1415926,
3.1415926,
],
scale_ratio_range=[
0.95,
1.05,
],
translation_std=[
0.1,
0.1,
0.1,
],
type='GlobalRotScaleTrans'),
],
[
dict(
keys=[
'points',
],
meta_keys=[
'num_points',
],
type='Pack3DDetInputs'),
],
],
type='TestTimeAug'),
]
val_cfg = dict()
val_dataloader = dict(
batch_size=1,
dataset=dict(
ann_file='semantickitti_infos_val.pkl',
backend_args=None,
data_root='/data/SemanticKITTI_FRNet/',
ignore_index=19,
metainfo=dict(
classes=[
'car',
'bicycle',
'motorcycle',
'truck',
'bus',
'person',
'bicyclist',
'motorcyclist',
'road',
'parking',
'sidewalk',
'other-ground',
'building',
'fence',
'vegetation',
'trunck',
'terrian',
'pole',
'traffic-sign',
],
max_label=259,
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19
})),
modality=dict(use_camera=False, use_lidar=True),
pipeline=[
dict(
backend_args=None,
coord_type='LIDAR',
load_dim=4,
type='LoadPointsFromFile',
use_dim=4),
dict(
backend_args=None,
dataset_type='semantickitti',
seg_3d_dtype='np.int32',
seg_offset=65536,
type='LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_seg_3d=True),
dict(type='PointSegClassMapping'),
dict(
H=64,
W=2048,
fov_down=-25.0,
fov_up=3.0,
ignore_index=19,
type='RangeInterpolation'),
dict(
keys=[
'points',
],
meta_keys=[
'num_points',
],
type='Pack3DDetInputs'),
],
test_mode=True,
type='SemanticKittiDataset'),
drop_last=False,
num_workers=1,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(type='SegMetric')
vis_backends = [
dict(type='LocalVisBackend'),
]
visualizer = dict(
name='visualizer',
type='Det3DLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
])
work_dir = './work_dirs/frnet-semantickitti_seg'

03/29 13:59:17 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
03/29 13:59:17 - mmengine - INFO - Autoplay mode, press [SPACE] to pause.
03/29 13:59:17 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook

before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook

before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook

before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook

after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

before_val:
(VERY_HIGH ) RuntimeInfoHook

before_val_epoch:
(NORMAL ) IterTimerHook

before_val_iter:
(NORMAL ) IterTimerHook

after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook

after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_val:
(VERY_HIGH ) RuntimeInfoHook

after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook

before_test:
(VERY_HIGH ) RuntimeInfoHook

before_test_epoch:
(NORMAL ) IterTimerHook

before_test_iter:
(NORMAL ) IterTimerHook

after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook

after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook

after_test:
(VERY_HIGH ) RuntimeInfoHook

after_run:
(BELOW_NORMAL) LoggerHook

/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmdet3d/evaluation/functional/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
03/29 13:59:19 - mmengine - WARNING - The prefix is not set in metric class SegMetric.
Loads checkpoint by local backend from path: /home/xhy/code/FRNet-master/frnet-semantickitti_seg.pth
03/29 13:59:20 - mmengine - INFO - Load checkpoint from /home/xhy/code/FRNet-master/frnet-semantickitti_seg.pth
03/29 13:59:28 - mmengine - INFO - Epoch(test) [ 50/4071] eta: 0:09:50 time: 0.1467 data_time: 0.0060 memory: 1481
03/29 13:59:45 - mmengine - INFO - Epoch(test) [ 100/4071] eta: 0:16:03 time: 0.3384 data_time: 0.2037 memory: 1496
03/29 13:59:52 - mmengine - INFO - Epoch(test) [ 150/4071] eta: 0:13:35 time: 0.1389 data_time: 0.0025 memory: 1513
03/29 13:59:58 - mmengine - INFO - Epoch(test) [ 200/4071] eta: 0:12:16 time: 0.1373 data_time: 0.0025 memory: 1508
03/29 14:00:05 - mmengine - INFO - Epoch(test) [ 250/4071] eta: 0:11:28 time: 0.1394 data_time: 0.0030 memory: 1513
03/29 14:00:12 - mmengine - INFO - Epoch(test) [ 300/4071] eta: 0:10:54 time: 0.1402 data_time: 0.0026 memory: 1520
03/29 14:00:19 - mmengine - INFO - Epoch(test) [ 350/4071] eta: 0:10:27 time: 0.1399 data_time: 0.0026 memory: 1522
03/29 14:00:26 - mmengine - INFO - Epoch(test) [ 400/4071] eta: 0:10:06 time: 0.1406 data_time: 0.0026 memory: 1523
03/29 14:00:33 - mmengine - INFO - Epoch(test) [ 450/4071] eta: 0:09:48 time: 0.1404 data_time: 0.0025 memory: 1522
03/29 14:00:40 - mmengine - INFO - Epoch(test) [ 500/4071] eta: 0:09:31 time: 0.1395 data_time: 0.0025 memory: 1518
03/29 14:00:47 - mmengine - INFO - Epoch(test) [ 550/4071] eta: 0:09:17 time: 0.1392 data_time: 0.0027 memory: 1505
03/29 14:00:54 - mmengine - INFO - Epoch(test) [ 600/4071] eta: 0:09:02 time: 0.1343 data_time: 0.0025 memory: 1480
03/29 14:01:01 - mmengine - INFO - Epoch(test) [ 650/4071] eta: 0:08:49 time: 0.1381 data_time: 0.0024 memory: 1506
03/29 14:01:08 - mmengine - INFO - Epoch(test) [ 700/4071] eta: 0:08:38 time: 0.1405 data_time: 0.0025 memory: 1522
03/29 14:01:15 - mmengine - INFO - Epoch(test) [ 750/4071] eta: 0:08:28 time: 0.1417 data_time: 0.0026 memory: 1533
03/29 14:01:22 - mmengine - INFO - Epoch(test) [ 800/4071] eta: 0:08:18 time: 0.1409 data_time: 0.0025 memory: 1528
03/29 14:01:29 - mmengine - INFO - Epoch(test) [ 850/4071] eta: 0:08:08 time: 0.1399 data_time: 0.0026 memory: 1509
03/29 14:01:36 - mmengine - INFO - Epoch(test) [ 900/4071] eta: 0:07:57 time: 0.1339 data_time: 0.0024 memory: 1516
03/29 14:01:43 - mmengine - INFO - Epoch(test) [ 950/4071] eta: 0:07:47 time: 0.1363 data_time: 0.0024 memory: 1501
03/29 14:01:50 - mmengine - INFO - Epoch(test) [1000/4071] eta: 0:07:38 time: 0.1418 data_time: 0.0025 memory: 1532
03/29 14:01:57 - mmengine - INFO - Epoch(test) [1050/4071] eta: 0:07:30 time: 0.1413 data_time: 0.0026 memory: 1521
03/29 14:02:04 - mmengine - INFO - Epoch(test) [1100/4071] eta: 0:07:21 time: 0.1400 data_time: 0.0025 memory: 1515
03/29 14:02:11 - mmengine - INFO - Epoch(test) [1150/4071] eta: 0:07:12 time: 0.1389 data_time: 0.0026 memory: 1513
03/29 14:02:18 - mmengine - INFO - Epoch(test) [1200/4071] eta: 0:07:04 time: 0.1396 data_time: 0.0025 memory: 1519
03/29 14:02:24 - mmengine - INFO - Epoch(test) [1250/4071] eta: 0:06:55 time: 0.1347 data_time: 0.0026 memory: 1465
03/29 14:02:31 - mmengine - INFO - Epoch(test) [1300/4071] eta: 0:06:47 time: 0.1365 data_time: 0.0028 memory: 1510
03/29 14:02:38 - mmengine - INFO - Epoch(test) [1350/4071] eta: 0:06:39 time: 0.1410 data_time: 0.0025 memory: 1512
03/29 14:02:45 - mmengine - INFO - Epoch(test) [1400/4071] eta: 0:06:31 time: 0.1406 data_time: 0.0025 memory: 1522
03/29 14:02:52 - mmengine - INFO - Epoch(test) [1450/4071] eta: 0:06:23 time: 0.1400 data_time: 0.0026 memory: 1511
Traceback (most recent call last):
  File "test.py", line 146, in <module>
    main()
  File "test.py", line 142, in main
    runner.test()
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1823, in test
    metrics = self.test_loop.run()  # type: ignore
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/runner/loops.py", line 434, in run
    for idx, data_batch in enumerate(self.dataloader):
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
OSError: Caught OSError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 403, in __getitem__
    data = self.prepare_data(idx)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmdet3d/datasets/seg3d_dataset.py", line 305, in prepare_data
    return super().prepare_data(idx)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 793, in prepare_data
    return self.pipeline(data_info)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 60, in __call__
    data = t(data)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmdet3d/datasets/transforms/loading.py", line 646, in transform
    points = self._load_points(pts_file_path)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmdet3d/datasets/transforms/loading.py", line 622, in _load_points
    pts_bytes = get(pts_filename, backend_args=self.backend_args)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/fileio/io.py", line 181, in get
    return backend.get(filepath)
  File "/home/xhy/miniconda3/envs/frnet/lib/python3.8/site-packages/mmengine/fileio/backends/local_backend.py", line 34, in get
    value = f.read()
OSError: [Errno 5] Input/output error

Could you please advise on how to resolve this issue? Your help is greatly appreciated.

fast-FRNet

Can you provide the config and weights for Fast-FRNet? Thanks!

run on customized dataset

Hi! I found it performs wonderfully on SemanticKITTI! Now I have some data I collected myself, with street scenes similar to KITTI. If I want to run your model on it directly, do I just need to format my data like SemanticKITTI and run it as the SemanticKITTI test set, or do I need to do something more? Thank you!
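
On the formatting half of the question, a sketch of the SemanticKITTI scan layout (a well-known format, not repository-specific): each frame is a flat float32 .bin of (x, y, z, intensity) rows, with intensity typically scaled to [0, 1]. `xyz` (N, 3) and `intensity` (N,) are placeholders for one frame of the custom data.

import numpy as np

scan = np.hstack([xyz, intensity[:, None]]).astype(np.float32)
# layout: sequences/<seq>/velodyne/<frame>.bin
scan.tofile('sequences/00/velodyne/000000.bin')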
