polylanenet's Introduction

PolyLaneNet

Method overview

Description

Code for the PolyLaneNet paper, accepted to ICPR 2020, by Lucas Tabelini, Thiago M. Paixão, Rodrigo F. Berriel, Claudine Badue, Alberto F. De Souza, and Thiago Oliveira-Santos.

News: The source code for our new state-of-the-art lane detection method, LaneATT, has been released. Check it out here.

Table of Contents

  1. Installation
  2. Usage
  3. Reproducing the paper results

Installation

The code requires Python 3 and has been tested on Python 3.5.2, but it should work on newer versions of Python too.

Install dependencies:

pip install -r requirements.txt

Usage

Training

Every training setting is specified through a YAML configuration file. Thus, in order to train a model, you will have to set up a configuration file. An example is shown below:

# Training settings
exps_dir: 'experiments' # Path to the root for the experiments directory (not only the one you will run)
iter_log_interval: 1 # Log training iteration every N iterations
iter_time_window: 100 # Moving average iterations window for the printed loss metric
model_save_interval: 1 # Save model every N epochs
seed: 0 # Seed for randomness
backup: drive:polylanenet-experiments # The experiment directory will be automatically uploaded using rclone after the training ends. Leave empty if you do not want this.
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs); see the note after this config
    pretrained: true
    backbone: 'efficientnet-b0'
    pred_category: false
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385

# Testing settings
test_parameters:
  conf_threshold: 0.5 # Set predictions with confidence lower than this to 0 (i.e., set as invalid for the metrics)

# Dataset settings
datasets:
  train:
    type: PointsDataset
    parameters:
      dataset: tusimple
      split: train
      img_size: [360, 640]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations: # ImgAug augmentations
       - name: Affine
         parameters:
           rotate: !!python/tuple [-10, 10]
       - name: HorizontalFlip
         parameters:
           p: 0.5
       - name: CropToFixedSize
         parameters:
           width: 1152
           height: 648
      root: "datasets/tusimple" # Dataset root

  test: &test
    type: PointsDataset
    parameters:
      dataset: tusimple
      split: val
      img_size: [360, 640]
      root: "datasets/tusimple"
      normalize: true
      augmentations: []

  # val = test
  val:
    <<: *test
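
A side note on num_outputs: the value 35 in the example above is just 5 lanes times 7 values per lane (1 confidence score, 2 vertical limits, and 4 polynomial coefficients), as the comment says. A minimal sketch of that arithmetic (illustrative only, not code from the repository; the function name is made up):

def compute_num_outputs(max_lanes=5, poly_degree=3):
    # Each lane: 1 confidence + 2 vertical limits (upper/lower) + (degree + 1) coefficients
    per_lane = 1 + 2 + (poly_degree + 1)
    return max_lanes * per_lane

assert compute_num_outputs(5, 3) == 35  # matches num_outputs: 35 in the example config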

With the config file created, run the training script:

python train.py --exp_name tusimple --cfg config.yaml

This script's options are:

  --exp_name            Experiment name.
  --cfg                 Config file for the training (.yaml)
  --resume              Resume training. If a training session was interrupted, run it again with the same arguments and this option to resume training from the last checkpoint (see the example below).
  --validate            Whether to validate during the training session. This was not used in our experiments, which means it has not been thoroughly tested.
  --deterministic       Set cudnn.deterministic = True and cudnn.benchmark = False.
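
For example, to resume an interrupted training session with the same settings:

python train.py --exp_name tusimple --cfg config.yaml --resume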

Testing

After training, run the test.py script to get the metrics:

python test.py --exp_name tusimple --cfg config.yaml --epoch 2695

This script's options are:

  --exp_name            Experiment name.
  --cfg                 Config file for the test (.yaml), probably the same one used in training.
  --epoch EPOCH         Epoch to test the model on.
  --batch_size          Number of images per batch.
  --view                Show predictions. Draws the predictions on an image and then shows it with cv.imshow (see the example below).
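
For example, to visualize the predictions of the experiment above while computing the metrics:

python test.py --exp_name tusimple --cfg config.yaml --epoch 2695 --view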

If you have any issues with either training or testing feel free to open an issue.

Reproducing the paper results

Models

All models trained for the paper can be found here.

Datasets

How to

To reproduce the results, you can either retrain a model with the same settings (which should yield results pretty close to the reported ones) or just test the model. If you want to retrain, you only need the appropriate YAML settings file, which you can find in the cfgs directory. If you just want to reproduce the exact reported metrics by testing the model, you'll have to:

  1. Download the experiment directory. You don't need to download all model checkpoints; you'll only need the last one (model_2695.pt, except for the experiments on ELAS and LLAMAS).
  2. Modify all path-related fields (i.e., dataset paths and exps_dir) in the config.yaml file inside the experiment directory.
  3. Move the downloaded experiment to your exps_dir folder.

Then, run:

python test.py --exp_name $exp_name --cfg $exps_dir/$exp_name/config.yaml --epoch 2695

Replacing $exp_name with the name of the directory you downloaded (the name of the experiment) and $exps_dir with the exps_dir value you defined inside the config.yaml file. The script will look for a directory named $exps_dir/$exp_name/models to load the model.
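
For example, if exps_dir is 'experiments' and the downloaded experiment is named tusimple, the command would be:

python test.py --exp_name tusimple --cfg experiments/tusimple/config.yaml --epoch 2695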

polylanenet's People

Contributors

dependabot[bot], lucastabelini


polylanenet's Issues

93% Accuracy model?

Hi, thanks for the wonderful implementation of PolyLaneNet. In the Google Drive with the saved models, which folder should I be looking at to get the model with ~93% accuracy?
Thanks!

problem about speed

Sorry, I have one more question. Unlike LaneATT, I did not find a separate file in PolyLaneNet to test the speed. I got 1008 FPS after running test.py. Should I run another file to test the real speed?

A consultation on the GPU number

Dear author,
Could you please tell me whether the code can run on multiple GPU devices during training, without any further work on my part?

'device' not defined

When I try to run 'python train.py --validate', something goes wrong. Maybe the test function in test.py should take a 'device' parameter so that train.py with '--validate' can work.

A problem about modify polynomial degree

Congratulations on your great achievements. I am a beginner, I am very interested in your results, and I want to reproduce the data in your paper. If I want to modify the polynomial degree, where should I change it? Thank you very much.

A problem about Test.py

When I executed this:

python3 test.py --exp_name tusimple --cfg config.yaml --epoch 2695

[2021-03-09 08:49:42,416] [INFO] Starting testing.
[2021-03-09 08:49:42,583] [ERROR] Uncaught exception
Traceback (most recent call last):
  File "test.py", line 159, in <module>
    _, mean_loss = test(model, test_loader, evaluator, exp_root, cfg, epoch=test_epoch, view=args.view)
  File "test.py", line 23, in test
    model.load_state_dict(torch.load(os.path.join(exp_root, "models", "model_{:03d}.pt".format(epoch)))['model'])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1224, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PolyRegression:
Missing key(s) in state_dict: "model._conv_stem.weight", "model._bn0.weight", "model._bn0.bias", "model._bn0.running_mean", "model._bn0.running_var", "model._blocks.0._depthwise_conv.weight", "model._blocks.0._bn1.weight", "model._blocks.0._bn1.bias", "model._blocks.0._bn1.running_mean", "model._blocks.0._bn1.running_var", "model._blocks.0._se_reduce.weight", "model._blocks.0._se_reduce.bias", "model._blocks.0._se_expand.weight", "model._blocks.0._se_expand.bias", "model._blocks.0._project_conv.weight", "model._blocks.0._bn2.weight", "model._blocks.0._bn2.bias", "model._blocks.0._bn2.running_mean", "model._blocks.0._bn2.running_var", "model._blocks.1._expand_conv.weight", "model._blocks.1._bn0.weight", "model._blocks.1._bn0.bias", "model._blocks.1._bn0.running_mean", "model._blocks.1._bn0.running_var", "model._blocks.1._depthwise_conv.weight", "model._blocks.1._bn1.weight", "model._blocks.1._bn1.bias", "model._blocks.1._bn1.running_mean", "model._blocks.1._bn1.running_var", "model._blocks.1._se_reduce.weight", "model._blocks.1._se_reduce.bias", "model._blocks.1._se_expand.weight", "model._blocks.1._se_expand.bias", "model._blocks.1._project_conv.weight", "model._blocks.1._bn2.weight", "model._blocks.1._bn2.bias", "model._blocks.1._bn2.running_mean", "model._blocks.1._bn2.running_var", "model._blocks.2._expand_conv.weight", "model._blocks.2._bn0.weight", "model._blocks.2._bn0.bias", "model._blocks.2._bn0.running_mean", "model._blocks.2._bn0.running_var", "model._blocks.2._depthwise_conv.weight", "model._blocks.2._bn1.weight", "model._blocks.2._bn1.bias", "model._blocks.2._bn1.running_mean", "model._blocks.2._bn1.running_var", "model._blocks.2._se_reduce.weight", "model._blocks.2._se_reduce.bias", "model._blocks.2._se_expand.weight", "model._blocks.2._se_expand.bias", "model._blocks.2._project_conv.weight", "model._blocks.2._bn2.weight", "model._blocks.2._bn2.bias", "model._blocks.2._bn2.running_mean", "model._blocks.2._bn2.running_var", "model._blocks.3._expand_conv.weight", "model._blocks.3._bn0.weight", "model._blocks.3._bn0.bias", "model._blocks.3._bn0.running_mean", "model._blocks.3._bn0.running_var", "model._blocks.3._depthwise_conv.weight", "model._blocks.3._bn1.weight", "model._blocks.3._bn1.bias", "model._blocks.3._bn1.running_mean", "model._blocks.3._bn1.running_var", "model._blocks.3._se_reduce.weight", "model._blocks.3._se_reduce.bias", "model._blocks.3._se_expand.weight", "model._blocks.3._se_expand.bias", "model._blocks.3._project_conv.weight", "model._blocks.3._bn2.weight", "model._blocks.3._bn2.bias", "model._blocks.3._bn2.running_mean", "model._blocks.3._bn2.running_var", "model._blocks.4._expand_conv.weight", "model._blocks.4._bn0.weight", "model._blocks.4._bn0.bias", "model._blocks.4._bn0.running_mean", "model._blocks.4._bn0.running_var", "model._blocks.4._depthwise_conv.weight", "model._blocks.4._bn1.weight", "model._blocks.4._bn1.bias", "model._blocks.4._bn1.running_mean", "model._blocks.4._bn1.running_var", "model._blocks.4._se_reduce.weight", "model._blocks.4._se_reduce.bias", "model._blocks.4._se_expand.weight", "model._blocks.4._se_expand.bias", "model._blocks.4._project_conv.weight", "model._blocks.4._bn2.weight", "model._blocks.4._bn2.bias", "model._blocks.4._bn2.running_mean", "model._blocks.4._bn2.running_var", "model._blocks.5._expand_conv.weight", "model._blocks.5._bn0.weight", "model._blocks.5._bn0.bias", "model._blocks.5._bn0.running_mean", "model._blocks.5._bn0.running_var", "model._blocks.5._depthwise_conv.weight", "model._blocks.5._bn1.weight", 
"model._blocks.5._bn1.bias", "model._blocks.5._bn1.running_mean", "model._blocks.5._bn1.running_var", "model._blocks.5._se_reduce.weight", "model._blocks.5._se_reduce.bias", "model._blocks.5._se_expand.weight", "model._blocks.5._se_expand.bias", "model._blocks.5._project_conv.weight", "model._blocks.5._bn2.weight", "model._blocks.5._bn2.bias", "model._blocks.5._bn2.running_mean", "model._blocks.5._bn2.running_var", "model._blocks.6._expand_conv.weight", "model._blocks.6._bn0.weight", "model._blocks.6._bn0.bias", "model._blocks.6._bn0.running_mean", "model._blocks.6._bn0.running_var", "model._blocks.6._depthwise_conv.weight", "model._blocks.6._bn1.weight", "model._blocks.6._bn1.bias", "model._blocks.6._bn1.running_mean", "model._blocks.6._bn1.running_var", "model._blocks.6._se_reduce.weight", "model._blocks.6._se_reduce.bias", "model._blocks.6._se_expand.weight", "model._blocks.6._se_expand.bias", "model._blocks.6._project_conv.weight", "model._blocks.6._bn2.weight", "model._blocks.6._bn2.bias", "model._blocks.6._bn2.running_mean", "model._blocks.6._bn2.running_var", "model._blocks.7._expand_conv.weight", "model._blocks.7._bn0.weight", "model._blocks.7._bn0.bias", "model._blocks.7._bn0.running_mean", "model._blocks.7._bn0.running_var", "model._blocks.7._depthwise_conv.weight", "model._blocks.7._bn1.weight", "model._blocks.7._bn1.bias", "model._blocks.7._bn1.running_mean", "model._blocks.7._bn1.running_var", "model._blocks.7._se_reduce.weight", "model._blocks.7._se_reduce.bias", "model._blocks.7._se_expand.weight", "model._blocks.7._se_expand.bias", "model._blocks.7._project_conv.weight", "model._blocks.7._bn2.weight", "model._blocks.7._bn2.bias", "model._blocks.7._bn2.running_mean", "model._blocks.7._bn2.running_var", "model._blocks.8._expand_conv.weight", "model._blocks.8._bn0.weight", "model._blocks.8._bn0.bias", "model._blocks.8._bn0.running_mean", "model._blocks.8._bn0.running_var", "model._blocks.8._depthwise_conv.weight", "model._blocks.8._bn1.weight", "model._blocks.8._bn1.bias", "model._blocks.8._bn1.running_mean", "model._blocks.8._bn1.running_var", "model._blocks.8._se_reduce.weight", "model._blocks.8._se_reduce.bias", "model._blocks.8._se_expand.weight", "model._blocks.8._se_expand.bias", "model._blocks.8._project_conv.weight", "model._blocks.8._bn2.weight", "model._blocks.8._bn2.bias", "model._blocks.8._bn2.running_mean", "model._blocks.8._bn2.running_var", "model._blocks.9._expand_conv.weight", "model._blocks.9._bn0.weight", "model._blocks.9._bn0.bias", "model._blocks.9._bn0.running_mean", "model._blocks.9._bn0.running_var", "model._blocks.9._depthwise_conv.weight", "model._blocks.9._bn1.weight", "model._blocks.9._bn1.bias", "model._blocks.9._bn1.running_mean", "model._blocks.9._bn1.running_var", "model._blocks.9._se_reduce.weight", "model._blocks.9._se_reduce.bias", "model._blocks.9._se_expand.weight", "model._blocks.9._se_expand.bias", "model._blocks.9._project_conv.weight", "model._blocks.9._bn2.weight", "model._blocks.9._bn2.bias", "model._blocks.9._bn2.running_mean", "model._blocks.9._bn2.running_var", "model._blocks.10._expand_conv.weight", "model._blocks.10._bn0.weight", "model._blocks.10._bn0.bias", "model._blocks.10._bn0.running_mean", "model._blocks.10._bn0.running_var", "model._blocks.10._depthwise_conv.weight", "model._blocks.10._bn1.weight", "model._blocks.10._bn1.bias", "model._blocks.10._bn1.running_mean", "model._blocks.10._bn1.running_var", "model._blocks.10._se_reduce.weight", "model._blocks.10._se_reduce.bias", "model._blocks.10._se_expand.weight", 
"model._blocks.10._se_expand.bias", "model._blocks.10._project_conv.weight", "model._blocks.10._bn2.weight", "model._blocks.10._bn2.bias", "model._blocks.10._bn2.running_mean", "model._blocks.10._bn2.running_var", "model._blocks.11._expand_conv.weight", "model._blocks.11._bn0.weight", "model._blocks.11._bn0.bias", "model._blocks.11._bn0.running_mean", "model._blocks.11._bn0.running_var", "model._blocks.11._depthwise_conv.weight", "model._blocks.11._bn1.weight", "model._blocks.11._bn1.bias", "model._blocks.11._bn1.running_mean", "model._blocks.11._bn1.running_var", "model._blocks.11._se_reduce.weight", "model._blocks.11._se_reduce.bias", "model._blocks.11._se_expand.weight", "model._blocks.11._se_expand.bias", "model._blocks.11._project_conv.weight", "model._blocks.11._bn2.weight", "model._blocks.11._bn2.bias", "model._blocks.11._bn2.running_mean", "model._blocks.11._bn2.running_var", "model._blocks.12._expand_conv.weight", "model._blocks.12._bn0.weight", "model._blocks.12._bn0.bias", "model._blocks.12._bn0.running_mean", "model._blocks.12._bn0.running_var", "model._blocks.12._depthwise_conv.weight", "model._blocks.12._bn1.weight", "model._blocks.12._bn1.bias", "model._blocks.12._bn1.running_mean", "model._blocks.12._bn1.running_var", "model._blocks.12._se_reduce.weight", "model._blocks.12._se_reduce.bias", "model._blocks.12._se_expand.weight", "model._blocks.12._se_expand.bias", "model._blocks.12._project_conv.weight", "model._blocks.12._bn2.weight", "model._blocks.12._bn2.bias", "model._blocks.12._bn2.running_mean", "model._blocks.12._bn2.running_var", "model._blocks.13._expand_conv.weight", "model._blocks.13._bn0.weight", "model._blocks.13._bn0.bias", "model._blocks.13._bn0.running_mean", "model._blocks.13._bn0.running_var", "model._blocks.13._depthwise_conv.weight", "model._blocks.13._bn1.weight", "model._blocks.13._bn1.bias", "model._blocks.13._bn1.running_mean", "model._blocks.13._bn1.running_var", "model._blocks.13._se_reduce.weight", "model._blocks.13._se_reduce.bias", "model._blocks.13._se_expand.weight", "model._blocks.13._se_expand.bias", "model._blocks.13._project_conv.weight", "model._blocks.13._bn2.weight", "model._blocks.13._bn2.bias", "model._blocks.13._bn2.running_mean", "model._blocks.13._bn2.running_var", "model._blocks.14._expand_conv.weight", "model._blocks.14._bn0.weight", "model._blocks.14._bn0.bias", "model._blocks.14._bn0.running_mean", "model._blocks.14._bn0.running_var", "model._blocks.14._depthwise_conv.weight", "model._blocks.14._bn1.weight", "model._blocks.14._bn1.bias", "model._blocks.14._bn1.running_mean", "model._blocks.14._bn1.running_var", "model._blocks.14._se_reduce.weight", "model._blocks.14._se_reduce.bias", "model._blocks.14._se_expand.weight", "model._blocks.14._se_expand.bias", "model._blocks.14._project_conv.weight", "model._blocks.14._bn2.weight", "model._blocks.14._bn2.bias", "model._blocks.14._bn2.running_mean", "model._blocks.14._bn2.running_var", "model._blocks.15._expand_conv.weight", "model._blocks.15._bn0.weight", "model._blocks.15._bn0.bias", "model._blocks.15._bn0.running_mean", "model._blocks.15._bn0.running_var", "model._blocks.15._depthwise_conv.weight", "model._blocks.15._bn1.weight", "model._blocks.15._bn1.bias", "model._blocks.15._bn1.running_mean", "model._blocks.15._bn1.running_var", "model._blocks.15._se_reduce.weight", "model._blocks.15._se_reduce.bias", "model._blocks.15._se_expand.weight", "model._blocks.15._se_expand.bias", "model._blocks.15._project_conv.weight", "model._blocks.15._bn2.weight", "model._blocks.15._bn2.bias", 
"model._blocks.15._bn2.running_mean", "model._blocks.15._bn2.running_var", "model._conv_head.weight", "model._bn1.weight", "model._bn1.bias", "model._bn1.running_mean", "model._bn1.running_var", "model._fc.regular_outputs_layer.weight", "model._fc.regular_outputs_layer.bias".
Unexpected key(s) in state_dict: "model.conv1.weight", "model.bn1.weight", "model.bn1.bias", "model.bn1.running_mean", "model.bn1.running_var", "model.bn1.num_batches_tracked", "model.layer1.0.conv1.weight", "model.layer1.0.bn1.weight", "model.layer1.0.bn1.bias", "model.layer1.0.bn1.running_mean", "model.layer1.0.bn1.running_var", "model.layer1.0.bn1.num_batches_tracked", "model.layer1.0.conv2.weight", "model.layer1.0.bn2.weight", "model.layer1.0.bn2.bias", "model.layer1.0.bn2.running_mean", "model.layer1.0.bn2.running_var", "model.layer1.0.bn2.num_batches_tracked", "model.layer1.1.conv1.weight", "model.layer1.1.bn1.weight", "model.layer1.1.bn1.bias", "model.layer1.1.bn1.running_mean", "model.layer1.1.bn1.running_var", "model.layer1.1.bn1.num_batches_tracked", "model.layer1.1.conv2.weight", "model.layer1.1.bn2.weight", "model.layer1.1.bn2.bias", "model.layer1.1.bn2.running_mean", "model.layer1.1.bn2.running_var", "model.layer1.1.bn2.num_batches_tracked", "model.layer1.2.conv1.weight", "model.layer1.2.bn1.weight", "model.layer1.2.bn1.bias", "model.layer1.2.bn1.running_mean", "model.layer1.2.bn1.running_var", "model.layer1.2.bn1.num_batches_tracked", "model.layer1.2.conv2.weight", "model.layer1.2.bn2.weight", "model.layer1.2.bn2.bias", "model.layer1.2.bn2.running_mean", "model.layer1.2.bn2.running_var", "model.layer1.2.bn2.num_batches_tracked", "model.layer2.0.conv1.weight", "model.layer2.0.bn1.weight", "model.layer2.0.bn1.bias", "model.layer2.0.bn1.running_mean", "model.layer2.0.bn1.running_var", "model.layer2.0.bn1.num_batches_tracked", "model.layer2.0.conv2.weight", "model.layer2.0.bn2.weight", "model.layer2.0.bn2.bias", "model.layer2.0.bn2.running_mean", "model.layer2.0.bn2.running_var", "model.layer2.0.bn2.num_batches_tracked", "model.layer2.0.downsample.0.weight", "model.layer2.0.downsample.1.weight", "model.layer2.0.downsample.1.bias", "model.layer2.0.downsample.1.running_mean", "model.layer2.0.downsample.1.running_var", "model.layer2.0.downsample.1.num_batches_tracked", "model.layer2.1.conv1.weight", "model.layer2.1.bn1.weight", "model.layer2.1.bn1.bias", "model.layer2.1.bn1.running_mean", "model.layer2.1.bn1.running_var", "model.layer2.1.bn1.num_batches_tracked", "model.layer2.1.conv2.weight", "model.layer2.1.bn2.weight", "model.layer2.1.bn2.bias", "model.layer2.1.bn2.running_mean", "model.layer2.1.bn2.running_var", "model.layer2.1.bn2.num_batches_tracked", "model.layer2.2.conv1.weight", "model.layer2.2.bn1.weight", "model.layer2.2.bn1.bias", "model.layer2.2.bn1.running_mean", "model.layer2.2.bn1.running_var", "model.layer2.2.bn1.num_batches_tracked", "model.layer2.2.conv2.weight", "model.layer2.2.bn2.weight", "model.layer2.2.bn2.bias", "model.layer2.2.bn2.running_mean", "model.layer2.2.bn2.running_var", "model.layer2.2.bn2.num_batches_tracked", "model.layer2.3.conv1.weight", "model.layer2.3.bn1.weight", "model.layer2.3.bn1.bias", "model.layer2.3.bn1.running_mean", "model.layer2.3.bn1.running_var", "model.layer2.3.bn1.num_batches_tracked", "model.layer2.3.conv2.weight", "model.layer2.3.bn2.weight", "model.layer2.3.bn2.bias", "model.layer2.3.bn2.running_mean", "model.layer2.3.bn2.running_var", "model.layer2.3.bn2.num_batches_tracked", "model.layer3.0.conv1.weight", "model.layer3.0.bn1.weight", "model.layer3.0.bn1.bias", "model.layer3.0.bn1.running_mean", "model.layer3.0.bn1.running_var", "model.layer3.0.bn1.num_batches_tracked", "model.layer3.0.conv2.weight", "model.layer3.0.bn2.weight", "model.layer3.0.bn2.bias", "model.layer3.0.bn2.running_mean", "model.layer3.0.bn2.running_var", 
"model.layer3.0.bn2.num_batches_tracked", "model.layer3.0.downsample.0.weight", "model.layer3.0.downsample.1.weight", "model.layer3.0.downsample.1.bias", "model.layer3.0.downsample.1.running_mean", "model.layer3.0.downsample.1.running_var", "model.layer3.0.downsample.1.num_batches_tracked", "model.layer3.1.conv1.weight", "model.layer3.1.bn1.weight", "model.layer3.1.bn1.bias", "model.layer3.1.bn1.running_mean", "model.layer3.1.bn1.running_var", "model.layer3.1.bn1.num_batches_tracked", "model.layer3.1.conv2.weight", "model.layer3.1.bn2.weight", "model.layer3.1.bn2.bias", "model.layer3.1.bn2.running_mean", "model.layer3.1.bn2.running_var", "model.layer3.1.bn2.num_batches_tracked", "model.layer3.2.conv1.weight", "model.layer3.2.bn1.weight", "model.layer3.2.bn1.bias", "model.layer3.2.bn1.running_mean", "model.layer3.2.bn1.running_var", "model.layer3.2.bn1.num_batches_tracked", "model.layer3.2.conv2.weight", "model.layer3.2.bn2.weight", "model.layer3.2.bn2.bias", "model.layer3.2.bn2.running_mean", "model.layer3.2.bn2.running_var", "model.layer3.2.bn2.num_batches_tracked", "model.layer3.3.conv1.weight", "model.layer3.3.bn1.weight", "model.layer3.3.bn1.bias", "model.layer3.3.bn1.running_mean", "model.layer3.3.bn1.running_var", "model.layer3.3.bn1.num_batches_tracked", "model.layer3.3.conv2.weight", "model.layer3.3.bn2.weight", "model.layer3.3.bn2.bias", "model.layer3.3.bn2.running_mean", "model.layer3.3.bn2.running_var", "model.layer3.3.bn2.num_batches_tracked", "model.layer3.4.conv1.weight", "model.layer3.4.bn1.weight", "model.layer3.4.bn1.bias", "model.layer3.4.bn1.running_mean", "model.layer3.4.bn1.running_var", "model.layer3.4.bn1.num_batches_tracked", "model.layer3.4.conv2.weight", "model.layer3.4.bn2.weight", "model.layer3.4.bn2.bias", "model.layer3.4.bn2.running_mean", "model.layer3.4.bn2.running_var", "model.layer3.4.bn2.num_batches_tracked", "model.layer3.5.conv1.weight", "model.layer3.5.bn1.weight", "model.layer3.5.bn1.bias", "model.layer3.5.bn1.running_mean", "model.layer3.5.bn1.running_var", "model.layer3.5.bn1.num_batches_tracked", "model.layer3.5.conv2.weight", "model.layer3.5.bn2.weight", "model.layer3.5.bn2.bias", "model.layer3.5.bn2.running_mean", "model.layer3.5.bn2.running_var", "model.layer3.5.bn2.num_batches_tracked", "model.layer4.0.conv1.weight", "model.layer4.0.bn1.weight", "model.layer4.0.bn1.bias", "model.layer4.0.bn1.running_mean", "model.layer4.0.bn1.running_var", "model.layer4.0.bn1.num_batches_tracked", "model.layer4.0.conv2.weight", "model.layer4.0.bn2.weight", "model.layer4.0.bn2.bias", "model.layer4.0.bn2.running_mean", "model.layer4.0.bn2.running_var", "model.layer4.0.bn2.num_batches_tracked", "model.layer4.0.downsample.0.weight", "model.layer4.0.downsample.1.weight", "model.layer4.0.downsample.1.bias", "model.layer4.0.downsample.1.running_mean", "model.layer4.0.downsample.1.running_var", "model.layer4.0.downsample.1.num_batches_tracked", "model.layer4.1.conv1.weight", "model.layer4.1.bn1.weight", "model.layer4.1.bn1.bias", "model.layer4.1.bn1.running_mean", "model.layer4.1.bn1.running_var", "model.layer4.1.bn1.num_batches_tracked", "model.layer4.1.conv2.weight", "model.layer4.1.bn2.weight", "model.layer4.1.bn2.bias", "model.layer4.1.bn2.running_mean", "model.layer4.1.bn2.running_var", "model.layer4.1.bn2.num_batches_tracked", "model.layer4.2.conv1.weight", "model.layer4.2.bn1.weight", "model.layer4.2.bn1.bias", "model.layer4.2.bn1.running_mean", "model.layer4.2.bn1.running_var", "model.layer4.2.bn1.num_batches_tracked", "model.layer4.2.conv2.weight", 
"model.layer4.2.bn2.weight", "model.layer4.2.bn2.bias", "model.layer4.2.bn2.running_mean", "model.layer4.2.bn2.running_var", "model.layer4.2.bn2.num_batches_tracked", "model.fc.regular_outputs_layer.weight", "model.fc.regular_outputs_layer.bias".

load_state_dict seems to be causing the problem.

How can I solve this?

Thank you.

The question about LPD

@lucastabelini Hi, PolyLaneNet is an amazing work. I see that you use LPD in your paper; however, I cannot find the LPD script. I guess the LPD code is the eval_json function in utils/metric.py, but I am not quite sure. Could you tell me where the LPD code is?
Thank you very much!

problem about acc

Hello. Thank you for your contribution. I want to know: what is the difference between the two Acc values in the pictures below? What do they mean? My run gives about 88%.

[two screenshots of the reported accuracy metrics]

trouble in Reproducing the paper results

Hello, my English is very poor and I don't understand how to replace the parameter information in the text. Could you attach some pictures?

For example:

[screenshot]

reproduce problem

Thanks for your contribution. When I try to reproduce the experiment results with the TuSimple dataset, I find that 1) the cls_loss is always 0 (is that normal?) and 2) running test.py with the 500th-epoch checkpoint gives low accuracy. Can you give me some advice?

[two screenshots of training and testing output]

The problem of predicting performance

Hello, I used your method to train for 2695 epochs, but the test results are not very satisfactory, as shown in the figures below. Did I do something wrong?

[two screenshots of test results]

How do you arrange the directory for the LLAMAS dataset?

I have downloaded the LLAMAS dataset and there are two zip files: labels.zip and color_images.zip. There seems to be no officially recommended way to organize the directory, so how do you arrange yours?

Is it possible to detect lanes from another point of view?

I would like to detect lanes in an image, but not from the viewpoint of the moving car. For example, the lanes are horizontal in the image. Is this possible? Are there any implemented restrictions, like the sorting of the lanes, that prevent this?

A light-weight backbone?

Thanks for sharing your great work with us. Is it possible to replace the feature extraction backbone with a smaller one, such as MobileNet, ShuffleNet, or SqueezeNet?

Using the results of linear regression to train the model

Hi, thanks for your code!

I noticed that the performance of the model is not as good as the performance of linear regression with the same MSE loss.
Do they have any difference?
Have you ever tried letting the model learn the polynomial coefficients (from linear regression) directly?
(Although I think they should be the same.)

best regards

Division by 0 error

Hi,
I am trying to test the trained model on my custom image dataset. I modified the config.yaml file as per the suggestions in another issue but I get the following error

Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm\helpers\pydev\pydevd.py", line 2060, in <module>
    main()
  File "C:\Program Files\JetBrains\PyCharm\helpers\pydev\pydevd.py", line 2054, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "C:\Program Files\JetBrains\PyCharm\helpers\pydev\pydevd.py", line 1405, in run
    return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
  File "C:\Program Files\JetBrains\PyCharm\helpers\pydev\pydevd.py", line 1412, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "C:/Users/Xyedj/PycharmProjects/PolyLaneNet-master/test.py", line 162, in <module>
    _, mean_loss = test(model, test_loader, evaluator, exp_root, cfg, epoch=test_epoch, view=args.view)
  File "C:/Users/Xyedj/PycharmProjects/PolyLaneNet-master/test.py", line 78, in test
    return evaluator, loss / total_iters
ZeroDivisionError: division by zero

I believe it has something to do with the dataset I am using. Can you specify the dataset format to be used with test.py? Do I only need the test images in .jpg, or should I also have their associated annotations?

About the GPU device

Hello, everyone.
As the paper says, PolyLaneNet can run at 115 FPS on a recent GPU.
I am very interested in your work; we are also working on lane detection.
I would like to know which GPU you used for this work.

Thanks for your answer.

Urgent request for help

I just want to reproduce the exact reported metrics by testing the model, but it always shows: AttributeError: 'NoneType' object has no attribute 'shape'.
Maybe the code can't find the test images, so could you tell me what's wrong with the layout below?
I have downloaded the TuSimple dataset; it has two folders, test_set and train_set,
so I put the 0531 folder in the datasets folder (the 0531 folder is from test_set\clips\0531).
The details are as follows:

├─cfgs
├─figures
├─datasets
│  └─tusimple
│     ├─label_data_0531.json
│     └─0531
│        ├─1492626253262712112
│        │  ├─1.jpg
│        │  ├─2.jpg
│        │  ├─...
│        │  └─20.jpg
│        ├─1492626271917313999
│        ├─...
│        └─1495489446318620134
├─experiments
├─lib
│  └─datasets
├─utils
├─test.py
├─train.py
└─config.yaml

and the root in config.yaml is root: "datasets/tusimple".

Tensorflow implementation

Hello. You did a very good job.
I am trying my own implementation in TensorFlow on Google Colab, but it is not working as expected. The loss stays around 78 and does not decrease any further.
Could you find some time to review it?

Question about sharing the horizon

@lucastabelini Hi! Great code you are providing here! But I have a question.
Is the sharing of the horizon between lanes a possible reason for the low upper-bound performance (since the labels don't accurately indicate the horizon line)? My colleague just tried directly fitting each lane in the TuSimple test set with 2nd/3rd-order polynomials and got almost 100% accuracy, but in your paper the upper bound is only ~98%.
By the way, the FP and FN in your Table 3 seem rather too low for a <98% Acc. I'm not sure where the problem is.

Why is my output like this?

Hello, thanks for your great work.
When I train PolyLaneNet on my own dataset (about 30,000 images) for about 50 epochs
and run the demo, the output looks like this. What is the problem?

Test problem?

Hi, I use the pretrained model_2695.pt to test my own dataset, but the output looks like this:

[prediction screenshot]

Below is my config.yaml:

# Training settings
seed: 0
exps_dir: 'experiments'
iter_log_interval: 1
iter_time_window: 100
model_save_interval: 1
backup:
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)
    pretrained: true
    backbone: 'resnet50'
    pred_category: false
    curriculum_steps: [0, 0, 0, 0]
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385

# Testing settings
test_parameters:
  conf_threshold: 0.5

# Dataset settings
datasets:
  train:
    type: LaneDataset
    parameters:
      dataset: nolabel_dataset
      split: train
      img_size: [540, 960]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations:
       - name: Affine
         parameters:
           rotate: !!python/tuple [-10, 10]
       - name: HorizontalFlip
         parameters:
           p: 0.5
       - name: CropToFixedSize
         parameters:
           width: 540
           height: 960
      root: "/home/share/make/PolyLaneNet/test_image"

  test: &test
    type: LaneDataset
    parameters:
      dataset: nolabel_dataset
      normalize: true # Whether to normalize the input data. Use the same value used in the pretrained model (all pretrained models that I provided used normalization, so you should leave it as it is)
      augmentations: [] # List of augmentations. You probably want to leave this empty for testing
      img_h: 540 # The height of your test images (they should all have the same size)
      img_w: 960 # The width of your test images
      img_size: [540, 960] # Yeah, this parameter is duplicated for some reason, will fix this when I get time (feel free to open a pull request :))
      max_lanes: 5 # Same number used in the pretrained model. If you use a model pretrained on TuSimple (most likely case), you'll use 5 here
      root: "/home/share/make/PolyLaneNet/test_image" # Path to the directory containing your test images. The loader will look recursively for image files in this directory
      img_ext: ".jpg" # Test images extension (e.g., .png, .jpg)

  # val = test
  val:
    <<: *test

Could you give me some advice? Thanks.

How to test my own dataset?

Hello.

I have my own dataset with a video. There is no clear solution for testing with different datasets. How can I test with my video? Must I convert it to .pt format?

Also, when I test a .pt file with the --view argument, it only shows the result for the first image and doesn't show the others.

Outputs

How can I get the outputs?

Questions about training time

Hi,
I'm curious why the number of training epochs is so high, e.g., 384 to 2000+.
Does the polynomial fitting take most of the time?

  • What is the polynomial performance at earlier epochs?
  • What is the training loss value, e.g., on the TuSimple dataset after 2000 epochs?

A question :-)

Hi, thanks for sharing your project. You provide so many models; in which case does it run at 115 FPS? Which backbone and what input size? Is it tusimple_320x180?

A problem about sharing the top-y (h) in lib/models.py line 103

if self.share_top_y:
    # inexistent lanes have -1e-5 as lower
    # i'm just setting it to a high value here so that the .min below works fine
    target_lowers[target_lowers < 0] = 1
    target_lowers[...] = target_lowers.min(dim=1, keepdim=True)[0]
    pred_lowers[...] = pred_lowers[:, 0].reshape(-1, 1).expand(pred.shape[0], pred.shape[1])

I am confused about how the code implements the sharing of the top-y (h).
The following line seems to extract the lowest point and share it:

target_lowers[...] = target_lowers.min(dim=1, keepdim=True)[0]

And the line below seems to take the value of the first column and expand it to the other columns. Why is the first column considered the lowest point?

    pred_lowers[...] = pred_lowers[:, 0].reshape(-1, 1).expand(pred.shape[0], pred.shape[1])
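
To make my question concrete, here is a toy example (shapes made up, not from the repository) of how I read these two operations:

import torch

target_lowers = torch.tensor([[0.3, -1e-5, 0.5],
                              [0.4, 0.2, -1e-5]])
target_lowers[target_lowers < 0] = 1  # inexistent lanes get a high value
target_lowers[...] = target_lowers.min(dim=1, keepdim=True)[0]  # per-image minimum broadcast to every lane
print(target_lowers)
# tensor([[0.3000, 0.3000, 0.3000],
#         [0.2000, 0.2000, 0.2000]])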

Thank you very much!

A problem about the "curriculum_steps" in lib/models.py line 65

def forward(self, x, epoch=None, **kwargs):
    output, extra_outputs = self.model(x, **kwargs)
    for i in range(len(self.curriculum_steps)):
        if epoch is not None and epoch < self.curriculum_steps[i]: 
            output[-len(self.curriculum_steps) + i] = 0
    return output, extra_outputs

I am confused about this code when running the 2nd-order experiment. If self.curriculum_steps is set to [9000, 0, 0, 0] and the batch size is 16, my understanding is as follows: the shape of output is torch.Size([16, 35]), and row 12 of output will be set to 0 (i.e., output[-4] = 0). If so, some samples in the batch are zeroed out, so the goal of setting the high-order polynomial coefficients to 0 is not achieved. Should the indexing be applied to dimension 1 instead of dimension 0? Could you tell me why the function is implemented this way? Thank you!
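
To illustrate my concern with a toy example (shapes made up, not from the repository): indexing a 2-D tensor with a single index selects along dimension 0, i.e., a whole sample of the batch:

import torch

output = torch.ones(16, 35)
output[-4 + 0] = 0           # i = 0 with curriculum_steps of length 4: zeroes row 12 (one whole sample)
print(output[12].sum())      # tensor(0.)  -- the entire sample is zeroed
print(output[:, 12].sum())   # tensor(15.) -- only one entry of this column changed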

problem in code

I'm here to bother you again. I would like to ask what the following thresholds mean. What is the difference and connection between them and the confidence score threshold of 0.5 mentioned in the paper?

lib/models.py, line 91:

[screenshot of the code at lib/models.py, line 91]

Thanks.

script to test my own images

Do you have a Python script that can be used to test my own images?
I tried to modify test.py, but without any success.

Thanks,

regarding the dataloader

Can you please explain the strategy behind this line of code?

lanes = np.ones((self.dataset.max_lanes, 1 + 2 + 2 * self.dataset.max_points), dtype=np.float32) * -1e5

What do the 1, the 2, and the 2 * self.dataset.max_points correspond to? Why is it multiplied by -1e5?

Can't visualize the output

Sorry if this seems a silly question, but I do not know what I should do with the output.
I'm trying to run prediction on frames taken from a video, and the results look like this:

tensor([[ 1.5390e+01,  2.6866e-01,  5.9231e-01, -3.7041e-01,  5.3317e-01,
         -2.5742e-01,  4.1398e-01,  1.5413e+01, -7.1203e-03,  9.4199e-01,
         -5.6711e-01,  2.8064e-01,  8.4359e-02,  4.6005e-01, -2.2299e+01,
          1.8078e-03,  8.2544e-01,  1.4614e-01,  1.0139e-01, -2.6998e-01,
          5.8584e-01, -1.2889e+00,  1.7570e-02,  7.5274e-01,  9.5632e-02,
          7.7286e-02,  3.0168e-01,  3.6816e-01,  5.5416e-01,  7.4196e-02,
          7.2463e-01, -7.6063e-02, -1.3095e-01,  7.5693e-01,  3.2309e-01]],
       grad_fn=<AddmmBackward>)

I tried to draw these points with OpenCV's polylines but couldn't.
Can you please tell me how to visualize the output?
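
What I have understood so far is only this toy sketch, based on the "(5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)" comment in the example config; the exact field order and how the polynomial is evaluated should be checked against lib/models.py and test.py:

import torch

output = torch.randn(1, 35)               # stand-in for the tensor shown above
lanes = output.view(-1, 5, 7)             # 5 lanes x 7 values each (assumed layout)
confidences = torch.sigmoid(lanes[..., 0])
keep = confidences > 0.5                  # same idea as test_parameters.conf_threshold
print(keep)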

How can I get the trained weight?

Hi, thanks for the project, it is very helpful for me.
But I cannot find the pretrained model in the Google Drive; there are only config and log files inside.

How can I get the pretrained model?

training tusimple error

When I use train.py and set 'train': ['label_data_0313.json'] in tusimple.py, I get the following error:


total annos 2858
Transforming annotations...
Done.
Loaded pretrained weights for efficientnet-b0
[2020-05-21 17:43:48,538] [INFO] Starting training.
[2020-05-21 17:43:48,538] [INFO] Beginning epoch 1
[2020-05-21 17:43:50,452] [ERROR] Uncaught exception
Traceback (most recent call last):
  File "train.py", line 237, in <module>
    train_state=train_state,
  File "train.py", line 54, in train
    loss, loss_dict_i = criterion(outputs, labels, **criterion_parameters)
  File "/home/cuda/lzc/road/PolyLaneNet/lib/models.py", line 116, in loss
    lower_loss = mse(target_lowers[valid_lanes_idx], pred_lowers[valid_lanes_idx])
IndexError: The shape of the mask [64, 1] at index 0 does not match the shape of the indexed tensor [80, 1] at index 0

But when I set 'train': ['label_data_0601.json'] in tusimple.py, training begins normally.

Do I need to label lanes in a fixed order?

Thanks for your sharing.

TuSimple labels 4 lanes per image, and they have a fixed order, like left-left, ego-left, ego-right, right-right from 0 to 3.
Does the output of PolyLaneNet (L1 ~ L5) have the same order, or does it just predict lanes in a random order?

If I want to train on my own dataset, do I need to label the lanes in a fixed order?
