
2020_carla_challenge's People

Contributors: bradyz

2020_carla_challenge's Issues

Alpha Random Controller Variable

Hello there;

In image_model.py there is a random variable alpha that seems to mix two outputs. Can you explain what these are?
- Is lbl_cam the target projected into the camera frame?
- Is out the waypoint output generated by the neural network?

(screenshot attached)

Is the data collected in map view?

Figure 1 in the paper is a bit confusing, but as far as I can understand, the data collected by the autopilot is in map view. Am I right?

Plotting image waypoints from the BEV perspective

Hey Brady,

I've been working on putting together some visualizations for the image agent and I've come up with the following:

(screenshot attached)

This is basically just stitching together the left/center/right RGB images with the BEV route waypoint image that's generated from the Plotter class in leaderboard/team_code/planner.py. I was wondering if you had an idea of how to plot the image waypoints from the perspective of the BEV image? From what I can tell, the BEV image uses GPS coordinates but I couldn't figure out a combination of Converter methods and points_* objects that got it in the right perspective.

The following code snippet (update: link is out of date) is the closest I can get. I think points_world is in GPS coordinates, and the Plotter code seems to do something similar with the 5.5x multiplier. However, the direction of the image waypoints in the BEV frame shown below looks wrong - is there something simple I'm overlooking?

(screenshot attached)

Update: I got it to work, and I think I have a better idea of how world/map/cam are related, though I'm not entirely clear on some aspects:

1. I took points_cam and transformed it to points_map using the appropriate Converter method. It looks like 5.5x is the multiplier for going from world (GPS?) units to map units.
2. I centered points_map at the origin by subtracting (128, 256), which I guess is the ego vehicle's location in the map frame.
3. I rotated it to align with the BEV by saving the rotation matrix computed in the ImageAgent.tick method and applying its inverse/transpose. I guess this is the rotation from the world axes to the ego body-frame axes or something?
4. I multiplied the rotated points_map by -1 (reflecting over the x/y axes) to get it to work - admittedly I'm not clear why.
5. I centered the points for plotting by adding 256/2 to the coordinates.

This produces the following plot (I switched some colors up to make things consistent):
(screenshot attached)
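For reference, the chain of steps described above can be sketched in code. This is a rough sketch of my own description, not code from the repo; the ego position (128, 256), the 256-pixel BEV size, and the rotation convention are assumptions:

```python
import numpy as np

def map_points_to_bev(points_map, R, bev_size=256, ego_map=(128.0, 256.0)):
    """Sketch of the transform described above: center points_map at the
    ego position, undo the world-to-body rotation with R's transpose,
    reflect over both axes, then recenter for plotting."""
    p = np.asarray(points_map, dtype=float) - np.asarray(ego_map)
    p = p @ R.T   # the inverse of an orthonormal rotation is its transpose
    p = -p        # the reflection step I needed empirically
    return p + bev_size / 2.0

# With the identity rotation, the ego point lands at the BEV center:
print(map_points_to_bev([[128.0, 256.0]], np.eye(2)))  # [[128. 128.]]
```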

Segmentation model weights?

Hey Brady,

I'm interested in deploying an agent using the Segmentation model (I guess this would be the privileged agent referred to in the LBC writeup) on the Leaderboard routes. I tried looking for model weights on Weights and Biases, but I think the only ones available there are the sensorimotor (image model) weights. Are there any Seg model weights on WandB? If not, would you consider making them available?

Encountered a TypeError when running data collection and the pretrained model

Hello, I followed the instructions and tried to run (1) data collection and (2) the pretrained model, but encountered the same error message both times, shown below. I would appreciate it if you could shed some light on it. Thanks!

Preparing scenario: RouteScenario_19
The scenario cannot be loaded
type object 'ParallelPolicy' has no attribute 'SUCCESS_ON_ONE'
Exception ignored in: <function RouteScenario.__del__ at 0x7f82d9d928c0>
Traceback (most recent call last):
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/leaderboard/leaderboard/scenarios/route_scenario.py", line 604, in __del__
    self.remove_all_actors()
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/scenario_runner/srunner/scenarios/basic_scenario.py", line 181, in remove_all_actors
    for i, _ in enumerate(self.other_actors):
AttributeError: 'RouteScenario' object has no attribute 'other_actors'
Traceback (most recent call last):
  File "leaderboard/leaderboard/leaderboard_evaluator.py", line 379, in main
    leaderboard_evaluator.run(arguments)
  File "leaderboard/leaderboard/leaderboard_evaluator.py", line 332, in run
    StatisticsManager.save_global_record(global_stats_record, self.sensors, args.checkpoint)
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/leaderboard/leaderboard/utils/statistics_manager.py", line 256, in save_global_record
    '{:.3f}'.format(stats_dict['infractions']['collisions_layout']),
TypeError: unsupported format string passed to list.__format__
Done. See sample_data/route_19.txt for detailed results.
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /home/zhongzzy9/.cache/torch/checkpoints/resnet50-19c8e357.pth
100%|██████████████████████████████████████████████████████████████████████████████████████| 97.8M/97.8M [00:02<00:00, 43.8MB/s]
<All keys matched successfully>
Preparing scenario: RouteScenario_19
The scenario cannot be loaded
type object 'ParallelPolicy' has no attribute 'SUCCESS_ON_ONE'
Exception ignored in: <function RouteScenario.__del__ at 0x7f63028318c0>
Traceback (most recent call last):
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/leaderboard/leaderboard/scenarios/route_scenario.py", line 604, in __del__
    self.remove_all_actors()
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/scenario_runner/srunner/scenarios/basic_scenario.py", line 181, in remove_all_actors
    for i, _ in enumerate(self.other_actors):
AttributeError: 'RouteScenario' object has no attribute 'other_actors'
Traceback (most recent call last):
  File "leaderboard/leaderboard/leaderboard_evaluator.py", line 379, in main
    leaderboard_evaluator.run(arguments)
  File "leaderboard/leaderboard/leaderboard_evaluator.py", line 332, in run
    StatisticsManager.save_global_record(global_stats_record, self.sensors, args.checkpoint)
  File "/home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/leaderboard/leaderboard/utils/statistics_manager.py", line 256, in save_global_record
    '{:.3f}'.format(stats_dict['infractions']['collisions_layout']),
TypeError: unsupported format string passed to list.__format__
Done. See /home/zhongzzy9/Documents/self-driving-car/2020_CARLA_challenge/models/route_19.txt for detailed results.

Agent only moves if there are other vehicles around

Hi,
I noticed that, at the beginning of the episode, if no vehicle passes close to the agent, the first two points of the LBC output are always on top of each other. Since the first two points are used to calculate the desired velocity for the PID controller, the vehicle stands still until another vehicle passes close to it; if there is no other vehicle, the agent does not move at all.

This behavior can be observed by modifying the _build_background_scenario function of the RouteScenario class (route_scenario.py) and assigning a small value to the "amount" variable at line 437 (say amount=0 or amount=3). I observed this using the first route of routes_devtest.xml or routes_training.xml, with epoch=24.ckpt as the model checkpoint.

Is there a reason for this behavior, i.e. that at the beginning the agent only moves if another vehicle is around?
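To illustrate the mechanism described above: if the desired speed is derived from the distance between the first two predicted waypoints, coincident points give a zero speed target. The 2.0 gain below is made up for illustration and is not necessarily the repo's actual value:

```python
import numpy as np

def desired_speed(points):
    """Sketch of why coincident waypoints stall the agent: the target
    speed is proportional to the distance between the first two predicted
    points (the 2.0 gain is illustrative, not the repo's tuned value)."""
    points = np.asarray(points, dtype=float)
    return float(np.linalg.norm(points[1] - points[0]) * 2.0)

print(desired_speed([[0, 0], [0, 0]]))  # 0.0 -> PID target is zero, car never moves
print(desired_speed([[0, 0], [3, 4]]))  # 10.0
```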

Unknown error when submitting docker image to leaderboard benchmark

Hello, I am having a hard time submitting Docker images to the CARLA leaderboard right now.
It may not be an issue caused by this repo, but since this repository is listed as a baseline on the official AlphaDrive site (https://leaderboard.carla.org/get_started/), I'm asking here.
Any help would be appreciated.

If I run it on my local machine it works fine with no issues (tested on a g3.8xlarge machine).
But as a test, when I submit it to the CARLA leaderboard benchmark,
it spins for 7 minutes and then terminates without any results.
Because it is a benchmark environment, I cannot see the error message.

I used CARLA from here (https://github.com/carla-simulator/carla/releases/tag/0.9.10.1),
and below is the Dockerfile.master I used.

If the Docker image you submitted is still on your local machine, I would appreciate it if you could share it.
If that is difficult, I'm curious what printenv outputs in a working Docker image.

FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04

ARG HTTP_PROXY
ARG HTTPS_PROXY
ARG http_proxy

RUN apt-get update && apt-get install --reinstall -y locales && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US
ENV LC_ALL en_US.UTF-8

RUN apt-get update && apt-get install -y --no-install-recommends \
         build-essential \
         cmake \
         git \
         curl \
         vim \
         ca-certificates \
         libjpeg-dev \
             libpng16-16 \
             libtiff5 \
         libpng-dev \
         python-dev \
         python3.5 \
         python3.5-dev \
         python-networkx \
         python-setuptools \
         python3-setuptools \
         python-pip \
         python3-pip && \
         pip install --upgrade "pip < 21.0" && \
         pip3 install --upgrade "pip < 21.0" && \
         rm -rf /var/lib/apt/lists/*

# installing conda
RUN curl -o ~/miniconda.sh -LO https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh  && \
     chmod +x ~/miniconda.sh && \
     ~/miniconda.sh -b -p /opt/conda && \
     rm ~/miniconda.sh && \
     /opt/conda/bin/conda clean -ya && \
     /opt/conda/bin/conda create -n python37 python=3.7 numpy networkx scipy six requests

RUN packages='py_trees==0.8.3 shapely six dictor requests' \
        && pip3 install ${packages}

WORKDIR /workspace
COPY .tmp/PythonAPI /workspace/CARLA/PythonAPI
ENV CARLA_ROOT /workspace/CARLA

ENV PATH "/workspace/CARLA/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":/opt/conda/envs/python37/bin:/opt/conda/envs/bin:$PATH

# adding CARLA egg to default python environment
RUN pip install --user setuptools py_trees==0.8.3 psutil shapely six dictor requests

ENV SCENARIO_RUNNER_ROOT "/workspace/scenario_runner"
ENV LEADERBOARD_ROOT "/workspace/leaderboard"
ENV TEAM_CODE_ROOT "/workspace/team_code"
ENV PYTHONPATH "/workspace/CARLA/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":"${SCENARIO_RUNNER_ROOT}":"${CARLA_ROOT}/PythonAPI/carla":"${LEADERBOARD_ROOT}":${PYTHONPATH}

COPY .tmp/scenario_runner /workspace/scenario_runner
COPY .tmp/leaderboard /workspace/leaderboard
COPY .tmp/team_code ${TEAM_CODE_ROOT}

RUN mkdir -p /workspace/results
RUN chmod +x /workspace/leaderboard/scripts/run_evaluation.sh

########################################################################################################################
########################################################################################################################
############                                BEGINNING OF USER COMMANDS                                      ############
########################################################################################################################
########################################################################################################################

COPY .tmp/carla_project /workspace/carla_project

ENV PYTHONPATH "/workspace/CARLA/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}
ENV PYTHONPATH "/workspace/":${PYTHONPATH}
ENV TEAM_AGENT /workspace/team_code/image_agent.py
ENV TEAM_CONFIG /workspace/team_code/model.ckpt
ENV CHALLENGE_PHASE_CODENAME SENSORS
ENV CHALLENGE_TRACK_CODENAME SENSORS

RUN apt-get update && apt-get install -y --no-install-recommends \
         libgtk2.0-dev

ENV PATH="/opt/conda/bin:${PATH}"
RUN /opt/conda/envs/python37/bin/pip install -r /workspace/carla_project/requirements.txt
RUN /opt/conda/bin/conda install -c anaconda libgcc=7.2.0
RUN /opt/conda/bin/conda init bash

RUN cp /opt/conda/lib/libstdc++.so.6.0.28 /usr/lib/x86_64-linux-gnu/libstdc++.so.6
ENV CONDA_DEFAULT_ENV python37
ENV PATH /opt/conda/envs/python37/bin:$PATH

RUN /opt/conda/envs/python37/bin/pip install tabulate ephem

########################################################################################################################
########################################################################################################################
############                                   END OF USER COMMANDS                                         ############
########################################################################################################################
########################################################################################################################

ENV SCENARIOS ${LEADERBOARD_ROOT}/data/all_towns_traffic_scenarios_public.json
ENV ROUTES ${LEADERBOARD_ROOT}/data/routes_training.xml
ENV REPETITIONS 1
ENV CHECKPOINT_ENDPOINT /workspace/results/results.json
ENV DEBUG_CHALLENGE 0

ENV HTTP_PROXY ""
ENV HTTPS_PROXY ""
ENV http_proxy ""
ENV https_proxy ""

CMD ["/bin/bash"]

How to collect the data with extra pedestrians in the background?

Hi,
I am trying to collect my own data the documented way. However, it seems I can only control the number of background vehicles and cannot add pedestrians. Is there any way to add pedestrians to the scenario in your code?

And I also saw another data collection script https://github.com/bradyz/carla_project/blob/2f0c166167a8d44b8b720cdbcd5d56983fa71602/src/collect_data.py
I wonder what that one is used for? Thanks!

TrafficManager issue when running parallel instances

Hi @bradyz
I am trying to run multiple parallel instances of leaderboard_evaluator.py with your agent. However, I am getting the following error:

"The scenario cannot be loaded
trying to create rpc server for traffic manager; but the system failed to create because of bind error."

It seems to be an issue with setting a different tm-port argument for each client: all clients currently seem to try to attach to the original traffic manager server and fail. However, I don't see where this can be set in the current code base.
Any idea how to get around this?
Many thanks in advance.

"TypeError: zip argument #1 must support iteration" when training map_model from scratch

Hi,
I downloaded data and tried to train the map_model from scratch by running:
python3 -m carla_project/src/map_model --dataset_dir /path/to/data.
But I encountered the following error:

191 | controller.layers.2                        | ReLU                    | 0     
192 | controller.layers.3                        | BatchNorm1d             | 64    
193 | controller.layers.4                        | Linear                  | 1 K   
194 | controller.layers.5                        | ReLU                    | 0     
195 | controller.layers.6                        | BatchNorm1d             | 64    
196 | controller.layers.7                        | Linear                  | 66    
../LBC_data/CARLA_challenge_autopilot/route_09_04_07_23_07_09
../LBC_data/CARLA_challenge_autopilot/route_19_04_08_16_31_51
../LBC_data/CARLA_challenge_autopilot/route_29_04_09_11_47_17
../LBC_data/CARLA_challenge_autopilot/route_39_04_06_09_50_43
../LBC_data/CARLA_challenge_autopilot/route_49_04_06_11_43_48
../LBC_data/CARLA_challenge_autopilot/route_59_04_06_13_26_15
../LBC_data/CARLA_challenge_autopilot/route_69_04_09_00_28_07
6593 frames.
[ 537  484 2226  752  527 1156  911]
Validation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):

  File "carla_project/src/map_model.py", line 236, in <module>
    main(parsed)
  File "carla_project/src/map_model.py", line 207, in main
    trainer.fit(model)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 759, in fit
    self.dp_train(model)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 563, in dp_train
    self.run_pretrain_routine(model)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 899, in run_pretrain_routine
    False)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 278, in _evaluate
    output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 421, in evaluation_forward
    output = model(*args)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 66, in forward
    return self.gather(outputs, self.output_device)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
    res = gather_map(outputs)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
    for k in out))
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
    for k in out))
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
    return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration
wandb: Waiting for W&B process to finish, PID 15597
wandb: Program failed with code 1. Press ctrl-c to abort syncing.
wandb: Process crashed early, not syncing files

Any help will be appreciated!

Where can I download the teacher model

Hi,
Thanks, both for the innovative research and for sharing the code with the community.
I didn't understand how to download the teacher model needed by image_model.py to train the non-privileged agent.
Can you add a link to the model?
Thanks

DNN not working properly

Hi,

I am executing the steps mentioned in the docs, but the DNN seems to be working incorrectly. I tried routes 00 and 19, and in both cases the car goes out of its lane (picture of route 00 attached).

Is it supposed to happen this way, or am I doing something incorrect?
(screenshot from 2020-06-29 attached)

Generation of Routes and Scenarios

Can you please help me understand how you generate both the route XML files and the JSON scenario files (all_towns_traffic_scenarios_public.json)?

Thanks

No Module Named Tabulate

Hello There;

I am running run_agent.sh but it gives an import error. I tried "pip3 install tabulate"; it is already installed, and I can import it in python3.

-Did you meet this error before?
(screenshot attached)

- Also, "--record=/home/ubuntu/learning-by-cheating/recordings/" gives an error as well. Should I pass a file path or a directory path to --record? I checked, and the /home/ubuntu/learning-by-cheating/recordings/ directory exists.

Thanks a lot;

Topdown segmentation images are all black

After downloading the full dataset provided with this project, I found that the topdown PNG pictures are all black. I opened several route folders, and the segmentation images were all the same (all black). Why does this happen? Does it mean I cannot use the topdown segmentation images for training?
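One common cause (an assumption here, not confirmed for this dataset): segmentation PNGs often store raw per-pixel class IDs rather than display colors, so a normal image viewer renders them as near-black even though the labels are intact. A quick sketch for checking this, where the class count is a placeholder you would adjust:

```python
import numpy as np

def labels_to_visible(labels, num_classes=26):
    """Stretch per-pixel class IDs (tiny integers, hence the 'black' look)
    into the full 0-255 range so an image viewer shows the classes.
    num_classes=26 is an assumption; use the dataset's actual count."""
    labels = np.asarray(labels, dtype=np.uint16)
    scale = 255 // max(num_classes - 1, 1)
    return np.clip(labels * scale, 0, 255).astype(np.uint8)

# A raw label image with IDs 0..25 becomes a clearly visible grayscale ramp:
vis = labels_to_visible(np.arange(26).reshape(2, 13))
```

If the stretched image shows structure, the data is usable for training as-is; only the viewing was misleading.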

Trying To Record With run_agent.sh

Hello There;

I am trying to record with
"--record=/home/ubuntu/learning-by-cheating/recordings/"

- However, it gives an error: no such file or directory. Should I pass a file path or a directory path to --record? I checked, and the /home/ubuntu/learning-by-cheating/recordings/ directory exists.

https://files.slack.com/files-pri/TMMCBKX2N-F01DP9BQMC2/image.png

- Another error is "failed to destroy actor xxx". Is it a common error?

Also a screenshot of the detailed results:
https://files.slack.com/files-pri/TMMCBKX2N-F01D8JJE27R/image.png

Thanks a lot;

Map not found Error while running a docker file in local machine

Hi, thank you for your generous starter kit!

I'm using the newest CARLA version, 0.9.10.1, and I'm trying to submit your Dockerfile to the leaderboard for testing.
Before submitting, I'm trying to run the Docker image on my local machine.

When I set my ROUTES variable to routes_devtest.xml, it runs well on other routes, but it stops at RouteScenario_3 (Town06), saying the map is not found.
I found that Town06 does not appear in client.get_available_maps() (the newest CARLA 0.9.10.1 seems to ship only five towns, Town01 to Town05).

I want to ask:

  1. Is it OK to submit this Dockerfile to the AlphaDrive leaderboard, or do I have to add the extra towns (Town06 to Town10) manually before submitting?
  2. How can I add the extra towns?

Thanks

model does not pass tests on 0.9.10

Hi,

I am trying to run the model (epoch=24.ckpt) mentioned in the README. However, I see that the tests are failing, especially with respect to steering.

Any pointers as to what may be going wrong? I am guessing the updated CARLA version is not compatible with the trained model.

Saurabh

Error when running run_evaluation.sh

Traceback (most recent call last):
  File "/home/abdallahaymaan/2020_CARLA_challenge/leaderboard/leaderboard/leaderboard_evaluator.py", line 26, in <module>
    import carla
ModuleNotFoundError: No module named 'carla'

although the stated exports were done, including CARLA_ROOT.

Has anyone encountered such an issue?
Any help will be appreciated!
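For anyone hitting this: the usual cause is that the CARLA egg is not on PYTHONPATH in the shell that launches the evaluator. A minimal sketch, where the install path is an example and the egg name should match your CARLA 0.9.10 distribution:

```shell
# Point CARLA_ROOT at your CARLA install (example path).
export CARLA_ROOT=/path/to/CARLA_0.9.10
# Put the Python egg on PYTHONPATH so `import carla` resolves.
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg:${PYTHONPATH}"
```

Run these in the same shell (or source them in the run script) before invoking run_evaluation.sh.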

RawController And PidController

Hello there;

As far as I can see, training in carla_project/src/image_model and carla_project/src/map_model is done with RawController, which is a small network. However, the benchmark in leaderboard/team_code/image_agent uses PIDController. What is the reason for the difference between the training controller and the benchmark controller?

Thank you for your attention;
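For context on the question above, a PID controller is a hand-tuned feedback loop rather than a learned module. A generic sketch (the gains here are illustrative, not the repo's tuned values):

```python
class PID:
    """Minimal PID controller sketch; gains are illustrative only."""
    def __init__(self, kp=1.0, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self._integral = 0.0
        self._prev_error = None

    def step(self, error, dt=0.05):
        # Accumulate the integral term and estimate the derivative.
        self._integral += error * dt
        deriv = 0.0 if self._prev_error is None else (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * deriv

# With only a proportional term, the output is just kp * error:
pid = PID(kp=0.5)
print(pid.step(2.0))  # 1.0
```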

Updated image model weights for CARLA 0.9.10

Hi Brady,

As the title says: is the LBC team planning to release weights for an image model trained on CARLA 0.9.10? I tried deploying the weights linked in the README on the Leaderboard routes in 0.9.10, but nearly every route had a 0% RouteCompletion rate. I suspect it's because those weights were trained in CARLA 0.9.9, which looks visually dissimilar to 0.9.10.

Result of running a pretrained model is FAILURE

Hello, I followed the README.md and ran a pretrained model, but the result was failure.

hardware

  • i7-10700F CPU @ 2.90GHz,
  • GTX3070(max fps is 13 with 3 cameras)

First Try

Service Start: ./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600 -opengl

Client Start:

export PORT=2000 
export ROUTES=leaderboard/data/routes_training/route_09.xml
export TEAM_AGENT=image_agent.py
export TEAM_CONFIG=../2020_CARLA_challenge_train_data/epoch=24.ckpt 
export HAS_DISPLAY=1

./run_agent.sh

Result

The car turned left at the first crossing and stopped in the yard by the side of the road (left-turn failure). However, I noticed that the car turned left at the second crossing when running data collection.

Second Try

I noticed that the training data is 2 Hz, so I modified leaderboard/autoagents/autonomous_agent.py.

I imported time and modified __call__:

        # print('======[Agent] Wallclock_time = {} / {} / Sim_time = {} / {}x'.format(wallclock, wallclock_diff, timestamp, timestamp/(wallclock_diff+0.001))) 
        # 13frames/s -> 2frames/s  77ms -> 500ms  => diff 423ms
        time.sleep(0.423)

        control = self.run_step(input_data, timestamp)
        control.manual_gear_shift = False

        return control

I also added code to display FPS; it is 2 FPS when running the pretrained model.

Result

The car turned left at the second crossing, but stopped in the yard by the side of the road (left-turn failure).

Could not setup required agent due to <urlopen error [Errno 110] Connection timed out>

I met this problem when I ran the test agent:

./run_agent.sh

I found the reason is that the ImageAgent could not load the model correctly.

class ImageAgent(BaseAgent):
    def setup(self, path_to_conf_file):
        super().setup(path_to_conf_file)

        self.converter = Converter()
        self.net = ImageModel.load_from_checkpoint(path_to_conf_file)
        self.net.cuda()
        self.net.eval()

The model file was downloaded from Slack, and I printed path_to_conf_file, which was set to the absolute path:

/home/quan/2020_CARLA_challenge/epoch=24.ckpt

I wonder whether I should set a relative path instead. If so, where should I put the model file?
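A guess, based on the download line that appears in other logs in this thread: load_from_checkpoint constructs the model, and torchvision then tries to fetch ImageNet ResNet weights over the network, so the urlopen timeout may come from that download rather than from the checkpoint path. Pre-seeding torch's cache on a machine with internet access may avoid the network call entirely (the cache path is torch's older default; the curl line is left commented so nothing is fetched here):

```shell
# torch caches downloaded weights here by default (older torch versions).
mkdir -p ~/.cache/torch/checkpoints
# On a machine WITH internet access, pre-download the file named in the
# log, then copy it into the cache directory above:
# curl -L -o ~/.cache/torch/checkpoints/resnet50-19c8e357.pth \
#      https://download.pytorch.org/models/resnet50-19c8e357.pth
```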

Malloc size error in CARLA 9.10.1

Hi Brady,

I thought I had gotten to the point of successfully running your agent in CARLA 9.10.1 with the newest version of scenario runner. Everything works as expected, except that the simulator crashes every so often after running multiple consecutive routes (with no predictable pattern, as far as I can tell), returning this error:

terminating with uncaught exception of type clmdep_msgpack::v1::type_error: std::bad_cast
Signal 6 caught.
Malloc Size=65538 LargeMemoryPoolOffset=65554 
Malloc Size=65535 LargeMemoryPoolOffset=131119 
Malloc Size=122688 LargeMemoryPoolOffset=253824 
Aborted (core dumped)

Is this something you have experienced before, or do you have an idea of what may be causing it?

Thanks in advance!

AttributeError: 'PosixPath' object has no attribute 'decode' when training stage2

Hi, I finished training stage 1 successfully and tried to train stage 2 model.

I notice that the command in README is python3 -m carla_project/src/image_model --dataset_dir /path/to/data.

One minor typo:
I think it should be python3 -m carla_project/src/image_model --dataset_dir /path/to/data --teacher_path /path/to/teacher/checkpoint, since teacher_path is a required field.

After running this command with the path to the checkpoint of my stage1 model, however, I encountered the following error:

Traceback (most recent call last):
  File "carla_project/src/image_model.py", line 364, in <module>
    main(parsed)
  File "carla_project/src/image_model.py", line 322, in main
    model = ImageModel(hparams, teacher_path=hparams.teacher_path)
  File "carla_project/src/image_model.py", line 100, in __init__
    self.teacher = MapModel.load_from_checkpoint(teacher_path)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/core/saving.py", line 142, in load_from_checkpoint
    checkpoint = pl_load(checkpoint_path, map_location=lambda storage, loc: storage)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/site-packages/pytorch_lightning/utilities/cloud_io.py", line 8, in load
    if urlparse(path_or_url).scheme == '' or Path(path_or_url).drive:  # no scheme or with a drive letter
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/urllib/parse.py", line 367, in urlparse
    url, scheme, _coerce_result = _coerce_args(url, scheme)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/urllib/parse.py", line 123, in _coerce_args
    return _decode_args(args) + (_encode_result,)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/urllib/parse.py", line 107, in _decode_args
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
  File "/home/zhongzzy9/anaconda3/envs/carla99/lib/python3.7/urllib/parse.py", line 107, in <genexpr>
    return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'PosixPath' object has no attribute 'decode'

Did you encounter this error before, by any chance?
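A workaround consistent with the traceback above: urllib's argument coercion ends up calling .decode() on the path, which a PosixPath does not have, so casting the path to str before handing it to load_from_checkpoint sidesteps the crash. A sketch, where the path and the commented call site are examples:

```python
from pathlib import Path

teacher_path = Path("/path/to/teacher/checkpoint.ckpt")  # example path

# PosixPath has no .decode(), which is what the traceback trips over;
# a plain string takes the other branch in urllib's coercion.
checkpoint_arg = str(teacher_path)
# model = ImageModel(hparams, teacher_path=checkpoint_arg)  # hypothetical call site
```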

The actor timed out

Hey, please, I would like to know what this means when running run_agent.sh:

FAILED: The actor timed out

Training time taken

Hey, bradyz.
I found you said in issue #17 that training took about 36 h for both stage 1 and stage 2. But in WandB I see it took about 16 h for stage 1 and 2 d 11 h for stage 2, and the epoch counts in WandB are 50 and 100.
Is the 36 hours you mentioned for training 24 epochs? Thanks!

Train-val-test split of dataset in the pre-trained model

Hi, first of all thanks for this repo.
I was wondering, for the pre-trained model you provide, how did you split the dataset? Did you train the model on all 76 routes? If not, could you let me know which routes you used for training? Also, if I understand correctly, the 76 routes in the leaderboard/data/routes folder are extracted from the routes_training, routes_debug and routes_devtest files provided in the CARLA Challenge, right?

Thanks in advance. 😄

Trying to update to work with CARLA 0.9.13

I see that the scenario_runner and leaderboard submodules in your repo are forks of the CARLA ones and seem to have been updated to 0.9.10 (from 0.9.7, it looks like, in one large merge commit). Do you have any custom changes in here, or are they just the 0.9.10-compatible versions of those two repositories?

I ask because I'm trying to update this code to work with the latest CARLA. However, the PythonAPI has several changes that will necessitate updates to these submodules, and I don't want to miss any custom changes you may have made during that update if I update from the vanilla CARLA versions.

Not the exact same implementation as the original LBC code?

Hi,

I am wondering whether this code base implements exactly the original LBC algorithm. When I look at it closely,
it seems that it doesn't use branch selection based on the high-level command (follow lane, turn left, ...);
instead, it seems to use the target point coordinates.
Am I correct?

Not able to train STAGE_1

Hi,

I checked out the project's master branch and ran the provided pre-trained models, and they worked as intended. I am using CARLA version 0.9.10.1.

Then I downloaded the provided dataset to run training, and got much worse results at the end. I used batch size 32 for both stages, tried both values of the command coefficient (0.1 and 0.01), lr=0.0001, temp=10, sample_by=even, hack=True, and 50 (stage 1) + 90 (stage 2) epochs.
Since I am using the latest CARLA version and the master repo is updated accordingly, I assumed the problem occurred because the provided dataset was collected with an older version (for example, different classes in the semantic map). Is this correct?

With this in mind, I used the provided autopilot to collect the same amount of data from the latest version. Nevertheless, the training results were still bad. Since the whole process is time consuming, I wrote an evaluation script for stage 1 to check whether it works properly. It worked great with the checkpoints you provided (epoch=34.ckpt for both cc values), but not with mine. It looks like even the stage 1 part introduces a problem, which eventually causes stage 2 to work poorly.
BTW, I also tested my stage 1 checkpoints trained on your provided dataset with the stage 1 evaluation script, and they also worked poorly.

Do you have any idea why is this happening?

Regards

Some issues in data collection for CARLA 0.9.10

While running the data collection code on 0.9.10, we found some issues that mostly seem to be caused by breaking changes between CARLA 0.9.9 and 0.9.10.

This one seems to be an issue with the scenario runner import, where AgentPool was renamed to DataProvider:
https://github.com/bradyz/leaderboard/blob/35fb5f2d1e7d8884d20d92e492f5d49ecbae2b65/team_code/map_agent.py#L1

Here, the semantic segmentation remapping is not compatible with the larger set of semantic segmentation classes in CARLA 0.9.10 (old vs. new):
https://github.com/bradyz/carla_project/blob/2f0c166167a8d44b8b720cdbcd5d56983fa71602/src/common.py#L4
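For reference, this kind of remapping is usually done with a dense lookup table indexed by raw class ID. The sketch below is hypothetical: the source class IDs and the compact target ordering are assumptions for illustration, not the repo's actual `common.py` mapping.

```python
import numpy as np

# Hypothetical: collapse the larger CARLA 0.9.10 semantic palette into a
# small set of training classes. The ID -> class assignments below are
# illustrative only.
NUM_SOURCE_CLASSES = 23          # 0.9.10 exposes more classes than 0.9.7
COMPACT = {                      # raw class ID -> compact training class
    7: 1,                        # e.g. road
    6: 2,                        # e.g. road line
    4: 3,                        # e.g. pedestrian
    10: 4,                       # e.g. vehicle
}

# Dense lookup table; any unmapped ID falls back to class 0.
lut = np.zeros(NUM_SOURCE_CLASSES, dtype=np.uint8)
for src, dst in COMPACT.items():
    lut[src] = dst

def remap(segmentation):
    """Map an HxW array of raw class IDs to compact training classes."""
    return lut[segmentation]

raw = np.array([[7, 6], [4, 0]], dtype=np.uint8)
print(remap(raw))  # [[1 2]
                   #  [3 0]]
```

The advantage of the table over per-class boolean masks is that the remap is a single fancy-indexing operation regardless of how many classes are merged.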

I can submit a PR for the above changes if needed; I would just need some help with the new segmentation class mappings. Do let me know if I can help.

Thanks!

Does map model training use data augmentations?

Hey Brady,

I'm retraining the map model with some modifications, and I noticed that there doesn't seem to be any data augmentation applied to the autopilot data fed to the map model at train time. However, the LBC paper describes shift/rotate augmentations applied to the autopilot data. Does this LBC implementation include those augmentations for the map model?
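For context, the shift/rotate augmentation described in the LBC paper amounts to perturbing the BEV frame with a small rigid motion and transforming the waypoint labels with the same motion, so inputs and labels stay consistent. A minimal sketch of the label side (function names and perturbation ranges are assumptions, not this repo's code):

```python
import numpy as np

def augment_waypoints(points, max_shift=1.0, max_angle_deg=5.0, rng=None):
    """Apply a random rigid shift/rotation to BEV waypoint labels.

    points: (N, 2) array of (x, y) waypoints in the ego frame.
    Returns the transformed points; the BEV raster would be warped with
    the same transform so inputs and labels remain aligned.
    """
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.radians(rng.uniform(-max_angle_deg, max_angle_deg))
    shift = rng.uniform(-max_shift, max_shift, size=2)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + shift

waypoints = np.array([[0.0, 0.0], [0.0, 5.0], [1.0, 10.0]])
print(augment_waypoints(waypoints))
```

Because the motion is rigid, pairwise distances between waypoints are preserved, which is an easy invariant to sanity-check in tests.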

Dataset Collection Error

Hello There;

https://github.com/bradyz/2020_CARLA_challenge#data-collection
I am trying to follow your data collection instructions, but I get an error; I think there is a problem with the autopilot data collection.

Also, can I collect a ground-truth segmentation dataset with the autopilot? If so, how can I do that?

Error:

> The sensor's configuration used is invalid:
> > Illegal sensor used. sensor.camera.semantic_segmentation are not allowed!
> 
> Traceback (most recent call last):
>   File "leaderboard/leaderboard/leaderboard_evaluator.py", line 273, in _load_and_run_scenario
>     AgentWrapper.validate_sensor_configuration(self.sensors, track, args.track)
>   File "/home/lenovo/carlaPath_0.9.10/CARLA_0.9.10.1/leaderboard/leaderboard/autoagents/agent_wrapper.py", line 205, in validate_sensor_configuration
>     raise SensorConfigurationInvalid("Illegal sensor used. {} are not allowed!".format(sensor['type']))
> leaderboard.envs.sensor_interface.SensorConfigurationInvalid: Illegal sensor used. sensor.camera.semantic_segmentation are not allowed!
> > Registering the route statistics
> Done. See sample_data/route_19.txt for detailed results.

./run_agent.sh core dump on carla0.9.10.1

(env) dcv-user@ip-172-31-0-220:~/2020_CARLA_challenge$ ./run_agent.sh
route_00_10_27_12_34_39
Preparing scenario: RouteScenario_0
ScenarioManager: Running scenario RouteScenario_0
index 21 is out of bounds for axis 0 with size 16
Traceback (most recent call last):
File "leaderboard/leaderboard/leaderboard_evaluator.py", line 379, in main
leaderboard_evaluator.run(arguments)
File "leaderboard/leaderboard/leaderboard_evaluator.py", line 331, in run
global_stats_record = self.statistics_manager.compute_global_statistics(route_indexer.total)
File "/home/dcv-user/2020_CARLA_challenge/leaderboard/leaderboard/utils/statistics_manager.py", line 206, in compute_global_statistics
route_length_kms = route_record.scores['score_route'] * route_record.meta['route_length'] / 1000.0
KeyError: 'route_length'
./run_agent.sh: line 23: 7717 Segmentation fault (core dumped) python leaderboard/leaderboard/leaderboard_evaluator.py --challenge-mode --track=dev_track_3 --scenarios=leaderboard/data/all_towns_traffic_scenarios_public.json --agent=${TEAM_AGENT} --agent-config=${TEAM_CONFIG} --routes=${ROUTES} --checkpoint=${CHECKPOINT_ENDPOINT} --port=${PORT}
Done. See sample_data1/route_00.txt for detailed results.
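The `KeyError: 'route_length'` suggests the route record's `meta` dict never received a `'route_length'` entry, likely because the route crashed before it was written. A defensive workaround, purely as a sketch (the variable names below stand in for the real objects in `statistics_manager.py`), is to read the key with a default:

```python
# Hypothetical guard: fall back to 0 when a crashed route never
# recorded its length, instead of raising KeyError.
meta = {}                # stand-in for route_record.meta
score_route = 0.5        # stand-in for route_record.scores['score_route']

route_length_kms = score_route * meta.get('route_length', 0.0) / 1000.0
print(route_length_kms)  # 0.0 instead of a KeyError
```

This hides the symptom rather than the cause; the underlying question is why the route aborted before its metadata was filled in.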

Should I expect the LBC agent code to run properly on CARLA 0.9.10.1?

The CARLA AD Leaderboard website has updated their "Getting started" docs to reflect an upgrade to CARLA 0.9.10.1. I tried deploying a pre-trained LBC model (downloading the weights recommended in this repo's README) in the new Leaderboard binary release, but the agent rarely moved in the routes I tested it in.

Should I be expecting the agent to move faster? I can see reasonable waypoints being produced in the image view, but the brake is almost always set to true, even if it's at a traffic signal that is green.

About the sensorimotor agent

Hi, does the agent in stage 2 use the DAgger method like LBC? Or is it just trained offline on the same dataset as the privileged agent?

TypeError: unsupported format string passed to list.__format__

I hit this error when I ran run_agent.sh:

Traceback (most recent call last):
File "leaderboard/leaderboard/leaderboard_evaluator.py", line 380, in main
leaderboard_evaluator.run(arguments)
File "leaderboard/leaderboard/leaderboard_evaluator.py", line 333, in run
StatisticsManager.save_global_record(global_stats_record, self.sensors, args.checkpoint)
File "/home/quan/2020_CARLA_challenge/leaderboard/leaderboard/utils/statistics_manager.py", line 257, in save_global_record
'{:.3f}'.format(stats_dict['infractions']['collisions_layout']),
TypeError: unsupported format string passed to list.__format__

and I printed stats_dict['infractions']['collisions_layout'] by adding the code

print('infraction:', stats_dict['infractions']['collisions_layout'], stats_dict['infractions']['collisions_pedestrian'])
the result was:

infraction: [] []

It seems that a list object cannot be formatted with '{:.3f}'.format. I want to figure out what the content of the infractions list should be: the pedestrians' IDs, or the total infraction counts? And how can I fix this formatting error properly? Thank you.
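The crash can be reproduced in isolation: the `.3f` format spec only applies to numbers, so passing the raw infractions list raises exactly this TypeError. One hedged workaround (assuming the intended value is the number of infractions, which may not match the original author's intent) is to format the list's length instead:

```python
infractions = []  # as printed in the issue: the list is empty

# Reproduce the error: lists do not support the float format spec.
try:
    '{:.3f}'.format(infractions)
except TypeError as exc:
    print(exc)  # unsupported format string passed to list.__format__

# Possible fix (an assumption about intent): format the count.
print('{:.3f}'.format(len(infractions)))  # 0.000
```

If the leaderboard instead expects a per-kilometre rate, the fix would be to compute that number before formatting rather than formatting the raw list.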

CoordConverter

Thanks for your great work. I have some questions about the CoordConverter in train_image1. I'd like to know: if I change the image size to (192, 192), should I also change world_y and fixed_offset?
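For intuition, the intrinsics of a pinhole projection scale directly with image size: halving the resolution halves the focal length in pixels (and the principal point), while quantities expressed in metres are unaffected. This is a generic sketch of that relationship, not the repo's actual CoordConverter:

```python
import math

def focal_px(width, fov_deg):
    """Pinhole focal length in pixels for a given image width and horizontal FOV."""
    return width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Same 90-degree FOV, two image sizes: the focal length scales linearly.
print(focal_px(384, 90.0))  # 192.0
print(focal_px(192, 90.0))  # 96.0
```

So pixel-space constants generally need rescaling with the image, whereas whether world_y and fixed_offset change depends on whether they are defined in metres or pixels in this codebase.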

Velocity, Target As Input

Hello;

I am confused about the target that enters the neural net as input. At first I thought it was the velocity; however, after searching the issues I found that it is a future target point 50 meters away.

- Don't you use velocity as an input, like in the original LBC repo?
- How do you calculate the position of the target? And what transformations are applied to it: do you project the target position onto the camera and heatmap?
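For reference, projecting a world-frame target point into the camera generally follows the standard pinhole pipeline: transform the point into the camera frame, divide by depth, then apply the intrinsics. A generic sketch (not this repo's exact code; the intrinsics and axis convention are assumptions):

```python
def project(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates (x right, y down, z forward)
    onto the image plane. Returns the pixel (u, v)."""
    x, y, z = point_cam
    assert z > 0, "point must be in front of the camera"
    return fx * x / z + cx, fy * y / z + cy

# A target 50 m ahead and 2 m to the right, with hypothetical intrinsics.
u, v = project((2.0, 0.0, 50.0), fx=128.0, fy=128.0, cx=128.0, cy=72.0)
print(u, v)  # approximately (133.12, 72.0)
```

Rendering a Gaussian heatmap at (u, v) would then give the kind of target-point channel the questions above refer to.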
