
opencda's Issues

how to save the data

Hi, author. While the simulation is running, how can I save the image and point cloud data, as well as the annotation files?

LiDAR Fog Simulation

Since CARLA's fog does not influence the LiDAR sensor, we'd love to add a feature to simulate fog on LiDAR data as well.
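One common approach in the literature is to attenuate each return's intensity with a Beer-Lambert term over the two-way path and drop returns that fall below the sensor's detection threshold. A minimal numpy sketch of that idea (illustrative only, not OpenCDA code; the coefficient values are assumptions):

```python
import numpy as np

def apply_fog(points, intensity, alpha=0.06, min_intensity=0.05):
    """Crude fog model for a LiDAR point cloud (illustrative sketch).

    points        : (N, 3) xyz coordinates in meters, sensor at origin
    intensity     : (N,) per-point return intensity in [0, 1]
    alpha         : fog attenuation coefficient [1/m]; larger = denser fog
    min_intensity : detection threshold; weaker returns are lost in fog
    Returns the surviving points and their attenuated intensities.
    """
    r = np.linalg.norm(points, axis=1)       # range of each return
    att = np.exp(-2.0 * alpha * r)           # Beer-Lambert, two-way path
    new_intensity = intensity * att
    keep = new_intensity >= min_intensity    # sub-threshold returns vanish
    return points[keep], new_intensity[keep]
```

With these parameters, distant returns disappear first, which matches the qualitative effect fog has on real LiDAR. More faithful models also inject backscatter points near the sensor.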

Is reinforcement learning supported?

Thanks for creating this great project! I saw that OpenCDA provides rule-based algorithms in its planning module. Does it also support reinforcement-learning-based behavior planning? Thanks!

How to generate annotations for LiDAR data?

Hello, author. I would like to ask why the collected LiDAR data does not have object annotations. How can we generate annotations for the LiDAR data? I hope you can find some time to look into this issue amidst your busy schedule.

Does the cooperative-lidar-communicate branch really implement LiDAR data transmission between CAVs?

I found that the ego vehicle can locate nearby cars through the V2X manager when the perception activate mode is set to false.

This code is at lines 486-490 of opencda/core/sensing/perception/perception_manager.py:

if not self.activate:
    self.search_nearby_cav()
    objects = self.deactivate_mode(objects)
    # maybe: when detection is off, use V2X to query the positions of
    # surrounding vehicles and use them as the obstacle positions
else:
    objects = self.activate_mode(objects)

Could you explain which part of the code implements the cooperative LiDAR communication? I spent a long time looking and could not find it.

float() argument must be a string or a number, not 'LineString' in single_intersection_town06_carla file

I am working in the CARLA simulator and need a specific scenario for my simulation, so I tried to run the file named single_intersection_town06_carla, and it shows the error "float() argument must be a string or a number, not 'LineString'". I traced the error to the file core/map/map_manager.py, where the input is actually a LineString. I cannot work out how to solve this, as shapely polygons and the polygon boundary function are completely new to me. How can I resolve this error?
Here is the relevant function from map_manager.py, with my debug prints added:
def associate_lane_tl(self, mid_lane):
    """
    Given the waypoints for a certain lane, find the traffic light that
    influences it.

    Parameters
    ----------
    mid_lane : np.ndarray
        The middle line of the lane.

    Returns
    -------
    associate_tl_id : str
        The associated traffic light id.
    """
    associate_tl_id = ''

    for tl_id, tl_content in self.traffic_light_info.items():

        trigger_poly = tl_content['corners']
        print("\n\n corners", tl_content['corners'])
        print("\n\n trigger_poly", trigger_poly)  # printing values
        print(type(trigger_poly))

        # use Path to do fast computation
        trigger_path = Path(float(trigger_poly.boundary))
        print("trigger_path", type(trigger_poly.boundary))

        # check if any point in the middle line is inside the trigger area
        check_array = trigger_path.contains_points(mid_lane[:, :2])

        if check_array.any():
            associate_tl_id = tl_id
    return associate_tl_id
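One likely cause: shapely 2.x removed the numpy array interface that older code relied on, so matplotlib's Path can no longer convert a LineString implicitly (and wrapping it in float() cannot work either). Two common workarounds are pinning shapely below 2.0, or converting the polygon outline to an explicit (N, 2) coordinate array, which works on both shapely 1.x and 2.x. A sketch of the explicit conversion (assuming shapely and matplotlib are installed):

```python
import numpy as np
from matplotlib.path import Path
from shapely.geometry import Polygon

def polygon_to_path(poly: Polygon) -> Path:
    # Convert the polygon outline to a plain (N, 2) float array before
    # handing it to matplotlib; works with shapely 1.x and 2.x alike.
    return Path(np.asarray(poly.exterior.coords))

# Hypothetical trigger area standing in for tl_content['corners']:
trigger_poly = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
trigger_path = polygon_to_path(trigger_poly)
inside = trigger_path.contains_points([[2, 2], [9, 9]])
```

If the boundary in your data is already a LineString rather than a Polygon, `np.asarray(line.coords)` gives the same explicit array.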

(attached screenshot: LinestringError)

Controller selection in OpenCDA

How to specify the controller in OpenCDA? I know there is a PID controller in the source code. But I cannot find where you define the controller type. Could you please provide some comments and help on this? Thx.
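According to the OpenCDA documentation, the controller is selected in the scenario yaml rather than in a Python file. A hedged fragment of the relevant section (field names follow the yaml_define docs; the gain values here are purely illustrative assumptions):

```yaml
# scenario yaml fragment (names per yaml_define.md; gains are illustrative)
vehicle_base: &vehicle_base
  controller:
    type: pid_controller   # ControlManager selects the controller class by this string
    args:
      lat:                 # lateral PID gains (example values)
        k_p: 0.75
        k_d: 0.02
        k_i: 0.4
      lon:                 # longitudinal PID gains (example values)
        k_p: 0.37
        k_d: 0.024
        k_i: 0.032
```

To use a custom controller, you would add a class under the actuation module and register a new `type` string for it.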

Sumo encountered an error

I ran the third example of the quick start on the OpenCDA official website, and Sumo encountered the following error: "simulation ended at time: 0.05".

.py not found ERROR

I am trying to run OpenCDA on a remote server with Ubuntu 16.04. I had a problem with open3d before; after I solved that, I got the following error:
image
I'm sure I followed the steps in the official documentation. What should I do to fix this error? Thanks!
By the way, does OpenCDA support running on a remote server?
Carla: 0.9.11
Driver Version: 418.43
CUDA Version: 10.1

NoPackagesFoundError

When running the command conda env create -f environment.yml, the following error occurs:

NoPackagesFoundError: Package missing in current linux-64 channels:
  - pip ==21.1.2
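This error usually means conda cannot find that exact pip build in the configured channels. Relaxing the pin in environment.yml typically resolves it; a hedged fragment of what the edit could look like (only the pip line changes, keep the rest of the file as-is):

```yaml
# environment.yml fragment: relax the exact pip pin conda cannot resolve
dependencies:
  - python=3.7
  - pip>=21        # was: pip==21.1.2
```

Adding `conda-forge` to the `channels:` list is another common fix when a pinned build has been removed from the default channels.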

Some errors encountered when running the 'feature_reinforcement_learning' version of OpenCDA

First, thanks for open-sourcing the OpenCDA framework; the code on the main branch is very clear to read through! I am interested in applying multi-agent deep reinforcement learning in co-simulation mode, so I turned to the 'feature_reinforcement_learning' branch. When I run the command python opencda.py -t single_rldqn_town06_carla -rl train, I encounter the following problems:

  • Using port 9000, it cannot connect to my CARLA server, so I set the client port to 2000 and it works.

  • The CarlaRLEnv may lack the definitions of 'observation_space', 'action_space', and 'reward_space', which cannot be passed to the DI-engine, so I simply assigned empty dict spaces.Dict({}) to these variables within CarlaRLEnv and the errors were gone.

  • Then I re-ran the program and encountered the following problem, which I don't know how to solve:

    Traceback (most recent call last):
    File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda.py", line 65, in <module>
      main()
    File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda.py", line 60, in main
      scenario_runner(opt, config_yaml)
    File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda/scenario_testing/single_rldqn_town06_carla.py", line 16, in run_scenario
      rl_train(opt, config_yaml)
    File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda/core/ml_libs/rl/rl_api.py", line 176, in rl_train
      tb_logger, exp_name=rl_cfg.exp_name)
    File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 81, in __init__
      self._instance_name, EasyDict(seconds=self._cfg.periodic_thruput_seconds), self._logger, self._tb_logger
    AttributeError: 'EasyDict' object has no attribute 'periodic_thruput_seconds'
    Exception ignored in: <function NaiveReplayBuffer.__del__ at 0x7f1f6c5df0e0>
    Traceback (most recent call last):
    File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 277, in __del__
      self.close()
    File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 97, in close
      self.clear()
    File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 268, in clear
      self._periodic_thruput_monitor.valid_count = self._valid_count
    AttributeError: 'NaiveReplayBuffer' object has no attribute '_periodic_thruput_monitor'
    terminate called without an active exception
    
  • Plus, I also want to ask which version of DI-engine (ding) is used in this RL version of OpenCDA; I downloaded the latest version, 0.4.3.

Thanks a lot!
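Judging from the traceback, the naive replay buffer in the installed DI-engine version expects a `periodic_thruput_seconds` field that the branch's older config predates. Pinning ding to the version the branch was developed against is the safest fix; alternatively, adding the missing field to the replay-buffer config should clear the AttributeError. A hedged sketch of the config patch (the surrounding structure is an assumption; 60 s matches DI-engine's usual default):

```python
# Hypothetical patch to the RL config: add the throughput-logging interval
# the installed DI-engine naive replay buffer expects (name taken from the
# traceback; 60 seconds is DI-engine's usual default).
replay_buffer_cfg = dict(
    replay_buffer_size=10000,        # illustrative value
    periodic_thruput_seconds=60,     # the attribute the traceback reports missing
)
```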

License clarification questions. Can we distribute generated data?

Hello! OpenCDA is a really good project and I am enjoying using it. I appreciate the thorough documentation and the clear code style.

I understand the license permits researchers to use the code with credit, but we cannot make our forked code public; if we do, the license is terminated and we can no longer use OpenCDA (i.e., a standard non-distribution license, as I understand it).

I wanted to clarify the following:

  1. Can we publicly share scenario definitions? E.g. A simulation yaml and the corresponding Python code.
  2. Does this apply to pull requests? (I think this license would technically prohibit others from making pull requests. I have nothing ready to contribute, but I just want to be sure.)
  3. Can we publicly share generated data? E.g. Just the dataset output from OpenCDA, and none of the code.

Where to find the coordinates for a spawn location

Hello,

My goal is to spawn the ego-vehicle at a specific location in Town07 (e.g., singleTown07_carla).
In the picture, the red dot is the spawn position.
The blue dot is the destination.
(map screenshot: Town07)

I do not understand where/how to retrieve the coordinates used in spawn_position and destination (see the lines below):

scenario:
  single_cav_list: # this is for merging vehicle or single cav without v2x
    - <<: *vehicle_base
      spawn_position: [600, 51, 50, 0, 0, 0]
      destination: [600, 145.51, 50]

I have tried to use the x, y, z coordinates provided in the Unreal Engine editor, but they seem off.
Can you provide me some guidance or an example?

Thank you!
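The mismatch with the Unreal Editor values is likely a unit issue: the editor reports locations in centimeters, while the CARLA Python API (and thus OpenCDA's yaml) uses meters, so editor values must be divided by 100. A tiny sketch of the conversion (the example numbers are made up to mirror the yaml above):

```python
def ue_to_carla(x_cm, y_cm, z_cm):
    """Unreal Editor locations are in centimeters; the CARLA Python API
    (and OpenCDA's scenario yaml) expects meters, so divide by 100."""
    return x_cm / 100.0, y_cm / 100.0, z_cm / 100.0

# e.g. a hypothetical editor location of (60000, 5100, 5000) cm becomes:
spawn_xyz = ue_to_carla(60000, 5100, 5000)   # (600.0, 51.0, 50.0)
```

Alternatively, with a running server you can list ready-made transforms in meters via `world.get_map().get_spawn_points()` and copy their x/y/z into the yaml.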

About the algorithm of BehaviorAgent planning driving behavior for a single CAV

Hello, what are the behavior planning and trajectory planning algorithms for CAVs in OpenCDA? Where can I get a clear understanding of the flow of these algorithms?
The original paper describes the overall framework but gives less of an introduction to the BehaviorAgent (mathematical formulas, etc.).

In addition, I sent a request to your WeChat account on March 17th; could you accept it when convenient? Thank you!

Is CARLA 0.9.9 supported?

Huge thanks for this great project, it looks amazing! I have a question about the supported versions of CARLA. I saw on the installation page that both CARLA 0.9.11 and 0.9.12 are supported, but due to our current projects we have to continue using version 0.9.9. Does your project also support CARLA 0.9.9? If not, could you provide any ideas on how we could modify this great project so that it fits CARLA 0.9.9? Thanks!

Ubuntu 16.04 can NOT run the two-lane highway test

Hi,
Thanks for the great work.

I tried to run single_2lanefree_carla on Ubuntu 16.04, but it failed:


~/OpenCDA$ python opencda.py -t single_2lanefree_carla
OpenCDA Version: 0.1.0
Traceback (most recent call last):
File "opencda.py", line 56, in <module>
main()
File "opencda.py", line 40, in main
testing_scenario = importlib.import_module("opencda.scenario_testing.%s" % opt.test_scenario)
...
import open3d as o3d
File "/home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/__init__.py", line 56, in <module>
_CDLL(str(next((_Path(__file__).parent / 'cpu').glob('pybind*'))))
File "/home/anaconda3/envs/opencda/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /home/anaconda3/envs/opencda/lib/python3.7/site-packages/open3d/cpu/pybind.cpython-37m-x86_64-linux-gnu.so)

I searched on Google and found that it may be a problem with open3d, which requires glibc 2.27. Ubuntu 16.04 seems to no longer be supported; it ships only glibc 2.23.

isl-org/Open3D#1898

So must I upgrade my Ubuntu to 18.04?

Spawn a new CAV at a certain simulation time step

I was wondering if it is possible to spawn a new single CAV on the on-ramp, particularly for the scenario "platoon_joining_2lanefree_cosim". I tried to spawn a single CAV on the on-ramp, and it reached the merging area at about the same time as a mainline platoon (so it should perform a cut-in merge), but it did not merge into the platoon.

Please advise if OpenCDA allows us to do this. My intent is to have the simulation run longer with more CAVs. (Spawning multiple CAVs at the simulation start is possible but is limited by the space of the link.)

Thank you,
Thod

Better Stop Sign Behavior

Currently, OpenCDA regards all traffic lights with id -1 as stop signs and will stop there for 2 seconds before moving. However, once a stop sign is activated by a vehicle, its id changes to a positive int and won't change back to -1 for a while. Thus, the current stop sign behavior is not robust. We will try to implement better stop sign behavior in the next version.
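One possible shape for the fix is a debounce: once an object has been observed with the stop-sign id (-1), keep treating it as a stop sign for a grace period even while its reported id flips positive. A sketch of that idea (hypothetical names, not OpenCDA code):

```python
import time

class StopSignTracker:
    """Debounce sketch: remember objects last seen with id == -1 and keep
    classifying them as stop signs for `grace` seconds afterwards, even if
    their reported id temporarily becomes a positive int."""

    def __init__(self, grace=5.0):
        self.grace = grace
        self._last_seen = {}   # object key -> timestamp of last id == -1

    def is_stop_sign(self, key, reported_id, now=None):
        now = time.time() if now is None else now
        if reported_id == -1:
            self._last_seen[key] = now     # refresh the debounce window
            return True
        last = self._last_seen.get(key)
        return last is not None and (now - last) <= self.grace
```

The key could be a quantized world location, since the actor id itself is what changes.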

RuntimeError: opendrive could not be correctly parsed

Not sure if I missed anything, but I cannot get the basic example working.

OS: Ubuntu 20.04
GPU: RTX 2080

Carla itself is working fine.

Command for starting carla server:

/opt/carla-simulator/CarlaUE4.sh 
4.24.3-0+++UE4+Release-4.24 518 0
Disabling core dumps.

command for starting opencda:

$ python opencda.py -t single_2lanefree_carla
OpenCDA Version: 0.1.0
load opendrive map '2lane_freeway_simplified.xodr'.
Traceback (most recent call last):
  File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 35, in run_scenario
    cav_world=cav_world)
  File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
    self.world = load_customized_world(xodr_path, self.client)
  File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
    enable_mesh_visibility=True))
RuntimeError: opendrive could not be correctly parsed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "opencda.py", line 56, in <module>
    main()
  File "opencda.py", line 51, in main
    scenario_runner(opt, config_yaml)
  File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 75, in run_scenario
    eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment
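The UnboundLocalError here is a secondary symptom: the scenario script creates eval_manager inside a try block and references it during clean-up, so when the OpenDRIVE load fails early the name was never bound. The real error is the RuntimeError above it (the server could not parse the .xodr map, often a CARLA version mismatch). A minimal sketch of the guard pattern that would surface only the real error (hypothetical names, not the actual OpenCDA code):

```python
def run_scenario_sketch(setup):
    """Initialize names before `try` so the clean-up path can run safely
    even when setup raises partway through (names are hypothetical)."""
    scenario_manager = None
    eval_manager = None
    try:
        scenario_manager, eval_manager = setup()
        # ... simulation loop would go here ...
    finally:
        # Guarded clean-up: only touch objects that were actually created,
        # so a setup failure propagates instead of an UnboundLocalError.
        if eval_manager is not None:
            eval_manager.evaluate()
        if scenario_manager is not None:
            scenario_manager.close()
```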

Support for OpenCDA on Newer Ubuntu Versions

Since both Ubuntu 18.04 and Python 3.7 are nearing EOL, I was wondering if you have any plans to add support for newer OS, CARLA, and Python versions. I built CARLA 0.9.14 on Ubuntu 22.04 from source with Python 3.10.6 (and also other versions of Python), but none seem to work with OpenCDA.

Can OpenCDA import the INTERACTION dataset and simulate and analyze the vehicle behavior in its scenarios?

Hello, I am glad to learn about OpenCDA. So far I have only read your paper and have not yet studied OpenCDA's concrete operation in depth. I have a few questions:

1. Can OpenCDA import the INTERACTION dataset and reproduce its scenarios in simulation, e.g. reconstructing the map, vehicle driving trajectories, behavior analysis, etc.? Does the data in the INTERACTION dataset need to be converted during import? What about other datasets (e.g. the inD dataset and other vehicle behavior/trajectory datasets)?
2. After simulation, if I want to analyze some behaviors or add algorithms for research (e.g. LSTM for trajectory prediction, MPC with a dynamics model, etc.), can the resulting data be saved, and can such algorithms be developed on top of OpenCDA?

These features could be ones built into OpenCDA, or I could write the algorithms myself, as long as OpenCDA provides the corresponding interfaces. If this is feasible, I will study OpenCDA further.

Looking forward to your reply.

Installation Problems (local installation and docker)

Hello,
I want to use OpenCDA to run some tests for V2X scenarios. However, I have encountered installation problems even though I completed all the steps correctly.

Errors for Local installation

  1. I created a conda environment and installed all dependencies correctly without any errors.
  2. I also installed the prebuilt CARLA version and extracted the additional maps.

If I run single_2lanefree_carla, I get the error below:

image

I also tested with another scenario, single_town06_carla, but I got an extra error on top of the same one. However, the maps from the link are already downloaded and extracted into CARLA with the ./ImportAssets.sh command. I can see Town06 in the folder, as below:

image

However, I still get the following error and the eval_manager error is still present:

image

Errors for Docker installation

First, a small error: during the build of the docker image, the name is set to opencda_container, but the run command then refers to it as opencda_docker, so it produces an error:

image

I had set OPENCDA_FULL_INSTALL to true, so I did not run setup.sh.
Then I realized the Python version of the carla dist is not correct:

image

Naturally, the carla module could not be found:

image

Then I ran setup.sh to install carla:

image

Then I got a version error about numpy, so I upgraded it:

image

After that, when I ran the scenario, I got the exact same error as with the local installation:
image

I know it's a long topic, but I wanted to show the errors I encountered during installation. Thank you in advance for replying.

Running opencda in docker support

This is not a real issue, just some notes for those who want to run OpenCDA in a docker environment.

  1. Base docker image: I already have a base docker image (Ubuntu 18.04) with the carla client lib (0.9.11) installed, i.e. import carla does not generate any error messages.
  2. OpenCDA installation: Get a copy of the source code and mount it into a docker container based on the image from the previous step using the docker -v option, so you get access to the OpenCDA source inside the container.
  3. X11 support: use the docker run options -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY.

Possible errors:

  1. In the container shell, when you try to run a scenario, for example single_2lanefree_carla, you may get messages about missing libs (libSM.so, libGL.so). To fix these errors, install the dependencies with sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx.
  2. You may get errors like "X error: BadShmSeg ...". Setting the environment variable export QT_X11_NO_MITSHM=1 in the container will fix it.

If you see some other errors, leave a message here, I'll see if I can help.

CARLA installation

I encounter the following issue when installing CARLA with the command make launch (make PythonAPI compiles successfully):

8 warnings and 18 errors generated.
5 warnings and 10 errors generated.
make[1]: *** [Makefile:315: CarlaUE4Editor] Error 6
make[1]: Leaving directory '/home/admin1/carla/Unreal/CarlaUE4'
make: *** [Util/BuildTools/Linux.mk:7: launch] Error 2

Please help me!!! Many thanks.

0xc4: MoveToXY vehicle should obtain: edgeID, lane, x, y, angle and optionally keepRouteFlag.

Hi All,

I was able to run the co-sim scenario with SUMO in past weeks; however, I now receive error messages like the following.

In SUMO, this error showed up, Error: Answered with error to command 0xc4: MoveToXY vehicle should obtain: edgeID, lane, x, y, angle and optionally keepRouteFlag.

The following error message is shown in the terminal.

(opencda) 09:25:25 carma3@carma ~/OpenCDA (main) $ python opencda.py -t platoon_joining_2lanefree_cosim -v 0.9.12
OpenCDA Version: 0.1.1
load opendrive map '2lane_freeway_simplified.xodr'.
INFO - 2022-03-09 09:25:35,271 - sumo_simulation - Starting new sumo server...
INFO - 2022-03-09 09:25:35,272 - sumo_simulation - Remember to press the play button to start the simulation
Retrying in 1 seconds
Loading configuration ... done.
Creating platoons/
Creating single CAVs.
WARNING - 2022-03-09 09:25:40,696 - bridge_helper - sumo vtype DEFAULT_VEHTYPE not found in carla. The following blueprint will be used: vehicle.chevrolet.impala
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)

Does anybody have an idea why this happened? Please help. Thank you.

Is python 2.7 supported?

First of all, thanks for this great project, it looks amazing! I have a question about the supported Python versions. I saw on the installation page that Python 3.7 is supported; does your project also support Python 2.7? Thanks!

Error when running with perception enabled

Hello, first of all, thanks for the amazing work!
I am trying to run OpenCDA from the very beginning. I managed to run the very first test mentioned in the tutorial, the two-lane highway test, with the command python opencda.py -t single_2lanefree_carla -v 0.9.12, and it works perfectly. However, when I want to enable perception, which means adding the extra --apply_ml, I run:
python opencda.py -t single_town06_carla -v 0.9.12 --apply_ml
and I get errors that look like this:

Connected to pydev debugger (build 223.7571.203)
OpenCDA Version: 0.1.2
Using cache found in /home/shule/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 🚀 2022-12-13 Python-3.7.10 torch-1.8.0 CUDA:0 (NVIDIA GeForce RTX 2080 Ti, 11019MiB)

Fusing layers... 
YOLOv5m summary: 290 layers, 21172173 parameters, 0 gradients, 48.9 GFLOPs
Adding AutoShape... 
Creating single CAVs.
Traceback (most recent call last):
  File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/single_town06_carla.py", line 33, in run_scenario
    scenario_manager.create_vehicle_manager(application=['single'])
  File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 325, in create_vehicle_manager
    data_dumping=data_dump)
  File "/home/shule/PycharmProjects/OpenCDA/opencda/core/common/vehicle_manager.py", line 109, in __init__
    map_config)
  File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 127, in __init__
    self.generate_lane_cross_info()
  File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 318, in generate_lane_cross_info
    tl_id = self.associate_lane_tl(mid_lane)
  File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 268, in associate_lane_tl
    trigger_path = Path(trigger_poly.boundary)
  File "/home/shule/anaconda3/envs/opencda/lib/python3.7/site-packages/matplotlib/path.py", line 127, in __init__
    vertices = _to_unmasked_float_array(vertices)
  File "/home/shule/anaconda3/envs/opencda/lib/python3.7/site-packages/matplotlib/cbook/__init__.py", line 1317, in _to_unmasked_float_array
    return np.asarray(x, float)
TypeError: float() argument must be a string or a number, not 'LineString'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shule/PycharmProjects/OpenCDA/opencda.py", line 56, in <module>
    main()
  File "/home/shule/PycharmProjects/OpenCDA/opencda.py", line 51, in main
    scenario_runner(opt, config_yaml)
  File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/single_town06_carla.py", line 64, in run_scenario
    eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment
python-BaseException
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1109 (sensor.other.gnss) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1110 (sensor.other.imu) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1114 (sensor.camera.rgb) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1113 (sensor.camera.rgb) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1112 (sensor.camera.rgb) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1111 (sensor.camera.rgb) 
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1115 (sensor.lidar.ray_cast) 

Process finished with exit code 134 (interrupted by signal 6: SIGABRT) 

I am running under Ubuntu 20.04 and my pytorch version is 1.8. I am an amateur at machine learning, so I just followed the steps to install yolov5 and have done nothing else.
Looking forward to your answers; thanks in advance.

Best,
Shule

opencda.py: error: unrecognized arguments: -v 0.9.12

Hi, when I changed my CARLA version this error occurred. Is there a mistake in my command?

(opencda) anyu@anyu_2019:~/OpenCDA$ python opencda.py -t single_2lanefree_carla -v 0.9.12
usage: opencda.py [-h] -t TEST_SCENARIO [--record] [--apply_ml]
opencda.py: error: unrecognized arguments: -v 0.9.12
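The usage line shows that this build of opencda.py only defines -t, --record, and --apply_ml; the -v/--version flag was added in a later OpenCDA release, so either upgrade OpenCDA or add the argument yourself. A sketch of the argparse setup implied by the usage message, plus the missing flag (the default and help semantics are assumptions):

```python
import argparse

def arg_parse_sketch(argv=None):
    """Flag set implied by the usage message, plus the -v/--version flag
    that newer OpenCDA releases accept (details here are assumptions)."""
    parser = argparse.ArgumentParser(description="OpenCDA scenario runner")
    parser.add_argument('-t', '--test_scenario', required=True,
                        help='name of the scenario testing script')
    parser.add_argument('--record', action='store_true',
                        help='record the simulation')
    parser.add_argument('--apply_ml', action='store_true',
                        help='enable ML-based perception')
    # the flag missing from the older build: which CARLA version's API to load
    parser.add_argument('-v', '--version', type=str, default='0.9.11',
                        help='CARLA simulator version')
    return parser.parse_args(argv)
```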

RuntimeError: time-out of 10000ms while waiting for the simulator

python opencda.py -t platoon_joining_2lanefree_cosim
OpenCDA Version: 0.1.0
load opendrive map '2lane_freeway_simplified.xodr'.
Traceback (most recent call last):
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 42, in run_scenario
sumo_file_parent_path=sumo_cfg)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/cosim_api.py", line 64, in __init__
cav_world)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
self.world = load_customized_world(xodr_path, self.client)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
enable_mesh_visibility=True))
RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "opencda.py", line 56, in <module>
main()
File "opencda.py", line 51, in main
scenario_runner(opt, config_yaml)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 86, in run_scenario
eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment

The YOLOv5 .pt file used for object detection seems fixed to 80 labels; how can I specify my own labels?

The YOLOv5 .pt file used for object detection in the current code seems to be fixed to the standard 80 labels. The model I trained myself does not follow these 80 labels; it only has labels such as vehicle and pedestrian. Where should I modify the code so that my model can run with the example command without crashing into other cars?
The example command used: python opencda.py -t single_town06_carla -v 0.9.13 --apply_ml
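The stock pipeline filters detections by the COCO class names of the default YOLOv5 weights, so a custom model needs either to be loaded via torch.hub's 'custom' entry point (torch.hub.load('ultralytics/yolov5', 'custom', path=...)) or to have its class ids remapped to the names the pipeline expects before filtering. A pure-Python sketch of the remap idea (all id/name pairs below are assumptions for a two-class model, not OpenCDA code):

```python
# Hypothetical remap: translate a custom model's class ids to the COCO-style
# names a stock YOLOv5 pipeline filters on. Adjust the pairs to match the
# class order of your own training run.
CUSTOM_TO_COCO_NAME = {
    0: 'car',      # custom class 0: vehicle
    1: 'person',   # custom class 1: pedestrian
}

def remap_detections(detections):
    """detections: iterable of (class_id, confidence, bbox) from the custom
    model. Returns (coco_name, confidence, bbox) tuples, dropping any class
    the downstream filter would not recognize."""
    out = []
    for cls, conf, bbox in detections:
        name = CUSTOM_TO_COCO_NAME.get(cls)
        if name is not None:
            out.append((name, conf, bbox))
    return out
```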

Typos discovered by codespell

pip install codespell
codespell --ignore-words-list="nd,te"

./OpenCDA/setup.sh:36: Sucessful ==> Successful
./OpenCDA/README.md:18: customzied ==> customized
./OpenCDA/README.md:88: impplementation ==> implementation
./OpenCDA/opencda/core/sensing/perception/obstacle_vehicle.py:23: lable ==> label
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:76: correclty ==> correctly
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:245: covert ==> convert
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:248: homogenous ==> homogeneous
./OpenCDA/opencda/core/sensing/perception/perception_manager.py:277: Currenly ==> Currently
./OpenCDA/opencda/core/sensing/perception/perception_manager.py:667: controled ==> controlled
./OpenCDA/opencda/core/sensing/localization/localization_debug_helper.py:177: datas ==> data
./OpenCDA/opencda/core/sensing/localization/coordinate_transform.py:13: writen ==> written
./OpenCDA/opencda/core/plan/behavior_agent.py:356: temporarely ==> temporarily
./OpenCDA/opencda/core/plan/behavior_agent.py:831: handeling ==> handling
./OpenCDA/opencda/core/plan/spline.py:70: calcualtion ==> calculation
./OpenCDA/opencda/core/plan/spline.py:224: Caculate ==> Calculate
./OpenCDA/opencda/core/plan/spline.py:260: intepolated ==> interpolated
./OpenCDA/opencda/core/plan/local_planner_behavior.py:46: contorl ==> control
./OpenCDA/opencda/core/plan/global_route_planner_dao.py:78: lcoation ==> location
./OpenCDA/opencda/core/common/data_dumper.py:153: spped ==> speed, sped, sipped, sapped, supped, sopped
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:38: destiantion ==> destination
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:84: memeber ==> member
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:199: desination ==> destination
./OpenCDA/opencda/core/application/platooning/fsm.py:39: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/fsm.py:54: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:184: finshed ==> finished
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:196: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:884: ABONDON ==> ABANDON
./OpenCDA/opencda/core/actuation/pid_controller.py:194: loaction ==> location
./OpenCDA/opencda/core/actuation/control_manager.py:24: framwork ==> framework
./OpenCDA/opencda/scenario_testing/utils/customized_map_api.py:40: readed ==> read, readd, readded
./OpenCDA/opencda/scenario_testing/utils/cosim_api.py:248: ligth ==> light
./OpenCDA/opencda/scenario_testing/utils/sim_api.py:65: backgound ==> background
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:186: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:199: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:340: asign ==> assign
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:450: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/carla_simulation.py:84: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/bridge_helper.py:32: methos ==> methods, method
./OpenCDA/opencda/co_simulation/sumo_integration/bridge_helper.py:322: ligth ==> light
./OpenCDA/opencda/customize/core/sensing/localization/extented_kalman_filter.py:120: Initalization ==> Initialization
./OpenCDA/docs/opencda.customize.core.sensing.localization.rst:7: extented ==> extended
./OpenCDA/docs/md_files/introduction.md:33: customzied ==> customized
./OpenCDA/docs/md_files/yaml_define.md:21: parmaters ==> parameters
./OpenCDA/docs/md_files/yaml_define.md:24: controled ==> controlled
./OpenCDA/docs/md_files/developer_tutorial.md:46: signle ==> single, signal
./OpenCDA/docs/md_files/developer_tutorial.md:208: apperance ==> appearance
./OpenCDA/docs/md_files/getstarted.md:86: detetion ==> detection, deletion
./OpenCDA/docs/md_files/logic_flow.md:79: addtional ==> additional
./OpenCDA/docs/md_files/codebase_structure.md:16: adn ==> and
./OpenCDA/test/test_ekf.py:3: Extented ==> Extended
