ucla-mobility / opencda
A generalized framework for prototyping full-stack cooperative driving automation applications under CARLA+SUMO.
License: Other
Hi, author. During the simulation, how do I save the image and point cloud data, as well as the annotation files?
Question: if I want to swap in a different perception algorithm and model, e.g. YOLOv4, how should I do that? Is there any documentation?
Since CARLA's fog does not affect the LiDAR sensor, we'd love to add a feature to simulate fog on LiDAR data as well.
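A simple way to prototype this is to drop returns with a Beer-Lambert-style survival probability and add range jitter. The function below is an illustrative sketch, not an OpenCDA API; the extinction rule of thumb (≈3/visibility) and the noise level are assumptions.

```python
import numpy as np

def attenuate_lidar_by_fog(points, visibility_m=100.0, noise_std=0.02, rng=None):
    """Illustrative fog model for a LiDAR point cloud.

    points: (N, 4) array of x, y, z, intensity.
    visibility_m: assumed meteorological visibility; smaller = denser fog.
    """
    rng = np.random.default_rng() if rng is None else rng
    dist = np.linalg.norm(points[:, :3], axis=1)
    # Beer-Lambert style survival: distant returns are dropped more often.
    alpha = 3.0 / visibility_m  # rough extinction coefficient (assumption)
    keep = rng.random(len(points)) < np.exp(-alpha * dist)
    fogged = points[keep].copy()
    # Range jitter from scattering.
    fogged[:, :3] += rng.normal(0.0, noise_std, fogged[:, :3].shape)
    return fogged

pts = np.full((100, 4), 50.0)  # dummy cloud; every point ~86.6 m away
out = attenuate_lidar_by_fog(pts, visibility_m=100.0, rng=np.random.default_rng(0))
print(pts.shape, '->', out.shape)
```

With denser fog (smaller `visibility_m`), progressively fewer points survive, which mimics the range loss real fog causes for LiDAR.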
I ran `python opencda.py -t single_town05_cosim -v 0.9.11` and got `ERROR: single_town05_cosim.py not found under opencda/scenario_testing`. Any idea? @DerrickXuNu
Thanks for creating this great project! I saw OpenCDA includes some rule-based algorithms in its planning module. Does it also support reinforcement-learning behavior planning? Thanks!
Hello, author. I would like to ask why the collected LiDAR data does not have object annotations. How can we generate annotations for LiDAR data? I hope you can find some time to look into this issue amidst your busy schedule.
I just realized that the ego vehicle can find nearby cars through the V2X manager when the perception activate mode is set to false.
This code is at lines 486-490 of opencda/core/sensing/perception/perception_manager.py:
if not self.activate:
    self.search_nearby_cav()
    objects = self.deactivate_mode(objects)
    # maybe: when not detecting, query the positions of surrounding cars via V2X and use them as obstacle positions
else:
    objects = self.activate_mode(objects)
Could you explain which part of the code implements the cooperative LiDAR communication? I have spent a long time looking and could not find it.
I am working in the CARLA simulator and need a specific scenario for my simulation, so I tried to run the file named single_intersection_town06_carla, and it shows me the error "float() argument must be a string or a number, not 'LineString'". I traced the error to core/map/map_manager.py, where the input is actually a LineString. I cannot figure out how to solve this, as shapely polygons and the polygon boundary function are completely new to me. How do I resolve this error?
Here is the relevant function from map_manager.py:
def associate_lane_tl(self, mid_lane):
    """
    Given the waypoints for a certain lane, find the traffic light that
    influences it.

    Parameters
    ----------
    mid_lane : np.ndarray
        The middle line of the lane.

    Returns
    -------
    associate_tl_id : str
        The associated traffic light id.
    """
    associate_tl_id = ''
    for tl_id, tl_content in self.traffic_light_info.items():
        trigger_poly = tl_content['corners']
        print("\n\n corners", tl_content['corners'])
        print("\n\n trigger_poly", trigger_poly)  # printing values
        print(type(trigger_poly))
        # use Path to do fast computation
        trigger_path = Path(float(trigger_poly.boundary))
        print("trigger_path", type(trigger_poly.boundary))
        # check if any point in the middle line is inside the trigger area
        check_array = trigger_path.contains_points(mid_lane[:, :2])
        if check_array.any():
            associate_tl_id = tl_id
    return associate_tl_id
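This error typically appears with newer Shapely versions, where matplotlib can no longer coerce a Polygon's LineString boundary directly into a float array. A sketch of a workaround is to pass explicit vertex coordinates to `Path` (here a stand-in `corners` array; in map_manager.py that would be something like `np.asarray(trigger_poly.boundary.coords)` instead of the LineString object itself):

```python
import numpy as np
from matplotlib.path import Path

# Stand-in for tl_content['corners']: the four corners of a trigger area.
# In map_manager.py, trigger_poly is a shapely Polygon; the fix is to pass
# explicit vertices, e.g. np.asarray(trigger_poly.boundary.coords),
# rather than the LineString returned by .boundary.
corners = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
trigger_path = Path(corners)

# Two sample mid-lane points: one inside the trigger area, one outside.
mid_lane = np.array([[2.0, 2.0, 0.0], [10.0, 10.0, 0.0]])
check = trigger_path.contains_points(mid_lane[:, :2])
print(check)  # [ True False]
```

Wrapping the `.coords` conversion at the call site should not change the function's behavior on older Shapely versions, since the resulting vertex array describes the same trigger polygon.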
How do I specify the controller in OpenCDA? I know there is a PID controller in the source code, but I cannot find where the controller type is defined. Could you please provide some comments and help on this? Thanks.
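As far as I can tell, the controller is chosen in the scenario yaml's `controller` section via its `type` field (e.g. `type: pid_controller`), which control_manager.py uses to construct the controller class. A minimal sketch of that dispatch pattern, with hypothetical class and registry names:

```python
# Hypothetical sketch of dispatching a controller type named in the
# scenario yaml (controller: type: pid_controller) to a concrete class.
# PIDController and CONTROLLER_REGISTRY are illustrative names, not
# OpenCDA's actual identifiers.

class PIDController:
    def __init__(self, kp=1.0, ki=0.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd

CONTROLLER_REGISTRY = {'pid_controller': PIDController}

def build_controller(config):
    """Build a controller from a dict parsed out of the yaml file."""
    ctype = config['type']
    if ctype not in CONTROLLER_REGISTRY:
        raise ValueError(f'unknown controller type: {ctype}')
    return CONTROLLER_REGISTRY[ctype](**config.get('args', {}))

ctrl = build_controller({'type': 'pid_controller', 'args': {'kp': 2.0}})
print(type(ctrl).__name__, ctrl.kp)  # PIDController 2.0
```

Adding your own controller then amounts to registering a new class under a new `type` string and referencing it from the yaml.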
I ran the third example of the quick start on the OpenCDA official website, and SUMO reported the following error: `simulation ended at time: 0.05`.
I am trying to run OpenCDA on a remote server with Ubuntu 16.04. I had a problem with open3d before; after I solved that, I got the following error:
I'm sure I followed the steps in the official documentation. What should I do to fix this error? Thanks!
By the way, does OpenCDA support running on a remote server?
Carla: 0.9.11
Driver Version: 418.43
CUDA Version: 10.1
When I run the command `conda env create -f environment.yml`, I get this error:
NoPackagesFoundError: Package missing in current linux-64 channels:
-pip ==21.1.2
First, thanks for open-sourcing the OpenCDA framework; the code of the main version is very clear to read through! I am interested in applying multi-agent deep reinforcement learning in co-simulation mode, so I turned to the 'feature_reinforcement_learning' branch. When I run the command `python opencda.py -t single_rldqn_town06_carla -rl train`, I encounter the following problems:
Using port 9000, I could not connect to my CARLA server, so I set the client port to 2000 and it works.
The CarlaRLEnv may lack the definitions of 'observation_space', 'action_space', and 'reward_space', which cannot be passed to the DI-engine, so I simply assigned empty dicts `spaces.Dict({})` to these variables within the CarlaRLEnv and the errors were gone.
Then I re-run the program and encounter the following problem, but I don't know how to solve it:
Traceback (most recent call last):
File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda.py", line 65, in <module>
main()
File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda.py", line 60, in main
scenario_runner(opt, config_yaml)
File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda/scenario_testing/single_rldqn_town06_carla.py", line 16, in run_scenario
rl_train(opt, config_yaml)
File "/home/ghz/PycharmProjects/OpenCDA-feature-reinforcement_learning/opencda/core/ml_libs/rl/rl_api.py", line 176, in rl_train
tb_logger, exp_name=rl_cfg.exp_name)
File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 81, in __init__
self._instance_name, EasyDict(seconds=self._cfg.periodic_thruput_seconds), self._logger, self._tb_logger
AttributeError: 'EasyDict' object has no attribute 'periodic_thruput_seconds'
Exception ignored in: <function NaiveReplayBuffer.__del__ at 0x7f1f6c5df0e0>
Traceback (most recent call last):
File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 277, in __del__
self.close()
File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 97, in close
self.clear()
File "/home/ghz/anaconda3/envs/opencda/lib/python3.7/site-packages/ding/worker/replay_buffer/naive_buffer.py", line 268, in clear
self._periodic_thruput_monitor.valid_count = self._valid_count
AttributeError: 'NaiveReplayBuffer' object has no attribute '_periodic_thruput_monitor'
terminate called without an active exception
Plus, I also want to ask about the version of DI-engine (`ding`) used in this RL version of OpenCDA; I downloaded the latest version, 0.4.3.
Thanks a lot!
Hello! OpenCDA is a really good project and I am enjoying using it. I appreciate the thorough documentation and the clear code style.
I understand the license permits researchers to use the code with credit, but we cannot make our forked code public; if we do, the license is terminated and we can no longer use OpenCDA (i.e., a normal "non-distribution" license, as I understand it).
I wanted to clarify the following:
Hello,
My goal is to spawn the ego-vehicle at a specific location in Town07 (e.g., singleTown07_carla).
In the picture, the red dot is the spawn position and the blue dot is the destination.
I do not understand where/how to retrieve the coordinates used in spawn_position and destination (see lines below):
scenario:
  single_cav_list: # this is for merging vehicle or single cav without v2x
    - <<: *vehicle_base
      spawn_position: [600, 51, 50, 0, 0, 0]
      destination: [600, 145.51, 50]
I have tried to use the x, y, z coordinates shown in the Unreal Engine Editor, but they seem off.
Can you provide me some guidance or an example?
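Part of the mismatch is likely units: the Unreal Editor displays coordinates in centimeters, while the CARLA Python API (and OpenCDA's yaml entries) use meters. A small illustrative converter (the helper name is mine), plus the API query that lists valid spawn coordinates directly:

```python
# The Unreal Editor shows centimeters; the CARLA Python API and OpenCDA's
# spawn_position/destination yaml entries use meters, which is one reason
# editor values "seem off". ue_editor_to_carla is a hypothetical helper.

def ue_editor_to_carla(x_cm: float, y_cm: float, z_cm: float):
    """Convert Unreal Editor centimeter coordinates to CARLA meters."""
    return (x_cm / 100.0, y_cm / 100.0, z_cm / 100.0)

# To list valid coordinates directly, query a running CARLA server instead
# (assumes CARLA on localhost:2000):
#   import carla
#   client = carla.Client('localhost', 2000)
#   world = client.load_world('Town07')
#   for tf in world.get_map().get_spawn_points():
#       print(tf.location.x, tf.location.y, tf.location.z, tf.rotation.yaw)

print(ue_editor_to_carla(60000.0, 5100.0, 5000.0))  # (600.0, 51.0, 50.0)
```

Note the y-axis also differs in handedness between the editor and some tools, so checking a known spawn point via `get_spawn_points()` is the safest way to validate coordinates.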
Thank you!
Hello, what are the behavior planning and trajectory planning algorithms for CAVs in OpenCDA? Where can I find a clear description of the flow of these algorithms?
The original paper presents the overall flow of the framework but gives little detail on BehaviorAgent (mathematical formulas, etc.).
In addition, I sent you a WeChat friend request on March 17th; could you approve it? Thank you!
Huge thanks for this great project; it looks amazing! I have a question about the supported versions of CARLA. I saw on the installation page that both CARLA 0.9.11 and 0.9.12 are supported, but due to our current projects we have to keep using version 0.9.9. Does your project also support CARLA 0.9.9? If not, could you provide any ideas on how we could modify this great project to fit CARLA 0.9.9? Thanks!
Hello.
When I run the code `python opencda.py -t single_2lanefree_carla`, there is an error:
ERROR: single_2lanefree_carla.py not found under opencda/scenario_testing
How should I solve it?
Hello, everyone. Have you ever encountered this problem? Please help me, Thanks!!
Hi,
Thanks for the great work
I tried to run single_2lanefree_carla on Ubuntu 16.04, but it failed.
I searched on Google and found that it may be a problem with open3d, which requires glibc 2.27.
Ubuntu 16.04 seems to no longer be supported; it only ships glibc 2.23.
So must I upgrade my Ubuntu to 18.04?
I was wondering if it is possible to generate a new single CAV on the on-ramp, particularly for the scenario "platoon_joining_2lanefree_cosim". I tried to spawn a single CAV on the on-ramp, but when it reached the merging area at about the same time as a mainline platoon (when it should perform a cut-in merge), it did not merge into the platoon.
Please advise whether OpenCDA allows us to do this. My intent is to have the simulation run longer with more CAVs. (Spawning multiple CAVs at the simulation start is possible but is limited by the space of the link.)
Thank you,
Thod
Currently, OpenCDA regards all traffic lights with id -1 as stop signs and will stop there for 2 seconds before moving. However, once a stop sign is activated by a vehicle, its id changes to a positive int and does not change back to -1 for a while. Thus, the current stop-sign behavior is not robust. We will try to implement better stop-sign behavior in the next version.
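One illustrative workaround (hypothetical names, not the project's actual API) is to cache stop-sign trigger locations once at startup and match by distance, so the behavior no longer depends on the mutable id:

```python
import math

class StopSignTracker:
    """Sketch: remember stop-sign trigger locations and match by proximity,
    since matching by traffic-light id is unreliable once a sign is activated.
    The class name, radius, and interface are invented for illustration."""

    def __init__(self, stop_sign_locations, radius=5.0):
        self._locations = list(stop_sign_locations)  # (x, y) pairs
        self._radius = radius

    def near_stop_sign(self, x, y):
        return any(math.hypot(x - sx, y - sy) < self._radius
                   for sx, sy in self._locations)

tracker = StopSignTracker([(100.0, 50.0)])
print(tracker.near_stop_sign(102.0, 51.0))  # True
print(tracker.near_stop_sign(200.0, 50.0))  # False
```

The location set can be built once from the map's sign actors, after which id churn during activation no longer matters.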
When I use the command `./Setup.sh && ./GenerateProjectFiles.sh && make`, I meet this error:
Failed to download 'http://cdn.unrealengine.com/dependencies/UnrealEngine-7235308-3ea1d61ea5264fd9a0aba5ac630f4e2a/034e7074fbe6077510c864a8b6b23e73a2aa0f29': The remote server returned an error: (403) Forbidden. (WebException)
How can I solve this problem?
Not sure if I missed anything, but I cannot get the basic example working.
OS: Ubuntu 20.04
GPU: RTX2080
Carla itself is working fine.
Command for starting carla server:
/opt/carla-simulator/CarlaUE4.sh
4.24.3-0+++UE4+Release-4.24 518 0
Disabling core dumps.
command for starting opencda:
$ python opencda.py -t single_2lanefree_carla
OpenCDA Version: 0.1.0
load opendrive map '2lane_freeway_simplified.xodr'.
Traceback (most recent call last):
File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 35, in run_scenario
cav_world=cav_world)
File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
self.world = load_customized_world(xodr_path, self.client)
File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
enable_mesh_visibility=True))
RuntimeError: opendrive could not be correctly parsed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "opencda.py", line 56, in <module>
main()
File "opencda.py", line 51, in main
scenario_runner(opt, config_yaml)
File "/home/yanghao/external/OpenCDA/opencda/scenario_testing/single_2lanefree_carla.py", line 75, in run_scenario
eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment
Since both Ubuntu 18.04 and Python 3.7 are nearing EOL, I was wondering if you have any plans to add support for newer OS, CARLA, and Python versions. I built CARLA 0.9.14 on Ubuntu 22.04 from source with Python 3.10.6 (and also other versions of Python), but none seem to work with OpenCDA.
NumPy>=1.22, required by yolov5, no longer supports Python 3.7, but the suggested Python version in your documentation is still 3.7.
Hello, I'm glad to have learned about OpenCDA. So far I have only read your paper and have not yet studied the concrete workings of OpenCDA in depth. I have some questions:
1. Can OpenCDA import the INTERACTION dataset and replay its scenarios in simulation, e.g. reproducing the map, vehicle trajectories, behavior analysis, and so on? During import, do the data types in the INTERACTION dataset need to be converted? What about other datasets, such as inD (mainly datasets of vehicle behaviors and trajectories)?
2. After simulation, if I want to analyze behaviors or plug in algorithms for research (for example, LSTM for trajectory prediction, or MPC for controlling the dynamics model), can the resulting data be saved, and can such algorithms be developed on top of OpenCDA?
These could use OpenCDA's built-in functionality, or I could write the algorithms myself, as long as OpenCDA provides the corresponding interfaces. If this is feasible, I will study OpenCDA further.
Looking forward to your reply.
How do I estimate the yaw angle and speed of surrounding vehicles? The source code does not seem to be finished.
When running `python opencda.py -t platoon_joining_town06_carla -v 0.9.11 --apply_ml`, I found that the frame rate was very low and the picture was not smooth. How can I solve this?
Hello, how can I manually take over an autonomous vehicle in OpenCDA? That is, after taking over the vehicle, can I drive it manually in the simulator?
Hello,
I want to use OpenCDA to run some tests for V2X scenarios. However, I have encountered installation problems even though I completed all the steps correctly.
If I run single_2lanefree_carla, I get the error below:
I also tested another scenario, single_town06_carla, but I got an extra error along with the same one. However, the maps from the link are already downloaded and extracted into CARLA with the ./ImportAssets.sh command; I can see Town06 in the folder as below:
Still, I get the following error, and the eval_manager error is also present:
First, a small error: during the build of the Docker image, the image name is set to opencda_container, but the run command then refers to it as opencda_docker, so it produces an error.
I had set OPENCDA_FULL_INSTALL to true and did not run setup.sh.
Then I realized the Python version of the CARLA dist is not correct, so naturally the carla module could not be found.
I then ran setup.sh to install carla. After that I got a version error about numpy and upgraded it.
Finally, when I ran the scenario, I got exactly the same error as with the local installation:
I know it's a long topic but I wanted to show the errors I encountered during installation. Thank you in advance for replying.
This is not a real issue, just some notes for those who want to run OpenCDA in a Docker environment.
- Verify the CARLA client library first: import carla should not generate any error messages.
- Mount the OpenCDA source into the container with -v options, so you have access to the source inside the container.
- For GUI output, pass -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY to docker run.
Possible errors:
- Missing system libraries: run sudo apt-get update && sudo apt-get install -y libsm6 libgl1-mesa-glx to install the dependencies.
- X11 shared-memory errors: export QT_X11_NO_MITSHM=1 in the container will fix it.
If you see some other errors, leave a message here and I'll see if I can help.
I encounter the following issue when installing CARLA with the command `make launch` (`make PythonAPI` compiles successfully):
8 warnings and 18 errors generated.
5 warnings and 10 errors generated.
make[1]: *** [Makefile:315: CarlaUE4Editor] Error 6
make[1]: Leaving directory '/home/admin1/carla/Unreal/CarlaUE4'
make: *** [Util/BuildTools/Linux.mk:7: launch] Error 2
Please help me!!! Many thanks.
Hi team,
Thanks for your great work! I just have a question about your future development plan: will ROS/Apollo be integrated into OpenCDA?
Hi All,
I was able to run the co-sim scenario with SUMO in the past weeks; however, now I receive error messages like the following.
In SUMO, this error showed up: Error: Answered with error to command 0xc4: MoveToXY vehicle should obtain: edgeID, lane, x, y, angle and optionally keepRouteFlag.
The following error message is shown in the terminal:
(opencda) 09:25:25 carma3@carma ~/OpenCDA (main) $ python opencda.py -t platoon_joining_2lanefree_cosim -v 0.9.12
OpenCDA Version: 0.1.1
load opendrive map '2lane_freeway_simplified.xodr'.
INFO - 2022-03-09 09:25:35,271 - sumo_simulation - Starting new sumo server...
INFO - 2022-03-09 09:25:35,272 - sumo_simulation - Remember to press the play button to start the simulation
Retrying in 1 seconds
Loading configuration ... done.
Creating platoons.
Creating single CAVs.
WARNING - 2022-03-09 09:25:40,696 - bridge_helper - sumo vtype DEFAULT_VEHTYPE not found in carla. The following blueprint will be used: vehicle.chevrolet.impala
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3373: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/home/carma3/anaconda3/envs/opencda/lib/python3.7/site-packages/numpy/core/_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Does anybody have an idea why this happened? Please help. Thank you.
Hi! Does OpenCDA currently support multi-agent RL (MARL) algorithms like CoPO?
First of all, thanks for this great project, it looks amazing! I have a question about the supported version of Python. I saw on the installation page that Python 3.7 is supported; does your project also support Python 2.7? Thanks!
Hello, first of all, thanks for the amazing work!
I am currently trying to run OpenCDA from the very beginning. I managed to run the first test mentioned in the tutorial, the two-lane highway test, with the command `python opencda.py -t single_2lanefree_carla -v 0.9.12`.
It works perfectly; however, when I want to enable perception, which means adding the extra --apply_ml flag, I run the command
python opencda.py -t single_town06_carla -v 0.9.12 --apply_ml
and I get errors that look like this:
Connected to pydev debugger (build 223.7571.203)
OpenCDA Version: 0.1.2
Using cache found in /home/shule/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5 🚀 2022-12-13 Python-3.7.10 torch-1.8.0 CUDA:0 (NVIDIA GeForce RTX 2080 Ti, 11019MiB)
Fusing layers...
YOLOv5m summary: 290 layers, 21172173 parameters, 0 gradients, 48.9 GFLOPs
Adding AutoShape...
Creating single CAVs.
Traceback (most recent call last):
File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/single_town06_carla.py", line 33, in run_scenario
scenario_manager.create_vehicle_manager(application=['single'])
File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 325, in create_vehicle_manager
data_dumping=data_dump)
File "/home/shule/PycharmProjects/OpenCDA/opencda/core/common/vehicle_manager.py", line 109, in __init__
map_config)
File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 127, in __init__
self.generate_lane_cross_info()
File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 318, in generate_lane_cross_info
tl_id = self.associate_lane_tl(mid_lane)
File "/home/shule/PycharmProjects/OpenCDA/opencda/core/map/map_manager.py", line 268, in associate_lane_tl
trigger_path = Path(trigger_poly.boundary)
File "/home/shule/anaconda3/envs/opencda/lib/python3.7/site-packages/matplotlib/path.py", line 127, in __init__
vertices = _to_unmasked_float_array(vertices)
File "/home/shule/anaconda3/envs/opencda/lib/python3.7/site-packages/matplotlib/cbook/__init__.py", line 1317, in _to_unmasked_float_array
return np.asarray(x, float)
TypeError: float() argument must be a string or a number, not 'LineString'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/shule/PycharmProjects/OpenCDA/opencda.py", line 56, in <module>
main()
File "/home/shule/PycharmProjects/OpenCDA/opencda.py", line 51, in main
scenario_runner(opt, config_yaml)
File "/home/shule/PycharmProjects/OpenCDA/opencda/scenario_testing/single_town06_carla.py", line 64, in run_scenario
eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment
python-BaseException
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1109 (sensor.other.gnss)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1110 (sensor.other.imu)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1114 (sensor.camera.rgb)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1113 (sensor.camera.rgb)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1112 (sensor.camera.rgb)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1111 (sensor.camera.rgb)
WARNING: sensor object went out of the scope but the sensor is still alive in the simulation: Actor 1115 (sensor.lidar.ray_cast)
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
I am running under Ubuntu 20.04 and my PyTorch version is 1.8. Also, I am an amateur at machine learning, so I just followed the steps to install yolov5 and did nothing else.
Looking forward to your answers; thanks in advance.
Best,
Shule
Hi, when I changed my CARLA version, this error occurred. Is there any mistake in my command?
(opencda) anyu@anyu_2019:~/OpenCDA$ python opencda.py -t single_2lanefree_carla -v 0.9.12
usage: opencda.py [-h] -t TEST_SCENARIO [--record] [--apply_ml]
opencda.py: error: unrecognized arguments: -v 0.9.12
Hi, author. I am confused about how to modify the settings to generate V2X or V2V data.
Can OpenCDA run on Windows 10?
python opencda.py -t platoon_joining_2lanefree_cosim
OpenCDA Version: 0.1.0
load opendrive map '2lane_freeway_simplified.xodr'.
Traceback (most recent call last):
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 42, in run_scenario
sumo_file_parent_path=sumo_cfg)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/cosim_api.py", line 64, in __init__
cav_world)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/sim_api.py", line 114, in __init__
self.world = load_customized_world(xodr_path, self.client)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/utils/customized_map_api.py", line 54, in load_customized_world
enable_mesh_visibility=True))
RuntimeError: time-out of 10000ms while waiting for the simulator, make sure the simulator is ready and connected to localhost:2000
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "opencda.py", line 56, in <module>
main()
File "opencda.py", line 51, in main
scenario_runner(opt, config_yaml)
File "/home/idriver/liutao/github/OpenCDA/opencda/scenario_testing/platoon_joining_2lanefree_cosim.py", line 86, in run_scenario
eval_manager.evaluate()
UnboundLocalError: local variable 'eval_manager' referenced before assignment
The object detection in the current code seems to use the fixed 80 labels of the YOLOv5 .pt file. My own trained model does not follow those 80 labels; it only has labels such as vehicle and pedestrian. Where should I modify the code so that my model can run the example without crashing into other cars?
The example command: `python opencda.py -t single_town06_carla -v 0.9.13 --apply_ml`
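One approach is to adapt the label filtering in perception_manager.py to your model's own class names. The snippet below is a hypothetical sketch (the label sets and the `split_detections` helper are invented for illustration); the custom weights themselves can be loaded with `torch.hub.load('ultralytics/yolov5', 'custom', path='your_model.pt')`.

```python
# Hypothetical sketch: map a custom model's labels onto the vehicle and
# pedestrian categories the perception stage expects. Stock YOLOv5 filters
# detections by COCO names such as 'car', 'truck', 'bus'; with custom
# weights, the names come from your training config instead.

VEHICLE_LABELS = {'vehicle', 'car'}          # assumption: your model's names
PEDESTRIAN_LABELS = {'pedestrian', 'person'}

def split_detections(detections):
    """detections: list of (label, bbox) pairs from the custom model."""
    vehicles = [d for d in detections if d[0] in VEHICLE_LABELS]
    pedestrians = [d for d in detections if d[0] in PEDESTRIAN_LABELS]
    return vehicles, pedestrians

dets = [('vehicle', (10, 20, 50, 60)),
        ('pedestrian', (5, 5, 15, 40)),
        ('traffic_sign', (0, 0, 8, 8))]
v, p = split_detections(dets)
print(len(v), len(p))  # 1 1
```

The point is that collision avoidance only sees what the label filter lets through, so the filter must match the custom model's class names rather than the hard-coded COCO set.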
pip install codespell
codespell --ignore-words-list="nd,te"
./OpenCDA/setup.sh:36: Sucessful ==> Successful
./OpenCDA/README.md:18: customzied ==> customized
./OpenCDA/README.md:88: impplementation ==> implementation
./OpenCDA/opencda/core/sensing/perception/obstacle_vehicle.py:23: lable ==> label
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:76: correclty ==> correctly
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:245: covert ==> convert
./OpenCDA/opencda/core/sensing/perception/o3d_lidar_libs.py:248: homogenous ==> homogeneous
./OpenCDA/opencda/core/sensing/perception/perception_manager.py:277: Currenly ==> Currently
./OpenCDA/opencda/core/sensing/perception/perception_manager.py:667: controled ==> controlled
./OpenCDA/opencda/core/sensing/localization/localization_debug_helper.py:177: datas ==> data
./OpenCDA/opencda/core/sensing/localization/coordinate_transform.py:13: writen ==> written
./OpenCDA/opencda/core/plan/behavior_agent.py:356: temporarely ==> temporarily
./OpenCDA/opencda/core/plan/behavior_agent.py:831: handeling ==> handling
./OpenCDA/opencda/core/plan/spline.py:70: calcualtion ==> calculation
./OpenCDA/opencda/core/plan/spline.py:224: Caculate ==> Calculate
./OpenCDA/opencda/core/plan/spline.py:260: intepolated ==> interpolated
./OpenCDA/opencda/core/plan/local_planner_behavior.py:46: contorl ==> control
./OpenCDA/opencda/core/plan/global_route_planner_dao.py:78: lcoation ==> location
./OpenCDA/opencda/core/common/data_dumper.py:153: spped ==> speed, sped, sipped, sapped, supped, sopped
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:38: destiantion ==> destination
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:84: memeber ==> member
./OpenCDA/opencda/core/application/platooning/platooning_manager.py:199: desination ==> destination
./OpenCDA/opencda/core/application/platooning/fsm.py:39: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/fsm.py:54: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:184: finshed ==> finished
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:196: ABONDON ==> ABANDON
./OpenCDA/opencda/core/application/platooning/platoon_behavior_agent.py:884: ABONDON ==> ABANDON
./OpenCDA/opencda/core/actuation/pid_controller.py:194: loaction ==> location
./OpenCDA/opencda/core/actuation/control_manager.py:24: framwork ==> framework
./OpenCDA/opencda/scenario_testing/utils/customized_map_api.py:40: readed ==> read, readd, readded
./OpenCDA/opencda/scenario_testing/utils/cosim_api.py:248: ligth ==> light
./OpenCDA/opencda/scenario_testing/utils/sim_api.py:65: backgound ==> background
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:186: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:199: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:340: asign ==> assign
./OpenCDA/opencda/co_simulation/sumo_integration/sumo_simulation.py:450: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/carla_simulation.py:84: ligth ==> light
./OpenCDA/opencda/co_simulation/sumo_integration/bridge_helper.py:32: methos ==> methods, method
./OpenCDA/opencda/co_simulation/sumo_integration/bridge_helper.py:322: ligth ==> light
./OpenCDA/opencda/customize/core/sensing/localization/extented_kalman_filter.py:120: Initalization ==> Initialization
./OpenCDA/docs/opencda.customize.core.sensing.localization.rst:7: extented ==> extended
./OpenCDA/docs/md_files/introduction.md:33: customzied ==> customized
./OpenCDA/docs/md_files/yaml_define.md:21: parmaters ==> parameters
./OpenCDA/docs/md_files/yaml_define.md:24: controled ==> controlled
./OpenCDA/docs/md_files/developer_tutorial.md:46: signle ==> single, signal
./OpenCDA/docs/md_files/developer_tutorial.md:208: apperance ==> appearance
./OpenCDA/docs/md_files/getstarted.md:86: detetion ==> detection, deletion
./OpenCDA/docs/md_files/logic_flow.md:79: addtional ==> additional
./OpenCDA/docs/md_files/codebase_structure.md:16: adn ==> and
./OpenCDA/test/test_ekf.py:3: Extented ==> Extended
The file single_2lanefree_carla.yaml is missing the key "activate":
map_manager: &base_map_manager
  activate: true
  pixels_per_meter: 2
  raster_size: [224, 224]
  lane_sample_resolution: 0.1
  visualize: true