jedeschaud / kitti_carla_simulator
KITTI-CARLA: Python scripts to generate the KITTI-CARLA dataset
License: MIT License
Hi, I want to download the dataset from https://npm3d.fr/kitti-carla , but the speed is too slow and the download fails soon after starting. Is there any solution to this problem?
KITTI paper:
But in the scripts:
Height of LiDAR: 1.80 m
"The LiDAR is attached to the vehicle body with a global translation of (X = 0.0m, Y = 0.0m, Z = 1.80m)"
Height of cameras: 1.70 m
Distance between the stereo cameras: 0.50 m
"Cameras are attached to the vehicle body with a global translation of (X = 0.30m, Y = 0.0m, Z = 1.70m) for camera 0 and (X = 0.30m, Y = 0.50m, Z = 1.70m) for camera 1"
Are these values correct for a working setup, or does the way CARLA is used have something to do with this?
Since KITTI-CARLA is an (almost) exact simulation of KITTI, can I reuse the calib.txt file from data_odometry_calib.zip?
The contents of the calib.txt file in each of the sequences is as follows:
P0: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 0.000000000000e+00 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P1: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 -3.861448000000e+02 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P2: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 4.538225000000e+01 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 -1.130887000000e-01 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 3.779761000000e-03
P3: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 -3.372877000000e+02 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 2.369057000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 4.915215000000e-03
Tr: 4.276802385584e-04 -9.999672484946e-01 -8.084491683471e-03 -1.198459927713e-02 -7.210626507497e-03 8.081198471645e-03 -9.999413164504e-01 -5.403984729748e-02 9.999738645903e-01 4.859485810390e-04 -7.206933692422e-03 -2.921968648686e-01
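One quick way to check whether the real KITTI calib.txt is reusable is to parse the projection matrices and recover the stereo baseline: for a rectified pair, P1[0,3] = -fx * baseline. A minimal sketch (the parsing helper is mine, not from the repo):

```python
import numpy as np

def parse_proj_line(line):
    """Parse one 'Px: ...' line from a KITTI calib.txt into a 3x4 matrix."""
    _, values = line.split(":", 1)
    return np.array([float(v) for v in values.split()]).reshape(3, 4)

p1 = parse_proj_line(
    "P1: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 "
    "-3.861448000000e+02 0.000000000000e+00 7.188560000000e+02 "
    "1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 "
    "0.000000000000e+00 1.000000000000e+00 0.000000000000e+00"
)

# P1[0,3] = -fx * baseline for a rectified stereo pair, so:
baseline = -p1[0, 3] / p1[0, 0]
print(round(baseline, 4))  # 0.5372
```

Real KITTI's baseline comes out around 0.537 m, while KITTI-CARLA places the cameras 0.50 m apart, so the two calibrations are close but not interchangeable.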
When I run the data generation code, after one map I always get errors: "ERROR: Invalid session: no stream available with id". Do you know how to solve this? It seems the sensors are not completely deleted.
I changed the number of frames from 5000 to 50, and when loading the second map the program segfaults:
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/images_depth/0049_20.png
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/images_depth/0049_21.png
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/frames/frame_0049.ply
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/poses_lidar.ply
Stop record
Destroying 50 vehicles
Destroying 32 walkers
Destroying KITTI
Elapsed time : 71.06980013847351
Map Town02
Segmentation fault (core dumped)
Has anyone seen this problem before?
It seems to work for only one town at a time.
python KITTI_data_generator.py
Map Town02
Created Actor(id=135, type=vehicle.tesla.model3)
Number of spawn points : 101
..
Stop record
Destroying 50 vehicles
Destroying 33 walkers
Destroying KITTI
Elapsed time : 65.43478298187256
Map Town03
Segmentation fault (core dumped)
self.set_attributes is a method defined in the sub-class. Is this a good design? Perhaps the super-class should only do a basic setup, and the actual setup should be done by the sub-classes?
class Sensor:
    initial_ts = 0.0
    initial_loc = carla.Location()
    initial_rot = carla.Rotation()

    def __init__(self, vehicle, world, actor_list, output_folder, transform):
        self.queue = queue.Queue()
        self.bp = self.set_attributes(world.get_blueprint_library())
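The structure being described is essentially the template-method pattern: the base class does the generic wiring and defers the sensor-specific configuration to the sub-class. One way to make that intent explicit (and fail loudly if a sub-class forgets the override) is an abstract method. This is only a sketch of the pattern, not the repo's actual code; the class and attribute names are illustrative:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Base class: generic setup only; blueprint specifics live in sub-classes."""

    def __init__(self, blueprint_library):
        # Template method: the sensor-specific configuration is supplied
        # by the sub-class override of set_attributes().
        self.bp = self.set_attributes(blueprint_library)

    @abstractmethod
    def set_attributes(self, blueprint_library):
        """Return a configured blueprint for this sensor type."""

class RGBCamera(Sensor):
    def set_attributes(self, blueprint_library):
        # Stand-in for blueprint_library.find('sensor.camera.rgb') etc.
        return {"type": "sensor.camera.rgb", "image_size_x": "1392"}

cam = RGBCamera(blueprint_library=None)
print(cam.bp["type"])  # sensor.camera.rgb
```

With `@abstractmethod`, attempting to instantiate `Sensor` directly raises a `TypeError`, which documents that the base class is setup-only.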
Map Town01
Created Actor(id=1669, type=vehicle.tesla.model3)
('Number of spawn points : ', 255)
Spawned 1 vehicles and 1 walkers
Waiting for KITTI to stop ...
KITTI stopped
('Elapsed total time : ', 35.62052607536316)
Traceback (most recent call last):
File "KITTI_data_generator.py", line 177, in <module>
main()
File "KITTI_data_generator.py", line 86, in main
gen.screenshot(KITTI, world, actor_list, folder_output, carla.Transform(carla.Location(x=0.0, y=0, z=2.0), carla.Rotation(pitch=0, yaw=0, roll=0)))
File "/home/libing/source/simulator/kitti_carla_simulator/generator_KITTI.py", line 297, in screenshot
RGB.set_attributes(world.get_blueprint_library())
TypeError: unbound method set_attributes() must be called with RGB instance as first argument (got BlueprintLibrary instance instead)
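This traceback is Python 2 specific: there, `RGB.set_attributes(...)` is an unbound method and its first positional argument must be an `RGB` instance, so the `BlueprintLibrary` lands in the `self` slot. Calling through an instance works on both Python 2 and 3. A minimal reproduction of the fix, with a hypothetical stand-in class (not the repo's actual `RGB`):

```python
# Stand-in for the repo's RGB sensor class (illustrative only).
class RGB(object):
    def set_attributes(self, blueprint_library):
        return ("configured", blueprint_library)

library = "blueprint_library"  # placeholder for world.get_blueprint_library()

# On Python 2, RGB.set_attributes(library) fails because the library is
# passed where `self` is expected. Calling via an instance avoids that:
rgb = RGB()
bp = rgb.set_attributes(library)
print(bp)  # ('configured', 'blueprint_library')
```

Declaring the method a `@staticmethod` would also make the class-level call legal, but that would conflict with the base-class design, which calls `self.set_attributes(...)`.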
Can anyone say what the benefit is of having the LiDAR point backwards?
lidar_transform = carla.Transform(carla.Location(x=0, y=0, z=1.80), carla.Rotation(pitch=0, yaw=180, roll=0))
Also, the lidar-to-camera translation is computed as the difference lidar minus camera:
T_lidar_camera = R_camera_vehicle.T.dot(translation_carla(np.array([[lidar_transform.location.x],[lidar_transform.location.y],[lidar_transform.location.z]])-np.array([[camera_transform.location.x],[camera_transform.location.y],[camera_transform.location.z]])))
Should it not be the other way around?
@jedeschaud, Could you give some hints on this ?
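Whatever the motivation for the yaw=180° mount (the repo does not say; one plausible reason is aligning CARLA's LiDAR axes or scan start with KITTI's Velodyne convention), undoing it is just a 180° rotation about z, i.e. negating x and y of every point. A hedged sketch of that correction:

```python
import numpy as np

# 180-degree rotation about z: undoes a yaw=180 sensor mounting.
yaw = np.pi
R_z180 = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0],
    [np.sin(yaw),  np.cos(yaw), 0.0],
    [0.0,          0.0,         1.0],
])

points = np.array([[1.0, 2.0, 3.0],
                   [-4.0, 0.5, 1.0]])
flipped = points @ R_z180.T
print(flipped)  # x and y negated, z unchanged
```

Applying the same rotation twice returns the original points, so the operation is its own inverse.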
Also the sensors are not synchronized:
Camera data timestamp: 3.4262565394164994
Export: KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town07/generated/image_2/000000.png
Saving lidar pointcloud at timestamp 3.525256544118747
Export: KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town07/generated/velodyne/000000.bin
The current version of the KITTI 3D dataset has the point clouds in .bin format, and I noticed that the code saves them in .ply. Am I missing something?
Thanks for sharing the code!
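KITTI's .bin format is simply packed float32 records of (x, y, z, intensity). Assuming you can read the points out of the .ply (e.g. with the plyfile package, or the repo's own loader), the conversion is a flatten-and-dump. The sketch below uses a synthetic array in place of the .ply read, and the presence of an intensity field is an assumption:

```python
import os
import tempfile
import numpy as np

def save_kitti_bin(points_xyzi, path):
    """Write an (N, 4) array of x, y, z, intensity as a KITTI-style .bin."""
    np.asarray(points_xyzi, dtype=np.float32).tofile(path)

def load_kitti_bin(path):
    """Read a KITTI-style .bin back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

# Synthetic stand-in for points parsed from a frame_XXXX.ply file.
pts = np.array([[1.0, 2.0, 3.0, 0.5],
                [4.0, 5.0, 6.0, 0.9]], dtype=np.float32)

path = os.path.join(tempfile.gettempdir(), "000000.bin")
save_kitti_bin(pts, path)
print(load_kitti_bin(path).shape)  # (2, 4)
```

If the .ply has no intensity channel, a constant (e.g. zeros) column keeps the layout KITTI tools expect.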
Hello @jedeschaud ,
The code for the function:
def transform_lidar_to_camera(lidar_transform, camera_transform):
    R_camera_vehicle = rotation_carla(camera_transform.rotation)
    R_lidar_vehicle = np.identity(3)  # rotation_carla(lidar_transform.rotation) # we want the lidar frame to have x forward
    R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle)
    T_lidar_camera = R_camera_vehicle.T.dot(translation_carla(np.array([[lidar_transform.location.x],[lidar_transform.location.y],[lidar_transform.location.z]])-np.array([[camera_transform.location.x],[camera_transform.location.y],[camera_transform.location.z]])))
    return np.vstack((np.hstack((R_lidar_camera, T_lidar_camera)), [0,0,0,1]))
According to this post: https://robotics.stackexchange.com/questions/21401/how-to-make-two-frames-relative-to-each-other
R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle) should be
R_lidar_camera = R_lidar_vehicle.T.dot(R_camera_vehicle)
Also, the translation is computed as lidar minus camera: translation_carla(np.array([[lidar_transform.location.x],[lidar_transform.location.y],[lidar_transform.location.z]])-np.array([[camera_transform.location.x],[camera_transform.location.y],[camera_transform.location.z]]))
Should it not be camera minus lidar?
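One way to settle the direction question is to build the two vehicle-frame poses as 4x4 matrices and compose them. If "T_lidar_camera" is read as the transform that maps lidar-frame points into the camera frame, it equals inv(T_vehicle_camera) @ T_vehicle_lidar, whose translation works out to R_camera^T (t_lidar - t_camera), i.e. the lidar-minus-camera form; the linked post assumes the opposite naming convention. A sketch with illustrative values, assuming consistent right-handed frames (CARLA's left-handed axes are a separate conversion):

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Example vehicle-frame sensor poses (values illustrative, not the repo's).
T_veh_cam = pose(rot_z(0.3), np.array([0.30, 0.0, 1.70]))
T_veh_lidar = pose(np.eye(3), np.array([0.0, 0.0, 1.80]))

# Transform mapping lidar-frame points into the camera frame:
T_cam_lidar = np.linalg.inv(T_veh_cam) @ T_veh_lidar

# Hand-written equivalent of the composition's translation term:
R_cam = T_veh_cam[:3, :3]
t = R_cam.T @ (T_veh_lidar[:3, 3] - T_veh_cam[:3, 3])
print(np.allclose(T_cam_lidar[:3, 3], t))  # True
```

The rotation part of the composition is likewise R_camera^T @ R_lidar, matching the order in the quoted function under this convention; under the opposite convention (camera points into the lidar frame) both order and sign flip, which is likely the source of the disagreement.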
Hello @jedeschaud ,
How are these fixed transforms determined?
As I understand it, the (0, 0, 0) of a car is fixed at the middle of the rear axle.
# Set sensors transformation from KITTI
lidar_transform = carla.Transform(carla.Location(x=0, y=0, z=1.80), carla.Rotation(pitch=0, yaw=180, roll=0))
cam0_transform = carla.Transform(carla.Location(x=0.30, y=0, z=1.70), carla.Rotation(pitch=0, yaw=0, roll=0))
cam1_transform = carla.Transform(carla.Location(x=0.30, y=0.50, z=1.70), carla.Rotation(pitch=0, yaw=0, roll=0))
At least from the KITTI sensor setup diagram, I can't figure out how the Location is decided from a CARLA left-handed coordinate perspective.
camera_bp.set_attribute('image_size_x', '1392')
camera_bp.set_attribute('image_size_y', '1024')
KITTI specifies 1392x512.
And the focal_distance is left at its default value?
Hello @jedeschaud, I have found that labels are not present in the given synthetic dataset. I am working on synthetic-to-realistic point cloud data conversion. Please help with this.