
kitti_carla_simulator's Issues

Transforms with reference to KITTI paper

KITTI paper:

  1. Height of LIDAR: 1.73m
  2. Height of Cameras: 1.65m
  3. Distance between the stereo cameras is 0.54m

But in the scripts:

  1. Height of LIDAR to be: 1.80m
    "The LiDAR is attached to the vehicle body with a global
    translation of (X = 0.0m, Y = 0.0m, Z = 1.80m)"

  2. Height of cameras to be: 1.70m

  3. Distance between the stereo cameras to be: 0.50m
    "Cameras are attached to the vehicle body with a global translation of (X = 0.30m, Y = 0.0m, Z = 1.70m)
    for camera 0 and (X = 0.30m, Y = 0.50m, Z = 1.70m) for
    camera 1"

Are these values OK for a working setup, or does the usage of CARLA have something to do with these differences?

Can I reuse the KITTI calib.txt file without changes?

Since KITTI-CARLA is an (almost) exact simulation of the KITTI setup, can I reuse the calib.txt file from the data_odometry_calib.zip file?

The contents of the calib.txt file in each of the sequences are as follows:

P0: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 0.000000000000e+00 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P1: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 -3.861448000000e+02 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 0.000000000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 0.000000000000e+00
P2: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 4.538225000000e+01 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 -1.130887000000e-01 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 3.779761000000e-03
P3: 7.188560000000e+02 0.000000000000e+00 6.071928000000e+02 -3.372877000000e+02 0.000000000000e+00 7.188560000000e+02 1.852157000000e+02 2.369057000000e+00 0.000000000000e+00 0.000000000000e+00 1.000000000000e+00 4.915215000000e-03
Tr: 4.276802385584e-04 -9.999672484946e-01 -8.084491683471e-03 -1.198459927713e-02 -7.210626507497e-03 8.081198471645e-03 -9.999413164504e-01 -5.403984729748e-02 9.999738645903e-01 4.859485810390e-04 -7.206933692422e-03 -2.921968648686e-01
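
As a side note, the stereo baseline quoted in the KITTI paper can be read directly out of these numbers. A minimal sketch, assuming the standard KITTI convention that the first row, fourth column of P1 stores -fx * baseline:

fx = 7.188560000000e+02      # focal length in pixels, from P0/P1
P1_tx = -3.861448000000e+02  # first row, fourth column of P1
baseline = -P1_tx / fx       # KITTI convention: P1[0, 3] = -fx * baseline
print(baseline)              # ~0.537 m, the 0.54 m quoted in the KITTI paper

If the simulator really uses a 0.50 m baseline, this entry of a reused calib.txt would not match the rendered images.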

ERROR: Invalid session: no stream available with id

When I run the data generation code, after one map there are always ERROR: Invalid session: no stream available with id errors. Do you know how to solve this? It seems to happen because the sensors are not deleted completely.
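
If the cause really is lingering sensor streams, one hedged sketch of a cleanup routine (the names and structure are illustrative, not the repository's code) is to stop every sensor before destroying the actors:

import carla

def cleanup(client, actor_list):
    # Sensors keep a server-side data stream open until stop() is called;
    # destroying them while the stream is live can leave a dangling stream id.
    for actor in actor_list:
        if isinstance(actor, carla.Sensor):
            actor.stop()
    # Destroy everything in one batch once the streams are closed.
    client.apply_batch([carla.command.DestroyActor(a) for a in actor_list])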

Seg fault when using CARLA 0.9.12 on Ubuntu 20.04

I changed the number of frames from 5000 to 50, and when loading the second map the program segfaults:

Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/images_depth/0049_20.png
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/images_depth/0049_21.png
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/frames/frame_0049.ply
Export : KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town01/generated/poses_lidar.ply
Stop record
Destroying 50 vehicles
Destroying 32 walkers
Destroying KITTI
Elapsed time : 71.06980013847351

Map Town02
Segmentation fault (core dumped)

Has anyone seen this problem before?

It seems to work for only one town at a time:

python KITTI_data_generator.py
Map Town02
Created Actor(id=135, type=vehicle.tesla.model3)
Number of spawn points : 101

..

Stop record
Destroying 50 vehicles
Destroying 33 walkers
Destroying KITTI
Elapsed time : 65.43478298187256

Map Town03
Segmentation fault (core dumped)
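
A possible workaround sketch while the crash is unexplained: run each town in its own Python process, so a segfault while loading the next map only kills that one run. This assumes the generator is refactored to accept a town name (for example a --town argument, which the current script does not have):

import subprocess

towns = ["Town01", "Town02", "Town03", "Town04", "Town05", "Town06", "Town07"]
for town in towns:
    # Hypothetical --town flag; the stock script loops over maps internally.
    result = subprocess.run(["python", "KITTI_data_generator.py", "--town", town])
    if result.returncode != 0:
        print(f"{town} exited with code {result.returncode}, continuing")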

Code improvement

The self.set_attributes call resolves to a method that is only defined in the sub-classes. Is this a good design? Perhaps the super class should only do basic setup and the actual setup should live in the sub-classes (see the sketch after the snippet below).

import queue

import carla

class Sensor:
    # Shared defaults for the initial sensor state.
    initial_ts = 0.0
    initial_loc = carla.Location()
    initial_rot = carla.Rotation()

    def __init__(self, vehicle, world, actor_list, output_folder, transform):
        self.queue = queue.Queue()
        # set_attributes() is not defined on Sensor itself; it is expected
        # to be provided by each sub-class.
        self.bp = self.set_attributes(world.get_blueprint_library())
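
One way to realize the suggested split (a sketch, not the repository's actual design): the base class does only generic setup and declares set_attributes() as abstract, so each concrete sensor must supply its own blueprint configuration:

import queue
from abc import ABC, abstractmethod

class Sensor(ABC):
    def __init__(self, vehicle, world, actor_list, output_folder, transform):
        self.queue = queue.Queue()
        self.bp = self.set_attributes(world.get_blueprint_library())

    @abstractmethod
    def set_attributes(self, blueprint_library):
        """Pick and configure a blueprint; implemented by each sub-class."""

class RGB(Sensor):
    def set_attributes(self, blueprint_library):
        # Illustrative configuration, not the repository's actual values.
        bp = blueprint_library.find('sensor.camera.rgb')
        bp.set_attribute('image_size_x', '1392')
        return bp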

Test on CARLA 0.9.13

python KITTI_data_generator.py

Map Town01
Created Actor(id=1669, type=vehicle.tesla.model3)
('Number of spawn points : ', 255)
Spawned 1 vehicles and 1 walkers
Waiting for KITTI to stop ...
KITTI stopped
('Elapsed total time : ', 35.62052607536316)
Traceback (most recent call last):
  File "KITTI_data_generator.py", line 177, in <module>
    main()
  File "KITTI_data_generator.py", line 86, in main
    gen.screenshot(KITTI, world, actor_list, folder_output, carla.Transform(carla.Location(x=0.0, y=0, z=2.0), carla.Rotation(pitch=0, yaw=0, roll=0)))
  File "/home/libing/source/simulator/kitti_carla_simulator/generator_KITTI.py", line 297, in screenshot
    RGB.set_attributes(world.get_blueprint_library())
TypeError: unbound method set_attributes() must be called with RGB instance as first argument (got BlueprintLibrary instance instead)
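
A hedged reading of the traceback: set_attributes() is defined as an instance method, so calling it on the RGB class (here under Python 2, given the "unbound method" wording) raises this TypeError. A minimal sketch of one possible fix, if the method never uses self, is to declare it static:

class RGB(Sensor):
    @staticmethod
    def set_attributes(blueprint_library):
        # Illustrative body; the real method configures the camera blueprint.
        return blueprint_library.find('sensor.camera.rgb')

# The failing call in screenshot() then works on the class itself:
# RGB.set_attributes(world.get_blueprint_library())

Alternatively, screenshot() could build an RGB instance first and call the method through it.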

What's the benefit of the LIDAR yaw angle at 180°?

Can anyone say what the benefit is of having the LIDAR point backwards?
lidar_transform = carla.Transform(carla.Location(x=0, y=0, z=1.80), carla.Rotation(pitch=0, yaw=180, roll=0))

Also, the lidar-to-camera translation is computed as lidar minus camera:

T_lidar_camera = R_camera_vehicle.T.dot(
    translation_carla(
        np.array([[lidar_transform.location.x],
                  [lidar_transform.location.y],
                  [lidar_transform.location.z]])
        - np.array([[camera_transform.location.x],
                    [camera_transform.location.y],
                    [camera_transform.location.z]])))

Should it not be the other way around?
@jedeschaud, could you give some hints on this?

Also, the sensors are not synchronized:

Camera data timestamp:  3.4262565394164994
Export: KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town07/generated/image_2/000000.png
Saving lidar pointcloud at timestamp  3.525256544118747
Export: KITTI_Dataset_CARLA_v0.9.12/Carla/Maps/Town07/generated/velodyne/000000.bin
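
A common way to align sensor timestamps in CARLA (a sketch using the standard synchronous-mode API, not anything specific to this repository): run the simulator with a fixed time step and advance it explicitly, so every sensor reports data for the same simulation frame:

# world is the carla.World already held by the generator.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.1  # e.g. 10 Hz, matching the KITTI lidar rate
world.apply_settings(settings)

frame_id = world.tick()  # all sensor callbacks for this tick share one frame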

Point clouds in .bin format

The current version of the KITTI 3D dataset has the point clouds in .bin format, and I noticed that the code saves them in .ply. Am I missing something?

Thanks for sharing the code!
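
If .bin files are needed, here is a minimal conversion sketch. The field names ('x', 'y', 'z') and the absence of an intensity channel are assumptions about the exported .ply layout; check the actual header of the generated files first:

import numpy as np
from plyfile import PlyData

def ply_to_bin(ply_path, bin_path):
    vertices = PlyData.read(ply_path)['vertex']
    points = np.stack([vertices['x'], vertices['y'], vertices['z']], axis=-1)
    # KITTI .bin rows are float32 (x, y, z, reflectance); write zeros when
    # the .ply carries no intensity-like field.
    reflectance = np.zeros((points.shape[0], 1), dtype=np.float32)
    np.hstack([points.astype(np.float32), reflectance]).tofile(bin_path)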

Possible errors in the transform_lidar_to_camera function?

Hello @jedeschaud ,

The code for the function:

def transform_lidar_to_camera(lidar_transform, camera_transform):
    R_camera_vehicle = rotation_carla(camera_transform.rotation)
    R_lidar_vehicle = np.identity(3)  # rotation_carla(lidar_transform.rotation): we want the lidar frame to have x forward
    R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle)
    T_lidar_camera = R_camera_vehicle.T.dot(
        translation_carla(
            np.array([[lidar_transform.location.x],
                      [lidar_transform.location.y],
                      [lidar_transform.location.z]])
            - np.array([[camera_transform.location.x],
                        [camera_transform.location.y],
                        [camera_transform.location.z]])))
    return np.vstack((np.hstack((R_lidar_camera, T_lidar_camera)), [0, 0, 0, 1]))


According to this post: https://robotics.stackexchange.com/questions/21401/how-to-make-two-frames-relative-to-each-other

R_lidar_camera = R_camera_vehicle.T.dot(R_lidar_vehicle) should be 
R_lidar_camera = R_lidar_vehicle.T.dot(R_camera_vehicle)

Also, the translation difference is computed as lidar minus camera:

translation_carla(
    np.array([[lidar_transform.location.x],
              [lidar_transform.location.y],
              [lidar_transform.location.z]])
    - np.array([[camera_transform.location.x],
                [camera_transform.location.y],
                [camera_transform.location.z]]))

Should it not be camera minus lidar?
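
One way to sanity-check the convention (a numeric sketch, not the author's answer, and ignoring the axis flips inside translation_carla): if T is meant to map points expressed in the lidar frame into the camera frame, then p_cam = R_cam^T (R_lidar p_lidar + t_lidar - t_cam), which expands to exactly the R and t combination built in the function above:

import numpy as np

# Toy poses in the vehicle frame, roughly matching the script's transforms.
R_cam = np.eye(3)                          # camera aligned with the vehicle
t_cam = np.array([[0.30], [0.0], [1.70]])
R_lidar = np.eye(3)
t_lidar = np.array([[0.0], [0.0], [1.80]])

R = R_cam.T.dot(R_lidar)                   # as in the function above
t = R_cam.T.dot(t_lidar - t_cam)           # lidar minus camera

p_lidar = np.array([[1.0], [2.0], [3.0]])  # a point in the lidar frame
p_cam = R.dot(p_lidar) + t
print(p_cam.ravel())                       # [0.7, 2.0, 3.1]

So whether "lidar minus camera" is wrong depends on which direction the transform is meant to map; for points going from the lidar frame into the camera frame, this ordering is consistent.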

How are the transforms determined?

Hello @jedeschaud ,

How are these fixed transforms determined?
As I understand it, the (0, 0, 0) of a car is fixed to the middle of the rear axle.

# Set sensors transformation from KITTI
lidar_transform = carla.Transform(carla.Location(x=0, y=0, z=1.80), carla.Rotation(pitch=0, yaw=180, roll=0))
cam0_transform = carla.Transform(carla.Location(x=0.30, y=0, z=1.70), carla.Rotation(pitch=0, yaw=0, roll=0))
cam1_transform = carla.Transform(carla.Location(x=0.30, y=0.50, z=1.70), carla.Rotation(pitch=0, yaw=0, roll=0))


At least from the KITTI sensor setup diagram, I can't figure out how the Location values are decided from a CARLA left-handed coordinate system perspective.

Is the sensor setup correct?

        camera_bp.set_attribute('image_size_x', '1392')
        camera_bp.set_attribute('image_size_y', '1024')

KITTI specifies 1392x512.

And is the focal distance left at its default value?
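
CARLA's camera blueprint is configured through a horizontal 'fov' attribute rather than a focal length, so matching a KITTI-like focal length f (in pixels) at image width w means deriving the fov instead of relying on the default. A sketch with illustrative numbers (the focal length is taken from the calib P matrices above, not from the repository's settings):

import math

w = 1392.0   # image_size_x
f = 718.856  # focal length in pixels, as in the KITTI calib P matrices
fov = 2.0 * math.degrees(math.atan(w / (2.0 * f)))
camera_bp.set_attribute('fov', str(fov))  # ~88.2 degrees for these values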

Label files required

Hello @jedeschaud, I have found that the labels are not present in the given synthetic dataset. I am working on synthetic-to-realistic point cloud data conversion. Could you please help with this?
