
tum-traffic-dataset-dev-kit's Issues

visualization in R0

Thank you very much for the great work. I wonder if there is source code for the visualization of S0 and S1 in R0. I was trying to visualize the projected key points, but I got the wrong projection. I have attached my code.

import cv2
import json
import numpy as np

# Load the JSON label file for one frame.
with open("/home/sun/Downloads/a9_dataset_r00_s02/_labels/1616762521_089000000_s40_camera_basler_south_50mm.json", "r") as file:
    data = json.load(file)

# Load the corresponding camera image.
image = cv2.imread("/home/sun/Downloads/a9_dataset_r00_s02/_images/1616762521_089000000_s40_camera_basler_north_50mm.jpg")

# Order in which the eight projected corners are assembled.
points_order = [
    "bottom_left_front", "bottom_right_front", "top_right_front", "top_left_front",
    "bottom_left_back", "bottom_right_back", "top_right_back", "top_left_back",
]

# The twelve edges of a cuboid, as index pairs into points_order.
edges = [
    (0, 1), (1, 2), (2, 3), (3, 0),  # front face
    (4, 5), (5, 6), (6, 7), (7, 4),  # back face
    (0, 4), (1, 5), (2, 6), (3, 7),  # connections between front and back
]

color = (0, 255, 0)
thickness = 2
scale_factor = 1000  # hard-coded scaling of the projected corner coordinates

for label in data["labels"]:
    box3d_projected = label["box3d_projected"]
    print("this is projected boxes ", box3d_projected)

    # Collect the projected corners and scale them to integer pixel coordinates.
    corners_3d_img = np.array([box3d_projected[point] for point in points_order], dtype=np.float32)
    corners_3d_img = (corners_3d_img * scale_factor).astype(np.int32)

    # Draw all twelve box edges.
    for i, j in edges:
        cv2.line(image, tuple(int(v) for v in corners_3d_img[i]),
                 tuple(int(v) for v in corners_3d_img[j]), color=color, thickness=thickness)

cv2.imshow("Image with 3D lines", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
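
One possible cause of the offset (an assumption, since the convention of box3d_projected is not stated in the issue) is the hard-coded scale factor: if the projected corners are stored as normalized coordinates in [0, 1], they have to be scaled by the actual image width and height rather than a fixed 1000. Note also that the label file above belongs to the south_50mm camera while the image that is read is from the north_50mm camera; projected corners are only valid for the camera they were projected into. A minimal variant of the scaling step, plugged into the loop above:

# ASSUMPTION: box3d_projected values are normalized to [0, 1]; verify against the dataset documentation.
img_h, img_w = image.shape[:2]
corners_3d_img = np.array([box3d_projected[point] for point in points_order], dtype=np.float32)
corners_3d_img = (corners_3d_img * np.array([img_w, img_h], dtype=np.float32)).astype(np.int32)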

Query on Annotation Process: S1 and S2 from R0 Sensor Data

Thank you for the paper and your work. In the paper you mention: "All sensor data was labeled in the JSON format using an extended version of the 3D Bounding Box Annotation Tool (3D BAT) [17]. One JSON label file was created for each frame." I would like to know how you annotated S1 and S2 from R0 using this annotator. Could you also provide some details about: "The location of the objects in the sets S1 and S2 is given in a locally defined coordinate frame on road level. Its origin is set to the GPS position 48.241537 11.639538, and it is oriented following the freeway in the direction of the south." I have image data that I would like to annotate. Thank you in advance.
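
As a rough illustration of what such a locally defined road-level frame can look like in code (a sketch only; the UTM zone, the freeway heading value, and the function name are assumptions, not taken from the dataset or the dev kit):

import numpy as np
from pyproj import Transformer

# ASSUMPTION: UTM zone 32N (EPSG:32632) as the metric projection for ~11.64 deg E.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)

# Origin of the local frame as quoted from the paper excerpt above.
ORIGIN_E, ORIGIN_N = to_utm.transform(11.639538, 48.241537)

# ASSUMPTION: hypothetical freeway heading (radians, measured from east, toward south);
# the true orientation would have to come from the dataset authors.
FREEWAY_HEADING = np.deg2rad(-70.0)

def gps_to_local(lat, lon):
    # Shift to the origin, then rotate so the x-axis follows the freeway direction.
    e, n = to_utm.transform(lon, lat)
    dx, dy = e - ORIGIN_E, n - ORIGIN_N
    c, s = np.cos(-FREEWAY_HEADING), np.sin(-FREEWAY_HEADING)
    return c * dx - s * dy, s * dx + c * dy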

Wrong 3D bounding boxes in release 2

I just found a couple of 3D bounding boxes that seem to be wrong in the dataset (or maybe it is just a bug in the visualization script). I am using the visualize_image_with_3d_boxes.py script to generate the visualization.

Here are some examples of strange 3D bounding boxes:

  • 1651673147_573134089_s110_camera_basler_south1_8mm (screenshot attached)
  • 1651673169_758980572_s110_camera_basler_south1_8mm (screenshot attached)
  • 1651673156_142369872_s110_camera_basler_south1_8mm (screenshot attached)

Label conversion error of OpenLABEL to nuScenes

Traceback (most recent call last):
  File "tum-traffic-dataset-dev-kit/src/label_conversion/conversion_openlabel_to_nuscenes.py", line 274, in <module>
    converter.convert_openlabel_to_nuscenes()
  File "tum-traffic-dataset-dev-kit/src/label_conversion/conversion_openlabel_to_nuscenes.py", line 88, in convert_openlabel_to_nuscenes
    infos_list += self._fill_infos(labels_list, camera_labels_list)
  File "tum-traffic-dataset-dev-kit/src/label_conversion/conversion_openlabel_to_nuscenes.py", line 158, in _fill_infos
    cam_annotations = [next(iter(json.load(open(camera_labels_list[0][j]))['openlabel']['frames'].values())) \
IndexError: list index out of range

The code related to this error is (I think):

camera_labels_list = []
for camera_list in sorted(glob(os.path.join(self.load_dir, self.map_version_to_dir[split], 'labels_images',
                                                        '*'))):
    camera_labels_list.append(sorted(glob(os.path.join(camera_list, '*'))))

When I print camera_labels_list, it is empty:

Start converting ...
Converting split: training...
[]
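
A small check that might narrow down why the list is empty (a sketch; root_dir and split_dir are placeholders for whatever --root-path and map_version_to_dir resolve to on your machine):

import os
from glob import glob

# Hypothetical paths: substitute the actual root path and split directory used by the converter.
root_dir = "r02_sequences_split"
split_dir = "training"

# The converter globs labels_images/<camera_name>/* under each split directory,
# so every level of this pattern should be non-empty.
camera_dirs = glob(os.path.join(root_dir, split_dir, "labels_images", "*"))
print("camera directories found:", camera_dirs)
for camera_dir in camera_dirs:
    print(camera_dir, "->", len(glob(os.path.join(camera_dir, "*"))), "label files")

If "camera directories found" is already empty, the labels_images folder is probably missing or named differently in the split being converted.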

module 'genpy' has no attribute 'Message'

When trying to launch the conversion script to nuscenes, I get the following error:

(tum-traffic-dataset-dev-kit) paul@paul-B450NH:~/tum-traffic-dataset-dev-kit$ python src/label_conversion/conversion_openlabel_to_nuscenes.py --root-path r02_sequences_split \ --out-dir innuscenes
Traceback (most recent call last):
  File "/home/paul/tum-traffic-dataset-dev-kit/src/label_conversion/conversion_openlabel_to_nuscenes.py", line 5, in <module>
    from pypcd import pypcd  # solution to correctly installing this: dimatura/pypcd#28
  File "/home/paul/miniconda3/envs/tum-traffic-dataset-dev-kit/lib/python3.9/site-packages/pypcd/pypcd.py", line 20, in <module>
    from sensor_msgs.msg import PointField
  File "/home/paul/miniconda3/envs/tum-traffic-dataset-dev-kit/lib/python3.9/site-packages/sensor_msgs/msg/__init__.py", line 1, in <module>
    from ._BatteryState import *
  File "/home/paul/miniconda3/envs/tum-traffic-dataset-dev-kit/lib/python3.9/site-packages/sensor_msgs/msg/_BatteryState.py", line 8, in <module>
    import std_msgs.msg
  File "/home/paul/miniconda3/envs/tum-traffic-dataset-dev-kit/lib/python3.9/site-packages/std_msgs/msg/__init__.py", line 1, in <module>
    from ._Bool import *
  File "/home/paul/miniconda3/envs/tum-traffic-dataset-dev-kit/lib/python3.9/site-packages/std_msgs/msg/_Bool.py", line 9, in <module>
    class Bool(genpy.Message):
AttributeError: module 'genpy' has no attribute 'Message'

Could it be that conda installs a wrong version of genpy?
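
One quick way to see which genpy the environment actually picks up (a sketch; it only inspects the installed module and does not fix anything by itself):

import genpy

# Print where genpy was imported from and whether it exposes the Message base class
# that the generated ROS message modules (std_msgs, sensor_msgs) expect.
print("genpy loaded from:", genpy.__file__)
print("has Message attribute:", hasattr(genpy, "Message"))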

A9 dataset for monocular 3D object detection

Hi, I would be interested in using your dataset for monocular 3D object detection. However, I noticed that unfortunately you mostly provide bounding box corner points projected onto the image. Here is what I found:

a9_dataset_r00_s00: ca. 800 images, but only projected bboxes
a9_dataset_r00_s01: ca. 200 images, projected bboxes + location
a9_dataset_r00_s02: ca. 60 images, projected bboxes + location
a9_dataset_r01_s01: ca. 1500 images, but only projected bboxes
a9_dataset_r01_s03: ca. 3000 images, but only projected bboxes

Would you also be able to provide the full 3D information for all these cases? By that I mean the following (a sketch of how these quantities combine follows the list):

  • 3D dimension (height, width, length)
  • 3D location in camera frame (X, Y, Z)
  • 3D rotation. Ideally the rotation matrix in camera frame.
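
For reference, a sketch of how these three quantities would typically be combined into the eight box corners in the camera frame (the function name, axis conventions, and center convention are assumptions, not the dataset's API):

import numpy as np

def box_corners_camera_frame(dims, location, rotation):
    # dims = (height, width, length), location = (X, Y, Z) of the box center,
    # rotation = 3x3 rotation matrix in the camera frame.
    # ASSUMPTION: x = length, y = height, z = width, box centered at `location`.
    h, w, l = dims
    x = np.array([ l,  l, -l, -l,  l,  l, -l, -l]) / 2.0
    y = np.array([ h,  h,  h,  h, -h, -h, -h, -h]) / 2.0
    z = np.array([ w, -w, -w,  w,  w, -w, -w,  w]) / 2.0
    corners = np.vstack([x, y, z])  # shape (3, 8)
    return rotation @ corners + np.asarray(location).reshape(3, 1)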

R00 Coordinate

Thank you for your great work. I am trying to use the R00 set for object detection and I found some problems with the labels. In some label files, the object orientation is:

"orientation": { "rotationYaw": 3.14, "rotationPitch": 0, "rotationRoll": 0 }
while in others, such as R00/a9_dataset_r00_s03/_labels/1607434806_258307000_s50_lidar_ouster_south.json, it is:
"orientation": { "rotationYaw": 0, "rotationPitch": 3.14, "rotationRoll": 0 }
I am quite confused about this coordinate system. What exactly is the coordinate convention for the R00 set?

Thank you !!!!
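
To compare the two orientation variants quoted above, one can turn them into rotation matrices and look at which axis the box is actually rotated around (a sketch assuming the common yaw-about-z, pitch-about-y, roll-about-x convention; the dataset's actual convention would have to be confirmed by the authors):

import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    # ASSUMPTION: intrinsic rotations applied as Rz(yaw) @ Ry(pitch) @ Rx(roll).
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return rz @ ry @ rx

# The two variants from the label files above: a ~pi rotation about the vertical
# axis versus a ~pi rotation about the lateral axis.
print(rotation_from_ypr(3.14, 0.0, 0.0))
print(rotation_from_ypr(0.0, 3.14, 0.0))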

kitti format

Thank you for your great work. I read your paper, and I have now registered and am waiting for the data. My question: did you test the KITTI format? For KITTI, besides the labels we also need a calibration file (P2, for example) to visualize the objects. Could you give an example for visualization? Thank you very much.
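
For KITTI-style visualization, the projection itself is just the 3x4 P2 matrix applied to homogeneous camera-frame points; a minimal sketch (the P2 values below are placeholders, real ones come from the calibration file of the frame):

import numpy as np

def project_to_image(points_3d, p2):
    # Project Nx3 camera-frame points to pixel coordinates with a KITTI P2 (3x4) matrix.
    pts_h = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # homogeneous, Nx4
    proj = pts_h @ p2.T                                               # Nx3
    return proj[:, :2] / proj[:, 2:3]                                 # divide by depth

# Hypothetical P2 matrix, for illustration only.
P2 = np.array([[1000.0, 0.0, 960.0, 0.0],
               [0.0, 1000.0, 540.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])

The projected 2D corners can then be drawn with cv2.line as in the visualization snippet at the top of this page.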
