
conescenes's Introduction

coneScenes Dataset

Website GitHub Discussions

coneScenes is a collaborative LiDAR pointcloud dataset with 3D bounding box annotations for cones, specifically designed to support the development of perception algorithms used by Formula Student driverless teams.

This repository provides the command-line interface (CLI) and tools for working with the coneScenes dataset. It serves as the central hub for the dataset, where users can:

  • Manage the dataset.
  • Contribute annotations and extensions to the dataset (community effort!).
  • Discuss and collaborate on future improvements through GitHub Issues and Discussions.
  • Auto-annotate your own data using the provided tools.

Get Access to the Dataset

A sample scene is provided here.

To get full access to the dataset, your team must contribute your own data. To learn how to use our auto-annotation and data generation tools, please refer to our website here.

Data Collection and Contribution

As a collaborative dataset, full access requires that each team also contribute its own annotated scenes. This ensures the dataset keeps growing and that the teams who use it also give back to it.

The coneScenes dataset is a collaborative effort, and we highly encourage contributions from the Formula Student community. Discussions regarding future additions, improvements, and potential roadmap changes are facilitated through GitHub Discussions. If you have any ideas, suggestions, or feedback, please feel free to share them with all of us here!

Getting Started

This repository includes the CLI and tools for interacting with the coneScenes dataset. Refer to the included documentation for detailed instructions on installation, usage, and contribution guidelines.

Thanks

A big thank you to all the teams involved in the data collection and annotation process! We are excited to see the coneScenes dataset continue to grow and empower the development of robust perception algorithms for Formula Student driverless vehicles.

conescenes's People

Contributors

bertaveira


conescenes's Issues

Clarification on Cone Label Structure and Introduction of New Label Type

Context:

Hi there! I'm from the OTH Regensburg Formula Student Team, and we are looking to contribute to and use the dataset from your repository. Currently, I am in the process of exporting our data into the required format. However, I have encountered some confusion regarding the cone labels.

Questions:

  1. Cone Bounding Box Values:
    The instructions mention using the mmdetection3d label standard, which is primarily designed for bounding boxes. Upon inspecting the generate_scene.py file, I noticed the following hardcoded values:
# Save to file: one label line per cone, following the mmdetection3d
# convention (presumably "x y z dx dy dz yaw class").
with open(f"{folder_path}/labels/{filenum:07d}.txt", "w") as f:
    for x, y, c in zip(x_values, y_values, colors):
        # Color constants (YELLOR_C, BLUE_C, ...) are defined elsewhere
        # in generate_scene.py.
        if c == YELLOR_C:
            cone_type = 'Cone_Yellow'
        elif c == BLUE_C:
            cone_type = 'Cone_Blue'
        elif c == BIG_C:
            cone_type = 'Cone_Big'
        elif c == ORANGE_C:
            cone_type = 'Cone_Orange'
        else:
            continue
        # z, dx, dy, dz, and yaw are hardcoded for every cone.
        f.write(f"{x} {y} 0.0 0.23 0.23 0.33 0.0 {cone_type}\n")

Should everyone use these fixed bounding box values (0.0 0.23 0.23 0.33 0.0) for all cones, or should we create individual bounding boxes based on the point cloud points associated with each cone? (A sketch of the second option follows this message.)

  2. Unlabeled Point Clouds:
    Are unlabeled point clouds required or optional? Currently, we have labels for every point cloud.

  3. Introducing a New Label Type:
    In our current data, we do not have left/right labels for our cone positions, as this is handled independently by our path planning module. To accommodate this, would it be possible to introduce a new label type such as Cone_Unsorted or something similar? This would let us integrate our data without needing a left/right classification.

Best regards,
Tim
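
For reference, here is a minimal sketch of the second option raised in the first question: fitting an individual axis-aligned box to the LiDAR points of each cone. The box_from_points helper and the cone_points array are hypothetical, and the x y z dx dy dz yaw layout simply mirrors the label line written by generate_scene.py above.

import numpy as np

def box_from_points(points: np.ndarray):
    # points: (N, 3) array with the x, y, z coordinates of the LiDAR
    # hits belonging to a single cone.
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    center = (mins + maxs) / 2.0
    size = maxs - mins
    # Cones are rotationally symmetric, so yaw can stay 0.0.
    x, y, z = center
    dx, dy, dz = size
    return x, y, z, dx, dy, dz, 0.0

# Hypothetical usage, where cone_points is the (N, 3) subset of the
# scan assigned to this cone:
# x, y, z, dx, dy, dz, yaw = box_from_points(cone_points)
# f.write(f"{x} {y} {z} {dx} {dy} {dz} {yaw} {cone_type}\n")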

Use UUID to identify scenes

Using UUIDs would make scenes uniquely identifiable.

We could use a custom UUID schema that includes the scene submission date, a team identifier, and of course some random bits.
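
As a rough illustration, a minimal sketch of such a scheme (the date-team-random field layout and the make_scene_id helper are assumptions, not a decided spec):

import uuid
from datetime import datetime, timezone

def make_scene_id(team: str) -> str:
    # Hypothetical layout: <submission date>-<team slug>-<random hex>.
    today = datetime.now(timezone.utc).strftime("%Y%m%d")
    rand = uuid.uuid4().hex[:12]  # 48 random bits from a standard UUID4
    return f"{today}-{team.lower()}-{rand}"

# Example: make_scene_id("OTH-Regensburg")
# -> e.g. "20240518-oth-regensburg-3f9c1a7b2d4e"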
