
workzone's Introduction

workzone

Workzone Boundary Detection

workzone's People

Contributors: nlitz88

workzone's Issues

Create pipeline metapackage

The workzone repository will serve as the wrapper node that contains scripts for setting up a workspace with all the repositories and dependencies needed for the pipeline and any packages and launch files to run the pipeline.

  • Create a Dockerfile with all the dependencies needed for all the packages.
  • Create a .repos file for the project and commit it to this repo.
  • Create a metapackage that will house any parameter files and launch files that we want to use to run the end-to-end pipeline.
  • Save rqt configuration
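For the `.repos` item, the vcstool format is just a YAML map of repositories; a minimal sketch (the repo names, URLs, and branches below are placeholders, not the project's real values):

```yaml
repositories:
  # Placeholder entries -- swap in the actual forks/branches for the project.
  bevfusion_ros:
    type: git
    url: https://github.com/EXAMPLE/bevfusion_ros.git
    version: main
  workzone_detection:
    type: git
    url: https://github.com/EXAMPLE/workzone_detection.git
    version: main
```

Then `vcs import src < project.repos` from the workspace root pulls everything into `src/`.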

Create Workzone Boundary Detection ROS Node

Basically, we need a ROS node that wraps up the OpenCV code we have written so that it can ingest 2D costmap images with bounding boxes (or whatever format the data comes in as) and spit out a 2D occupancy grid OR image (or whatever we decide on) with the workzone segmented.

Some rough ideas/requirements/notes:

  • Probably fastest to implement this in Python: https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Writing-A-Simple-Py-Publisher-And-Subscriber.html#writing-a-simple-publisher-and-subscriber-python, particularly because we're using OpenCV and the whole pipeline is already written in Python. Won't be blazingly fast--but it'll be lightyears faster than the BEVFusion node, so it's fine!
  • If we decide our nodes are going to be sending/receiving images rather than OccupancyGrid messages, we probably want to use the sensor_msgs/Image message.
  • While it's not critical, as far as how to structure this: Create a new ROS package WITHIN THIS REPOSITORY -- don't make the package occupy / exist as the root of the repository. Not a huge deal, just good practice :)
  • I will (probably) make a second ROS meta-package later that we use to launch all the nodes in the pipeline--and I will also probably create that in this repository. That'll contain a launch file to launch the nuScenes BEVFusion wrapper, the BEVFusion node itself, our workzone detection node, and then maybe something like RQT or RVIZ to visualize what's going on. UPDATE: #19
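The core conversion inside that node can be kept ROS-free and testable on its own. A minimal sketch of what would sit inside the subscriber callback (the function name, threshold, and 0/100 convention are assumptions, not an agreed interface):

```python
import numpy as np

def costmap_image_to_occupancy(img, occupied_thresh=128):
    """Map a grayscale costmap image (H x W uint8 array) to occupancy
    values following the nav_msgs/OccupancyGrid convention:
    0 = free, 100 = occupied."""
    grid = np.zeros(img.shape, dtype=np.int8)
    grid[img >= occupied_thresh] = 100
    return grid
```

Inside the rclpy node, the sensor_msgs/Image callback would convert the message to a numpy array (e.g. via cv_bridge), call this, and stuff the result into an OccupancyGrid's `data` field before publishing.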

Prepare data for quantitative analysis

Create demo slides or get some pictures together

I will take maybe 30 minutes to an hour and put some pictures or slides together real quick demonstrating some of the high level ideas I have had so far.

Leave a comment below if there are other ideas you want to add on, or even just pitch to Raj.

I will link the document below when created.

Create nuScenes streaming wrapper

Basically, we just need a way to play back the images from the nuScenes dataset so that they can be ingested in real time by the processing pipeline.

There are a few different approaches that will work, depending on how streamlined/robust we want the implementation to be.

  1. Could use the nuScenes-to-ROS wrapper built by the same people who created the ROS2 wrapper around BEVFusion: https://github.com/linClubs/nuscenes2rosbag
  2. The UVA team that made the RACECAR dataset provided a rosbag to nuscenes wrapper: https://github.com/linklab-uva/rosbag2nuscenes/tree/b501b4f1eb81395ec1ee4fdea80035fbda1e9bf6 I'm not sure if it works in the opposite direction though (doubt it).
  3. Use the nuScenes dev-kit in a python script and just grab sequential inputs. This is the most primitive, but probably the simplest answer if we're not using ROS to stitch these two stages together to create the end-to-end pipeline.
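Whichever option we pick, the playback loop itself is simple: sleep between samples so the gaps match the recorded timestamps. A hedged sketch, where the `(timestamp_us, data)` tuples stand in for whatever the nuScenes dev-kit actually hands back:

```python
import time

def play_back(samples, rate=1.0, publish=print):
    """Replay (timestamp_us, data) pairs in real time: sleep so the gap
    between consecutive samples matches the recorded gap, scaled by `rate`
    (rate=2.0 plays twice as fast)."""
    prev_ts = None
    for ts, data in samples:
        if prev_ts is not None:
            time.sleep(max(0.0, (ts - prev_ts) / 1e6 / rate))
        publish(data)  # e.g. an rclpy publisher's publish() in the real node
        prev_ts = ts
```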

Get predictions from patched BEVFusion

For the sake of simplicity, for the time being, we have just been using a visualization of the ground truth annotations from the nuScenes dataset.

However, in the future, we ideally want live predictions from the model itself. Therefore, see if we can get live predictions from either the patched model repository package or the TensorRT ROS wrapper version.

Prepare slides for in-class presentation

Outline of the Initial Project Presentation

The initial project presentation can follow a roughly similar format as the project scope document (as listed in the previous announcement). However, it is an in-class presentation in front of a live audience. Hence, it needs to be much more visual.

Use lots of figures - and make them colorful and appealing.

Use animation where appropriate. Do not overuse animation, but it can present concepts coherently in an otherwise visually cluttered slidescape.

Make your use cases very clear. And present them from the simplest to the most complex.

List your anticipated demo sequences for the intermediate and final demos, as discussed during the finalization of your project scope.

Time duration: ~20 minutes for each group (~5-6 minutes for each project member + 5 minutes for Q&A from the entire class + transition time between projects).

We have a good set of projects in store.

IMPORTANT: You MUST send your project presentations to me ([email protected]) before 3pm on the day of your presentation. All presentations will be made from my laptop in class to avoid technical glitches with the presentation equipment. PowerPoint, PDF, and Google Slides can be used.

Looking forward to your presentations and the scope documents next week.

  • Raj

Create initial system architecture diagram for initial presentation

Made a task specifically for this part as this may prove to be a tiny bit more involved than the rest of the contents of the presentation. Although, for a first draft, maybe not!

Made a draw.io document here: https://drive.google.com/file/d/1mE79ahct1FuurYCvGkRm1PcesUFvAQa6/view?usp=sharing

At the very least, we can just depict the flow of sensor data (images, maybe a lidar point cloud) into the pipeline, maybe previewing what the output looks like at each stage. Honestly, for our first presentation, the rough drawing I had in the demo slides wouldn't be horrible.

Review available construction datasets

Find one construction dataset that we can start with. Traffic cones, construction lights, barriers, construction vehicles (like excavators, work lights, generators, bulldozers, pavers, etc.--anything like that). Anything you might find in a construction zone. We can definitely combine multiple datasets if you can't find a single one containing all relevant construction objects.

Update: Maybe to be more specific, I think there are two different kinds of datasets that we might want to look for:

  1. Datasets that just contain images of construction objects or objects you'd find in a work zone on the road (mostly what I was talking about above).
  2. General driving datasets (like KITTI) that are just pictures from a car driving around, but ones that include construction zones. We could even use dashcam videos from YouTube that capture a car driving through/past some sort of construction zone. While we will also want to set up a simulation environment in Carla to test our pipeline, running it on driving datasets or dashcam footage is probably a better, more real test of our pipeline.

Also, in looking for these datasets, it's good to check out all different kinds. We can use ones that look kinda scrappy/thrown together from roboflow--but we may have better luck with some of the better known, somewhat "vetted" datasets that are cited/used in other people's research.

Complete project scope document

Project Scope Document

The recommended outline for the project scope document is as follows:

  • Cover Page with Project Title and Group Members
  • Project Summary
  • Project Motivation
  • Project Goals
  • Use Cases
  • Methodology
      • What are your inputs, outputs, and intermediate processing steps?
      • How will you tackle this problem? What is your proposed solution?
  • System Design - include block diagrams, subsystems, and components
  • Demonstration Sequences (from the simplest to the most complex)
      • Final Demonstration
      • Intermediate Demonstration (a subset of the Final Demo)
  • Development Milestones
      • What will be accomplished by the Intermediate Demo and the Final Demo?
      • Make this schedule fine-grained to guide your own project development process.
  • Work Partitioning
      • How will you partition the work among team members?
  • Conclusions
  • References
      • Include a few relevant pieces of work available in the literature; ideally, these references are cited in the rest of the document.

Submission: Please submit your project scope document by email to [email protected] and [email protected]

Decide on Interface Format / Message Type Between BEVFusion ROS Node and Workzone Boundary Detection Node

Basically, in order to work with the costmap received from the first stage, we may need some kind of parsing code that takes the costmap and converts it into whatever internal representation is desired. I.e., maybe you instantiate a new costmap class and include code internally to create a graph of the construction objects as an adjacency list.

Of course, the implementation details are up to you--this is just a high level task to track that step in the process if applicable. Feel free to @me and change this up if the plan changes or you have some other ideas :)
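As one possible shape for that adjacency-list idea, here's a minimal sketch (the `(x, y)` center format and the distance threshold are assumptions, not a settled interface):

```python
from math import hypot

def build_adjacency(objects, max_dist=5.0):
    """objects: list of (x, y) object centers in costmap cells.
    Returns an adjacency list mapping each object's index to the indices
    of all other objects whose centers lie within max_dist."""
    adj = {i: [] for i in range(len(objects))}
    for i, (xi, yi) in enumerate(objects):
        for j, (xj, yj) in enumerate(objects):
            if i != j and hypot(xi - xj, yi - yj) <= max_dist:
                adj[i].append(j)
    return adj
```

The O(n²) pairwise check is fine here since a costmap will only ever contain a handful of detected objects.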

Survey local costmap generation approaches

I'll add more to this later, but the short of it is: there are many ways to take sensor inputs, detect objects, figure out where they are in 3D space, and plot those on a 2D occupancy grid (which lots of people call a "costmap"; specifically, a "local" costmap just means an occupancy grid around our vehicle with all of the objects positioned relative to it). I.e., this is a very open-ended task in AV, and there is definitely no single, de facto approach.

However, in thinking about the scope of our project, creating this local costmap around the car isn't really the task that we should be stressing over. Rather, our project is more focused on "given a local costmap of all the objects detected around our car--how do we detect a construction zone and draw a boundary around it?" That is, we shouldn't spend all our energy figuring out how to construct the costmap, but instead focus our energy on that second part: identifying and drawing a boundary around construction zones.

Having said that, I'm thinking it would be good for us to go out and do a small "literature review" on some of the "out of the box" approaches to obtaining a local costmap around our vehicle (which will be situated in Carla, if I'm remembering correctly what he told us in class). This could mean going out and looking for research papers, open source projects, YouTube tutorials, etc.

Generate rough/initial workzone boundaries from mock costmap

This task is to track the implementation of whatever approach we want to experiment with first. I.e., if you want to try a clustering algorithm or maybe BFS--or maybe some completely different idea.

I encourage you to create tasks that parallel this one if you are attempting multiple approaches at once--we can just link this to those issues. This task is mainly for gantt chart purposes.
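For the BFS idea specifically, a first cut could be connected-component grouping over occupied cells, with a bounding box per group as the rough boundary (the 8-connectivity and the `value > 0` occupied test are my assumptions, not requirements):

```python
from collections import deque

def group_and_bound(grid):
    """BFS over occupied cells (value > 0) of a 2D grid. Returns one
    (min_row, min_col, max_row, max_col) bounding box per connected
    group of occupied cells, using 8-connectivity."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > 0 and not seen[r][c]:
                # New group: flood-fill it and grow its bounding box.
                q = deque([(r, c)])
                seen[r][c] = True
                box = [r, c, r, c]
                while q:
                    cr, cc = q.popleft()
                    box = [min(box[0], cr), min(box[1], cc),
                           max(box[2], cr), max(box[3], cc)]
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            nr, nc = cr + dr, cc + dc
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and grid[nr][nc] > 0 and not seen[nr][nc]):
                                seen[nr][nc] = True
                                q.append((nr, nc))
                boxes.append(tuple(box))
    return boxes
```

A real workzone boundary probably wants a convex hull or dilated polygon rather than an axis-aligned box, but this gives us groupings to iterate on.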

Create fork of BEVFusion ROS repo that includes Chi's Fixes

  • Create a new fork of BEVFusion
  • Apply the various changes/fixes that were necessary
  • Possibly update the dockerfile so that some of these fixes are baked in (versioning and what not)
  • Also, create a more robust docker run script that doesn't require rocker?

Generate or create "mock" local costmap containing projected 2D footprints of construction objects

This task is essential for work on the two stages of this pipeline to be done in parallel.

Essentially, the first stage of the system is supposed to produce a 2D costmap with detected construction objects scattered around it. Then, the second stage is supposed to take that costmap (basically a 2D image or array) and somehow identify groups of construction objects and draw a line around each grouping to define the boundary (which denotes the "non-drivable" area).

In order to not hold back those working on the second stage, we NEED a "fake" costmap to be made (perhaps by hand in something like photoshop, mspaint, or that weird photo editor available on Ubuntu @CMUBOB97 you know what I'm talking about) so that work can be started on the "grouping" stage algorithm (or whatever approach we end up going with). This 2D grid/costmap will essentially be the interface between the two stages for now, so the sooner we create a mock output for that second stage to work on, the better.

Down below, whenever you get some time to think about it, can you think of any requirements we need to have on these mock costmaps? Also, maybe it'd be good to look at some sample outputs from algorithms like BEVFusion so that our mock costmap is somewhat representative of what we will really see. Hell, even a screenshot of one of their sample outputs could work pretty well.
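If hand-drawing turns out to be a pain, a mock costmap can also be generated in a few lines. In this sketch the cluster count, spread, and 0/100 cell values are all guesses at what the real BEVFusion-derived output will look like:

```python
import random

def make_mock_costmap(size=100, n_clusters=2, objs_per_cluster=8,
                      spread=6, seed=0):
    """Fake 'detected construction objects' costmap: a size x size grid
    of 0s (free) with a few clusters of occupied (100) cells scattered
    around, mimicking cones/barriers grouped into work zones."""
    rng = random.Random(seed)  # seeded so every run produces the same map
    grid = [[0] * size for _ in range(size)]
    for _ in range(n_clusters):
        cx = rng.randrange(spread, size - spread)
        cy = rng.randrange(spread, size - spread)
        for _ in range(objs_per_cluster):
            r = min(size - 1, max(0, cx + rng.randint(-spread, spread)))
            c = min(size - 1, max(0, cy + rng.randint(-spread, spread)))
            grid[r][c] = 100
    return grid
```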
