This is a selection of tools to map LEDs into 2D and 3D space using only your webcam!
> **Tip:** All scripts can be run with the `--help` argument to list optional parameters such as resolution, exposure and latency.
After downloading this repository and installing Python, run `pip install -r requirements.txt`.
Run `python scripts/check_camera.py` to ensure your camera is compatible with MariMapper, or check the list below:
**Working cameras:**

- HP 4310 (settings may not revert)
- Logitech C920
- Dell Latitude 5521 built-in
- HP Envy x360 built-in
Test LED identification by turning down the lights and holding a torch or LED up to the camera. This should start with few warnings and no errors, and produce a very dark image with a single crosshair centered on your LED:
> **Important:** This works best in a dim environment, so please make sure your camera isn't pointing at any other light sources!
Your LEDs are as unique as you are, so the fastest way to connect MariMapper to your system is to fill in the blanks in `backends/custom/custom_backend.py`:
```python
# import some_led_library


class Backend:
    def __init__(self, led_count: int):
        # Remove the following line after you have implemented the set_led function!
        raise NotImplementedError(
            "Could not load backend as it has not been implemented, go implement it!"
        )

    def set_led(self, led_index: int, on: bool):
        # Write your code for controlling your LEDs here
        # It should turn the LED at led_index on or off depending on the "on" variable
        # For example:
        # if on:
        #     some_led_library.set_led(led_index, (255, 255, 255))
        # else:
        #     some_led_library.set_led(led_index, (0, 0, 0))
        pass
```
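To make the shape of the interface concrete, here is a hypothetical backend that simply records LED state in memory rather than driving hardware; swap the body of `set_led` for calls to your own LED library:

```python
class Backend:
    def __init__(self, led_count: int):
        self.led_count = led_count
        # One on/off flag per LED; a real backend would open its
        # connection to the LED controller here instead
        self.states = [False] * led_count

    def set_led(self, led_index: int, on: bool):
        # A real backend would drive hardware here; this sketch just records state
        self.states[led_index] = on


backend = Backend(led_count=4)
backend.set_led(2, True)
print(backend.states)  # [False, False, True, False]
```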
You can test your backend with `python scripts/check_backend.py`.
MariMapper also supports the following pre-made backends, which can be selected in the following steps using the `--backend` argument.
Set up your LEDs in front of your camera and run `python scripts/capture_sequence.py my_scan --led_count 64 --backend fadecandy`.
Change `--led_count` to however many LEDs you want to scan, and `--backend` to whichever backend you're using.
This will produce a timestamped CSV file in the `my_scan` folder with LED index, u and v values.
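Because the output is a plain CSV, it is easy to post-process. This sketch parses a hand-written sample; the real file's exact column names may differ from the `index`, `u`, `v` headers assumed here:

```python
import csv
import io

# Hypothetical sample of a 2D map CSV: one row per detected LED,
# with the LED index and its (u, v) image coordinates
sample = """index,u,v
0,0.512,0.430
1,0.518,0.455
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(int(row["index"]), float(row["u"]), float(row["v"]))
```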
Run `python scripts/visualise.py <filename>` to visualise 2D or 3D map files.
**Speed this up with an extra step:**
Place one of your addressable LEDs in front of your camera and run `python scripts/check_latency.py --backend fadecandy`, changing `--backend` to whichever backend you're using. Once complete, the recommended latency will be printed to the console in milliseconds. This can be passed to `capture_sequence.py` via the `--latency` argument to speed up scans.
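Conceptually, the latency check times how long a state change takes to show up on camera. Here is a rough sketch of that idea, using toy stand-ins (`toggle`, `detected`) in place of the real backend and camera code:

```python
import time


def measure_latency(toggle, detected, samples=5):
    """Return mean seconds between toggling an LED and detecting the change."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        toggle()               # stand-in for backend.set_led(index, True)
        while not detected():  # stand-in for finding the LED in a camera frame
            time.sleep(0.001)
        times.append(time.monotonic() - start)
        # a real tool would turn the LED off again between samples
    return sum(times) / len(times)


# Toy stand-ins so the sketch runs without hardware or a camera:
state = {"on": False}
mean_s = measure_latency(lambda: state.update(on=True), lambda: state["on"])
print(mean_s >= 0.0)  # True
```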
To create a 3D map, run `capture_sequence.py` multiple times from different views of your LEDs; this can be done either by moving your webcam around your LEDs or by rotating your LEDs.
> **Tip:** You can move your webcam wherever you like as long as most of your LEDs stay in view. Try to get at least 3 views between 6° and 20° apart.
Once you have a selection of 2D maps captured with the `capture_sequence.py` script, run `python scripts/reconstruct.py my_scan`. This may take a while; once complete, it will generate `reconstruction.csv` in the `my_scan` folder.
Here is an example reconstruction of Highbeam's body LEDs
How to move the model around:

- Click and drag to rotate the model
- Hold shift to roll the camera
- Use the scroll wheel to zoom in / out
- Use the `n` key to hide / show normals
- Use the `+` / `-` keys to increase / decrease point sizes
- Use the `1`, `2` & `3` keys to change colour scheme
If you have a high enough density 3D map, you can use the remesh tool to create a 3D mesh based on your LEDs!
Run `python scripts/remesh.py reconstruction.csv my_mesh.ply`. This will generate a PLY file which you can open and look at with your eyes.
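PLY is a simple, widely supported format, so the result can also be opened in viewers such as MeshLab or Blender. To give a feel for the format, here is a tiny hand-written ASCII PLY (the layout of `remesh.py`'s actual output may differ):

```python
# A minimal hand-written ASCII PLY: 3 vertices and 1 triangular face.
# The header declares the elements; the body lists them in order.
ply = """ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0 0 0
1 0 0
0 1 0
3 0 1 2
"""

header, body = ply.split("end_header\n")
print("vertex" in header, len(body.strip().splitlines()))  # True 4
```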
I would really love to hear what you think and if you have any bugs or improvements, please raise them here or drop me a line on Telegram.
If you implement a backend that you think others might use, please raise a pull request or just drop me a message on Telegram!
If you want a super speedy PR, run `flake8`, `flake8-bugbear` and `black` before submitting changes!