Super simple: one line of code to use Semantic-Segment-Anything to auto-annotate your datasets.

This repo is 100% based on the amazing Semantic-Segment-Anything project, but is an ultra-simplified version for those who just want to run one line of code to annotate their datasets and don't give a heck about the underlying processes.
The following requirements are important:
- GPU memory >= 13 GB (SAM is GPU-memory hungry)
- NVIDIA Docker installed
- NVIDIA driver installed on the host machine
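Before building, you can sanity-check the memory requirement against what `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits` reports (one MiB total per line, one line per GPU). A minimal sketch of that check, assuming you feed it the command's raw output:

```python
# Check that every visible GPU meets the >= 13 GB requirement above.
# Input is the output of:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
# which prints one memory total in MiB per GPU, one per line.

MIN_MIB = 13 * 1024  # the 13 GB requirement, in MiB


def has_enough_gpu_memory(smi_output: str, min_mib: int = MIN_MIB) -> bool:
    totals = [int(token) for token in smi_output.split() if token]
    # Fail if no GPU is visible, or if any GPU is under the threshold.
    return bool(totals) and all(t >= min_mib for t in totals)
```

A 24 GB card such as the 3090 Ti comfortably passes; anything reporting under roughly 13 312 MiB will not.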
This repo has been tested in the following environment:
- Ubuntu 22.04
- Single GPU 3090 Ti
- NVIDIA Driver 530
```shell
make run
```
In the `input` folder, you will see a demo picture, whereas the `output` folder is empty.
Once you are inside the container, run:

```shell
make annotate-single-gpu
```

You will see that the demo image gets annotated, with the results written to the `output` folder.
Put all images that you would like to annotate under `input`, such as:

```
├── input
│   ├── whatever_name_you_like.jpg
│   ├── whatever_format_you_like.png
│   ├── ...
```
Then run:

```shell
make annotate-single-gpu
```

You will get all results in the `output` folder. That's it.
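After a batch run it can be handy to confirm that every input image actually produced a result. A small sketch, assuming the `<stem>_semantic.json` output naming seen with the demo image (adjust the suffix if your output layout differs):

```python
from pathlib import Path

# Report input images that have no matching <stem>_semantic.json in output.
# The "_semantic.json" suffix matches the demo output (demo_semantic.json);
# verify it against your own output folder before relying on this.

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}


def find_missing(input_dir: str = "input", output_dir: str = "output") -> list:
    missing = []
    for img in sorted(Path(input_dir).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue  # skip non-image files
        if not (Path(output_dir) / f"{img.stem}_semantic.json").exists():
            missing.append(img.name)
    return missing
```

Running `find_missing()` from the repo root returns the filenames still waiting to be annotated; an empty list means the batch completed.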
If you have really dense objects, the visualization can be hard to read.
I have prepared a script, `simple_visualizer.py`, to parse the JSON and re-draw the image.
Simply do:
```shell
python simple_visualizer.py redraw \
    --image_path=input/demo.jpg \
    --json_path=output/demo_semantic.json \
    --classes_to_keep="[\"person\"]"
```
And you will see that a new visualization is available:
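If you want the `--classes_to_keep` filtering logic in your own code, the idea is simply to drop every annotation whose label is not in the keep-list. A minimal sketch, where the `annotations` list and the `class_name` key are hypothetical stand-ins — check the actual schema of your `*_semantic.json` files before using this:

```python
import json

# Keep only annotations whose label is in the wanted set.
# NOTE: "class_name" is a placeholder key; inspect your *_semantic.json
# output to find the real field that stores the predicted class.


def keep_classes(annotations: list, classes_to_keep: list) -> list:
    wanted = set(classes_to_keep)
    return [a for a in annotations if a.get("class_name") in wanted]


# Hypothetical usage against an output file:
# data = json.load(open("output/demo_semantic.json"))
# person_only = keep_classes(data["annotations"], ["person"])
```

This is the same effect as passing `--classes_to_keep="[\"person\"]"` to the script, just applied to the already-loaded JSON.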