
NanoSAM

๐Ÿ‘ Usage - โฑ๏ธ Performance - ๐Ÿ› ๏ธ Setup - ๐Ÿคธ Examples - ๐Ÿ‹๏ธ Training
- ๐Ÿง Evaluation - ๐Ÿ‘ Acknowledgment - ๐Ÿ”— See also

NanoSAM is a Segment Anything (SAM) model variant that is capable of running in 🔥 real-time 🔥 on NVIDIA Jetson Orin Platforms with NVIDIA TensorRT.

NanoSAM is trained by distilling the MobileSAM image encoder on unlabeled images. For an introduction to knowledge distillation, we recommend checking out this tutorial.
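In essence, the student image encoder is trained to reproduce the teacher's (MobileSAM's) image embeddings on unlabeled images. Below is a minimal conceptual sketch of one training step, assuming generic PyTorch modules; this is not the repository's training code, and the loss and feature shapes are illustrative.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, images, optimizer):
    # Teacher features (MobileSAM image embeddings) are computed without gradients.
    with torch.no_grad():
        teacher_features = teacher(images)
    student_features = student(images)  # e.g. (B, 256, 64, 64) embeddings
    # Regress the student features onto the teacher features (illustrative MSE loss).
    loss = F.mse_loss(student_features, teacher_features)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()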

๐Ÿ‘ Usage

Using NanoSAM from Python looks like this:

import numpy as np
import PIL.Image

from nanosam.utils.predictor import Predictor

predictor = Predictor(
    image_encoder="data/resnet18_image_encoder.engine",
    mask_decoder="data/mobile_sam_mask_decoder.engine"
)

image = PIL.Image.open("dog.jpg")

predictor.set_image(image)

# (x, y) is a prompt point in image pixel coordinates; label 1 marks it
# as a foreground point (see the label table below).
mask, _, _ = predictor.predict(np.array([[x, y]]), np.array([1]))
Notes The point labels may be:

| Point Label | Description |
|-------------|-----------------------------|
| 0 | Background point |
| 1 | Foreground point |
| 2 | Bounding box top-left |
| 3 | Bounding box bottom-right |
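For example, a bounding box prompt can be given by passing the two box corners with labels 2 and 3, continuing the snippet above and following the pattern used in examples/basic_usage.py (the coordinates below are placeholders):

bbox = [100, 100, 850, 759]  # placeholder (x0, y0, x1, y1) in image pixels
points = np.array([
    [bbox[0], bbox[1]],  # top-left corner, label 2
    [bbox[2], bbox[3]],  # bottom-right corner, label 3
])
point_labels = np.array([2, 3])

mask, _, _ = predictor.predict(points, point_labels)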

Follow the setup instructions below to build the engine files.

โฑ๏ธ Performance

NanoSAM runs real-time on Jetson Orin Nano.

| Model † | ⏱️ Orin Nano: Image Encoder (ms) | ⏱️ Orin Nano: Full Pipeline (ms) | ⏱️ AGX Orin: Image Encoder (ms) | ⏱️ AGX Orin: Full Pipeline (ms) | 🎯 mIoU (All) ‡ | 🎯 mIoU (Small) | 🎯 mIoU (Medium) | 🎯 mIoU (Large) |
|---|---|---|---|---|---|---|---|---|
| MobileSAM | TBD | 146 | 35 | 39 | 0.728 | 0.658 | 0.759 | 0.804 |
| NanoSAM (ResNet18) | TBD | 27 | 4.2 | 8.1 | 0.706 | 0.624 | 0.738 | 0.796 |
Notes

† The MobileSAM image encoder is optimized with FP32 precision because it produced erroneous results when built for FP16 precision with TensorRT. The NanoSAM image encoder is built with FP16 precision as we did not notice a significant accuracy degradation. Both pipelines use the same mask decoder, which is built with FP32 precision. For all models, the accuracy reported uses the same model configuration used to measure latency.

‡ Accuracy is computed by prompting SAM with ground-truth object bounding box annotations from the COCO 2017 validation dataset. The IoU is then computed between the mask output of the SAM model for the object and the ground-truth COCO segmentation mask for the object. The mIoU is the average IoU over all objects in the COCO 2017 validation set matching the target object size (small, medium, large).
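For reference, the per-object IoU described above reduces to the following computation over boolean masks (a minimal sketch, not the exact evaluation code):

import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    # Both masks are boolean arrays of the same height and width.
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0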

๐Ÿ› ๏ธ Setup

NanoSAM is fairly easy to get started with.

  1. Install the dependencies

    1. Install PyTorch

    2. Install torch2trt

    3. Install NVIDIA TensorRT

    4. (optional) Install TRTPose - For the pose example.

      git clone https://github.com/NVIDIA-AI-IOT/trt_pose
      cd trt_pose
      python3 setup.py develop --user
    5. (optional) Install the Transformers library - For the OWL-ViT example.

      python3 -m pip install transformers
  2. Install the NanoSAM Python package

    git clone https://github.com/NVIDIA-AI-IOT/nanosam
    cd nanosam
    python3 setup.py develop --user
  3. Build the TensorRT engine for the mask decoder

    1. Export the MobileSAM mask decoder ONNX file (or download directly from here)

      python3 -m nanosam.tools.export_sam_mask_decoder_onnx \
          --model-type=vit_t \
          --checkpoint=assets/mobile_sam.pt \
          --output=data/mobile_sam_mask_decoder.onnx
    2. Build the TensorRT engine

      trtexec \
          --onnx=data/mobile_sam_mask_decoder.onnx \
          --saveEngine=data/mobile_sam_mask_decoder.engine \
          --minShapes=point_coords:1x1x2,point_labels:1x1 \
          --optShapes=point_coords:1x1x2,point_labels:1x1 \
          --maxShapes=point_coords:1x10x2,point_labels:1x10

      This assumes the mask decoder ONNX file is downloaded to data/mobile_sam_mask_decoder.onnx

      Notes This command builds the engine to support up to 10 keypoints. You can increase this limit as needed by specifying a different max shape.
  4. Build the TensorRT engine for the NanoSAM image encoder

    1. Download the image encoder: resnet18_image_encoder.onnx

    2. Build the TensorRT engine

      trtexec \
          --onnx=data/resnet18_image_encoder.onnx \
          --saveEngine=data/resnet18_image_encoder.engine \
          --fp16
  5. Run the basic usage example

    python3 examples/basic_usage.py \
        --image_encoder=data/resnet18_image_encoder.engine \
        --mask_decoder=data/mobile_sam_mask_decoder.engine
    

    This outputs a result to data/basic_usage_out.jpg

That's it! From there, you can read the example code to see how to use NanoSAM from Python, or try running the more advanced examples below.
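For instance, the mask returned by the predictor can be thresholded into a binary NumPy array and overlaid on the input image. This is a sketch modeled on the basic usage example; the exact tensor shape may differ, and the output path is a placeholder.

import matplotlib.pyplot as plt

# `image` and `mask` come from the usage snippet above; the predicted mask
# is a tensor of logits that is thresholded at zero.
binary_mask = (mask[0, 0] > 0).detach().cpu().numpy()

plt.imshow(image)
plt.imshow(binary_mask, alpha=0.5)
plt.savefig("data/usage_sketch_out.jpg")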

🤸 Examples

NanoSAM can be applied in many creative ways.

Example 1 - Segment with bounding box

This example uses a known image with a fixed bounding box to control NanoSAM segmentation.

To run the example, call

python3 examples/basic_usage.py \
    --image_encoder="data/resnet18_image_encoder.engine" \
    --mask_decoder="data/mobile_sam_mask_decoder.engine"

Example 2 - Segment with bounding box (using OWL-ViT detections)

This example demonstrates using OWL-ViT to detect objects from one or more text prompts, and then segmenting those objects with NanoSAM.

To run the example, call

python3 examples/segment_from_owl.py \
    --prompt="A tree" \
    --image_encoder="data/resnet18_image_encoder.engine" \
    --mask_decoder="data/mobile_sam_mask_decoder.engine
Notes - While OWL-ViT does not run in real time on Jetson Orin Nano (roughly 3 seconds per image), it is nice for experimentation because it lets you detect a wide variety of objects. You could substitute any other real-time pre-trained object detector to take full advantage of NanoSAM's speed.
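In outline, the detection-to-segmentation hand-off looks roughly like this (a sketch using the Hugging Face OWL-ViT API rather than the exact example code; the model name, threshold, image path, and prompt are assumptions):

import numpy as np
import PIL.Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

from nanosam.utils.predictor import Predictor

# Placeholder image path and text prompt for illustration.
image = PIL.Image.open("dog.jpg")
prompt = "a tree"

# 1. Detect objects with OWL-ViT (Hugging Face Transformers).
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
inputs = processor(text=[[prompt]], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)[0]

# 2. Segment the highest-scoring detection with NanoSAM, prompting with
#    the box corners (label 2 = top-left, label 3 = bottom-right).
predictor = Predictor(
    image_encoder="data/resnet18_image_encoder.engine",
    mask_decoder="data/mobile_sam_mask_decoder.engine",
)
predictor.set_image(image)
x0, y0, x1, y1 = detections["boxes"][detections["scores"].argmax()].tolist()
mask, _, _ = predictor.predict(np.array([[x0, y0], [x1, y1]]), np.array([2, 3]))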

Example 3 - Segment with keypoints (offline using TRTPose detections)

This example demonstrates how to use human pose keypoints from TRTPose to control NanoSAM segmentation.

To run the example, call

python3 examples/segment_from_pose.py

This will save an output figure to data/segment_from_pose_out.png.

Example 4 - Segment with keypoints (online using TRTPose detections)

This example demonstrates how to use human pose to control segmentation on a live camera feed. This example requires an attached display and camera.

To run the example, call

python3 examples/demo_pose_tshirt.py

Example 5 - Segment and track (experimental)

This example demonstrates rudimentary segmentation tracking with NanoSAM. This example requires an attached display and camera.

To run the example, call

python3 examples/demo_click_segment_track.py <image_encoder_engine> <mask_decoder_engine>

Once the example is running, double-click an object you want to track.

Notes This tracking method is very simple and can get lost easily. It is intended to demonstrate creative ways you can use NanoSAM, but could likely be improved with more work.
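One simple way to chain segmentations across frames (a conceptual sketch only, not necessarily what demo_click_segment_track.py does) is to re-prompt each new frame with the centroid of the previous frame's mask:

import numpy as np

def track_step(predictor, frame, prev_mask):
    # Use the centroid of the previous binary mask as a foreground point prompt.
    ys, xs = np.nonzero(prev_mask)
    point = np.array([[xs.mean(), ys.mean()]])
    predictor.set_image(frame)
    mask, _, _ = predictor.predict(point, np.array([1]))
    return (mask[0, 0] > 0).detach().cpu().numpy()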

๐Ÿ‹๏ธ Training

You can train NanoSAM on a single GPU.

  1. Download and extract the COCO 2017 train images

    mkdir -p data/coco
    cd data/coco
    wget http://images.cocodataset.org/zips/train2017.zip
    unzip train2017.zip
    cd ../..
  2. Build the MobileSAM image encoder (used as teacher model)

    1. Export to ONNX

      python3 -m nanosam.tools.export_sam_image_encoder_onnx \
          --checkpoint="assets/mobile_sam.pt" \
          --output="data/mobile_sam_image_encoder_bs16.onnx" \
          --model_type=vit_t \
          --batch_size=16
    2. Build the TensorRT engine with batch size 16

      trtexec \
          --onnx=data/mobile_sam_image_encoder_bs16.onnx \
          --shapes=image:16x3x1024x1024 \
          --saveEngine=data/mobile_sam_image_encoder_bs16.engine
  3. Train the NanoSAM image encoder by distilling MobileSAM

    python3 -m nanosam.tools.train \
        --images=data/coco/train2017 \
        --output_dir=data/models/resnet18 \
        --model_name=resnet18 \
        --teacher_image_encoder_engine=data/mobile_sam_image_encoder_bs16.engine \
        --batch_size=16
    Notes Once training starts, visualizations of progress and checkpoints will be saved to the specified output directory. You can stop training and resume from the last saved checkpoint if needed.

    For a list of arguments, you can type

    python3 -m nanosam.tools.train --help
  4. Export the trained NanoSAM image encoder to ONNX

    python3 -m nanosam.tools.export_image_encoder_onnx \
        --model_name=resnet18 \
        --checkpoint="data/models/resnet18/checkpoint.pth" \
        --output="data/resnet18_image_encoder.onnx"

You can then build the TensorRT engine as detailed in the getting started section.

๐Ÿง Evaluation

You can reproduce the accuracy results above by evaluating against COCO ground truth masks.

  1. Download and extract the COCO 2017 validation set.

    mkdir -p data/coco  # skip if the directory already exists
    cd data/coco
    wget http://images.cocodataset.org/zips/val2017.zip
    wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    unzip val2017.zip
    unzip annotations_trainval2017.zip
    cd ../..
  2. Compute the IoU of NanoSAM mask predictions against the ground truth COCO mask annotation.

    python3 -m nanosam.tools.eval_coco \
        --coco_root=data/coco/val2017 \
        --coco_ann=data/coco/annotations/instances_val2017.json \
        --image_encoder=data/resnet18_image_encoder.engine \
        --mask_decoder=data/mobile_sam_mask_decoder.engine \
        --output=data/resnet18_coco_results.json

    This uses the COCO ground-truth bounding boxes as inputs to NanoSAM.

  3. Compute the average IoU over a selected category or size

    python3 -m nanosam.tools.compute_eval_coco_metrics \
        data/resnet18_coco_results.json \
        --size="all"
    Notes For all options, type python3 -m nanosam.tools.compute_eval_coco_metrics --help.

    To compute the mIoU for a specific category id, call

    python3 -m nanosam.tools.compute_eval_coco_metrics \
        data/resnet18_coco_results.json \
        --category_id=1
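The small/medium/large buckets follow the standard COCO convention based on object area in pixels; stating this as an assumption about how the --size option is interpreted, the categorization looks like this:

def coco_size_category(area: float) -> str:
    # Standard COCO object-size thresholds (area in pixels).
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"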

๐Ÿ‘ Acknowledgement

This project is enabled by the great projects below.

  • SAM - The original Segment Anything model.
  • MobileSAM - The distilled Tiny ViT Segment Anything model.

🔗 See also
