TorchServe

TorchServe is a flexible and easy to use tool for serving PyTorch models.

For full documentation, see Model Server for PyTorch Documentation.

Contents of this Document

  • Install TorchServe
  • Install TorchServe for development
  • Serve a model
  • Quick Start with Docker
  • Learn More
  • Contributing

Install TorchServe

Detailed instructions are provided for Conda, but you may also use pip and virtualenv if you prefer. Note: Java 11 is required. Instructions for installing Java 11 on Ubuntu or macOS are provided in the Install with Conda section.
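
If you are unsure whether a suitable Java runtime is already on your PATH, a quick check before installing (a sanity check, not part of the official instructions):

java -version    # should report a version 11 runtime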

Install with pip

To use pip to install TorchServe and the model archiver:

pip install torch torchtext torchvision sentencepiece
pip install torchserve torch-model-archiver
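
To confirm that both packages installed correctly, you can ask pip for their metadata (pip show accepts multiple package names):

pip show torchserve torch-model-archiver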

Install with Conda

Ubuntu

  1. Install Java 11
    sudo apt-get install openjdk-11-jdk
  2. Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html)
  3. Create an environment and install torchserve and torch-model-archiver.
    For CPU:
    conda create --name torchserve torchserve torch-model-archiver pytorch torchtext torchvision -c pytorch -c powerai
    For GPU:
    conda create --name torchserve torchserve torch-model-archiver pytorch torchtext torchvision cudatoolkit=10.1 -c pytorch -c powerai
  4. Activate the environment
    source activate torchserve

macOS

  1. Install Java 11
    brew tap AdoptOpenJDK/openjdk
    brew cask install adoptopenjdk11
  2. Install Conda (https://docs.conda.io/projects/conda/en/latest/user-guide/install/macos.html)
  3. Create an environment and install torchserve and torch-model-archiver
    conda create --name torchserve torchserve torch-model-archiver pytorch torchtext torchvision -c pytorch -c powerai
  4. Activate the environment
    source activate torchserve
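
To verify that the packages landed in the new environment (conda list accepts a package-name filter):

conda list torchserve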

Now you are ready to package and serve models with TorchServe.

Install TorchServe for development

If you plan to develop with TorchServe and change some of the source code, you must install it from source code. First, clone the repo with:

git clone https://github.com/pytorch/serve
cd serve

Then install it in editable mode, so that your source changes take effect without reinstalling (a quick way to verify the editable installs follows this list):

pip install -e .
  • To develop with torch-model-archiver, from the repository root:
cd model-archiver
pip install -e .
  • To upgrade TorchServe or the model archiver from source code and keep your changes editable, run:
pip install -U -e .
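
To confirm the editable installs are active, pip can list only the packages installed in editable mode:

pip list -e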

For more information about the model archiver, see the detailed documentation.

Serve a model

This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already installed TorchServe and the model archiver.

To run this example, clone the TorchServe repository and navigate to the root of the repository:

git clone https://github.com/pytorch/serve.git
cd serve

Then run the following steps from the root of the repository.

Store a Model

To serve a model with TorchServe, first archive the model as a MAR file. You can use the model archiver to package a model. You can also create model stores to store your archived models.

  1. Create a directory to store your models.

    mkdir ~/model_store
    cd ~/model_store
  2. Download a trained model.

    wget https://download.pytorch.org/models/densenet161-8d451a50.pth
  3. Archive the model by using the model archiver. The --extra-files parameter uses a file from the TorchServe repo, so update the path if necessary.

    torch-model-archiver --model-name densenet161 --version 1.0 --model-file ~/serve/examples/image_classifier/densenet_161/model.py --serialized-file ~/model_store/densenet161-8d451a50.pth --extra-files ~/serve/examples/image_classifier/index_to_name.json --handler image_classifier
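
Because the command above does not pass --export-path, torch-model-archiver writes the archive to the current directory; assuming you are still in ~/model_store from step 1, the new archive should appear alongside the downloaded weights:

ls ~/model_store    # expect densenet161.mar and densenet161-8d451a50.pth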

For more information about the model archiver, see Torch Model Archiver for TorchServe.

Start TorchServe to serve the model

After you archive and store the model, use the torchserve command to serve the model.

torchserve --start --model-store ~/model_store --models densenet161=densenet161.mar

After you execute the torchserve command above, TorchServe runs on your host, listening for inference requests.
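
To confirm the server is up, you can hit the inference API's health-check endpoint (it listens on port 8080 by default):

curl http://127.0.0.1:8080/ping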

Note: If you specify model(s) when you start TorchServe, it automatically scales backend workers to the number of available vCPUs (if you run on a CPU instance) or to the number of available GPUs (if you run on a GPU instance). On powerful hosts with many compute resources (vCPUs or GPUs), this startup and autoscaling process can take considerable time. To minimize TorchServe startup time, avoid registering and scaling models during startup and defer that work to a later point by using the corresponding Management API, which allows finer-grained control of the resources allocated to any particular model.
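
As a sketch of that deferred workflow (assuming the default Management API port of 8081 and the densenet161 archive created above), you could start TorchServe with no models and register the model afterwards:

# start the server without registering any models
torchserve --start --model-store ~/model_store

# register the archive from the model store with one initial worker
curl -X POST "http://127.0.0.1:8081/models?url=densenet161.mar&initial_workers=1"

# later, scale the worker pool for that model
curl -X PUT "http://127.0.0.1:8081/models/densenet161?min_worker=2"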

Get predictions from a model

To test the model server, send a request to the server's predictions API.

Complete the following steps:

  • Open a new terminal window (other than the one running TorchServe).
  • Use curl to download a picture of a kitten, with the -O flag so curl saves it as kitten.jpg.
  • Use curl to send a POST request to the TorchServe predict endpoint with the kitten image.

The following code completes all three steps:

curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/densenet161 -T kitten.jpg

The predict endpoint returns a prediction response in JSON. It will look something like the following result:

[
  {
    "tiger_cat": 0.46933549642562866
  },
  {
    "tabby": 0.4633878469467163
  },
  {
    "Egyptian_cat": 0.06456148624420166
  },
  {
    "lynx": 0.0012828214094042778
  },
  {
    "plastic_bag": 0.00023323034110944718
  }
]

You will see this result in the response to your curl call to the predict endpoint, and in the server logs in the terminal window running TorchServe. It is also logged locally, along with metrics.

Now you've seen how easy it can be to serve a deep learning model with TorchServe! Would you like to know more?

Stop the running TorchServe

To stop the currently running TorchServe instance, run the following command:

torchserve --stop

You will see output indicating that TorchServe has stopped.

Quick Start with Docker

Prerequisites

Docker must be installed. Then clone the TorchServe repository and navigate to the docker directory:

git clone https://github.com/pytorch/serve.git
cd serve/docker

Build the TorchServe Docker image

The following are examples of how to use the build_image.sh script to build Docker images that support CPU or GPU inference.

To build the TorchServe image for a CPU device using the master branch, use the following command:

./build_image.sh

To create a Docker image for a specific branch, use the following command:

./build_image.sh -b <branch_name>

To create a Docker image for a GPU device, use the following command:

./build_image.sh --gpu

To create a Docker image for a GPU device with a specific branch, use the following command:

./build_image.sh -b <branch_name> --gpu

To run your TorchServe Docker image and start TorchServe inside the container with a pre-registered resnet-18 image classification model, use the following command:

./start.sh
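
Assuming the container exposes the default inference port 8080 and the pre-registered model is served under the name resnet-18 (an assumption based on the description above), you can test it the same way as before:

curl -O https://s3.amazonaws.com/model-server/inputs/kitten.jpg
curl -X POST http://127.0.0.1:8080/predictions/resnet-18 -T kitten.jpg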

Learn More

For full documentation, see Model Server for PyTorch Documentation.

Contributing

We welcome all contributions!

To learn more about how to contribute, see the contributor guide here.

To file a bug or request a feature, please file a GitHub issue. For filing pull requests, please use the template here. Cheers!
