
Comments (4)

nv-hwoo commented on June 12, 2024

Hi @ZhanqiuHu,

> I am trying to profile our decoupled models (python backend) with perf_analyzer, and I'm curious how the following latency metrics are calculated?

Please see here for the details of how these metrics are calculated.

> Also, when using grpc or http endpoints, is it possible to measure the latencies spent on network overhead and (un)marshalling protobuf?

I believe if you run perf_analyzer with -i grpc, you should see output like this:

*** Measurement Settings ***
  Batch size: 1
  Service Kind: Triton
  Using "time_windows" mode for stabilization
  Measurement window: 5000 msec
  Using synchronous calls for inference
  Stabilizing using average latency

Request concurrency: 1
  Client:
    Request count: 30375
    Throughput: 1685.54 infer/sec
    Avg latency: 591 usec (standard deviation 144 usec)
    p50 latency: 569 usec
    p90 latency: 710 usec
    p95 latency: 891 usec
    p99 latency: 1105 usec
    Avg gRPC time: 578 usec ((un)marshal request/response 6 usec + response wait 572 usec)
  Server:
    Inference count: 30376
    Execution count: 30376
    Successful request count: 30376
    Avg request latency: 319 usec (overhead 107 usec + queue 26 usec + compute input 46 usec + compute infer 85 usec + compute output 53 usec)

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 1685.54 infer/sec, latency 591 usec
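
To relate the client-side and server-side numbers, here is a rough back-of-the-envelope using the figures above (splitting the remainder into network time and client overhead is an approximation on my part, not something perf_analyzer reports directly):

```python
# Figures copied from the perf_analyzer summary above, all in microseconds.
# A run along the lines of `perf_analyzer -m <model> -i grpc --concurrency-range 1:1`
# (model name is a placeholder) prints a summary of this shape.
avg_client_latency = 591  # end-to-end latency measured by perf_analyzer
avg_grpc_time = 578       # (un)marshal request/response + response wait
unmarshal_time = 6        # client-side protobuf (un)marshalling
response_wait = 572       # time spent waiting for the gRPC response
server_request = 319      # server-side avg request latency (overhead + queue + compute)

# Response wait covers the server's own request latency plus time on the wire,
# so the difference gives a rough estimate of the network component.
approx_network = response_wait - server_request              # ~253 usec
# Whatever remains of the client latency is client-side work around the RPC itself.
approx_client_overhead = avg_client_latency - avg_grpc_time  # ~13 usec

print(f"approx network time:    {approx_network} usec")
print(f"approx client overhead: {approx_client_overhead} usec")
```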


ZhanqiuHu commented on June 12, 2024

Thanks a lot for providing the details! I was more interested in what "Compute Input", "Compute Output", and "Network+Server Send/Recv" specifically are. When I use the -i grpc flag, it doesn't seem to report the gRPC time, and I was wondering if it is because I'm using a custom decoupled python model.

Thank you very much!


nv-hwoo commented on June 12, 2024

For compute input, compute infer, and compute output metrics, you could read the Triton doc here for more details.

> When I use the -i grpc flag, it doesn't seem to report the gRPC time, and I was wondering if it is because I'm using a custom decoupled python model.

Yes, you are correct. The gRPC time report is not supported for decoupled models.
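
For context, a decoupled model has to be driven through the streaming gRPC API on the client side, which is presumably why the per-request (un)marshal/response-wait breakdown isn't reported for it. A minimal streaming client sketch, assuming a hypothetical model "my_decoupled_model" with a single FP32 input "IN" and output "OUT":

```python
# Minimal sketch of calling a decoupled model over gRPC streaming.
# Model and tensor names ("my_decoupled_model", "IN", "OUT") are placeholders.
import queue
import numpy as np
import tritonclient.grpc as grpcclient

responses = queue.Queue()

def callback(result, error):
    # Decoupled models may return zero, one, or many responses per request.
    responses.put((result, error))

client = grpcclient.InferenceServerClient(url="localhost:8001")

inp = grpcclient.InferInput("IN", [1, 4], "FP32")
inp.set_data_from_numpy(np.zeros((1, 4), dtype=np.float32))

client.start_stream(callback=callback)
client.async_stream_infer(model_name="my_decoupled_model", inputs=[inp])

# Wait for the first response; a real client keeps reading until the server
# flags the final response for this request.
result, error = responses.get(timeout=10)
client.stop_stream()

if error is not None:
    raise error
print(result.as_numpy("OUT"))
```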


ZhanqiuHu commented on June 12, 2024

Thanks for the answer! However, the description in the doc seems a little vague. What specific steps are involved in the processing of inputs and outputs? For example, for inputs, is copying/moving the data to the device part of it? And I guess for a decoupled python model, (de)serialization will be part of the compute infer time rather than the compute input or compute output time?

Thanks!
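
For reference, the kind of decoupled Python backend model being discussed looks roughly like the sketch below (tensor names and the single-response pattern are illustrative, and config.pbtxt must enable the decoupled transaction policy). Everything inside execute(), including converting tensors to and from numpy, runs in the backend, so presumably that time shows up in the server-side compute metrics rather than in the client-side gRPC breakdown:

```python
# model.py -- minimal sketch of a decoupled Python backend model.
# Tensor names ("IN", "OUT") and the single-response pattern are illustrative.
# config.pbtxt must set: model_transaction_policy { decoupled: true }
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        for request in requests:
            sender = request.get_response_sender()

            # Input tensors arrive as pb_utils.Tensor objects; converting to
            # numpy (and back below) happens here, inside execute().
            in_tensor = pb_utils.get_input_tensor_by_name(request, "IN")
            data = in_tensor.as_numpy()

            out_tensor = pb_utils.Tensor("OUT", data.astype(np.float32) * 2.0)
            response = pb_utils.InferenceResponse(output_tensors=[out_tensor])

            # A decoupled model may call send() any number of times; the FINAL
            # flag tells Triton there are no more responses for this request.
            sender.send(response)
            sender.send(flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL)

        # Decoupled models return None from execute(); responses go through
        # the response sender instead.
        return None
```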


