Comments (4)
Hi @ZhanqiuHu,
I am trying to profile our decoupled models (python backend) with perf_analyzer, and I'm curious how the following latency metrics are calculated?
Please see here for the details of the metrics being calculated.
Also, when using the gRPC or HTTP endpoints, is it possible to measure the latency spent on network overhead and on (un)marshalling protobuf?
I believe if you run perf_analyzer with -i grpc, you should see output like this:
*** Measurement Settings ***
Batch size: 1
Service Kind: Triton
Using "time_windows" mode for stabilization
Measurement window: 5000 msec
Using synchronous calls for inference
Stabilizing using average latency
Request concurrency: 1
Client:
Request count: 30375
Throughput: 1685.54 infer/sec
Avg latency: 591 usec (standard deviation 144 usec)
p50 latency: 569 usec
p90 latency: 710 usec
p95 latency: 891 usec
p99 latency: 1105 usec
Avg gRPC time: 578 usec ((un)marshal request/response 6 usec + response wait 572 usec)
Server:
Inference count: 30376
Execution count: 30376
Successful request count: 30376
Avg request latency: 319 usec (overhead 107 usec + queue 26 usec + compute input 46 usec + compute infer 85 usec + compute output 53 usec)
Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 1685.54 infer/sec, latency 591 usec
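To connect the numbers in the client section above: the average gRPC time is the sum of the (un)marshalling time and the response wait, and the gap between it and the end-to-end client latency is perf_analyzer's own client-side overhead. A quick sanity check in Python, using the figures from this run:

```python
# Client-side latency components reported by perf_analyzer above,
# all in microseconds.
unmarshal = 6        # (un)marshal request/response
response_wait = 572  # time spent waiting for the server's response
avg_grpc = 578       # reported "Avg gRPC time"
avg_client = 591     # reported end-to-end "Avg latency"

# The gRPC time decomposes into marshalling work plus response wait.
assert unmarshal + response_wait == avg_grpc

# Whatever remains of the client latency is per-request bookkeeping
# on the client side.
client_overhead = avg_client - avg_grpc
print(client_overhead)  # 13
```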
Thanks a lot for providing the details! I was more interested in what "Compute Input", "Compute Output", and "Network+Server Send/Recv" specifically are. When I use the -i grpc flag, it doesn't seem to report the gRPC time, and I was wondering if that is because I'm using a custom decoupled Python model.
Thank you very much!
For the compute input, compute infer, and compute output metrics, you could read the Triton doc here for more details.
> When I use the -i grpc flag, it doesn't seem to report the gRPC time, and I was wondering if that is because I'm using a custom decoupled Python model.
Yes, you are correct. The gRPC time reports are not supported for decoupled models.
Thanks for the answer! However, the description in the doc seems a little vague. What specific steps are involved in the preprocessing of inputs and outputs? For example, for inputs, is copying/moving the data to the device part of it? And I guess for a decoupled Python model, (de)serialization will be part of the compute infer time rather than the compute input or compute output time?
Thanks!
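If it helps, my understanding (an assumption based on the perf_analyzer docs, not verified against the source) is that the "Network+Server Send/Recv" component is derived rather than measured directly: it is the end-to-end client latency minus the server-side request latency. With the numbers from the run earlier in this thread:

```python
# Figures from the perf_analyzer run above, in microseconds.
avg_client_latency = 591   # end-to-end latency seen by the client
avg_request_latency = 319  # server-side: overhead + queue + compute

# The remainder is attributed to network transfer plus the server's
# reading of the request and writing of the response.
network_send_recv = avg_client_latency - avg_request_latency
print(network_send_recv)  # 272
```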