Comments (12)
Hello,
I'm having the same issue as @serdarildercaglar mentioned above. It would be very helpful for us if this could be fixed, please.
from kserve.
@sivanantha321 Could you please help me with this problem? Please let me know if you can address it.
Will look into it
Hi guys,
I'm having the same problem as @serdarildercaglar. I urgently need a solution. Thanks.
@sivanantha321 Could you please help me with this problem? Please let me know if you can address it.
Hi @bunyaminkeles. Were you able to fix the issue? Is there any progress on your side?
I am about to launch my project to production, but I couldn't fix this issue. If I cannot employ workers, I cannot use multiprocessing.
For now, you can try Ray Serve to take advantage of multiple workers: https://kserve.github.io/website/latest/modelserving/v1beta1/custom/custom_model/#parallel-model-inference. I recommend using KServe 0.11, as the 0.12 release seems broken with Ray Serve.
I used Ray Serve and it handled multiprocessing successfully. However, as the number of replicas increases, resource consumption increases, and the replicas run constantly whether the server is busy or not, which drives up cost. I will manage with Ray Serve until the workers problem is solved.
Is there any work planned to solve the problem with increasing the number of workers?
Thank you very much. Best wishes.
@serdarildercaglar are you using multiprocessing mainly to increase GPU utilization? Just curious about the motivation.
Thanks for the response @yuzisun.
Yes,
I need to increase the number of workers so that requests to the model are processed simultaneously. When I increase the number of workers using FastAPI and send requests at (or near) the same time, the GPU processes them in parallel. With a single worker, requests are processed one by one, so the response time becomes very long.
Since we are using Kubernetes and KServe in our project, it is vital that I can set the number of workers to 2 or more.
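To illustrate the point above, here is a minimal stdlib-only sketch (not KServe code; `predict` is a hypothetical stand-in for a CPU-bound model call) of why simultaneous requests serialize with one worker but run in parallel across a pool of workers:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def predict(x: int) -> int:
    # Stand-in for a CPU-bound model prediction taking ~0.1 s.
    time.sleep(0.1)
    return x * 2

def handle_serially(requests):
    # One worker: requests are processed one by one (~0.1 s each).
    return [predict(r) for r in requests]

def handle_with_workers(requests, workers=3):
    # Multiple workers: simultaneous requests run in parallel,
    # so 3 requests finish in roughly the time of one.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(predict, requests))
```

With 3 simultaneous requests, the serial path takes about 0.3 s while 3 workers take about 0.1 s, on the same machine, which is the latency difference being described.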
Why not set the replica count to 2 or more, as that's how Kubernetes scales? The worker count is mainly for saving expensive compute resources like GPUs by scaling up within the container, but at some point it is bounded by the container's resource limits, and you can't scale as much as you can with Kubernetes replicas.
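As a sketch of the replica-based approach, scaling is set on the InferenceService itself (the name and image below are placeholders, not from this thread):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: custom-model            # placeholder name
spec:
  predictor:
    minReplicas: 2              # scale out with pods instead of in-process workers
    maxReplicas: 4              # autoscaling upper bound
    containers:
      - name: kserve-container
        image: myrepo/custom-model:latest   # placeholder image
```

Each replica is a separate pod, so Kubernetes can schedule them across nodes and scale them down when idle, at the cost of each replica holding its own copy of the model.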
Increasing the number of workers lets the same pod process that many incoming requests simultaneously without allocating more compute resources. Suppose the model generates predictions on CPU and the number of workers is 3. When one request arrives, the model runs normally; when 3 requests arrive at the same time, the same model processes them with 3 workers using the resources already available. If I instead create 3 replicas, all 3 replicas consume resources whether one request comes in or three do.
With my limited resources, using workers (for CPU or GPU) is the best solution for me.
Another issue: on CPU, when I set the number of workers to 2 or more, responses never return and requests wait until timeout. But when I deploy the model in ONNX format, CPU workers work fine; for a plain transformers model, workers do not work.
I may not have explained this fully because of the language barrier. Thank you for trying to help.
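One commonly reported cause of multi-worker hangs with transformers models (not confirmed to be the cause here) is the Hugging Face fast tokenizers deadlocking after the server forks worker processes. A hedged mitigation to try is disabling tokenizer parallelism before starting the server (`model.py` is a placeholder entrypoint):

```shell
# Assumption: the hang comes from tokenizers parallelism + forked workers.
export TOKENIZERS_PARALLELISM=false
# Then start the server with multiple workers, e.g.:
# python model.py --workers 2
```

If the ONNX deployment works with multiple workers but the transformers one does not, a fork-safety issue in the model stack like this is worth ruling out.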