Comments (1)
I'm seeing the same problem, but with Mistral models. After checking the logs:
2024-02-01T10:32:24+0000 [INFO] [api_server:12] 172.17.0.1:33106 (scheme=http,method=POST,path=/v1/generate,type=application/json,length=46) (status=500,type=application/json,length=110) 4624.572ms (trace=98f17d6566d01280e521c85644a9c515,span=33c1901ac96f7b85,sampled=1,service.name=llm-mistral-service)
2024-02-01T10:32:32+0000 [INFO] [runner:llm-mistral-runner:1] _ (scheme=http,method=POST,path=/generate_iterator,type=application/octet-stream,length=1774) (status=200,type=application/vnd.bentoml.stream_outputs,length=) 12304.698ms (trace=98f17d6566d01280e521c85644a9c515,span=ac50ed178f5ec9ac,sampled=1,service.name=llm-mistral-runner)
The runner returns the correct response with status 200, but the api_server returns a 500 error to the client.
from openllm.
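For reference, a minimal sketch of how to reproduce the client-side 500 against the api_server endpoint seen in the log above. The host/port and the request body are assumptions (a bare prompt field); adjust them to your deployment and to the request schema of your OpenLLM version.

import requests

# Hypothetical minimal request against the /v1/generate path from the log above.
resp = requests.post(
    "http://127.0.0.1:3000/v1/generate",  # assumed host/port; use your deployment's address
    json={"prompt": "Hello"},             # assumed minimal body; match your OpenLLM version's schema
    timeout=60,
)
print(resp.status_code)  # observed behaviour: 500 from api_server, even though the runner logs 200
print(resp.text)         # error body returned to the client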
Related Issues (20)
- Can't pass workers_per_resource to the bentoml container HOT 2
- RunnerService: MAX_MODEL_LEN is not reflected to the llm._max_model_len HOT 2
- bug: Requests with "use_beam_search: true" fail with an unclear exception message.
- bug: Error in sending post request for bentoml container service HOT 1
- Runtime error about concurrency
- When attempting to add the CohereForAI/aya-101 model, an error occurred during the loading process.
- openllm_core.exceptions.OpenLLMException: Failed to initialise vLLMEngine due to the following error: Model architectures ['T5ForConditionalGeneration'] are not supported for now. HOT 1
- bug: not generate eos_token when using qwen7B-chat
- bug: start chatglm-6b locally err
- I'm having trouble getting started with openllm, but I don't want to use conda and I have WSL2 HOT 1
- feat: support volta architecture GPUs for the vLLM backend
- Deploying LLM in On-Premises Server to Assist Users to Launch Locally in Work Laptop - Web Browser HOT 3
- Deprecation Warning for PyTorch Backend HOT 2
- FileNotFoundError: [Errno 2] No such file or directory: b'/root/bentoml/models/pt-google--gemma-7b-it/latest'
- feat: c4ai-command-r-v01 support HOT 2
- feat: support Qwen1.5 HOT 1
- feat: any plan to support NPU
- bug: An exception occurred while instantiating runner 'llm-mistral-runner' HOT 1
- bug: Not enough data for satisfy transfer length header HOT 1
- feat: Can you support llama3? HOT 3