Comments (1)
Not sure what you meant, but OpenLLM exposes an OpenAI-compatible endpoint, so you can just use that.
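If you point the openai Python client (v1.x) at the server, a minimal sketch looks like this; the base URL (OpenLLM's default port is 3000), the placeholder API key, and the model name are assumptions about a local setup, not fixed values:

from openai import OpenAI

# Sketch: talk to a locally running OpenLLM server through its
# OpenAI-compatible route. Port 3000 is OpenLLM's default; adjust as needed.
client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="na",  # a local server needs no real key, but the field is required
)

completion = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",  # placeholder: whichever model the server runs
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
)
print(completion.choices[0].message.content)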
Otherwise, for both v1/generate_stream and v1/generate, the request body looks something like this:
{
  "prompt": "What is the meaning of life?",
  "stop": ["\n"],
  "llm_config": {
    "max_new_tokens": 128,
    "min_length": 0,
    "early_stopping": false,
    "num_beams": 1,
    "num_beam_groups": 1,
    "use_cache": true,
    "temperature": 0.75,
    "top_k": 15,
    "top_p": 0.78,
    "typical_p": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "diversity_penalty": 0,
    "repetition_penalty": 1,
    "encoder_repetition_penalty": 1,
    "length_penalty": 1,
    "no_repeat_ngram_size": 0,
    "renormalize_logits": false,
    "remove_invalid_values": false,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "encoder_no_repeat_ngram_size": 0,
    "n": 1,
    "best_of": null,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "use_beam_search": false,
    "ignore_eos": false,
    "skip_special_tokens": true
  },
  "adapter_name": null
}
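As a rough usage sketch, here is how you might POST that payload with Python's requests; the host/port (OpenLLM's default is 3000) and the trimmed-down llm_config are assumptions, and the exact framing of streamed chunks may differ from what iter_lines yields:

import requests

# Sketch: call OpenLLM's v1/generate with the payload shown above.
# Assumes a local server on the default port 3000.
payload = {
    "prompt": "What is the meaning of life?",
    "stop": ["\n"],
    # presumably any subset of the llm_config fields above can be sent;
    # omitted fields should fall back to the model's defaults
    "llm_config": {"max_new_tokens": 128, "temperature": 0.75},
    "adapter_name": None,
}

resp = requests.post("http://localhost:3000/v1/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())

# v1/generate_stream takes the same payload but streams the response,
# so read it incrementally:
with requests.post(
    "http://localhost:3000/v1/generate_stream", json=payload, stream=True, timeout=60
) as stream:
    for line in stream.iter_lines():
        if line:
            print(line.decode(), flush=True)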
Related Issues (20)
- openllm_core.exceptions.OpenLLMException: Failed to initialise vLLMEngine due to the following error: Model architectures ['T5ForConditionalGeneration'] are not supported for now.
- bug: not generate eos_token when using qwen7B-chat
- bug: start chatglm-6b locally err
- I'm having trouble getting started with openllm, but I don't want to use conda and I have WSL2
- feat: support volta architecture GPUs for the vLLM backend
- Deploying LLM in On-Premises Server to Assist Users to Launch Locally in Work Laptop - Web Browser
- Deprecation Warning for PyTorch Backend
- FileNotFoundError: [Errno 2] No such file or directory: b'/root/bentoml/models/pt-google--gemma-7b-it/latest'
- feat: c4ai-command-r-v01 support
- feat: support Qwen1.5
- feat: any plan to support NPU
- bug: An exception occurred while instantiating runner 'llm-mistral-runner'
- bug: Not enough data for satisfy transfer length header
- feat: Can you support llama3?
- bug: WARNING: openllm 0.4.44 does not provide the extra 'gemma'
- feat: support LMDeploy backend
- bug: error coming up while install the vllm using pip install "openllm[vllm]"
- For AMD/GPU, how to use multi GPUS in the api_server.py
- bug: pip package version issues
- feat: Multimodal LLMs?