Comments (5)
For general use, as shown in most examples, you need a local ollama server running before you can continue.
To do this:
- Download ollama: https://ollama.com/
- In your terminal, run an LLM:
  - See the available LLMs: https://ollama.com/library
  - Example: ollama run llama2
  - Example: ollama run llama2:70b
- If you want to use a non-local server (or a different local one), see the docs on Custom Client
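As a rough sketch of what pointing at a non-local server involves (the host address below is a hypothetical example): the Custom Client takes a base URL, and under the hood a chat request is a POST to the server's /api/chat endpoint. The snippet only builds the request, so it runs without a server.

```python
import json

# With ollama-python you would point a Client at a custom host:
#   from ollama import Client
#   client = Client(host='http://my-ollama-box:11434')  # hypothetical address
#   response = client.chat(model='llama2', messages=[...])
#
# Under the hood this is a POST to {host}/api/chat with a JSON body:
host = 'http://my-ollama-box:11434'  # hypothetical address
payload = {
    'model': 'llama2',
    'messages': [{'role': 'user', 'content': 'Why is the sky blue?'}],
    'stream': False,
}
print('POST', host + '/api/chat')
print(json.dumps(payload, indent=2))
```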
from ollama-python.
This is verbiage from PR #64.
It is also worth noting that you are using an await. Are you using an async client? For a non-async client you do not need await:
import ollama

response = ollama.chat(model='llama2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
For an async client, you do need await:
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())
from ollama-python.
@connor-makowski Thanks for your feedback. I tried both solutions (sync and async clients). The problem is that when the messages list includes a message with role 'system', the LLM does not give an answer.
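For reference, a system message is just another entry in the messages list, placed before the user message. A minimal sketch of the shape (the system prompt text here is made up, and the actual chat call is left as a comment since it needs a running server):

```python
# A 'system' entry sets the assistant's behavior; the 'user' entry asks the question.
messages = [
    {'role': 'system', 'content': 'You are a concise assistant.'},
    {'role': 'user', 'content': 'Why is the sky blue?'},
]

# With a local server running, you would pass the list as usual:
#   import ollama
#   response = ollama.chat(model='llama2', messages=messages)
#   print(response['message']['content'])
print([m['role'] for m in messages])
```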
What model are you using?
Your snippet doesn't stream. Is it possible the LLM is responding but hasn't completed yet? In this mode, ollama waits until it has the full response before returning from the call. This could look like a non-response if the model is also generating tokens at a slow rate (due to hardware limitations).
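To rule out the slow-generation case, streaming prints tokens as they arrive instead of waiting for the full reply. A sketch of the pattern: the chunk shape below mirrors what ollama's stream=True mode yields, but the chunks are simulated so the snippet runs without a server.

```python
def collect(chunks):
    """Join streamed message chunks into the full reply text."""
    return ''.join(chunk['message']['content'] for chunk in chunks)

# With a running server you would stream like this:
#   import ollama
#   for chunk in ollama.chat(model='llama2',
#                            messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
#                            stream=True):
#       print(chunk['message']['content'], end='', flush=True)

# Simulated chunks in the same shape:
fake_stream = [{'message': {'content': t}} for t in ('The sky ', 'is blue ', 'because...')]
print(collect(fake_stream))
```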
Related Issues (20)
- Please do not append port 80 or 443 when I set a custom host, as this can lead to errors on certain gateways.
- tool calls integration broken when empty HOT 6
- Update pypi HOT 2
- Pass a list of functions/tools to client like in the API HOT 1
- Decorator to generate schemas for tools HOT 10
- Bug in tool_calls response parsing HOT 3
- Provide a way to override system prompt at runtime
- Ollama in combination with Mistral NeMo is making up weird questions on its own
- Persistent chat memory HOT 7
- Client's host doesn't work when the host is behind a proxy server
- Avoid the need to download ollama manually HOT 2
- 'timed out waiting for llama runner to start' in ~6 minutes when trying to load large model
- Reproducible output cannot be achieved in model Mixtral
- Version is set to 0.0.0 in pyproject.toml
- create modelfile, this is why HOT 1
- Inconsistent API Behavior
- how to make batch request with python client?
- Return python object instead of JSON HOT 1
- tools not available in ollama HOT 2
- Add a 'stop' method to abort model loading