Comments (10)
Hi @Ftrybe! Yes, have you used it in Python before? I'm seeing this in the documentation:
```python
import os
import openai

openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.getenv("OPENAI_API_BASE")  # Your Azure OpenAI resource's endpoint value.
openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # The deployment name you chose when you deployed the GPT-35-Turbo or GPT-4 model.
    messages=[
        {"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
        {"role": "user", "content": "Who were the founders of Microsoft?"}
    ]
)

print(response)
print(response['choices'][0]['message']['content'])
```
Is this the most minimal possible use of it? Seems like a lot. Let me know if you can use it by just switching the `api_type`, `api_base`, and `engine`. I wouldn't want to lock us into that specific API version or something, if it isn't necessary.
For Open Interpreter, I could see this being implemented in this way:

```python
interpreter.model = "azure-gpt-35-turbo"  # "azure-" followed by your deployment name
interpreter.api_base = os.getenv("OPENAI_API_BASE")  # Azure OpenAI resource's endpoint value
interpreter.api_key = os.getenv("OPENAI_API_KEY")  # Same as usual
```
Then on the CLI:
```shell
interpreter --model azure-gpt-35-turbo --api_base ... --api_key ...
```
Or you could edit `interpreter`'s config file with `interpreter config`, which would let you define a YAML with these defaults, so it ran with your Azure deployment every time.
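As a sketch, that YAML might hold nothing more than the same three defaults. The key names below just mirror the Python attributes above and are assumptions — none of this exists yet:

```yaml
# Hypothetical config defaults for an Azure deployment (illustrative only)
model: azure-gpt-35-turbo            # "azure-" followed by your deployment name
api_base: https://your-resource.openai.azure.com   # your Azure OpenAI endpoint
api_key: YOUR_AZURE_OPENAI_KEY
```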
None of the above is implemented, I just wanted to run it by you before building it. Does that seem intuitive/easy to use?
Thanks again!
K
from open-interpreter.
Azure Update
To anyone searching for this, we have a new way of connecting to Azure! ↓
https://docs.openinterpreter.com/language-model-setup/hosted-models/azure
@Vybo @Ftrybe @nick917 and @SimplyJuanjo:
Happy to report that Open Interpreter now supports Azure deployments, thanks to the incredible @ifsheldon (many thanks, great work feng!)
Read the Azure integration docs here.
To use, simply upgrade your interpreter, then run with `--use-azure`:
```shell
pip install --upgrade open-interpreter
interpreter --use-azure
```
Thanks.
I was looking for Azure support as well and the suggested functionality looks awesome, thank you as well!
I think it should work using the Azure API. I have tried:
```python
openai.api_key = os.getenv("AZURE_OPENAI_KEY")
openai.api_version = "2023-08-01-preview"
openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")

response = openai.ChatCompletion.create(
    engine='gpt-35-turbo',  # can be replaced by gpt-4
    messages=messages,
    functions=[function_schema],
    function_call="auto",
    stream=True,
    temperature=self.temperature,
)

print(response['choices'][0]['message']['content'])
```
The last line does not work if `stream=True`, which causes `response` to be a generator. There is more work to do after line 340 in `interpreter.py`, since that code is not compatible with streaming.
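For reference, a streamed response in the pre-1.0 `openai` SDK has to be consumed chunk by chunk rather than indexed like a dict. A minimal sketch of accumulating the content deltas (the chunk shape below matches the old `ChatCompletion` streaming payloads; the helper name is my own):

```python
def accumulate_stream(chunks):
    """Collect the content deltas from a ChatCompletion stream into one string.

    Each streamed chunk carries choices[0]['delta'], which may hold a 'role',
    a piece of 'content', or be empty on the final chunk.
    """
    parts = []
    for chunk in chunks:
        delta = chunk['choices'][0].get('delta', {})
        if 'content' in delta:
            parts.append(delta['content'])
    return ''.join(parts)

# With a real call, you would pass the generator directly:
# text = accumulate_stream(openai.ChatCompletion.create(..., stream=True))
```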
Great work. keep it up.
+1. I am also looking into the code, hopefully I can implement this and make a PR when I have some time.
> Then on the CLI:
> `interpreter --model azure-gpt-35-turbo --api_base ... --api_key ...`
@KillianLucas Probably we just need a flag for the interpreter? Say, `interpreter --use-azure`, then on the first launch, instead of asking whether to use GPT-4 or Code Llama, we could just ask for the API key, API base, and deployment name (engine name).
Yeah, this one would be amazing, since some users can get access to the 32k API via Azure more easily than with OpenAI.
Planned support for 32k GPT-4 or LLaMA models?
In the latest code, I noticed that you have added configurations for Azure GPT, including a configurable `azure_api_base`. Would you consider renaming it to a more generic term? I'd like the flexibility to access it through third-party API endpoints, such as those hosted by the one-api service. Thanks.
@Ftrybe absolutely! On the roadmap. We're trying to figure out a unified `--model` command that should connect to an LLM across one-api, HuggingFace, localhost, etc.
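One way such a unified `--model` flag could dispatch across providers is a simple prefix convention, in the spirit of the `azure-gpt-35-turbo` example above. This is purely an illustrative sketch — the prefix names and the routing scheme are assumptions, not Open Interpreter's actual implementation:

```python
def parse_model(model: str):
    """Split a model string like "azure-gpt-35-turbo" into (provider, model_name).

    The prefix table is hypothetical; bare names fall through to OpenAI.
    """
    prefixes = {
        "azure-": "azure",            # Azure deployment name follows the prefix
        "huggingface/": "huggingface",
        "local/": "localhost",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider, model[len(prefix):]
    return "openai", model
```

A router like this keeps the CLI surface to a single flag while letting each backend keep its own credentials and endpoint settings.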