This program serves as a simple conversational assistant powered by OpenAI's GPT-3.5 language model. It allows users to have interactive conversations by sending queries and receiving responses generated by the model. The code is designed to be easy to understand for novice programmers.
Before running the program, ensure that you have the following dependencies installed:
- Python (version 3.6 or higher)
- `python-dotenv` library (`pip install python-dotenv`)
- `openai` library (`pip install openai`)
Additionally, you need to set up an OpenAI API key by creating an account on the OpenAI website. Once you have your API key, create a file named `.env` in the same directory as the code and add the following line, replacing `YOUR_API_KEY` with your actual API key:

```
OpenAIKey=YOUR_API_KEY
```
- Import the required libraries:

```python
import os
from dotenv import load_dotenv
import openai
import json
```
- Load the API key from the `.env` file:

```python
load_dotenv()
openai.api_key = os.getenv('OpenAIKey')
```
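If the `.env` file is missing or misnamed, `os.getenv` silently returns `None` and the API call later fails with a confusing authentication error. An optional guard, shown here as a sketch (the `require_api_key` helper is not part of the program above), makes the failure obvious up front:

```python
import os

def require_api_key(env=os.environ):
    """Return the OpenAI API key, failing fast with a clear message if it is absent."""
    key = env.get("OpenAIKey")
    if not key:
        raise RuntimeError("OpenAIKey not set - check that your .env file exists and was loaded")
    return key
```

Passing the environment as a parameter also lets you test the check without touching real credentials.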
- Define the function descriptions:

```python
function_descriptions = [
    {
        "name": "function_1",
        "description": "When the user asks to run function one",
        "parameters": {
            "type": "object",
            "properties": {
                "one": {
                    "type": "string",
                    "description": "User asked to run function one, with the required argument one"
                },
            },
            "required": ["one"]
        }
    },
    {
        "name": "function_2",
        "description": "When the user asks to run function two",
        "parameters": {
            "type": "object",
            "properties": {
                "one": {
                    "type": "string",
                    "description": "User asked for function two with argument one"
                },
                "two": {
                    "type": "string",
                    "description": "User asked for function two with argument two"
                },
            },
        }
    }
]
```
- Define the `function_call` function to handle specific function calls. The model returns its arguments as a JSON string, so parse them with `json.loads` rather than `eval` (which would execute arbitrary model output and chokes on JSON literals like `true` or `null`):

```python
def function_call(ai_response):
    call = ai_response["choices"][0]["message"]["function_call"]
    function_name = call["name"]
    # The arguments arrive as a JSON string; parse them safely.
    arguments = json.loads(call["arguments"])
    if function_name == "function_1":
        one = arguments.get("one")
        print(f"Function 1 {one}")
        return f"Function 1 ran with {one}"
    elif function_name == "function_2":
        one = arguments.get("one")
        two = arguments.get("two")
        print(f"Function 2 {one}{two}")
        return f"Function 2 ran with {one}{two}"
    return None
```
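To see why the parsing step matters, here is a minimal standalone illustration with a hand-written payload shaped like the model's `function_call` field (the values are hypothetical, not real API output):

```python
import json

# Hypothetical function_call payload, in the shape the model returns:
# "arguments" is a JSON *string*, not a dict.
sample_call = {"name": "function_2", "arguments": '{"one": "foo", "two": "bar"}'}

args = json.loads(sample_call["arguments"])
print(args.get("one"), args.get("two"))  # foo bar
```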
- Define the `feedback` function to process user queries and generate responses. It loops while the model keeps requesting function calls, appending each function's result to the conversation (the assistant's own function-call message is appended first, as the Chat Completions API expects):

```python
def feedback(query):
    messages = [{"role": "user", "content": query}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=function_descriptions,
        function_call="auto"
    )
    # Keep calling the model until it stops requesting function calls.
    while response["choices"][0]["finish_reason"] == "function_call":
        function_response = function_call(response)
        # Record the assistant's function-call message, then the function result.
        messages.append(response["choices"][0]["message"])
        messages.append({
            "role": "function",
            "name": response["choices"][0]["message"]["function_call"]["name"],
            "content": json.dumps(function_response)
        })
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
            functions=function_descriptions,
            function_call="auto"
        )
    print("\n" + response['choices'][0]['message']['content'].strip())
```
- Start the conversational loop:

```python
while True:
    user_input = input("User: ")
    feedback(user_input)
```
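The bare `while True` loop can only be stopped with Ctrl+C. One possible refinement, sketched below, adds a quit word and takes the handler as a parameter so the loop logic can be exercised without live API calls (the `run_loop` name is ours, not part of the original program):

```python
def run_loop(prompts, handler):
    """Feed each prompt to handler until the user types 'quit'; return the prompts handled."""
    handled = []
    for prompt in prompts:
        if prompt.strip().lower() == "quit":
            break
        handler(prompt)
        handled.append(prompt)
    return handled

# In the real program you might drive it with:
#   run_loop(iter(lambda: input("User: "), None), feedback)
```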
- Ensure that you have fulfilled the prerequisites mentioned above.
- Save the code in a file with a `.py` extension, e.g., `conversational_assistant.py`.
- Open a terminal or command prompt and navigate to the directory where you saved the file.
- Run the program using the following command:

```
python conversational_assistant.py
```

- The program will start and prompt you to enter your queries. Type your query and press Enter to receive a response from the conversational assistant.
- The program will continue to prompt for queries and provide responses until you terminate it by pressing Ctrl+C or stopping the execution in your development environment.
Feel free to modify the function descriptions, add more conversational logic, or customize the program according to your needs. Enjoy interacting with your OpenAI Conversational Assistant!