
Comments (10)

Davy-Chendy commented on August 19, 2024

> Thank you for the reply! However, I chose solution 2 ... openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-4/chat/completions)

I upgraded openai to the latest version using pip install --upgrade openai and modified the fetch_GPT_response function as follows:

import openai
from tenacity import retry, stop_after_attempt, wait_random_exponential

# openai >= 1.0 replaces the module-level ChatCompletion API with a client object
client = openai.OpenAI(api_key="your api key")

@retry(wait=wait_random_exponential(min=10, max=30), stop=stop_after_attempt(5))
def fetch_GPT_response(instruction, system_prompt, chat_model_id, chat_deployment_id, temperature=0):
    print('Calling OpenAI...')
    # chat_deployment_id is unused here but kept so the existing callers still work
    response = client.chat.completions.create(
        model=chat_model_id,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": instruction}
        ],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip()

This modification was effective.
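If you would rather not hardcode the key, here is a minimal sketch of reading it from the existing .gpt_config.env file instead; this assumes python-dotenv is installed and that the file follows the API_KEY= template shown later in this thread (the ~/.gpt_config.env path is only an example, point it wherever your config.yaml does):

import os

import openai
from dotenv import load_dotenv  # pip install python-dotenv

# Load API_KEY from the env file the repo already uses, instead of hardcoding it.
# The path here is an assumption; match it to the path in your config.yaml.
load_dotenv(os.path.expanduser("~/.gpt_config.env"))

client = openai.OpenAI(api_key=os.environ["API_KEY"])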


karthiksoman commented on August 19, 2024

Hi @cswangxiaowei
Here the issue is that your API_KEY is not accessible to the openai library for authentication. This is because your .gpt_config.env file is in the /workspaces/KG_RAG folder. There are two solutions:

Solution 1:
Move .gpt_config.env to your $HOME folder and retry running it.
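For example, something along these lines (paths taken from the session above):

mv /workspaces/KG_RAG/.gpt_config.env $HOME/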

Solution 2:
Keep .gpt_config.env where it currently is, but update the corresponding line of config.yaml with the correct path to your .gpt_config.env file (see the example below).
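A sketch of what that entry could look like; the key name GPT_CONFIG_FILE is an assumption on my part, so use whichever key your copy of config.yaml actually defines for the env-file path:

# config.yaml -- point the env-file entry at the file's actual location
GPT_CONFIG_FILE: '/workspaces/KG_RAG/.gpt_config.env'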

On a different note:
Since the objective of KG-RAG is to address biomedical queries, I would strongly suggest trying some disease-related queries to test it (instead of non-biomedical queries like "Who are you?"; for such queries you can use direct GPT calls).
An example query can be (feel free to change it as per your interest):
What are the genes associated with multiple sclerosis?

Let me know if this issue gets resolved for you.


cswangxiaowei commented on August 19, 2024

Thank you for the reply!
However, I chose solution 2 and tried to ask: What are the genes associated with multiple sclerosis? but received the error again. Besides, I tried to use the GPT-4 API in other repos and it works.
The error is:
(kg_rag) @cswangxiaowei ➜ /workspaces/KG_RAG (main) $ python -m kg_rag.rag_based_generation.GPT.text_generation -g "gpt-4"

Enter your question : What are the genes associated with multiple sclerosis?
Retrieving context from SPOKE graph...
Here is the KG-RAG based answer:

Calling OpenAI...
Calling OpenAI...
Calling OpenAI...
Calling OpenAI...
Calling OpenAI...
Traceback (most recent call last):
File "/home/codespace/.local/lib/python3.10/site-packages/tenacity/init.py", line 382, in call
result = fn(*args, **kwargs)
File "/workspaces/KG_RAG/kg_rag/utility.py", line 183, in fetch_GPT_response
response = openai.ChatCompletion.create(
File "/home/codespace/.local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/codespace/.local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "/home/codespace/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/codespace/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/home/codespace/.local/lib/python3.10/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-4/chat/completions)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/conda/envs/kg_rag/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/envs/kg_rag/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspaces/KG_RAG/kg_rag/rag_based_generation/GPT/text_generation.py", line 56, in
main()
File "/workspaces/KG_RAG/kg_rag/rag_based_generation/GPT/text_generation.py", line 48, in main
output = get_GPT_response(enriched_prompt, SYSTEM_PROMPT, CHAT_MODEL_ID, CHAT_DEPLOYMENT_ID, temperature=TEMPERATURE)
File "/home/codespace/.local/lib/python3.10/site-packages/joblib/memory.py", line 655, in call
return self._cached_call(args, kwargs)[0]
File "/home/codespace/.local/lib/python3.10/site-packages/joblib/memory.py", line 598, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/home/codespace/.local/lib/python3.10/site-packages/joblib/memory.py", line 856, in call
output = self.func(*args, **kwargs)
File "/workspaces/KG_RAG/kg_rag/utility.py", line 203, in get_GPT_response
return fetch_GPT_response(instruction, system_prompt, chat_model_id, chat_deployment_id, temperature)
File "/home/codespace/.local/lib/python3.10/site-packages/tenacity/init.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/home/codespace/.local/lib/python3.10/site-packages/tenacity/init.py", line 379, in call
do = self.iter(retry_state=retry_state)
File "/home/codespace/.local/lib/python3.10/site-packages/tenacity/init.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7fae2c858e20 state=finished raised InvalidRequestError>]


karthiksoman commented on August 19, 2024

Can you please double-check that the path of your .gpt_config.env is the same as the one provided in the config.yaml file?

Also, here is the template of the .gpt_config.env file:
API_KEY=<your API key>
API_VERSION=<API version, this is totally optional>
RESOURCE_ENDPOINT=<API resource endpoint, this is totally optional>
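For a plain (non-Azure) OpenAI key, a filled-in file can be as minimal as the line below; the key shown is a placeholder, and the two optional entries are simply omitted:

API_KEY=sk-xxxxxxxxxxxxxxxx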


Davy-Chendy commented on August 19, 2024

> Thank you for the reply! However, I chose solution 2 ... openai.error.InvalidRequestError: Invalid URL (POST /v1/engines/gpt-4/chat/completions)

I have the same problem as you. Did you solve it?


karthiksoman commented on August 19, 2024

Thanks @Davy-Chendy for the fix!
May I know which openai version you used for this? Also, is the existing fetch_GPT_response function compatible with the openai version that you used?


Davy-Chendy commented on August 19, 2024

> Thanks @Davy-Chendy for the fix! May I know which openai version you used for this? Also, is the existing fetch_GPT_response function compatible with the openai version that you used?

My openai package is the latest version as of now (2024-03-02), installed using the command pip install --upgrade openai, which is 1.13.3. The existing fetch_GPT_response function is not compatible with this openai version because the usage of some functions in the new version of the openai library has changed. For details, please refer to my reply above, or openai's official documentation.
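As an illustration of what changed (a generic sketch, not the repo's exact code), the pre-1.0 module-level call versus the 1.x client-based call:

# openai < 1.0 style (what the original utility.py was written against;
# calling this under 1.x raises an APIRemovedInV1 error)
import openai
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are the genes associated with multiple sclerosis?"}],
)
print(response["choices"][0]["message"]["content"])

# openai >= 1.0 style (e.g. 1.13.3) -- the module-level ChatCompletion is gone
from openai import OpenAI
client = OpenAI()  # reads OPENAI_API_KEY from the environment by default
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What are the genes associated with multiple sclerosis?"}],
)
print(response.choices[0].message.content)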


karthiksoman commented on August 19, 2024

Thanks @Davy-Chendy
And are you able to get the KG-RAG based response using the latest openai version?


Davy-Chendy commented on August 19, 2024

> Thanks @Davy-Chendy. And are you able to get the KG-RAG based response using the latest openai version?

yep


karthiksoman commented on August 19, 2024

Awesome! Then I am closing this issue since it is resolved!

