Comments (19)
Hi @rotemweiss57, thank you so much for the prompt reply! Indeed, I'm using GPT-3, as I can't currently access the GPT-4 API due to whatever limit OpenAI has in place. Thank you for helping me with the issue; the package works excellently. I'm going to test it as a way to compile sources for a manuscript I'm working on. I'll update you on how well it works, and hopefully I'll be able to use GPT-4 soon.
Thanks again!
Hi @AHerik, of course! Do you currently have GPT-4 working, or are you using GPT-3?
If you have already changed the config file to gpt-3 (based on your error, I assume you have), all you need to do is open /agent/prompts.py and look for the function generate_search_queries_prompt(question).
Inside that function, change this line:
"You must respond with a list of strings in the following format: ["query 1", "query 2", "query 3", "query 4"]"
to:
"You must respond only with a list of strings in the following json format: ["query 1", "query 2", "query 3", "query 4"]"
Again, it is best to use GPT-4, but this should be stable enough.
Hope it helps!
Yeah, it shows no access. They said: "we gave all API users who have a history of successful payments access to the GPT-4 API (8k). We plan to open up access to new developers by the end of July 2023".
In the meantime, you can change it to gpt-3.5 in the config file just to see that it runs.
I made the following change in the config file:
- self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
+ self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-3.5-turbo-16k")
But I get the following error:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
result = await self.app(self.scope, self.receive, self.send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 341, in handle
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 82, in app
await func(session)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/routing.py", line 324, in app
await dependant.call(**values)
File "/home/ec2-user/gpt-researcher/main.py", line 50, in websocket_endpoint
await manager.start_streaming(task, report_type, agent, websocket)
File "/home/ec2-user/gpt-researcher/agent/run.py", line 38, in start_streaming
report, path = await run_agent(task, report_type, agent, websocket)
File "/home/ec2-user/gpt-researcher/agent/run.py", line 43, in run_agent
check_openai_api_key()
File "/home/ec2-user/gpt-researcher/config/config.py", line 82, in check_openai_api_key
exit(1)
File "/usr/lib64/python3.9/_sitebuiltins.py", line 26, in __call__
raise SystemExit(code)
SystemExit: 1
I can't see what is causing this in the code.
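Reading the traceback bottom-up: run_agent() calls check_openai_api_key(), which reaches exit(1) at config/config.py line 82. Judging by its name, that function presumably does something like the following (a reconstruction from the trace, not the repo's exact code), so the first thing to verify is that OPENAI_API_KEY is actually set in the environment the server runs under:

import os

def check_openai_api_key():
    """ Exit if the OpenAI API key is missing (reconstructed from the traceback). """
    if not os.getenv("OPENAI_API_KEY"):
        print("OPENAI_API_KEY environment variable is not set.")
        exit(1)  # the exit(1) at config/config.py line 82 in the trace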
Please do the following:
self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo-16k")
self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-3.5-turbo-16k")
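A side note: since both values are read with os.getenv, you can get the same effect without editing the file by setting the variables in the environment the server starts under. In Python terms (e.g. near the top of main.py, assuming it runs before the config is constructed):

import os

# Environment values win over the defaults in config.py, because
# os.getenv(name, default) only falls back when the variable is unset.
os.environ.setdefault("FAST_LLM_MODEL", "gpt-3.5-turbo-16k")
os.environ.setdefault("SMART_LLM_MODEL", "gpt-3.5-turbo-16k")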
That's what I did. I also found that gpt-4 is referenced in research_agent.py, and changed it there as well. Now I'm getting the following error:
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/protocols/websockets/wsproto_impl.py", line 249, in run_asgi
result = await self.app(self.scope, self.receive, self.send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/applications.py", line 289, in __call__
await super().__call__(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
raise e
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 341, in handle
await self.app(scope, receive, send)
File "/home/ec2-user/.local/lib/python3.9/site-packages/starlette/routing.py", line 82, in app
await func(session)
File "/home/ec2-user/.local/lib/python3.9/site-packages/fastapi/routing.py", line 324, in app
await dependant.call(**values)
File "/home/ec2-user/gpt-researcher/main.py", line 50, in websocket_endpoint
await manager.start_streaming(task, report_type, agent, websocket)
File "/home/ec2-user/gpt-researcher/agent/run.py", line 39, in start_streaming
report, path = await run_agent(task, report_type, agent, websocket)
File "/home/ec2-user/gpt-researcher/agent/run.py", line 53, in run_agent
await assistant.conduct_research()
File "/home/ec2-user/gpt-researcher/agent/research_agent.py", line 133, in conduct_research
search_queries = await self.create_search_queries()
File "/home/ec2-user/gpt-researcher/agent/research_agent.py", line 91, in create_search_queries
return json.loads(result)
File "/usr/lib64/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib64/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 81)
Looks like gpt-3.5 is returning something that doesn't conform to JSON format? Seems strange. I can't see what is being decoded; I'm not proficient in Python, and so far my efforts to debug this with print() statements haven't been successful (stdout seems to be redirected somewhere). I also tried import logging; logging.info(), but that output is also getting swallowed somewhere.
Yes, if you look at generate_search_queries_prompt in prompts.py, it asks the model to produce the list of queries in a certain format. It is completely stable with GPT-4 but not with gpt-3.5. I will dig into making the prompt more stable with gpt-3.5; in the meantime, try running it again.
Add a print here to debug it:
async def create_search_queries(self):
    """ Creates the search queries for the given question.
    Args: None
    Returns: list[str]: The search queries for the given question
    """
    result = await self.call_agent(prompts.generate_search_queries_prompt(self.question))
    print(result)
    await self.websocket.send_json({"type": "logs", "output": f"🧠 I will conduct my research based on the following queries: {result}..."})
    return json.loads(result)
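If you want the parsing to be more forgiving while the prompt is being tightened, one option is to fall back to extracting the first JSON array from the response. This is only a sketch, not code from the repo, and parse_queries is a hypothetical helper:

import json
import re

def parse_queries(result):
    """ Parse the model output; if extra text surrounds or repeats the
    JSON array, fall back to the first [...] block in the response. """
    try:
        return json.loads(result)
    except json.JSONDecodeError:
        match = re.search(r"\[.*?\]", result, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise

create_search_queries could then end with return parse_queries(result) instead of return json.loads(result).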
I added
    print(f"result = {result}", file=sys.stderr, flush=True)
after the self.call_agent call (with import sys at the top of the file), and got what looks like an invalid JSON string:
result = ["<query1>"] ["<query2>"] ["<query3>"] ["<query4>"]
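That output explains the "Extra data" error exactly: json.loads accepts a single top-level JSON value, so it parses the first array and then raises at the position where the next one begins. A minimal reproduction of the same failure mode:

import json

json.loads('["a"] ["b"]')
# json.decoder.JSONDecodeError: Extra data: line 1 column 7 (char 6)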
This edit to the prompt is slightly more stable. Please edit generate_search_queries_prompt in the meantime:
"You must respond only with a list of strings in the following json format: ["query 1", "query 2", "query 3", "query 4"]"
Thank you!
That helped; now I'm getting "cannot find Chrome binary". I'll install Chrome now and report back.
Hmm, it didn't throw an error, but it generated an empty report:
Agent Output
Thinking about research questions for the task...
Total research words: 679
Writing research_report for research task: <my input>
End time: 2023-07-10 23:04:24.691949
Total run time: 0:00:01.192128
Research Report
from gpt-researcher.
Please empty the output directory and run again.
Now it started downloading things for the research, but I got this error:
An error occurred while processing the url <url>: [Errno 26] Text file busy: '/home/ec2-user/.wdm/drivers/chromedriver/linux64/114.0.5735.90/chromedriver'
Just in case: this is running on an Amazon Linux instance.
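"Text file busy" usually means something is still writing to, or already executing, the chromedriver binary while another thread tries to launch it; with webdriver-manager, parallel scraping tasks can race on the download-and-extract step. One common workaround is to serialize driver setup behind a lock. This is only a sketch of the idea, not the repo's code, and create_driver is a hypothetical helper:

import threading

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

_driver_lock = threading.Lock()

def create_driver():
    """ Serialize chromedriver setup so parallel scrapes don't race on the
    binary (the likely cause of [Errno 26] Text file busy). """
    with _driver_lock:
        service = Service(ChromeDriverManager().install())
    return webdriver.Chrome(service=service)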
By the way, thanks for the hand-holding, you're awesome!
Of course! Can you try running the project locally?
I'm running it on an AWS instance; I can't run it locally. My Mac is too old: Homebrew no longer supports my version of macOS, and I can't install Python 3.
Got it. Okay, I will try playing with an instance as well and update you. We haven't deployed it on an instance yet, so if you have any insights, feel free to share!
It's cheaper to use Docker; I don't want you spending money on my behalf. :)
(I should try that too, it's just that my drive is almost full, haha. I should replace my Mac.)
Hi @firerish @rotemweiss57! I'm having the same issue:
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 51)
I can run the project locally on my Mac, but I wasn't able to follow the modifications you suggested, which might have worked locally. Sorry about that; I'm not very experienced with package development. Could you please suggest some changes I can make to prompts.py to see if it works on my end? Thanks for this wonderful package, by the way; I saw it on LinkedIn and was immediately interested!
Happy this is resolved. Closing this thread for now.