farizrahman4u / loopgpt
Modular Auto-GPT Framework
License: MIT License
Streamlit UI should be open
(loop_gpt) billion@billion-ThinkPad-E14-Gen-2:~/loopgpt/loopgpt$ python3 loops/ui.py
Traceback (most recent call last):
File "loops/ui.py", line 3, in
from loopgpt.constants import PROCEED_INPUT
ImportError: cannot import name 'PROCEED_INPUT' from 'loopgpt.constants' (/loopgpt/loop_gpt/lib/python3.8/site-packages/loopgpt/constants.py)
Run command: python3 loops/ui.py
latest
Hi, could you include an environment file that can be edited directly?
OpenAI's ChatGPT is a fantastic tool for this application, but is there any development towards using a locally hosted language model like Dolly or Vicuna?
I'd have to look through the code more, but a locally hosted server could act like an API to ease swapping out the ChatGPT API. If the code uses OOP, maybe another module for making requests would be enough, plus another class for local models; or is that on the user to set up, with the response templates loopGPT requires? (A rough sketch of the idea follows below.)
Generally I'd like to use this to aid research for commercial projects that are data sensitive.
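A rough sketch of the pluggable local-model idea mentioned above: a class that wraps a locally hosted server exposing an OpenAI-style chat endpoint. Everything here (class name, endpoint URL, method signature) is an assumption for illustration, not loopgpt's actual model interface.

import requests

class LocalChatModel:
    """Wraps a locally hosted model (e.g. Vicuna) behind the same call shape as the ChatGPT API."""

    def __init__(self, endpoint="http://localhost:8000/v1/chat/completions", model="vicuna-13b"):
        self.endpoint = endpoint  # hypothetical local server URL
        self.model = model

    def chat(self, messages, max_tokens=None, temperature=0.8):
        # messages uses the same format as OpenAI's chat API:
        # [{"role": "user", "content": "..."}]
        resp = requests.post(
            self.endpoint,
            json={
                "model": self.model,
                "messages": messages,
                "max_tokens": max_tokens,
                "temperature": temperature,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

Swapping models would then be a matter of pointing the request-making module at an instance of a class like this instead of the OpenAI client.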
Current code limits the models to "gpt-3.5-turbo", "gpt-4" and "gpt-4-32k". In some cases like summary, it kind of defaults to "gpt-3.5-turbo".
But there are other OpenAI models, right now and in the future too.
Especially to read long web pages, I would like to use "gpt-3.5-turbo-16k". It gets rejected by the LoopGPT module code, although it would work with OpenAI just fine.
thanks
Refactor the code so that all models supported by OpenAI can be used, at least the chat models, as this is what your architecture expects.
You could add an early check that the model is supported using openai.Model.list()
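A minimal sketch of that early check, assuming the pre-1.0 openai Python client (where openai.Model.list() is available); the helper name is made up:

import openai

def validate_model(model: str) -> None:
    # Ask the API which models this key can access and fail early if ours isn't among them.
    available = {m["id"] for m in openai.Model.list()["data"]}
    if model not in available:
        raise ValueError(
            f"Model '{model}' is not available for this API key; "
            f"choose one of: {sorted(available)}"
        )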
Thanks
Even if a scraped page exceeds token limits, shouldn't it be properly chunked?
NEXT_COMMAND: browser, Args: {'url': 'https://www.soundguys.com/sony-wf-1000xm4-review-31815/', 'question': 'Extract specs, prices, and reviews for Sony WF-1000XM4.'}
SYSTEM: Executing command: browser
Summarizing text...: 0%| | 0/2 [00:00<?, ?it/s]
Summarizing text...: 50%|█████     | 1/2 [00:00<00:00, 2.66it/s]
Summarizing text...: 50%|█████     | 1/2 [00:00<00:00, 1.45it/s]
SYSTEM: browser output: An error occurred while scraping the website: This model's maximum context length is 8192 tokens. However, your messages resulted in 15534 tokens. Please reduce the length of the messages.. Make sure the URL is valid.
Run the browser command with the arguments above to reproduce.
Check the chunking logic; it may be getting bypassed after BeautifulSoup handles the splitting of content from links.
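For reference, a rough sketch of the kind of token-based chunking that should kick in before summarization; the helper name and chunk size are assumptions, not the module's actual code:

import tiktoken

def chunk_text(text: str, max_tokens: int = 3000, model: str = "gpt-3.5-turbo"):
    """Yield pieces of text that each fit within max_tokens."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    for i in range(0, len(tokens), max_tokens):
        yield enc.decode(tokens[i : i + max_tokens])

If each chunk is summarized separately, even a 15k-token page never hits the model's 8192-token context limit in a single request.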
feature/azure_openai 0.0.13
SYSTEM: Executing command: read_from_file
Traceback (most recent call last):
File "/home/xxx/loopgpt/research_gpt.py", line 14, in
agent.cli()
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 501, in cli
cli(self, continuous=continuous)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/loops/repl.py", line 174, in cli
resp = agent.chat(run_tool=True)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/utils/spinner.py", line 137, in inner
return func(*args, **kwargs)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 185, in chat
assert max_tokens
AssertionError
Traceback (most recent call last):
File "C:\Users..\LoopGPT 1.py", line 17, in
agent.cli()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 423, in cli cli(self, continuous=continuous)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\loops\repl.py", line 111, in cli
resp = agent.chat()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\utils\spinner.py", line 137, in inner
return func(*args, **kwargs)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 173, in chat
full_prompt, token_count = self.get_full_prompt(message)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 90, in get_full_prompt
token_count = count_tokens(prompt + user_prompt, model=self.model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\models\openai_.py", line 48, in count_tokens
enc = tiktoken.encoding_for_model(model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\tiktoken\model.py", line 68, in encoding_for_model
raise KeyError(
KeyError: 'Could not automatically map gpt-4 to a tokeniser. Please use tiktoken.get_encoding
to explicitly get the tokeniser you expect.'
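A hedged workaround until the mapping is fixed: fall back to an explicit encoding when encoding_for_model can't map the model name. Using cl100k_base as the default is an assumption (it is the encoding used by the gpt-3.5/gpt-4 chat models), and the helper name is made up:

import tiktoken

def get_encoding_for(model: str):
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Older tiktoken releases don't know newer model names; pick the
        # encoding explicitly, as the error message suggests.
        return tiktoken.get_encoding("cl100k_base")

Upgrading tiktoken may also resolve the KeyError, since newer releases know how to map gpt-4.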
Great work!
I'm wondering if you could add SerpApi for enhanced Google searches. It appears quicker and easier to use.
add SerpApi for Google search
The expected behavior is to start the software in the CLI.
I tried installing from source first and received the same error. I then tried installing through pip and received the same error.
Not sure, I'm probably doing something wrong. Maybe better documentation.
Latest release
To simulate team and agent interactions, it would be great if you could define multiple agents, each with a different name/goal, and then start the simulation with a prompt to one (or more) agents. From there you could observe which agent decides to contact which, and what those interactions look like.
The next step would be, instead of all agents being aware of each other's existence, initializing them with a list of agents that they know, and allowing them to interact only with those agents until introduced to others.
I ran the sample for getting weather information, but without setting up the fetch_weather command, which it tried to run anyway. Ideally, the system would notice that fetch_weather failed and construct an alternate plan not using the OpenWeatherMap API, and then continue with the rest of the goals (getting dressing tips and writing them to dressing_tips.txt).
Instead, the system pretended that it was successful, and said it had "not been given any new commands since the last time [it] provided an output", choosing the do_nothing action.
Run the WeatherGPT example on the README, but comment out the code setting up GetWeather.
I don't see any places where the system itself is being asked whether it has completed a step of the plan. Maybe add that to the prompt, or add a "cheap model" (Auto-GPT uses ada, as I recall) to evaluate this based on the output and the plan step?
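A loose sketch of that check, asking a cheaper model for a verdict on whether a plan step succeeded. The prompt wording, helper name, and choice of gpt-3.5-turbo are assumptions, not anything loopgpt currently does:

import openai

def step_completed(plan_step: str, command_output: str) -> bool:
    # Ask a cheap model for a YES/NO verdict based on the plan step and the command output.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Plan step: {plan_step}\n"
                f"Command output: {command_output}\n"
                "Was this step completed successfully? Answer YES or NO."
            ),
        }],
        temperature=0,
        max_tokens=3,
    )
    return resp["choices"][0]["message"]["content"].strip().upper().startswith("Y")

The agent loop could call something like this after each command and re-plan when it returns False, instead of pretending the step succeeded.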
latest
When I ask LoopGPT to read the contents of a text file, it will almost always complain that the file is empty.
Can you please provide an example of integrating this with FastAPI while saving the state with each request?
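Not an official recipe, but a minimal sketch of how this might look, persisting the agent via the same config()/from_config() calls shown elsewhere in this thread. The endpoint, state-file path, and the assumption that agent.config() is JSON-serializable are all illustrative:

import json
import os
import loopgpt
from fastapi import FastAPI

app = FastAPI()
STATE_FILE = "agent_state.json"  # hypothetical location for the saved state

def load_agent() -> loopgpt.Agent:
    agent = loopgpt.Agent()
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            agent.from_config(json.load(f))  # restore the previous state
    return agent

@app.post("/step")
def step():
    agent = load_agent()
    resp = agent.chat(run_tool=True)  # run one step of the loop
    with open(STATE_FILE, "w") as f:
        json.dump(agent.config(), f)  # persist state after each request
    return {"response": resp}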
The application should run the example script. I checked OpenAI and the API rate limit is three requests per minute? Is this a recent change? Seems like they are intentionally trying to break people's scripts/use of automated agents with a rate limit that low...
Console reports 'rate limit reached trying again in 20 seconds'
Clone repo
Create .env with API key
run python3 examples.py file
Rate limit the API calls with a set timeout?
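A simple sketch of that kind of client-side backoff, assuming the pre-1.0 openai client; the wrapper name and retry counts are arbitrary:

import time
import openai

def chat_with_backoff(max_retries=5, **kwargs):
    """Retry ChatCompletion calls with exponential backoff when rate limited."""
    delay = 20  # the console already suggests waiting ~20 seconds
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(**kwargs)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2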
latest
Hi.
nice job!
I am testing your implementation (my kind of standard test for auto-GPTs is to let them create a web portal about cats with depression) and I found that created files are written to the root loopgpt folder.
greetings.
The proposed feature would involve enhancing the existing state serialization in LoopGPT to allow for the transfer of an agent's memory/context to one or more other agents. This functionality would be invaluable in scenarios where multiple agents need to interact with each other or return to previous discussions, maintaining awareness of the context of each conversation.
To accomplish this, we would need to extend the serialization and deserialization processes to not only handle saving and loading the state of a single agent, but also transferring this state between different agents.
This could be implemented as follows:
Agent State Export: An agent should be able to export its current state, including memory/context, into a standardized format that can be imported by other agents. This could be a serialized object, a JSON representation, or another appropriate format.
Agent State Import: An agent should be able to import a previously exported state. This would update its current memory/context to reflect the imported state.
This feature would add to the modularity of LoopGPT and improve its utility in multi-agent conversations and simulations. The ability to share context between agents would also allow for more seamless conversation continuity, as agents could "remember" previous discussions.
An alternative solution could be to have a shared database or context store that all agents can access and update. While this might be simpler to implement, it could lead to potential synchronization issues and may not provide the same level of flexibility as the proposed solution.
This feature would further extend the usefulness of LoopGPT's existing state serialization capabilities, allowing for more complex agent interactions and conversation simulations. It would be particularly useful in scenarios where continuity and context-awareness across multiple agents are important, such as in customer service chatbots, virtual assistants, or interactive storytelling applications.
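A rough sketch of how the export/import could sit on top of the existing serialization, assuming config()/from_config() already capture memory/context; the function names are illustrative:

import json
import loopgpt

def export_state(agent: loopgpt.Agent) -> str:
    # Serialize the agent's full state (including memory/context) to JSON.
    return json.dumps(agent.config())

def import_state(agent: loopgpt.Agent, state: str) -> None:
    # Overwrite the receiving agent's state with the exported one.
    agent.from_config(json.loads(state))

# Transfer context from one agent to another:
researcher = loopgpt.Agent()
writer = loopgpt.Agent()
import_state(writer, export_state(researcher))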
Fantastic work so far guys!
I'm currently trying to pass it a template for document generation but I am getting the following response:
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 59431 tokens. Please reduce the length of the messages.
Do you plan on supporting longer files? If so, I can have a go at creating a PR if you could point me in the right direction? (I mostly use JS and Ruby, my Python is a bit rusty)
Hey, apologies for the noobish question, but the Google Developers portal is huge and highly confusing...
I have a Google application created and issued an API key... I now understand that I need to enable "APIs & Services" for this application and API key. I could not find "Google Search" specifically in their list of services ( https://console.cloud.google.com/apis/library ); I was only able to find and enable the "Custom Search API", which I understood is search within a single site or a set of sites (so you can enable search for your own website).
I have added the (single) API key that I generated to both the GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID environment variables, but loopgpt will error with various error messages when performing a "google_search" operation, errors like:
Command "google_search" failed with error: run() got an unexpected keyword argument 'response_format'
Command "google_search" failed with error: run() got an unexpected keyword argument 'results'
Command "google_search" failed with error: run() got an unexpected keyword argument 'params'
So I am a bit confused as to how to create an appropriate API key to fill in the GOOGLE_API_KEY env variable that's needed for loopgpt to perform queries... Could you kindly point me in the right direction?
Also, please clarify whether we need both GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID, and what each one does respectively.
Thank you for building this awesome tool!
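For what it's worth, a sketch of what the two variables usually hold for Google's Custom Search JSON API (the placeholder values are obviously made up): GOOGLE_API_KEY is the API key from the Cloud console with the Custom Search API enabled, while CUSTOM_SEARCH_ENGINE_ID is a separate value, the Programmable Search Engine ID (the "cx" parameter), not the API key again.

import os

# The API key created in the Google Cloud console (Custom Search API enabled).
os.environ["GOOGLE_API_KEY"] = "your-cloud-api-key"

# The Programmable Search Engine ID ("cx") from
# https://programmablesearchengine.google.com/ ; a different value from the API key.
# The engine can be configured to search the entire web rather than a single site.
os.environ["CUSTOM_SEARCH_ENGINE_ID"] = "your-search-engine-id"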
Crazy work right there.
Running from a notebook, things should also work, I would expect.
Also the error is easily solvable:
File ~/.local/lib/python3.10/site-packages/loopgpt/loops/repl.py:79, in write_divider(big)
     77 def write_divider(big=False):
     78     char = "\u2501" if big else "\u2500"
---> 79     columns = os.get_terminal_size().columns
     80     print(char * columns)
OSError: [Errno 25] Inappropriate ioctl for device
Basically, the VS Code integrated notebook doesn't have a terminal size to report, so we could fall back to a default of 80 columns.
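A minimal sketch of that fallback in write_divider (not a tested patch, just the idea):

import os

def write_divider(big=False):
    char = "\u2501" if big else "\u2500"
    try:
        columns = os.get_terminal_size().columns
    except OSError:
        # Non-tty environments (e.g. the VS Code notebook) raise
        # "Inappropriate ioctl for device"; fall back to 80 columns.
        columns = 80
    print(char * columns)

Alternatively, shutil.get_terminal_size() already falls back to 80x24 on its own when no real terminal is attached.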
This is really cool. I've been tinkering with Auto-GPT and I stumbled upon this on Reddit; I really like how it works. Congratulations and thank you for this fine work!
The issue I noticed is that my OpenAI usage spiked up a bit in comparison to when I was just tinkering with Auto-GPT.
Not sure if this is an issue or by design, but it would be nice to at least understand where all that traffic is going.
Tried to run the example in the main README of this project.
Been running into issues when it comes to the browser command.
[0420/152912.683:INFO:CONSOLE(0)] "The resource https://use.typekit.net/af/e6380d/00000000000000007735a1cc/30/l?primer=7cdcb44be4a7db8877ffa5c0007b8dd865b3bbc383831fe2ea177f62257a9191&fvd=n4&v=3 was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.", source: https://www.pcmag.com/picks/the-best-headphones (0)
We need it
https://github.com/PrefectHQ/marvin is a great way to wrap OpenAI GPT. It has AI functions and bots. I'd love it if LoopGPT could be made into a Marvin bot easily. Also, please provide some instructions on how to integrate it with LangChain :)
It shouldn't cause an error
It throws the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\User\GitHub\loopgpt\loopgpt\agent.py", line 441, in config
"progress": self.progress[:],
TypeError: 'NoneType' object is not subscriptable
import loopgpt
agent = loopgpt.Agent()
agent.name = "Node Exec Test"
agent.goals = "Run the list_files command and then the list_agents command"
agent_init_config = agent.config() # save init config
agent.clear_state() # start fresh
agent.from_config(agent_init_config) # reload cleared config
print(agent.config())  # raises TypeError: 'NoneType' object is not subscriptable
Update line 346 in agent.py
to this:
self.progress = []
latest from main branch
In AutoGPT it's fixed by setting the browser to firefox... are there any settings for this in the module?