
loopgpt's Issues

ImportError: cannot import name 'PROCEED_INPUT'

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The Streamlit UI should open.

Current behaviour

(loop_gpt) billion@billion-ThinkPad-E14-Gen-2:~/loopgpt/loopgpt$ python3 loops/ui.py
Traceback (most recent call last):
File "loops/ui.py", line 3, in
from loopgpt.constants import PROCEED_INPUT
ImportError: cannot import name 'PROCEED_INPUT' from 'loopgpt.constants' (/loopgpt/loop_gpt/lib/python3.8/site-packages/loopgpt/constants.py)

Steps to reproduce

Run the command: python3 loops/ui.py

Possible solution

No response
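As a hypothetical workaround (assuming PROCEED_INPUT was simply removed or renamed in the installed loopgpt version, which the traceback suggests), the import in ui.py could be guarded; the fallback string below is an illustrative placeholder, not the real constant's value:

```python
# Guarded import: fall back to a local default when the installed
# loopgpt version no longer exports PROCEED_INPUT.
try:
    from loopgpt.constants import PROCEED_INPUT  # removed/renamed in some versions
except ImportError:
    # Assumed placeholder value, only to keep the UI importable.
    PROCEED_INPUT = "Proceed with your next command."
```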

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Environment file

Hi, could you include an environment file that can be edited directly?

Compatibility with Other Language Models

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

πŸ”– Feature description

OpenAI's ChatGPT is a fantastic tool for this application, but is there any development towards using a locally hosted language model like Dolly or Vicuna?

βœ”οΈ Solution

I'd have to look through the code more, but a locally hosted server could act as an API, which would make it easy to swap out the ChatGPT API. If the code uses OOP, maybe another module for making requests and another class for local models would be enough, or is that on the user to set up, with the response templates required by LoopGPT?
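The swap described above could be sketched roughly as follows. Many local model servers expose an OpenAI-compatible /v1/chat/completions endpoint; the URL and model name here are assumptions for illustration, not part of LoopGPT:

```python
import json
import urllib.request

# Assumed endpoint of a locally hosted, OpenAI-compatible model server.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"


def build_request(messages, model="local-vicuna"):
    """Build the JSON payload an OpenAI-style chat endpoint expects."""
    return {"model": model, "messages": messages, "temperature": 0.7}


def chat(messages, model="local-vicuna"):
    """Send a chat request to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_request(messages, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

A request-making module shaped like this could be dropped in wherever the OpenAI client is called today, which is essentially the "another module for making requests" idea.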

❓ Alternatives

No response

πŸ“ Additional Context

Generally I'd like to use this to aid research for commercial projects that are data sensitive.

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Support More OpenAI Models Like gpt-3.5-turbo-16k

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

πŸ”– Feature description

The current code limits the models to "gpt-3.5-turbo", "gpt-4", and "gpt-4-32k". In some cases, like summarization, it defaults to "gpt-3.5-turbo".
But there are other OpenAI models, both now and in the future.
Especially for reading long web pages, I would like to use "gpt-3.5-turbo-16k". It gets rejected by the LoopGPT module code, although it would work with OpenAI just fine.
Thanks

βœ”οΈ Solution

Refactor the code so that all models supported by OpenAI can be used, at least the ChatModels, as this is what your architecture expects.

You could add an early check that the model is supported using openai.Model.list().

Thanks
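A minimal sketch of that early check, assuming the pre-1.0 openai package where openai.Model.list() returns a dict with a "data" list of model objects:

```python
def is_model_supported(model_name, available_ids):
    """Return True if the requested model id appears in the API's model list."""
    return model_name in set(available_ids)


# Wiring this up with the pre-1.0 openai package might look roughly like:
#   import openai
#   ids = [m["id"] for m in openai.Model.list()["data"]]
#   if not is_model_supported("gpt-3.5-turbo-16k", ids):
#       raise ValueError("Model not available for this API key")
```

Validating against the live list rather than a hard-coded allowlist would also cover models OpenAI adds in the future.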

❓ Alternatives

No response

πŸ“ Additional Context

No response

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

browser.py call to summarizer exceeds token limits

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

Even if a scraped page exceeds token limits, it should be properly chunked.

Current behaviour

NEXT_COMMAND: browser, Args: {'url': 'https://www.soundguys.com/sony-wf-1000xm4-review-31815/', 'question': 'Extract specs, prices, and reviews for Sony WF-1000XM4.'}

SYSTEM: Executing command: browser
Summarizing text...: 0%| | 0/2 [00:00<?, ?it/s]
Summarizing text...: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1/2 [00:00<00:00, 2.66it/s]
Summarizing text...: 50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 1/2 [00:00<00:00, 1.45it/s]
SYSTEM: browser output: An error occurred while scraping the website: This model's maximum context length is 8192 tokens. However, your messages resulted in 15534 tokens. Please reduce the length of the messages.. Make sure the URL is valid.

Steps to reproduce

Run the browser command with the arguments above to reproduce.

Possible solution

Check the chunking logic; it may be getting bypassed after BeautifulSoup handles the content/links splitting.
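One way to guard against such a bypass could be to re-chunk whatever text reaches the summarizer, regardless of how the earlier splitting went. A naive character-based sketch (the sizes are illustrative, not LoopGPT's actual limits):

```python
def chunk_text(text, max_chars=6000, overlap=200):
    """Split text into overlapping character windows as a last-resort guard
    before summarization. max_chars and overlap are illustrative values."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap keeps sentences split at a boundary visible in both chunks.
        start = end - overlap
    return chunks
```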

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

feature/azure_openai 0.0.13

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

AssertionError

Error reading a .h file.

SYSTEM: Executing command: read_from_file
Traceback (most recent call last):
File "/home/xxx/loopgpt/research_gpt.py", line 14, in
agent.cli()
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 501, in cli
cli(self, continuous=continuous)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/loops/repl.py", line 174, in cli
resp = agent.chat(run_tool=True)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/utils/spinner.py", line 137, in inner
return func(*args, **kwargs)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 185, in chat
assert max_tokens
AssertionError

agent = Agent(model="gpt-4") throws an error

Traceback (most recent call last):
File "C:\Users..\LoopGPT 1.py", line 17, in
agent.cli()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 423, in cli
cli(self, continuous=continuous)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\loops\repl.py", line 111, in cli
resp = agent.chat()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\utils\spinner.py", line 137, in inner
return func(*args, **kwargs)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 173, in chat
full_prompt, token_count = self.get_full_prompt(message)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 90, in get_full_prompt
token_count = count_tokens(prompt + user_prompt, model=self.model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\models\openai_.py", line 48, in count_tokens
enc = tiktoken.encoding_for_model(model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\tiktoken\model.py", line 68, in encoding_for_model
raise KeyError(
KeyError: 'Could not automatically map gpt-4 to a tokeniser. Please use tiktok.get_encoding to explicitly get the tokeniser you expect.'
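Newer tiktoken releases do map gpt-4 to an encoding, so upgrading tiktoken usually resolves this KeyError. As a defensive measure, the token counting could also fall back to get_encoding, as the error message suggests. A sketch (cl100k_base is the encoding used by gpt-4 and gpt-3.5-turbo; the heuristic fallback is an assumption for when tiktoken is absent):

```python
def count_tokens_safely(text, model="gpt-4"):
    """Count tokens with tiktoken, falling back to cl100k_base when the
    installed tiktoken cannot map the model name, and to a rough
    4-characters-per-token heuristic when tiktoken is missing entirely."""
    try:
        import tiktoken
    except ImportError:
        return max(1, len(text) // 4)  # coarse estimate, not exact
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))
```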

Add SerpApi instead of direct Google Search

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

πŸ”– Feature description

Great work!
I'm wondering if you could add SerpApi for enhanced Google searches. It appears quicker and easier to use.

βœ”οΈ Solution

add SerpApi for Google search

❓ Alternatives

add SerpApi for Google search

πŸ“ Additional Context

add SerpApi for Google search

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

bash: syntax error (Ubuntu, pip install and install from source)

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The expected behavior is to start the software in the CLI.

Current behaviour

bash: syntax error
Screenshot from 2023-11-25 16-09-19

Steps to reproduce

I tried installing from source first and received the same error. I then tried installing through pip and received the same error.

Possible solution

Not sure, I'm probably doing something wrong. Maybe better documentation.

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

Latest release

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Support multiple agents interacting with each other (automatically)

To simulate team and agent interactions, it would be great if you could define multiple agents, each with a different name/goal, and then start the simulation with a prompt to one (or multiple) agents. From there you can observe which agent decides to contact which, and what those interactions look like.

The next step would be, instead of all agents being aware of each other's existence, initializing them with a list of agents that they know, and allowing them to interact only with those agents until introduced to others.
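The proposed interaction loop could be sketched as below. Agent here is a toy stand-in rather than loopgpt.Agent (whose chat signature differs), and the known list mirrors the proposed "agents they know" restriction:

```python
class Agent:
    """Toy stand-in for loopgpt.Agent, enough to show the message routing."""

    def __init__(self, name, known_agents=None):
        self.name = name
        # Agents this one is allowed to contact, per the proposal.
        self.known = list(known_agents or [])

    def chat(self, message, sender):
        # Placeholder reply; a real agent would call its language model here.
        return f"{self.name} acknowledges {sender}: {message}"


def simulate(agents, prompt, rounds=2):
    """Round-robin simulation: each agent reacts to the previous message."""
    log, message, sender = [], prompt, "user"
    for _ in range(rounds):
        for agent in agents:
            message = agent.chat(message, sender)
            sender = agent.name
            log.append(message)
    return log
```

A fuller version would let each agent choose a recipient from its known list instead of passing messages round-robin.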

Doesn't seem to notice when it hallucinates a command?

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

I ran the sample for getting weather information, but without setting up the fetch_weather command, which it tried to run anyway. Ideally, the system would notice that fetch_weather failed and construct an alternate plan not using the OpenWeatherMap API, and then continue with the rest of the goals (getting dressing tips and writing them to dressing_tips.txt).

Current behaviour

Instead, the system pretended that it was successful, and said it had "not been given any new commands since the last time [it] provided an output", choosing the do_nothing action.

Steps to reproduce

Run the WeatherGPT example on the README, but comment out the code setting up GetWeather.

Possible solution

I don't see any places where the system itself is being asked whether it has completed a step of the plan. Maybe add that to the prompt, or add a "cheap model" (Auto-GPT uses ada, as I recall) to evaluate this based on the output and the plan step?

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Rate Limit Reached on Initial Install Running Example Script

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The application should run the example script. I checked OpenAI and the API rate limit is three requests per minute? Is this a recent change? It seems like they are intentionally trying to break people's scripts/use of automated agents with a rate limit that low...

Current behaviour

Console reports 'rate limit reached trying again in 20 seconds'

Steps to reproduce

Clone the repo
Create a .env file with the API key
Run python3 examples.py

Possible solution

Rate-limit the API calls with a set timeout?
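A sketch of that idea: wrap the API call in exponential backoff. The 20-second base matches the delay the console reports; the retry count and the broad except clause are assumptions for illustration (in practice one would catch the rate-limit error class specifically):

```python
import time


def with_backoff(call, max_retries=5, base_delay=20.0, sleep=time.sleep):
    """Retry a rate-limited call, doubling the wait after each failure."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice: the API's RateLimitError
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            sleep(base_delay * (2 ** attempt))
```

The injectable sleep parameter keeps the helper testable without real waiting.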

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Have a workspace for created files?

Hi,
nice job!
I am testing your implementation (my kind of standard test for auto-GPTs is to let them create a web portal about cats with depression) and I found that created files are written in the root loopgpt folder.

Greetings.

Persistent Context Transfer Between Multiple Agents

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

πŸ”– Feature description

The proposed feature would involve enhancing the existing state serialization in L♾️pGPT to allow for the transfer of an agent's memory/context to one or more other agents. This functionality would be invaluable in scenarios where multiple agents need to interact with each other or return to previous discussions, maintaining awareness of the context of each conversation.

βœ”οΈ Solution

To accomplish this, we would need to extend the serialization and deserialization processes to not only handle saving and loading the state of a single agent, but also transferring this state between different agents.

This could be implemented as follows:

  1. Agent State Export: An agent should be able to export its current state, including memory/context, into a standardized format that can be imported by other agents. This could be a serialized object, a JSON representation, or another appropriate format.

  2. Agent State Import: An agent should be able to import a previously exported state. This would update its current memory/context to reflect the imported state.

This feature would add to the modularity of L♾️pGPT and improve its utility in multi-agent conversations and simulations. The ability to share context between agents would also allow for more seamless conversation continuity, as agents could "remember" previous discussions.
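The two steps above (export, then import) could be sketched as follows, assuming the existing config()-style serialization returns a JSON-serializable dict. ToyAgent is an illustrative stand-in, not loopgpt.Agent:

```python
import json


class ToyAgent:
    """Toy stand-in for loopgpt.Agent; only the config round-trip matters here."""

    def __init__(self):
        self.name = "agent"
        self.history = []

    def config(self):
        # Step 1: export the state in a standardized, serializable form.
        return {"name": self.name, "history": self.history[:]}

    def from_config(self, cfg):
        # Step 2: import a previously exported state.
        self.name = cfg["name"]
        self.history = list(cfg["history"])


def transfer_context(src, dst):
    """Copy src's serialized memory/context into dst via a JSON round-trip."""
    dst.from_config(json.loads(json.dumps(src.config())))
```

The JSON round-trip guarantees the transferred state is a deep copy, so the two agents cannot accidentally share mutable memory afterwards.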

❓ Alternatives

An alternative solution could be to have a shared database or context store that all agents can access and update. While this might be simpler to implement, it could lead to potential synchronization issues and may not provide the same level of flexibility as the proposed solution.

πŸ“ Additional Context

This feature would further extend the usefulness of L♾️pGPT's existing state serialization capabilities, allowing for more complex agent interactions and conversation simulations. It would be particularly useful in scenarios where continuity and context-awareness across multiple agents are important, such as in customer service chatbots, virtual assistants, or interactive storytelling applications.

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Support chunking files

Fantastic work so far guys!

I'm currently trying to pass it a template for document generation but I am getting the following response:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 59431 tokens. Please reduce the length of the messages.

Do you plan on supporting longer files? If so, I can have a go at creating a PR if you could point me in the right direction? (I mostly use JS and Ruby, my Python is a bit rusty)
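One common approach here is map-reduce: split the document under the model's token budget, process each piece, then combine the partial results. A paragraph-aware chunking sketch (the 4-characters-per-token budget is a rough heuristic, not an exact tokenizer):

```python
def chunk_for_model(text, max_tokens=3000, chars_per_token=4):
    """Split text into pieces under an approximate token budget,
    preferring paragraph boundaries. Sizes are illustrative."""
    max_chars = max_tokens * chars_per_token
    paragraphs = text.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would bust the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be sent to the model separately, and the per-chunk outputs combined in a final pass.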

How to get a Google Search API key?

hey, apologies for the noobish question but Google Developers portal is huge and highly confusing...

I have a Google application created and issued an API key. I now understand that I need to enable "APIs & Services" for this application and API key. I could not find "Google Search" specifically in their list of services ( https://console.cloud.google.com/apis/library ); I was only able to find and enable the "Custom Search API", which I understood to be search over a single website or a set of websites (so you can enable search for your own website).

I have added the (single) API key that I generated to both the GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID environment variables, but loopgpt will error with various error messages when performing a "google_search" operation, errors like:

Command "google_search" failed with error: run() got an unexpected keyword argument 'response_format'

Command "google_search" failed with error: run() got an unexpected keyword argument 'results'

Command "google_search" failed with error: run() got an unexpected keyword argument 'params'

So I am a bit confused as to how to create an appropriate API key to fill in the GOOGLE_API_KEY env variable that's needed for loopgpt to perform queries. Could you kindly point me in the right direction?

Also please clarify if we need both the GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID and what each one does respectively.

Thank you for building this awesome tool πŸ™
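For reference, GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID are two different values, not the same key twice: the former is a Cloud API key with the Custom Search API enabled, while the latter is the separate cx id of a Programmable Search Engine (which can be configured to search the entire web). A sketch of the request the Custom Search JSON API expects:

```python
import os
import urllib.parse


def build_search_url(query):
    """Build a Custom Search JSON API request URL.
    key = the Cloud API key; cx = the Programmable Search Engine id."""
    params = {
        "key": os.environ["GOOGLE_API_KEY"],
        "cx": os.environ["CUSTOM_SEARCH_ENGINE_ID"],
        "q": query,
    }
    return ("https://www.googleapis.com/customsearch/v1?"
            + urllib.parse.urlencode(params))
```

Fetching that URL returns a JSON body whose "items" list holds the search results.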

Running from notebook

Crazy work right there.

Running from a notebook, things should also work, I would expect.
Also, the error is easily solvable:

File ~/.local/lib/python3.10/site-packages/loopgpt/loops/repl.py:79, in write_divider(big)
     77 def write_divider(big=False):
     78     char = "\u2501" if big else "\u2500"
---> 79     columns = os.get_terminal_size().columns
     80     print(char * columns)

OSError: [Errno 25] Inappropriate ioctl for device

Basically, the VS Code integrated notebook doesn't have this column size, so we could fall back to a terminal default of 80 columns.
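A sketch of that fallback using only the standard library: shutil.get_terminal_size already handles the non-tty case, consulting the COLUMNS environment variable and a caller-supplied fallback instead of raising OSError the way os.get_terminal_size does:

```python
import shutil


def terminal_columns(default=80):
    """Terminal width with a safe default for notebooks, where stdout is
    not a tty and os.get_terminal_size() raises OSError (Errno 25)."""
    return shutil.get_terminal_size(fallback=(default, 24)).columns
```

Swapping this in for the direct os.get_terminal_size() call in write_divider would keep the divider working both in real terminals and in notebooks.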

Spike in OpenAI API token usage with the gpt-4 model

This is really cool, I've been tinkering with Auto-GPT and I stumbled with this on reddit, I really like how it works congratulations and thank you for this fine work!

The issue I noticed is that my OpenAI usage spiked up a bit in comparison to when I was just tinkering with Auto-GPT.
Not sure if this is an issue or by design, but it would be nice to at least understand where all that traffic is going.

Browser Issue: preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally

Tried to run the example in the main README of this project.

Been running into issues when it comes to the browser command.

[0420/152912.683:INFO:CONSOLE(0)] "The resource https://use.typekit.net/af/e6380d/00000000000000007735a1cc/30/l?primer=7cdcb44be4a7db8877ffa5c0007b8dd865b3bbc383831fe2ea177f62257a9191&fvd=n4&v=3 was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.", source: https://www.pcmag.com/picks/the-best-headphones (0)

Function call `agent.config()` throws an error if `agent.clear_state()` is called before it

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

It shouldn't cause an error

Current behaviour

It throws the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\User\GitHub\loopgpt\loopgpt\agent.py", line 441, in config
    "progress": self.progress[:],
TypeError: 'NoneType' object is not subscriptable

Steps to reproduce

import loopgpt

agent = loopgpt.Agent()
agent.name = "Node Exec Test"
agent.goals = "Run the list_files command and then the list_agents command"
agent_init_config = agent.config() # save init config

agent.clear_state() # start fresh
agent.from_config(agent_init_config) # reload cleared config
print(agent.config()) # πŸ’€πŸ’€

Possible solution

Update line 346 in agent.py to this:

self.progress = []


Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest from main branch

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Chrome browser issue on Linux

In AutoGPT it's fixed by setting the browser to firefox... are there any settings for this in the module?

Other models

OpenAI's model has serious privacy issues and is expensive. It doesn't even learn.
