
loopgpt's Introduction

L♾️pGPT

A Modular Auto-GPT Framework

L♾️pGPT is a re-implementation of the popular Auto-GPT project as a proper Python package, written with modularity and extensibility in mind.

🚀 Features 🚀

  • "Plug N Play" API - Extensible and modular "Pythonic" framework, not just a command line tool. Easy to add new features, integrations and custom agent capabilities, all from python code, no nasty config files!
  • GPT 3.5 friendly - Better results than Auto-GPT for those who don't have GPT-4 access yet!
  • Minimal prompt overhead - Every token counts. We are continuously working on getting the best results with the least possible number of tokens.
  • Human in the Loop - Ability to "course correct" agents who go astray via human feedback.
  • Full state serialization - Pick up where you left off; L♾️pGPT can save the complete state of an agent, including memory and the states of its tools to a file or python object. No external databases or vector stores required (but they are still supported)!

🧑‍💻 Installation

Install from PyPI

📗 This installs the latest stable version of L♾️pGPT. This is recommended for most users:

pip install loopgpt

📕 The two methods below install the latest development version of L♾️pGPT. Note that this version may be unstable:

Install from source

pip install git+https://www.github.com/farizrahman4u/loopgpt.git@main

Install from source (dev)

git clone https://www.github.com/farizrahman4u/loopgpt.git
cd loopgpt
pip install -e .

Install from source (dev) using Docker

git clone https://www.github.com/farizrahman4u/loopgpt.git
cd loopgpt
docker build -t loopgpt:local-dev .

🏎️ Getting Started

Set up your OpenAI API Key 🔑

Option 1️⃣: Via a .env file

Create a .env file in your current working directory (wherever you are going to run L♾️pGPT from) and add the following line to it:

OPENAI_API_KEY="<your-openai-api-key>"

🛑 IMPORTANT 🛑

Windows users, please make sure "show file extensions" is enabled in your file explorer. Otherwise, your file will be named .env.txt instead of .env.

Option 2️⃣: Via environment variables

Set an environment variable called OPENAI_API_KEY to your OpenAI API Key.

How to set environment variables:
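If you prefer to set the key from Python itself, a minimal sketch (the value shown is a placeholder, not a real key) — set the variable before constructing any agents, since L♾️pGPT reads it from the environment:

```python
import os

# Placeholder value for illustration; substitute your actual key.
# Must run before any loopgpt agent is created.
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
```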

Create a new L♾️pGPT Agent🕵️:

Let's create an agent in a new Python script.

from loopgpt.agent import Agent

agent = Agent()

L♾️pGPT uses gpt-3.5-turbo by default and all outputs shown here are made using it. GPT-4 users can set model="gpt-4" instead:

agent = Agent(model="gpt-4")

Set up the Agent🕵️'s attributes:

agent.name = "ResearchGPT"
agent.description = "an AI assistant that researches and finds the best tech products"
agent.goals = [
    "Search for the best headphones on Google",
    "Analyze specs, prices and reviews to find the top 5 best headphones",
    "Write the list of the top 5 best headphones and their prices to a file",
    "Summarize the pros and cons of each headphone and write it to a different file called 'summary.txt'",
]

And we're off! Let's run the Agent🕵️'s CLI:

agent.cli()

Save your Python file as research_gpt.py and run it:

python research_gpt.py

You can exit the CLI by typing "exit".

🔁 Continuous Mode 🔁

If continuous is set to True, the agent will not ask for the user's permission to execute commands. It may go into infinite loops, so use it at your own risk!

agent.cli(continuous=True)

💻 Command Line Only Mode

You can also run L♾️pGPT directly from the command line, without writing any Python code:

loopgpt run

Run loopgpt --help to see all the available options.

🐋 Docker Mode

You can run L♾️pGPT in the previously mentioned modes, using Docker:

# CLI mode
docker run -i --rm loopgpt:local-dev loopgpt run

# Script mode example
docker run -i --rm -v "$(pwd)/scripts:/scripts" loopgpt:local-dev python /scripts/myscript.py

⚒️ Adding custom tools ⚒️

L♾️pGPT agents come with a set of built-in tools that allow them to perform various basic tasks such as searching the web, filesystem operations, etc. You can view these tools with print(agent.tools).

In addition to these built-in tools, you can also add your own tools to the agent's toolbox.

Example: WeatherGPT 🌦️

Let's create WeatherGPT, an AI assistant for all things weather.

A tool inherits from BaseTool and you only need to write a docstring to get your tool up and running!

from loopgpt.tools import BaseTool

class GetWeather(BaseTool):
    """Quickly get the weather for a given city

    Args:
        city (str): name of the city
    
    Returns:
        dict: The weather report for the city
    """
    
    def run(self, city):
        ...

L♾️pGPT gives your tool a default ID, but you can override it if you'd like:

class GetWeather(BaseTool):
    """Quickly get the weather for a given city

    Args:
        city (str): name of the city
    
    Returns:
        dict: The weather report for the city
    """

    @property
    def id(self):
        return "get_weather_command"

Now let's define what our tool will do in its run method:

import requests

# Define your custom tool
class GetWeather(BaseTool):
    """Quickly get the weather for a given city

    Args:
        city (str): name of the city
    
    Returns:
        dict: The weather report for the city
    """
    
    def run(self, city):
        try:
            url = "https://wttr.in/{}?format=%l+%C+%h+%t+%w+%p+%P".format(city)
            data = requests.get(url).text.split(" ")
            keys = ("location", "condition", "humidity", "temperature", "wind", "precipitation", "pressure")
            data = dict(zip(keys, data))
            return data
        except Exception as e:
            return f"An error occurred while getting the weather: {e}."

That's it! You've built your first custom tool. Let's register it with a new agent and run it:

from loopgpt.tools import WriteToFile
import loopgpt

# Register custom tool type
# This is actually not required here, but is required when you load a saved agent with custom tools.
loopgpt.tools.register_tool_type(GetWeather)

# Create Agent
agent = loopgpt.Agent(tools=[GetWeather, WriteToFile])
agent.name = "WeatherGPT"
agent.description = "an AI assistant that tells you the weather"
agent.goals = [
    "Get the weather for NewYork and Beijing",
    "Give the user tips on how to dress for the weather in NewYork and Beijing",
    "Write the tips to a file called 'dressing_tips.txt'"
]

# Run the agent's CLI
agent.cli()

Let's take a look at the dressing_tips.txt file that WeatherGPT wrote for us:

dressing_tips.txt

- It's Clear outside with a temperature of +10°C in Beijing. Wearing a light jacket and pants is recommended.
- It's Overcast outside with a temperature of +11°C in New York. Wearing a light jacket, pants, and an umbrella is recommended.

🚢 Course Correction

Unlike Auto-GPT, the agent does not terminate when the user denies the execution of a command. Instead, it asks the user for feedback to correct its course.

To correct the agent's course, just deny execution and provide feedback:

The agent has updated its course of action:

💾 Saving and Loading Agent State 💾

You can save an agent's state to a JSON file with:

agent.save("ResearchGPT.json")

This saves the agent's configuration (model, name, description, etc.) as well as its internal state (conversation state, memory, tool states, etc.). You can also save just the configuration by passing include_state=False to agent.save():

agent.save("ResearchGPT.json", include_state=False)

Then pick up where you left off with:

import loopgpt
agent = loopgpt.Agent.load("ResearchGPT.json")
agent.cli()

or by running the saved agent from the command line:

loopgpt run ResearchGPT.json

You can convert the agent state to a JSON-compatible Python dictionary instead of writing it to a file:

agent_config = agent.config()

To get just the configuration without the internal state:

agent_config = agent.config(include_state=False)

To reload the agent from the config, use:

import loopgpt

agent = loopgpt.Agent.from_config(agent_config)

📋 Requirements

Optional Requirements

For official Google search support you will need to set up two environment variables, GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID. Here is how to get them:

  1. Create an application on the Google Developers Console.
  2. Create your custom search engine using Google Custom Search.
  3. Once your custom search engine is created, select it and open its details page.
    • In the "Basic" section, you will find the "Search engine ID" field; that value is what you will use for the CUSTOM_SEARCH_ENGINE_ID environment variable.
    • Now go to the "Programmatic Access" section at the bottom of the page.
      • Create a "Custom Search JSON API" key.
      • Follow the dialog, selecting the application you created in step #1, and use the API key you receive to populate the GOOGLE_API_KEY environment variable.

ℹ️ If these are absent, L♾️pGPT will fall back to DuckDuckGo Search.

💌 Contribute

We need A LOT of Help! Please open an issue or a PR if you'd like to contribute.

🌳 Community

Need help? Join our Discord.


loopgpt's People

Contributors

arikw, cg123, eltociear, ewouth, farizrahman4u, fayazrahman, iskandarreza, maker57sk, pixeladed, thanpolas


loopgpt's Issues

Support More OpenAI Models Like gpt-3.5-turbo-16k

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

The current code limits the models to "gpt-3.5-turbo", "gpt-4" and "gpt-4-32k". In some cases, like summarization, it defaults to "gpt-3.5-turbo".
But there are other OpenAI models, both now and in the future.
Especially for reading long web pages, I would like to use "gpt-3.5-turbo-16k". It gets rejected by the LoopGPT module code, although it would work with OpenAI just fine.
Thanks

✔️ Solution

Refactor the code so that all models supported by OpenAI can be used, at least chat models, as this is what your architecture expects.

You could add an early check that the model is supported using openai.Model.list().
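A minimal sketch of such a check, assuming the pre-1.0 openai Python package (where openai.Model.list() returns the model catalog); the helper is kept pure so the fetching step can be swapped out:

```python
def is_supported(model_name, available_ids):
    # Pure membership check, independent of how the IDs were fetched.
    return model_name in set(available_ids)

def fetch_available_ids():
    # Assumes openai < 1.0 and a configured OPENAI_API_KEY.
    import openai
    return [m.id for m in openai.Model.list().data]
```

Usage would then be something like `is_supported("gpt-3.5-turbo-16k", fetch_available_ids())` before constructing the agent.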

Thanks

❓ Alternatives

No response

📝 Additional Context

No response

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Chrome browser issue on Linux

In AutoGPT it's fixed by setting the browser to Firefox... are there any settings for this in the module?

bash: syntax error - Ubuntu pip install and install from source

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The expected behavior is to start the software in the CLI.

Current behaviour

bash: syntax error
Screenshot from 2023-11-25 16-09-19

Steps to reproduce

I tried installing from source first and received the same error. I then tried installing through pip and received the same error.

Possible solution

Not sure, I'm probably doing something wrong. Maybe better documentation.

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

Latest release

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Rate Limited Reached on Initial Install Running Example Script . . .

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The application should run the example script. I checked OpenAI and the API rate limit is three requests per minute? Is this a recent change? Seems like they are intentionally trying to break people's scripts/use of automated agents with a rate limit that low...

Current behaviour

Console reports 'rate limit reached trying again in 20 seconds'

Steps to reproduce

Clone repo
Create .env with API key
Run python3 examples.py

Possible solution

Rate limit the API calls with set timeout ?

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Add SerpApi instead of direct Google Search

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

Great work!
I'm wondering if you could add SerpApi for enhanced Google searches. It appears quicker and easier to use?

✔️ Solution

add SerpApi for Google search

❓ Alternatives

add SerpApi for Google search

📝 Additional Context

add SerpApi for Google search

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Compatibility with Other Language Models

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

OpenAI's ChatGPT is a fantastic tool for this application, but is there any development towards using a locally hosted language model like Dolly or Vicuna?

✔️ Solution

I'd have to look through the code more, but a locally hosted server could act like an API to ease swapping out the ChatGPT API. If the code uses OOP, maybe another module for making requests would be enough, plus another class for local models; or is that on the user to set up, with the response templates loopGPT requires?

❓ Alternatives

No response

📝 Additional Context

Generally I'd like to use this to aid research for commercial projects that are data sensitive.

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

Support multiple agents interacting with each other (automatically)

To simulate team and agent interactions, it would be great if you could define multiple agents, each with a different name/goal, and then start the simulation with a prompt to one (or multiple) agents. From there you can observe which agent decides to contact which, and what those interactions look like.

The next step would be, instead of all agents being aware of each other's existence, initializing them with a list of agents that they know, and allowing them to interact only with those agents until introduced to others.

Have a workspace for created files?

Hi,
nice job!
I am testing your implementation (my kind of standard test for auto-GPTs is to let them create a web portal about cats with depression) and I found that created files are written to the root loopgpt folder.

Greetings.

AssertionError

Error reading a .h file.

SYSTEM: Executing command: read_from_file
Traceback (most recent call last):
File "/home/xxx/loopgpt/research_gpt.py", line 14, in
agent.cli()
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 501, in cli
cli(self, continuous=continuous)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/loops/repl.py", line 174, in cli
resp = agent.chat(run_tool=True)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/utils/spinner.py", line 137, in inner
return func(*args, **kwargs)
File "/home/xxx/anaconda3/lib/python3.9/site-packages/loopgpt/agent.py", line 185, in chat
assert max_tokens
AssertionError

browser.py call to summarizer exceeds token limits

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

Even if a scraped page exceeds token limits, it should be properly chunked?

Current behaviour

NEXT_COMMAND: browser, Args: {'url': 'https://www.soundguys.com/sony-wf-1000xm4-review-31815/', 'question': 'Extract specs, prices, and reviews for Sony WF-1000XM4.'}

SYSTEM: Executing command: browser
Summarizing text...: 0%| | 0/2 [00:00<?, ?it/s]
Summarizing text...: 50%|█████ | 1/2 [00:00<00:00, 2.66it/s]
Summarizing text...: 50%|█████ | 1/2 [00:00<00:00, 1.45it/s]
SYSTEM: browser output: An error occurred while scraping the website: This model's maximum context length is 8192 tokens. However, your messages resulted in 15534 tokens. Please reduce the length of the messages.. Make sure the URL is valid.

Steps to reproduce

Run the browser command with the arguments above to reproduce.

Possible solution

Check the chunking logic; it may be getting bypassed after BeautifulSoup handles the content <> links splitting.

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

feature/azure_openai 0.0.13

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Running from notebook

Crazy work right there.

Running from a notebook, things should also work, I would expect.
Also, the error is easily solvable:

File "~/.local/lib/python3.10/site-packages/loopgpt/loops/repl.py", line 79, in write_divider
     77 def write_divider(big=False):
     78     char = "\u2501" if big else "\u2500"
---> 79     columns = os.get_terminal_size().columns
     80     print(char * columns)

OSError: [Errno 25] Inappropriate ioctl for device

Basically, the VS Code integrated notebook has no terminal column size, so we could fall back to a default of 80 characters.

Support chunking files

Fantastic work so far guys!

I'm currently trying to pass it a template for document generation but I am getting the following response:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 59431 tokens. Please reduce the length of the messages.

Do you plan on supporting longer files? If so, I can have a go at creating a PR if you could point me in the right direction. (I mostly use JS and Ruby; my Python is a bit rusty.)
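One possible direction (a sketch only, not loopgpt's actual API) is to split long input into chunks that stay under a rough token budget before sending each to the model; here tokens are approximated as ~4 characters each, so no tokenizer dependency is needed:

```python
def chunk_text(text, max_tokens=3000, chars_per_token=4):
    # Greedy word-level chunker: each chunk's length stays under the
    # approximate character budget max_tokens * chars_per_token.
    limit = max_tokens * chars_per_token
    words, chunks, current, size = text.split(), [], [], 0
    for word in words:
        if size + len(word) + 1 > limit and current:
            chunks.append(" ".join(current))
            current, size = [], 0
        current.append(word)
        size += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Each chunk could then be summarized separately and the summaries concatenated, keeping every request under the model's context limit.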

How to get a Google Search API key?

Hey, apologies for the noobish question, but the Google Developers portal is huge and highly confusing...

I have created a Google application and issued an API key... I now understand that I need to enable "APIs & Services" for this application and API key, but I could not find "Google Search" specifically in their list of services ( https://console.cloud.google.com/apis/library ). I was only able to find and enable the "Custom Search API", which I understood is search for a single website or a set of websites (so you can enable search for your own website).

I have added the (single) API key that I generated to both the GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID environment variables, but loopgpt errors with various messages when performing a "google_search" operation, such as:

Command "google_search" failed with error: run() got an unexpected keyword argument 'response_format'

Command "google_search" failed with error: run() got an unexpected keyword argument 'results'

Command "google_search" failed with error: run() got an unexpected keyword argument 'params'

So I am a bit confused as to how to create an appropriate API key to fill in the GOOGLE_API_KEY env variable that loopgpt needs to perform queries... Could you kindly point me in the right direction?

Also, please clarify whether we need both GOOGLE_API_KEY and CUSTOM_SEARCH_ENGINE_ID, and what each one does.

Thank you for building this awesome tool 🙏

Browser Issue: preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally

Tried to run the example in the main README of this project.

Been running into issues when it comes to the browser command.

[0420/152912.683:INFO:CONSOLE(0)] "The resource https://use.typekit.net/af/e6380d/00000000000000007735a1cc/30/l?primer=7cdcb44be4a7db8877ffa5c0007b8dd865b3bbc383831fe2ea177f62257a9191&fvd=n4&v=3 was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.", source: https://www.pcmag.com/picks/the-best-headphones (0)

Other models

OpenAI's models have serious privacy issues and are expensive. They don't even learn.

Doesn't seem to notice when it hallucinates a command?

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

I ran the sample for getting weather information, but without setting up the fetch_weather command, which it tried to run anyway. Ideally, the system would notice that fetch_weather failed and construct an alternate plan not using the OpenWeatherMap API, and then continue with the rest of the goals (getting dressing tips and writing them to dressing_tips.txt).

Current behaviour

Instead, the system pretended that it was successful, and said it had "not been given any new commands since the last time [it] provided an output", choosing the do_nothing action.

Steps to reproduce

Run the WeatherGPT example on the README, but comment out the code setting up GetWeather.

Possible solution

I don't see any places where the system itself is being asked whether it has completed a step of the plan. Maybe add that to the prompt, or add a "cheap model" (Auto-GPT uses ada, as I recall) to evaluate this based on the output and the plan step?
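The "cheap model" idea above could be sketched roughly as follows; the function name and prompt wording are hypothetical, and the model call is injected as a plain callable so any inexpensive model could back it:

```python
def step_completed(llm, step, output):
    # `llm` is any callable that takes a prompt string and returns text,
    # e.g. a wrapper around a cheap completion model.
    prompt = (
        f"Plan step: {step}\n"
        f"Observed output: {output}\n"
        "Did the step succeed? Answer YES or NO."
    )
    return llm(prompt).strip().upper().startswith("YES")
```

The agent loop could then consult this check after each command and re-plan when it returns False.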

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Function call `agent.config()` throws an error if `agent.clear_state()` is called before it

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

It shouldn't cause an error

Current behaviour

It throws the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\User\GitHub\loopgpt\loopgpt\agent.py", line 441, in config
    "progress": self.progress[:],
TypeError: 'NoneType' object is not subscriptable

Steps to reproduce

import loopgpt

agent = loopgpt.Agent()
agent.name = "Node Exec Test"
agent.goals = "Run the list_files command and then the list_agents command"
agent_init_config = agent.config() # save init config

agent.clear_state() # start fresh
agent.from_config(agent_init_config) # reload cleared config
print(agent.config()) # 💀💀

Possible solution

Update line 346 in agent.py to this:

self.progress = []

Compare

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest from main branch

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

ImportError: cannot import name 'PROCEED_INPUT'

Please check that this issue hasn't been reported before.

  • I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

Streamlit UI should be open

Current behaviour

(loop_gpt) billion@billion-ThinkPad-E14-Gen-2:~/loopgpt/loopgpt$ python3 loops/ui.py
Traceback (most recent call last):
File "loops/ui.py", line 3, in
from loopgpt.constants import PROCEED_INPUT
ImportError: cannot import name 'PROCEED_INPUT' from 'loopgpt.constants' (/loopgpt/loop_gpt/lib/python3.8/site-packages/loopgpt/constants.py)

Steps to reproduce

Run the command: python3 loops/ui.py

Possible solution

No response

Which Operating Systems are you using?

  • Linux
  • macOS
  • Windows

Python Version

  • >= v3.11
  • v3.10
  • v3.9
  • <= v3.8

LoopGPT Version

latest

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of LoopGPT.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Environment file

Hi, could you include an environment file that can be edited directly?

Persistent Context Transfer Between Multiple Agents

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

The proposed feature would involve enhancing the existing state serialization in L♾️pGPT to allow for the transfer of an agent's memory/context to one or more other agents. This functionality would be invaluable in scenarios where multiple agents need to interact with each other or return to previous discussions, maintaining awareness of the context of each conversation.

✔️ Solution

To accomplish this, we would need to extend the serialization and deserialization processes to not only handle saving and loading the state of a single agent, but also transferring this state between different agents.

This could be implemented as follows:

  1. Agent State Export: An agent should be able to export its current state, including memory/context, into a standardized format that can be imported by other agents. This could be a serialized object, a JSON representation, or another appropriate format.

  2. Agent State Import: An agent should be able to import a previously exported state. This would update its current memory/context to reflect the imported state.
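The two steps above could be sketched with plain dictionaries standing in for the agent state (the function and field names here are hypothetical, not loopgpt's internals):

```python
import copy
import json

def export_state(agent_state):
    # Step 1: serialize an agent's state to a JSON string for transfer.
    return json.dumps(agent_state)

def import_state(payload, into_agent_state):
    # Step 2: merge a previously exported state into another agent's
    # state, prepending the incoming memory so prior context comes first.
    incoming = json.loads(payload)
    merged = copy.deepcopy(into_agent_state)
    merged["memory"] = incoming.get("memory", []) + merged.get("memory", [])
    return merged
```

A shared schema like this would let any agent consume any other agent's exported context.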

This feature would add to the modularity of L♾️pGPT and improve its utility in multi-agent conversations and simulations. The ability to share context between agents would also allow for more seamless conversation continuity, as agents could "remember" previous discussions.

❓ Alternatives

An alternative solution could be to have a shared database or context store that all agents can access and update. While this might be simpler to implement, it could lead to potential synchronization issues and may not provide the same level of flexibility as the proposed solution.

📝 Additional Context

This feature would further extend the usefulness of L♾️pGPT's existing state serialization capabilities, allowing for more complex agent interactions and conversation simulations. It would be particularly useful in scenarios where continuity and context-awareness across multiple agents are important, such as in customer service chatbots, virtual assistants, or interactive storytelling applications.

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

agent = Agent(model="gpt-4") throws an error

Traceback (most recent call last):
File "C:\Users..\LoopGPT 1.py", line 17, in
agent.cli()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 423, in cli cli(self, continuous=continuous)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\loops\repl.py", line 111, in cli
resp = agent.chat()
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\utils\spinner.py", line 137, in inner
return func(*args, **kwargs)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 173, in chat
full_prompt, token_count = self.get_full_prompt(message)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\agent.py", line 90, in get_full_prompt
token_count = count_tokens(prompt + user_prompt, model=self.model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\loopgpt\models\openai_.py", line 48, in count_tokens
enc = tiktoken.encoding_for_model(model)
File "C:..\AppData\Local\Programs\Python\Python310\lib\site-packages\tiktoken\model.py", line 68, in encoding_for_model
raise KeyError(
KeyError: 'Could not automatically map gpt-4 to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.'

Spike in OpenAI API token usage with the gpt-4 model

This is really cool. I've been tinkering with Auto-GPT and I stumbled upon this on Reddit. I really like how it works, congratulations and thank you for this fine work!

The issue I noticed is that my OpenAI usage spiked a bit compared to when I was just tinkering with Auto-GPT.
Not sure if this is an issue or by design, but it would be nice to at least understand where all that traffic is going.
