
autogpt-package's People

Contributors

antoniociolino, h4ck3rk3y, laurentluce, leeederek, leoporoli, mieubrisse, peeeekay


autogpt-package's Issues

Can't run AutoGPT with multiple plugins

While working with AutoGPT we came across the following bug -

When we try to run kurtosis autogpt with multiple official plugins, i.e. plugins that all belong to the same repo (https://github.com/Significant-Gravitas/Auto-GPT-Plugins):

kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt '{"OPENAI_API_KEY": "YOUR_API_KEY_HERE", "ALLOWLISTED_PLUGINS": "twitter, email"}'

Both twitter and email are plugins available in the official Auto-GPT plugin repo.

We see this error

There was an error executing Starlark code 
An error occurred executing instruction (number 9) at github.com/kurtosis-tech/autogpt-package[130:14]:
  exec(service_name="autogpt", recipe=ExecRecipe(command=["mkdir", "/app/autogpt/plugins"]))
  Caused by: Exec returned exit code '1' that is not part of the acceptable status codes '[0]', with output:
  "mkdir: cannot create directory ‘/app/autogpt/plugins’: File exists\n"

Error encountered running Starlark code.

This happens because the package tries to create the same directory multiple times while setting up the plugins for autogpt.
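
A minimal fix sketch: passing -p to mkdir makes the call idempotent, so it exits 0 even when the directory already exists. The line below simply mirrors the failing instruction from the error output with that one flag added; it is an illustration, not the package's actual code.

  # "mkdir -p" succeeds even if /app/autogpt/plugins already exists,
  # so running this once per plugin no longer fails with exit code 1.
  exec(service_name="autogpt", recipe=ExecRecipe(command=["mkdir", "-p", "/app/autogpt/plugins"]))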

Use official image (v0.3.0 or later) for plugins

Tried running this with v0.3.0, but that fails with the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/app/autogpt/__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
Revert "use official image (#25)"
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/app/autogpt/cli.py", line 87, in main
    from autogpt.main import run_auto_gpt
  File "/app/autogpt/main.py", line 22, in <module>
    from scripts.install_plugin_deps import install_plugin_dependencies
ModuleNotFoundError: No module named 'scripts'

Revert "use official image (#25)"

For now, I have reverted back to the unofficial image published on my Docker Hub.

[GPT4ALL] Leverage already-downloaded gpt4all models from official GPT4ALL desktop client

Hi Kurtosis team C:

Thank you for adding gpt4all model usage support!
With it, comes the problem of model size. I think an elegant solution / enhancement, which I believe is possible, would be the usage of any models which were already downloaded by a user using the standard GPT4ALL client application. This, with the help of LocalAI of course, is doable as far as I can tell, and would save a lot of time, as well as integrate seamlessly with gpt4all.

This makes the most sense in a macOS or Windows environment, where a desktop environment is much more likely to be involved, as in this example recording of the GPT4ALL client application on Windows 10.

(attachment: chat_RwyFxpSIWz.mp4)

However, especially given the use cases that autogpt attracts, it is naive to assume instances will have a desktop installed at all. gpt4all is guilty of this assumption: their README offers no CLI instructions, and even their build_and_run instructions are extremely visual, requiring specific dependencies and applications that are not CLI-friendly.

Perhaps it is not as easily possible as I think it is, and that is why gpt4all haven't provided instructions for the process. But personally I think that, especially given the aforementioned build_and_run explanation, you could implement a system that allows users to download gpt4all models through kurtosis itself¹, once per model, and then access / utilize them in autogpt-package as desired.
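
For illustration only: if the package accepted a local file path in place of the MODEL_URL argument (I don't believe it does today, so treat this as a sketch of the request), reusing a model the desktop client has already downloaded might look like the following. The nomic.ai directory is my assumption about the client's default download location on macOS.

kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt '{"GPT_4ALL": true, "MODEL_URL": "/Users/you/Library/Application Support/nomic.ai/GPT4All/ggml-gpt4all-l13b-snoozy.bin"}'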

Once again, thank you guys for making this already extremely complicated field a lot more approachable. Having dived head-first into this stuff a while ago, I am very happy to see you guys working on a project like this, and I really appreciate the way you have responded to feedback. This project needs more eyes on it.

Jason

Footnotes

  1. Perhaps not directly through the kurtosis CLI, but through a subcommand of the autogpt-package, or something along those lines.

[IDEA] Support plugin allowlist in Array format, not just a comma-delimited list of names in a string

Summary

The plugin named AutoGPTApiTools causes an error to be thrown, as shown in the following image

(screenshot: kurtosis startup error)

The kurtosis call is

kurtosis run github.com/kurtosis-tech/autogpt-package@$AUTOGPT_VERSION --enclave autogpt "$(cat $ENV_JSON_FILE)"

Notably, I am using AUTOGPT_VERSION="0.3.1". Could this be the issue? If so, I believe supporting plugins made for the current 0.4.x versions of Auto-GPT wouldn't actually break anything when used with 0.3.1, unless those plugins use something specific to 0.4.x.

Let me know if anyone has any ideas on this one, as I'm confused. Thanks guys


Epilogue

Before posting this I realized what the problem is. Essentially, I am storing my ALLOWLISTED_PLUGINS like this in the json file:

 "ALLOWLISTED_PLUGINS":
    [
        "AutoGPTApiTools",
        "AutoGPTEmailPlugin",
        "PlannerPlugin",
        "autogpt-random-values",
        "AutoGPTTelegram",
        "wikipedia-search",
        "autogpt-wolframalpha-search",
        "AutoGPTDiscord",
        "AutoGPTMessagesPlugin",
        "AutoGPTWebInteraction",
        "AutoGPTWolframAlpha"
    ]

when of course it is expected as a comma-separated list of plugin names within a single string, just as in the Auto-GPT .env file. Personally, I prefer the array structure, as I think it is cleaner and clearer, and would appreciate seeing it supported in some capacity. It is, however, a largely cosmetic issue, so I understand if you guys don't want to stray from the way Auto-GPT currently does it. I would suggest supporting both versions and simply checking the JSON to determine whether the ALLOWLISTED_PLUGINS key is of type Array or of type String. But I can also just, ya know, flatten it myself... I guess...
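
For what it's worth, the type check would be tiny in the Kurtosis Starlark the package is written in. A minimal sketch, with a hypothetical helper name (not existing package code):

  # Accept ALLOWLISTED_PLUGINS as either a JSON array or the legacy
  # comma-separated string, and normalize to the string Auto-GPT expects.
  def normalize_allowlisted_plugins(value):
      if type(value) == "list":
          return ", ".join(value)
      return value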

Built-in Support for Local/Unpublished/Private Auto-GPT Plugins

Creating a new issue from discussion within #85

🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins

I think it would be a very useful feature to allow users who are actively developing, working with a proprietary codebase, or for any other reason deploying autogpt with currently unpublished forks of Auto-GPT-Plugin-Template to attach their local code to an instance of autogpt-package.

As for the implementation of this, I am less clear on how it would work, as I don't fully understand kurtosis under the hood. However, I do think it is possible. I also always think that... but I digress.

🌟 This would, if this direction is taken, be facilitated significantly by allowing kurtosis to mount local volumes into images/enclaves. I feel that since this has come up in a few of the discussions on the matter, it should be added as a feature to the kurtosis platform -- in my opinion of course.

✉ meta-conversation

@h4ck3rk3y @mieubrisse Thanks for replying to my issues! Not everyone responds well to a carefully formatted and thoroughly thought-out compilation of critiques of their project. I've been looking further into your Kurtosis project, and I'm liking what I'm seeing.

have a good one C:

[Various] Proposed Functionalities / Roadmap (from some guy)

Hello kurtosis tech team!

I have various thoughts about this project, all of which are simply ways to improve -- at least as I see it -- the usability, execution speed, functional application space, or simply the quality of life. This is to say, I think the current state of this project is already great.

BUT...

Here is my vision for how this package could be used which would, for me at least, supercharge usage of this project.

💬 Suggestions

These are not provided in any particular order, and it may well be that some should definitely be implemented before others. I leave that up to the more knowledgeable.

I guess you can't have scroll-to ID links to headers in issues, or I'm being dumb... Either way, sorry: the links above are, save the first, bogus.

🧱 Supporting Local/Unpublished/Private Auto-GPT Plugins

I think it would be a very useful feature to allow users who are actively developing, working with a proprietary codebase, or for any other reason deploying autogpt with currently unpublished forks of Auto-GPT-Plugin-Template to attach their local code to an instance of autogpt-package.

As for the implementation of this, I am less clear on how it would work, as I don't fully understand kurtosis under the hood. However, I do think it is possible. I also always think that... but I digress

📦 Adding autogpt Branch/Version Specification at Runtime

A method with which we could specify the version of autogpt we want to use would be very helpful, especially given the drastic changes from version to version -- such as the scrapping of the MEMORY_BACKEND system in Auto-GPT v0.4.0.
Currently, I would like to be able to use autogpt version 0.3.1, which lives under the branch stable-0.3.1 in the Auto-GPT repository.
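
Purely to illustrate the ask, an invocation might look like this (the AUTOGPT_BRANCH argument is hypothetical; no such argument exists in the package today):

kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt '{"OPENAI_API_KEY": "YOUR_API_KEY_HERE", "AUTOGPT_BRANCH": "stable-0.3.1"}'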

📑 Creating a Wiki / Nested Documentation

While you guys have definitely documented things well enough to use your project through the README.md alone, I think it's time you had a full-on wiki. I think the easiest, most user-friendly way to do this would be to utilize GitHub repository Wikis, but there are various possible ways. I would be happy to help create such documentation, if you would like.

⏸ "Pausing" & "Unpausing" Instances for Prolonged Use / Execution

Finding a way to save the state of your autogpt-package instance, write it to a file or files, and then return to it at a later date would make utilizing this tool much nicer in terms of quality of life (in my opinion, of course).

This could also be a benefit for fault tolerance / error recovery, which would be a great thing to support -- both implemented by you guys, and by opening up a way for us to write our own conditions for "failure" and hook in our own subsequent handling systems.

👑 Constructing a "build" System for Configuration / Local Caching / Pausable Instances / etc

This one is by far the biggest ask. (save the best for last or something)

I am personally tired of having to provide all my arguments to the project in the form of a single CLI string. Escaping special characters aside, growth is ultimately hampered by this constraint, which in my opinion adds unnecessary overhead.
As an alternative, I propose a build system not unlike those of npm or setuptools for Node.js and Python respectively, involving a json¹ file, maybe by convention called agpkg_config.json², which would facilitate multiple executions of the same instance -- especially if pausing is supported. (somehow)

Here's my suggestion, in terms of file examples.

agpkg_config.jsonc³

// =============================================================================== //
/*
               __                    __                         __
  ____ ___  __/ /_____  ____ _____  / /_      ____  ____ ______/ /______ _____ ____
 / __ `/ / / / __/ __ \/ __ `/ __ \/ __/_____/ __ \/ __ `/ ___/ //_/ __ `/ __ `/ _ \
/ /_/ / /_/ / /_/ /_/ / /_/ / /_/ / /_/_____/ /_/ / /_/ / /__/ ,< / /_/ / /_/ /  __/
\__,_/\__,_/\__/\____/\__, / .___/\__/     / .___/\__,_/\___/_/|_|\__,_/\__, /\___/
                     /____/_/             /_/                          /____/
*/
// =============================================================================== //
// VERSION: ${VERSION}
// <insert project credits, authors, disclaimers, whatever else>
{
  "log_level": "DEBUG",      // simple stuff like log level can go here
  "model_type": "GPT4ALL",   // "OpenAI", "GPT4ALL", "llama.cpp", etc (potentially C:)
  "autogpt": {
    "branch": "stable-0.3.1",
    /* https://github.com/Significant-Gravitas/Auto-GPT/blob/master/.env.template
       This file (.env) includes all necessary configuration for autogpt, even removing the need to pass
       OpenAI API keys to `kurtosis` in plain text. */
    "env": "path/to/a/.env",
    "settings": "path/to/ai_settings.yaml"
    /* ...etc */
  },
  // if "model_type" is GPT4ALL... (or if you have some other want from the config process)
  "gpt4all": {
    "local": false, // if true, "model" should be a local filepath instead of a URL.
    /* if "local": true & "model" typeof URL, then a download of the model could be performed to a
       standard default location -- probably just ${CWD}/models/... -- wherein after the download is
       finished, the URL value of "model" is replaced with the local filepath to the newly downloaded
       model. (this is a bit much to ask, but would be very, very cool!) */
    "model": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin",
    "temperature": 0.28
    /* other LLM settings */
  },
  "network": {
    "proxies": [
      {
        "url": "http://localhost",
        "port": 80
      }
    ]
    // idk what else, VPN via OpenVPN support? or user-agent swapping? seems like it would be
    // specified in the autogpt specifications
  }
}

With this concept, a .env file in exactly the same format used by standard autogpt instances could be picked up off the shelf by a new user trying out autogpt-package. Same idea with ai_settings.yaml.

Along with that, all the specification for other things -- gpt4all configuration, networking, or any other custom layer you decide to put on top of autogpt in this package to increase its usefulness -- could live in one central file, and we wouldn't need to parse and merge .env files into json data and then escape it. It just seems like a more scalable approach in my eyes, but I could be wrong. (A rough sketch of the .env side follows.)
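
To show how little machinery the .env parsing would need, here is a minimal sketch in the same Starlark the package is written in; the function name is made up for illustration:

  # Parse an off-the-shelf Auto-GPT .env file into a key/value dict,
  # skipping blank lines and comments.
  def parse_env_file(contents):
      env = {}
      for line in contents.splitlines():
          line = line.strip()
          if line == "" or line.startswith("#") or "=" not in line:
              continue
          key, _, value = line.partition("=")
          env[key.strip()] = value.strip()
      return env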

I also understand this would NOT be as simple as I have put it out to be -- it would be a somewhat major undertaking for the codebase -- but I wanted to put my suggestion out here. Feel free to use it as you see fit; I will not be offended. C:


🎀 Summary

Here is the current README.md's method of initiating autogpt-package with a gpt4all model backend⁴:

$ kurtosis run github.com/kurtosis-tech/autogpt-package '{"GPT_4ALL": true, "MODEL_URL": "https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin"}'

Here is how I would have things work, in a perfect world for me -- which is obviously not everyone's perfect world.

$ kurtosis run github.com/kurtosis-tech/autogpt-package --enclave autogpt /path/to/agpkg_config.json

Alright, this is insanely long, and I apologize in advance! Hopefully something in here will inspire somebody. Thank you to everyone at kurtosis-tech again for making this package. It was the first way that I got Auto-GPT to work at all when I was first playing with it, and it was as seamless as it purported to be.
But then I wanted more... hah.

Cheers,
Jason, or zod

Footnotes

  1. Or yaml, or even just xml if you don't like us very much. Format doesn't really matter here.

  2. This should be able to be overridden by a CLI flag, perhaps --build-configuration <filename.json>.

  3. This is just a simple extension of json which supports multi-line, single-line, and inline comments using JS syntax. Here is a parser from Microsoft written for Node.js. This is not required by any means; plain json could be used, & we developers can precompile our jsonc or jsonl into JSON-conforming structures on our own time.

  4. I realize that the README.md document, while replete with information about the actual use of the repository, is a bit scattered. I think you guys should add either just a docs/ directory with various .md files, or a full-on GitHub Pages Wiki -- but I will make a separate issue for that.

Add gpt4all support / Add autogpt4all script packaging?

nomic-ai/gpt4all has quickly been rising in popularity as an alternative to OpenAI's GPT models that can be run entirely locally, using various generative pre-trained transformer model types, as described in detail in the aforementioned repository's README.md.

Fortunately, another GitHub user already created aorumbayev/autogpt4all, which is a bash script allowing the use of gpt4all models with autogpt through a go-skynet/LocalAI server, as described in that repository. Perhaps you guys at Kurtosis would consider building a *-package repository for this script as well.

I wanted to mention this as I really like Kurtosis, but currently I can't accomplish what I mean to do with it, as I do not have a GPT4 API key.

Hopefully you consider this!
Be well

Support Weaviate

Both Weaviate and Milvus are OSS and can replace Redis in Auto-GPT. I am thinking of a UX like:

kurtosis run package-name '{"OPENAI_API_KEY": "", "backend": "milvus|local|redis|weaviate"}'

This can default to Redis.
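
The dispatch this implies would be small. A minimal sketch in the package's Starlark, with an illustrative (not existing) function name:

  # Validate the requested memory backend, defaulting to redis.
  def get_memory_backend(args):
      backend = args.get("backend", "redis")
      if backend not in ["milvus", "local", "redis", "weaviate"]:
          fail("unsupported backend: " + backend)
      return backend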
