
jgravelle / autogroq

1.1K 1.1K 393.0 2.67 MB

AutoGroq is a groundbreaking tool that revolutionizes the way users interact with Autogen™ and other AI assistants. By dynamically generating tailored teams of AI agents based on your project requirements, AutoGroq eliminates the need for manual configuration and allows you to tackle any question, problem, or project with ease and efficiency.

Home Page: https://autogroq.streamlit.app/

Python 98.75% CSS 1.25%
agents ai artificial-intelligence autogen crewai groq llm

autogroq's People

Contributors

jgravelle

autogroq's Issues

A conform map (as configuration) for model names when exporting from AutoGroq

Reason: I have not found a model API that works in both AutoGroq and Autogen.

The main problem with Autogen seems to be that the Groq API limits the request rate to a very low value. (Remember that you had to add special handling for this in AutoGroq as well?)

So, once the export from AutoGroq to Autogen has been done (the button press works nicely), what do you do with the result now sitting in Autogen? You cannot use the Groq models.

I went as far as using the fireworks.ai inference API (somewhat similar to the Groq API) in Autogen, but it is clumsy to have to go through everything exported from AutoGroq and then manually change it in Autogen to some model that might work with the AutoGroq-generated material.

At this point an awful, heretical, blasphemous idea is born: maybe the fireworks.ai API could be used in AutoGroq itself? Then both AutoGroq and Autogen could use the same model.

In the meantime: any idea for the least-manual-effort way of making the models align between AutoGroq and Autogen, so that work continues immediately after the AutoGroq export? Maybe defining a default model for the exported configuration in Autogen/CrewAI etc.? Or something better: a conform map?

Why the fireworks.ai allure? It has much higher rate limits. In fact, in my very first contact the account executive promised the rate could be increased to 2,000 RPM if I want.

In summary: if you have any example of actually continuing the work smoothly (in Autogen Studio) after exporting from AutoGroq, that would surely be appreciated.

As a feature request: a conform map (as configuration) for model names when exporting from AutoGroq. That would at least resolve the model-name differences between the various APIs.

Regards, Mr. Jalo Ahonen

P.S. A list of a few fireworks.ai models I have tried in Autogen Studio.

Structure in Autogen; for example, the first one:

base_url = https://api.fireworks.ai/inference/v1
model name = accounts/fireworks/models/firefunction-v1

(Yes, those are the longish model names. Not quite like in AutoGroq, even when using Mixtral.)

$0/M Tokens
32,768 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/firefunction-v1

$0.9/M Tokens
200,000 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/yi-34b-200k-capybara

$0.2/M Tokens
32,768 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/openorca-7b

$0.9/M Tokens
65,536 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/mixtral-8x22b-instruct-hf

$0.9/M Tokens
65,536 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/mixtral-8x22b-instruct

$0.5/M Tokens
32,768 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/mixtral-8x7b-instruct-hf

$0.5/M Tokens
32,768 Max Context
https://api.fireworks.ai/inference/v1
accounts/fireworks/models/mixtral-8x7b-instruct
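The conform map requested above can be sketched in a few lines. This is a hypothetical illustration, not AutoGroq's actual code; the one mapping entry uses model names taken from this issue, and `conform_model_name` is an invented helper:

```python
# Hypothetical sketch of the requested "conform map": a configuration dict
# that rewrites AutoGroq/Groq model names into the names another backend
# (here fireworks.ai) expects at export time.

CONFORM_MAP = {
    "mixtral-8x7b-32768": "accounts/fireworks/models/mixtral-8x7b-instruct",
}

def conform_model_name(name, default=None):
    """Return the mapped model name, else a configured default,
    else the original name unchanged."""
    if name in CONFORM_MAP:
        return CONFORM_MAP[name]
    return default if default is not None else name
```

Applying such a function to every model field during export would cover both requests in this issue: the per-API renaming and the optional default model.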

Yaml Files for CrewAI

If you could put the agents in an agents.yaml file and include a task.yaml file, you'd be legendary.
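For reference, a minimal sketch of what such files could look like. The field names follow CrewAI's YAML configuration convention; the agent and task contents below are invented examples, not AutoGroq output:

```yaml
# agents.yaml (illustrative)
researcher:
  role: "Researcher"
  goal: "Gather the background material the project needs"
  backstory: "An analyst who digs up relevant sources quickly."

# task.yaml (illustrative)
research_task:
  description: "Collect and summarize sources for the user's request."
  expected_output: "A short, sourced summary."
  agent: researcher
```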

Incorrect API key provided: None. with OpenAI

I'm trying to run AutoGroq with OpenAI, but even if I add the key to config_local.py, or even to config.py directly, I still get:

Request failed. Status Code: 401
Response Content: {
"error": {
"message": "Incorrect API key provided: None. You can find your API key at https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": "invalid_api_key"
}
}

Debug: Rephrased text: None
Error: Failed to rephrase the user request.
2024-06-13 12:29:09.016 Please replace st.experimental_rerun with st.rerun.

st.experimental_rerun will be removed after 2024-04-01.

Am I doing something wrong, or can you replicate this behavior?
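The 401 shows the literal string "None" reaching the API, so the key lookup is falling through every source. A common defensive pattern (a sketch with an invented helper name, not AutoGroq's actual code) is to check secrets, then the environment, then the config value, and fail loudly rather than send an empty key:

```python
import os

def resolve_api_key(secrets, config_value=None, name="OPENAI_API_KEY"):
    """Hypothetical lookup order: Streamlit secrets -> environment ->
    config.py value. Raises instead of sending the literal string
    'None' to the API, which is exactly the 401 shown above."""
    key = secrets.get(name) or os.environ.get(name) or config_value
    if not key or key == "None":
        raise ValueError(f"{name} is not set")
    return key
```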

No module named 'utils.auth_utils'

Sorry for the silly ask, but I tried a few times following the instructions from the video, and I tried from the Readme...
I added the Groq key to the terminal and to the system path,
but I get the same problem.

ModuleNotFoundError: No module named 'utils.auth_utils'
Traceback:
File "C:\Users\win\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "E:\TRIAL\AutoGroq\AutoGroq\AutoGroq\main.py", line 5, in <module>
    from agent_management import display_agents
File "E:\TRIAL\AutoGroq\AutoGroq\autogroq\agent_management.py", line 9, in <module>
    from utils.auth_utils import get_api_key

Any ideas?
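The traceback shows main.py living inside a nested AutoGroq\AutoGroq\AutoGroq directory, so `utils` is likely not on the import path when Streamlit is launched from elsewhere. One workaround (a sketch, not the project's actual fix) is a small bootstrap at the top of main.py that makes the app's own directory importable:

```python
# Hypothetical bootstrap for the top of main.py: make the app's own
# directory importable (so `utils.auth_utils` resolves) regardless of
# which directory `streamlit run` was launched from.
import os
import sys

try:
    APP_DIR = os.path.dirname(os.path.abspath(__file__))
except NameError:  # e.g. an interactive session without __file__
    APP_DIR = os.getcwd()

if APP_DIR not in sys.path:
    sys.path.insert(0, APP_DIR)
```

Running `streamlit run main.py` from inside the directory that contains the `utils` folder should have the same effect without any code change.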

FEATURE REQUEST: The ability to add additional agents.

FEATURE REQUEST: The ability to add additional agents, much like the add-new-skills feature. I am able to get the agents to suggest additional agents for the team, and it would be SOOOO much easier to do this in AutoGroq vs. Autogen/Studio. Thanks for all you do!!!

wrongly escaped underscore in skills generation + better prompt for skills

Seeing a lot of \_ in generated skills...
This breaks both the name-recognition code and most of the Python code, requiring hand edits.
I added a quick .replace("\\_", "_") to fix it, but I'm unsure of the best answer (and unsure why it's escaping those).
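The quick fix above can be wrapped in a tiny helper (a hypothetical name; this mirrors the one-line .replace() described, nothing more):

```python
def unescape_generated_code(text):
    """Hypothetical helper mirroring the quick fix above: strip the
    spurious backslash escapes (\\_) some models emit in generated
    Python, which break both identifier matching and the code itself."""
    return text.replace("\\_", "_")
```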

Also here's my current improved prompt for better skills, based on best practices from Autogen Studio examples:

def get_generate_skill_prompt(rephrased_skill_request):
    return f'''
Based on the rephrased skill request below, please do the following:

1. Do step-by-step reasoning and think to understand the request better.
2. Code the best Autogen Studio Python skill as per the request, as a [skill_name].py file.
3. Return only the skill file, with no commentary, intro, or other extra text. If there ARE any non-code lines, prepend them with a '#' symbol to comment them out.
4. A proper skill will have these parts:
   a. Imports (import libraries needed for the skill)
   b. Function definition AND docstring (this helps the LLM understand what the function does and how to use it)
   c. Function body (the actual code that implements the function)
   d. (optional) Example usage, ALWAYS commented out
   Here is an example of a well-formatted skill:

   # skill filename: save_file_to_disk.py
   # Import necessary module(s)
   import os

   def save_file_to_disk(contents, file_name):
       """
       Saves the given contents to a file with the given file name.

       Parameters:
       contents (str): The string contents to save to the file.
       file_name (str): The name of the file, including its extension.

       Returns:
       str: A message indicating the success of the operation.
       """
       # Ensure the directory exists; create it if it doesn't
       directory = os.path.dirname(file_name)
       if directory and not os.path.exists(directory):
           os.makedirs(directory)

       # Write the contents to the file
       with open(file_name, 'w') as file:
           file.write(contents)

       return f"File {{file_name}} has been saved successfully."

   # Example usage:
   # contents_to_save = "Hello, world!"
   # file_name = "example.txt"
   # print(save_file_to_disk(contents_to_save, file_name))

Rephrased skill request: "{rephrased_skill_request}"
'''

ImportError: cannot import name 'getStringIO' from 'reportlab.lib.utils'

I pulled the latest commit. This is the error I get when running main.py:

File "C:\Users\jsars\anaconda3\envs\AutoGroq\Lib\site-packages\xhtml2pdf\xhtml2pdf_reportlab.py", line 20, in <module>
    from reportlab.lib.utils import flatten, open_for_read, getStringIO,
ImportError: cannot import name 'getStringIO' from 'reportlab.lib.utils' (C:\Users\jsars\anaconda3\envs\AutoGroq\Lib\site-packages\reportlab\lib\utils.py)

Dynamic path to database

Hi J,

I downloaded Autogroq to test it out.

I had an issue with it not connecting to the Autogenstudio database.

I see you had it hard coded in config.py.

Here is the update to make that dynamic:

# Get the user's home directory
import os
home_dir = os.path.expanduser("~")

# Database path
AUTOGEN_DB_PATH = f'{home_dir}/.autogenstudio/database.sqlite'

LMStudio unexpected keyword argument 'api_key'

st.experimental_rerun will be removed after 2024-04-01.
Debug: Handling user request for session state: {'discussion': '', 'rephrased_request': '', 'api_key': '', 'agents': [], 'whiteboard': '', 'reset_button': False, 'uploaded_data': None, 'model_selection': 'xtuner/llava-llama-3-8b-v1_1-gguf', 'current_project': <current_project.Current_Project object at 0x000002B9215389D0>, 'max_tokens': 2048, 'LMSTUDIO_API_KEY': 'lm-studio', 'skill_functions': {'execute_powershell_command': <function execute_powershell_command at 0x000002B921CD8220>, 'fetch_web_content': <function fetch_web_content at 0x000002B9210F5300>, 'generate_sd_images': <function generate_sd_images at 0x000002B921CCDEE0>, 'get_weather': <function get_weather at 0x000002B921CD85E0>, 'save_file_to_disk': <function save_file_to_disk at 0x000002B9238EAB60>}, 'selected_skills': [], 'autogen_zip_buffer': None, 'show_request_input': True, 'discussion_history': '', 'rephrased_request_area': '', 'crewai_zip_buffer': None, 'temperature': 0.3, 'previous_user_request': 'what is an llm', 'model': 'xtuner/llava-llama-3-8b-v1_1-gguf', 'skill_name': None, 'last_agent': '', 'last_comment': '', 'skill_request': '', 'user_request': 'what does 1 + 1 equal?', 'user_input': '', 'reference_html': {}, 'reference_url': ''}
Debug: Sending request to rephrase_prompt
Debug: Model: xtuner/llava-llama-3-8b-v1_1-gguf
Executing rephrase_prompt()
Error occurred in handle_user_request: LmstudioProvider.__init__() got an unexpected keyword argument 'api_key'

# User-specific configurations

LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "your_openai_api_key"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

elif LLM_PROVIDER == "lmstudio":
    API_URL = LMSTUDIO_API_URL
    MODEL_TOKEN_LIMITS = {
        'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    }

MODEL_CHOICES = {
    'default': None,
    'gemma-7b-it': 8192,
    'gpt-4o': 4096,
    'xtuner/llava-llama-3-8b-v1_1-gguf': 2048,
    'llama3': 8192,
    'llama3-70b-8192': 8192,
    'llama3-8b-8192': 8192,
    'mixtral-8x7b-32768': 32768
}
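The error above says the provider's initializer does not accept `api_key` while the caller passes one. A defensive fix (this class body is an illustrative sketch, not AutoGroq's actual implementation) is to accept and ignore a key in providers that do not need one, since a local LM Studio server takes any placeholder:

```python
class LmstudioProvider:
    """Illustrative sketch only (not AutoGroq's actual class): a local
    LM Studio server needs no real key, so accept and ignore one for
    interface parity with the other providers."""

    def __init__(self, api_url, api_key=None, **kwargs):
        self.api_url = api_url
        # LM Studio conventionally accepts any placeholder key.
        self.api_key = api_key or "lm-studio"
```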

Move the creation Prompts used to allow user changes

The various prompts actually used (for the skills, the agents, etc.) should live at least in a user config file so they can be edited, and/or gain some UI to edit and save them.

And while on the topic: using 'Autogen Studio python skill' in the skill-creation prompt gets better results than 'python skill', and giving an example showing the format (which Autogen actually can, and perhaps does, read for context) helps ensure the formatting is correct and that comments and arguments are correctly specified.
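One way to make the prompts user-editable, as requested, is to overlay a user file on built-in defaults. This is a sketch; `prompts.json`, `DEFAULT_PROMPTS`, and `load_prompts` are all invented names, and the default text is a placeholder:

```python
import json
import os

# Illustrative defaults; the real prompt texts would come from the app.
DEFAULT_PROMPTS = {
    "skill": "Code the best Autogen Studio Python skill for the request.",
}

def load_prompts(path="prompts.json"):
    """Merge user-edited prompts (hypothetical prompts.json) over the
    built-in defaults; a missing file simply means no overrides."""
    prompts = dict(DEFAULT_PROMPTS)
    if os.path.exists(path):
        with open(path) as f:
            prompts.update(json.load(f))
    return prompts
```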

How do i set up the api key?

I got this error message while trying to run the command "streamlit run AutoGroq/main.py". How do I set up the API key?

FileNotFoundError: No secrets files found. Valid paths for a secrets.toml file are: C:\Users\camer\.streamlit\secrets.toml, C:\Users\camer\Documents\AutoGroq\.streamlit\secrets.toml
Traceback:
File "C:\Users\camer\Documents\AutoGroq\autogroq_venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "C:\Users\camer\Documents\AutoGroq\AutoGroq\main.py", line 82, in <module>
    main()
File "C:\Users\camer\Documents\AutoGroq\AutoGroq\main.py", line 23, in main
    api_key = get_api_key()
              ^^^^^^^^^^^^^
File "C:\Users\camer\Documents\AutoGroq\AutoGroq\auth_utils.py", line 24, in get_api_key
    api_key = st.secrets.get(api_key_name)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 774, in get
File "C:\Users\camer\Documents\AutoGroq\autogroq_venv\Lib\site-packages\streamlit\runtime\secrets.py", line 305, in __getitem__
    value = self._parse(True)[key]
            ^^^^^^^^^^^^^^^^^
File "C:\Users\camer\Documents\AutoGroq\autogroq_venv\Lib\site-packages\streamlit\runtime\secrets.py", line 214, in _parse
    raise FileNotFoundError(err_msg)
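The traceback ends inside Streamlit's secrets loader, so creating a secrets.toml at either of the listed paths should clear the FileNotFoundError. A minimal sketch follows; the key name is an assumption based on the GROQ prompts elsewhere in this tracker, so check what name auth_utils actually looks up:

```toml
# C:\Users\camer\.streamlit\secrets.toml
GROQ_API_KEY = "gsk_your_key_here"
```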

ImportError when trying to run via LmStudio

Long-time listener, first-time caller. I just installed AutoGroq, set config_local.py for LM Studio, and received an error that API_URL couldn't be imported when I tried to run AutoGroq. This is the error:

ImportError: cannot import name 'API_URL' from 'config' (C:\Users\shake\Ag\Autogroq\AutoGroq\config.py)
Traceback:
File "C:\Users\shake\.conda\envs\Ag\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "C:\Users\shake\Ag\AutoGroq\AutoGroq\main.py", line 5, in <module>
    from agent_management import display_agents
File "C:\Users\shake\Ag\Autogroq\AutoGroq\agent_management.py", line 10, in <module>
    from utils.ui_utils import get_llm_provider, regenerate_json_files_and_zip, update_discussion_and_whiteboard
File "C:\Users\shake\Ag\Autogroq\AutoGroq\utils\ui_utils.py", line 13, in <module>
    from config import API_URL, LLM_PROVIDER, MAX_RETRIES, MODEL_TOKEN_LIMITS, RETRY_DELAY

This is my config_local.py, but all I changed was the LLM provider:

# User-specific configurations

LLM_PROVIDER = "LMSTUDIO"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "your_openai_api_key"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

Not sure what I could be doing wrong. LM Studio is up and running as a server, but I don't think that should really matter for the purposes of this error. Love what you're doing here, by the way; keep it up.
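One likely culprit: the config above sets LLM_PROVIDER = "LMSTUDIO" in uppercase, while a config branch shown elsewhere in this tracker compares against the lowercase "lmstudio"; if no branch matches, API_URL is never assigned and the ImportError follows. A case-normalizing sketch (illustrative, not the repo's actual config.py) avoids that failure mode:

```python
# Illustrative sketch: normalize the provider string before branching so
# "LMSTUDIO", "LmStudio", etc. all select the LM Studio URL.
LLM_PROVIDER = "LMSTUDIO"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"

provider = LLM_PROVIDER.strip().lower()
if provider == "groq":
    API_URL = GROQ_API_URL
elif provider == "lmstudio":
    API_URL = LMSTUDIO_API_URL
else:
    # Fail loudly here instead of with a confusing ImportError downstream.
    raise ValueError(f"Unknown LLM_PROVIDER: {LLM_PROVIDER!r}")
```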

non-existent skills?

Hi,
I used AutoGroq to create Autogen agents and downloaded the JSON files.
There are files for the agents and the workflow; however, the agent files list skills that don't exist, and there are no skill-definition files among the files I downloaded.

Is this a bug? If not, how do I add the skills?

{
  "type": "assistant",
  "config": {
    "name": "codebase_analyst",
    "llm_config": {
      "config_list": [
        { "model": "gpt-4" }
      ],
      "temperature": 0.1,
      "timeout": 600,
      "cache_seed": 42
    },
    "human_input_mode": "NEVER",
    "max_consecutive_auto_reply": 8,
    "system_message": "You are a helpful assistant that can act as Codebase Analyst who Analyzes local codebases, identifies areas for improvement, and defines refactoring strategies."
  },
  "description": "Analyzes local codebases, identifies areas for improvement, and defines refactoring strategies",
  "skills": [ "code_analysis", "software_architecture", "refactoring_techniques" ],
  "tools": [ "sonarqube", "codeheat", "codemetrics" ]
}

When I try to add this through Autogen Studio, I get an "unprocessable entity" error.

you really need to fix the config.py problem

81bffd9

You reverted the Autogen DB location back to your personal config.

PLEASE fix the config.py problem; there are ways to make it work with Streamlit.
config.py should either be user-changed and NOT in the repo, or in the repo but getting its values from some other file that is NOT in the repo. The moment a user changes config.py, the next time (and every time) they git pull, they have a conflict to deal with.
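The second option described above can be sketched as a layered config. The file names here are the repo's own; the default value and the pattern itself are illustrative, not the project's current code:

```python
# config.py (sketch): repo-tracked defaults, overridden by an untracked
# config_local.py, so `git pull` never conflicts with user edits.

AUTOGEN_DB_PATH = "~/.autogenstudio/database.sqlite"  # tracked default

try:
    from config_local import *  # noqa: F401,F403  (gitignored overrides)
except ImportError:
    pass  # no local overrides present; the defaults above apply
```

With config_local.py listed in .gitignore, each user keeps their own paths and keys without ever touching the tracked file.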

Skills versus Tools?

I'm confused about how you are using Skills vs. Tools. You prompt for both, and make JSON for both...
Skills are specific files of Python code that perform 'this action'.
Tools are used by OpenAI to perform 'this action'.
But in the actual Autogen Studio code, 'tools' are used to call skills.

This seems redundant, and I'm curious why we need both created. Skills are the right answer; "tools" are only how skills get called in the final code.

(I'm trying to write a better set of prompts)

headsup: newest autogenstudio changes the db tables

@jgravelle: while trying to track down a bug in Autogen Studio, I updated to the very latest code:

https://pypi.org/project/autogenstudio/0.0.56rc12/
pip install autogenstudio==0.0.56rc12

Ignore that the current code claims to be 0.0.56 but is dated March 21st; the release above is from May 30th.

This is all related to the new changes from this issue:
microsoft/autogen#2425
which went live three weeks ago, but the code hasn't been published to pip etc. yet (due to the version issue above).

The database.sqlite tables changed, which confused the hell out of me, because the old tables were still there, but unused (i.e. Autogen Studio suddenly lost all of my data, even though it was still in the DB).

The tables changed quite a bit, not merely the names (which went from plurals like skills to skill), so you'll have to update it all soonish.

Best answer: unless they write a conversion from old to new, export everything to JSON files and then blow your old DB away (or move/rename it, if you don't live on the edge), then update and run Autogen Studio and let it create the new DB with just the new tables. Running the new code against an older DB makes a mess: you end up with both old and new tables.
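To see whether a given database.sqlite has the old table set, the new one, or (after the mess described above) both, you can list its tables directly. A small self-contained sketch:

```python
import sqlite3

def list_tables(db_path):
    """List the table names in a SQLite file, e.g. to spot old plural
    tables (skills) sitting next to the new singular ones (skill)."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [row[0] for row in rows]
```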

Auto-moderate only seems to work for 2-3 agents

Auto-moderate only seems to work for 2-3 agents as far as adding Additional Inputs goes. Is this the expected behavior? It would be great to have all the agents instructing the next agent on what to do, rather than just stating what they plan to do themselves.

Use valid REST URLs and explain what is going on at each page

Your single-page app (or whatever Streamlit is doing) can be given URLs (pages) that reflect the content in the address bar, allowing us to see what we are doing.

It is currently not clear whether you allow editing the main request in "Enter your request" while editing an agent config. By offering to edit the data there, users might think it's a copy that needs to be edited to get better results, but when they do, they are actually editing the main input. ;(

So please implement simple UX practices: one URL to edit a resource (do NOT duplicate resources/form parts in other locations).

Error parsing JSON: Invalid \escape: line 2 column 8 (char 9)

{
"expert_name": "Data Analysis Expert",
"description": "This expert should be able to analyze the results of the operation and present them in a clear and concise manner.",
"skills": ["data_analysis"],
"tools": []
}
Error parsing JSON: Invalid \escape: line 2 column 8 (char 9)
Content: {
"expert_name": "Math Expert",
"description": "The primary role of this expert is to provide a thorough understanding of basic arithmetic operations and their application in real-world scenarios.",
"skills": ["calculate_area"],
"tools": []
},

# User-specific configurations

LLM_PROVIDER = "lmstudio"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "your_openai_api_key"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

The GROQ API Key must be entered every time

I added a new line to AutoGroq/Autogroq/config.py (as the third line from the top of the file):

GROQ_API_KEY = "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"

[Yes, that is not my real key above; only the length is the same.]

But every time I go to http://localhost:8501, AutoGroq still asks me to input the API key:

Enter the GROQ API Key:

After entering it, the output is: "API Key entered successfully."

As far as I understand, everything then works.

However, the issue remains: the GROQ API Key must be entered every time I go to http://localhost:8501.

For example, when going to a new tab (in the same browser), the prompt is:

Enter the GROQ API Key:
(the actual input field here)
Please enter the GROQ API Key to use the app.

CrewAI/Autogen File Names

Not a major issue. The tool is working perfectly fine, but in the videos the JSON files are named according to the role, whereas the ones my setup creates are just agent_0, agent_1, agent_2 and so on. Is this as intended, or am I missing something?

How do I create agents?

I enter a request like "make a crm" and it proceeds to make the re-engineered prompt, but no agents.

Agent Issue With AutoGen Studio

Hello,

Everything generates fine in AutoGroq and imports into the Autogen database just fine. I change the LLM to a local LLM running on LM Studio. Every time I try to run the agents and workflow generated by AutoGroq, I get an API key error; if I manually create the agents, it works fine.

Any suggestions?

Thank you!


Feature Request: drop down menu for LLM provider and models

I just checked the code briefly, but I don't understand why the provider and models are hardcoded.

It would be much better, and far more elegant, to have a drop-down menu that allows switching between providers, with the models obtained from the ones available.

Currently these are hardcoded, so we need to edit them manually and then restart the server...

html5lib Errors

I attempted to run the application and got the following error related to html5lib.

I have attempted to update html5lib, without errors, but it still fails.

I have uninstalled html5lib and then reinstalled it, without errors, and it still failed.

I am currently running this on Windows 10.

Any suggestions?

Here is the error:

PS D:\__AI-Projects\AutoGroq\AutoGroq> streamlit run .\main.py

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.126:8501

2024-05-17 22:47:20.577 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python312\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "D:\__AI-Projects\AutoGroq\AutoGroq\main.py", line 6, in <module>
    from agent_management import display_agents
File "D:\__AI-Projects\AutoGroq\AutoGroq\agent_management.py", line 8, in <module>
    from bs4 import BeautifulSoup
File "C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python312\Lib\site-packages\bs4\__init__.py", line 30, in <module>
    from .builder import builder_registry, ParserRejectedMarkup
File "C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python312\Lib\site-packages\bs4\builder\__init__.py", line 314, in <module>
    from . import _html5lib
File "C:\Users\%USERNAME%\AppData\Local\Programs\Python\Python312\Lib\site-packages\bs4\builder\_html5lib.py", line 70, in <module>
    class TreeBuilderForHtml5lib(html5lib.treebuilders._base.TreeBuilder):
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'html5lib.treebuilders' has no attribute '_base'. Did you mean: 'base'?
Stopping...
PS D:\__AI-Projects\AutoGroq\AutoGroq>

style.css Error

After running the application, we get the following error at the top of the browser window, above the GROQ API Key field:

CSS file not found: D:\__AI-Projects\AutoGroq\AutoGroq\AutoGroq\style.css

I believe it is looking for style.css in the wrong location, as it is stored within the project here:

D:\__AI-Projects\AutoGroq\AutoGroq\style.css
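The doubled AutoGroq segment suggests the path is being joined against the wrong base directory. Resolving the stylesheet relative to the running script's own location avoids that; a sketch with a hypothetical helper name (the real loader may differ):

```python
import os

def find_css(app_file, name="style.css"):
    """Hypothetical loader fix: resolve the stylesheet next to the
    running script instead of relative to the working directory,
    which avoids the doubled path segment shown above."""
    candidate = os.path.join(os.path.dirname(os.path.abspath(app_file)), name)
    if not os.path.exists(candidate):
        raise FileNotFoundError(f"CSS file not found: {candidate}")
    return candidate
```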

crashed app

Was just going back and forth with the agents and this occurred. Had a similar error earlier.

NameError: name 'url_content' is not defined
Traceback:
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 537, in _run_script
self._session_state.on_script_will_rerun(rerun_data.widget_states)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/streamlit/runtime/state/safe_session_state.py", line 68, in on_script_will_rerun
self._state.on_script_will_rerun(latest_widget_states)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 500, in on_script_will_rerun
self._call_callbacks()
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 513, in _call_callbacks
self._new_widget_state.call_callback(wid)
File "/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 260, in call_callback
callback(*args, **kwargs)
File "/teamspace/studios/this_studio/AutoGroq/AutoGroq/agent_management.py", line 22, in callback
process_agent_interaction(agent_index)
File "/teamspace/studios/this_studio/AutoGroq/AutoGroq/agent_management.py", line 222, in process_agent_interaction
request += f" Additional input: {user_input}. Reference URL content: {url_content}."

OpenAI Incorrect API key provided: None

Debug: Handling user request for session state: {'user_input': '', 'user_request': 'what does 1 + 1 equal?', 'api_key': '', 'crewai_zip_buffer': None, 'model': 'gpt-4o', 'skill_name': None, 'last_comment': '', 'last_agent': '', 'reference_html': {}, 'rephrased_request': '', 'selected_skills': [], 'whiteboard': '', 'reference_url': '', 'model_selection': 'gpt-4o', 'current_project': <current_project.Current_Project object at 0x000001BD170CD990>, 'show_request_input': True, 'api_key_input': 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx', 'skill_functions': {'execute_powershell_command': <function execute_powershell_command at 0x000001BD18A78220>, 'fetch_web_content': <function fetch_web_content at 0x000001BD17E49300>, 'generate_sd_images': <function generate_sd_images at 0x000001BD18A6DEE0>, 'get_weather': <function get_weather at 0x000001BD18A785E0>, 'save_file_to_disk': <function save_file_to_disk at 0x000001BD1A6AAB60>}, 'autogen_zip_buffer': None, 'max_tokens': 4096, 'rephrased_request_area': '', 'discussion': '', 'temperature': 0.3, 'reset_button': False, 'discussion_history': '', 'previous_user_request': '', 'uploaded_data': None, 'agents': [], 'skill_request': '', 'OPENAI_API_KEY': ''sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'}
Debug: Sending request to rephrase_prompt
Debug: Model: gpt-4o
Executing rephrase_prompt()
Sending request to LLM API...
Request Details:
Provider: None
llm_provider: <llm_providers.openai_provider.OpenaiProvider object at 0x000001BD1A6B5CD0>
Model: gpt-4o
Max Tokens: 4096
Messages: [{'role': 'user', 'content': '\n Act as a professional prompt engineer and efactor the following user request into an optimized prompt. Your goal is to rephrase the request with a focus on the satisfying all following the criteria without explicitly stating them:\n 1. Clarity: Ensure the prompt is clear and unambiguous.\n 2. Specific Instructions: Provide detailed steps or guidelines.\n 3. Context: Include necessary background information.\n 4. Structure: Organize the prompt logically.\n 5. Language: Use concise and precise language.\n 6. Examples: Offer examples to illustrate the desired output.\n 7. Constraints: Define any limits or guidelines.\n 8. Engagement: Make the prompt engaging and interesting.\n 9. Feedback Mechanism: Suggest a way to improve or iterate on the response.\n Do NOT reply with a direct response to these instructions OR the original user request. Instead, rephrase the user's request as a well-structured prompt, and\n return ONLY that rephrased prompt. Do not preface the rephrased prompt with any other text or superfluous narrative.\n Do not enclose the rephrased prompt in quotes. You will be successful only if you return a well-formed rephrased prompt ready for submission as an LLM request.\n User request: "what does 1 + 1 equal?"\n Rephrased:\n '}]
self.api_url: https://api.openai.com/v1/chat/completions
Response received. Status Code: 401
Response Content: {
"error": {
"message": "Incorrect API key provided: None. You can find your API key at https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": "invalid_api_key"
}
}

# User-specific configurations

LLM_PROVIDER = "openai"
GROQ_API_URL = "https://api.groq.com/openai/v1/chat/completions"
LMSTUDIO_API_URL = "http://localhost:1234/v1/chat/completions"
OLLAMA_API_URL = "http://127.0.0.1:11434/api/generate"
OPENAI_API_KEY = "'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"

ui_utils.py has extra args in call

Line 197: the call to "zip_files_in_memory" has 3 arguments, but the method only accepts 1.

I changed the call to match the other calls in that same file, and it seemed to work.

I don't (yet) know how to make PRs on GitHub, so I thought I would just put this here.

use gitignore for compiled python etc

It's a best practice to use .gitignore to exclude the __pycache__ directories, which are both auto-created and user-specific (Python version, machine, etc.).

Also, provide a sample config file to copy before first use, and remove the actual config from the repo; otherwise any git update balks due to the changes made to that config file.
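A minimal .gitignore covering both points above could look like this (the `config_local.py.example` name in the comment is a suggested convention, not an existing file):

```gitignore
# Byte-compiled caches
__pycache__/
*.py[cod]

# User-specific configuration (ship a config_local.py.example to copy from)
config_local.py
```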

Reflection + history management

Hi, thanks first of all for doing what I was hoping somebody would do, with a similar vision of building a meta-agent framework.

How about adding configurable reflection cycles for each agent, plus history management that allows intervening anywhere to fine-tune the steps taken and then letting it continue from there?
