
oldagixt-frontend's People

Contributors

birdup000, jamesonrgrieve, josh-xt


oldagixt-frontend's Issues

Can't access existing agent after docker container or instance has been restarted

Description

After the agent-llm instance or Docker container has been restarted, selecting an existing agent does nothing: the UI never navigates to that agent.

Steps to Reproduce the Bug

  1. Restart the instance or Docker container.
  2. Go to agent-llm.
  3. Select Agents.
  4. Try to select an existing agent.
  5. The issue occurs: nothing happens.

Expected Behavior

When clicking the agent it should bring up the configuration and settings for the agent.

Actual Behavior

It won't go to the agent settings when you try to select the existing agent.

Additional Context / Screenshots

On "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36"

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

unhandled runtime error

Description

This may be a backend issue: Josh-XT/AGiXT#360

When I click Send Message on the frontend for my Azure researcher agent, I get the response below.

Steps to Reproduce the Bug

  1. Generate an agent with Azure OpenAI via Streamlit.
  2. Start the backend with python3 app.py.
  3. Start the frontend with yarn dev.
  4. Select the Researcher agent in the frontend.
  5. Type a message.
  6. Press Send Message.
  7. The error occurs.

Expected Behavior

get a response

Actual Behavior

Unhandled Runtime Error

AxiosError: Network Error
Call Stack
AxiosError
node_modules/axios/lib/core/AxiosError.js (22:0)
handleError
node_modules/axios/lib/adapters/xhr.js (158:0)

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Clicking Start a Task on an agent, the run status and text log never appear

Description

After clicking Start a Task on an agent, the run status and the text log of what is happening never appear.

I can see in the Docker log that the backend seems to be making requests to the LLM, but the run status in the frontend is never updated, and the task log never appears in the frontend.

May or may not be related to bugs #1 and #2

Steps to Reproduce the Bug

  1. Run the frontend with docker compose and the agent-llm backend.
  2. Create an agent (in my case Oobabooga: http://localbox.lan:5000/).
  3. Assign it a task.
  4. Watch as nothing happens in the frontend, while the backend reports requests being sent to the LLM.

Expected Behavior

run status will update and log will appear

Actual Behavior

no run status change or log.

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Docker container NEXT_PUBLIC_API_URI Env doesn't work

Description

When I pull this frontend package as a container, it seems to access the backend API at http://localhost:7437. I tried changing the container environment variable NEXT_PUBLIC_API_URI to another address so the frontend could be accessed remotely, but the frontend container still accesses http://localhost:7437.

Steps to Reproduce the Bug

  1. Clone the Agent-LLM repository: git clone https://github.com/Josh-XT/Agent-LLM.
  2. cd Agent-LLM/
  3. Run sudo docker-compose up -d to start the containers.
  4. Change the agent-llm_frontend_1 container's NEXT_PUBLIC_API_URI environment variable via a management service such as Portainer.
  5. Save the changes and restart the container.
  6. From another computer's web browser, open http://{ip}:3000 and check the developer tools.

Expected Behavior

The frontend can access the backend just as it does locally.

Actual Behavior

A network error is shown, and the frontend still accesses http://localhost:7437.
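A likely explanation, though not confirmed in this issue: Next.js inlines NEXT_PUBLIC_* variables into the client-side JavaScript bundle at build time, so editing the environment of an already-built container has no effect on what the browser requests. Under that assumption, the URI would need to be supplied when the image is built. A hypothetical docker-compose override sketching this (the service name and build argument wiring are assumptions, not taken from the Agent-LLM repo):

```yaml
# Hypothetical override; assumes the frontend Dockerfile declares
# ARG NEXT_PUBLIC_API_URI and passes it to the `next build` step.
services:
  frontend:
    build:
      context: ./frontend
      args:
        # Baked into the client bundle at build time by Next.js
        NEXT_PUBLIC_API_URI: "http://192.168.1.10:7437"
```

With this approach, changing the value requires rebuilding the image (docker-compose up -d --build) rather than restarting the container.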

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Add option to add custom settings per agent

Problem Description

There are other settings for API keys that are not captured in agent settings and currently have to be supplied through the .env file.

Proposed Solution

An easy workaround is to let users add custom keys to their agent settings: if something like GOOGLE_API_KEY is missing, they could hit a + button, add it as a key, and give it a value assigned to the agent.

Alternatives Considered

No response

Additional Context

No response

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.

TypeError: sequence item 3: expected str instance, list found

Description

Bug when starting a new task:

Executing task 1: Develop a task list.

Exception in thread Thread-2 (run_task):
Traceback (most recent call last):
File "/home/brice/miniconda3/envs/agent/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/brice/miniconda3/envs/agent/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/brice/Agent-LLM/AgentLLM.py", line 364, in run_task
result = self.run(task=task["task_name"], prompt="execute")
File "/home/brice/Agent-LLM/AgentLLM.py", line 149, in run
formatted_prompt, unformatted_prompt = self.format_prompt(
File "/home/brice/Agent-LLM/AgentLLM.py", line 128, in format_prompt
formatted_prompt = self.custom_format(
File "/home/brice/Agent-LLM/AgentLLM.py", line 101, in custom_format
return re.sub(pattern, replace, string)
File "/home/brice/miniconda3/envs/agent/lib/python3.10/re.py", line 209, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: sequence item 3: expected str instance, list found
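The traceback points at custom_format in AgentLLM.py: one of the values being substituted into the prompt (sequence item 3) is a Python list rather than a string, so the join inside re.sub fails. A minimal sketch of the error class and a defensive fix; this is an illustration, not the actual AgentLLM.custom_format:

```python
import re

def custom_format(template: str, **kwargs) -> str:
    """Substitute {name} placeholders, coercing list values to text.

    Passing a list where a string is expected is what raises
    "TypeError: sequence item N: expected str instance, list found".
    """
    def replace(match: re.Match) -> str:
        value = kwargs.get(match.group(1), match.group(0))
        if isinstance(value, list):
            # Flatten list values (e.g. a task list) into newline-joined text
            value = "\n".join(str(item) for item in value)
        return str(value)

    return re.sub(r"\{(\w+)\}", replace, template)

# A list-valued field no longer crashes the substitution.
prompt = custom_format("Tasks so far:\n{tasks}", tasks=["plan", "search"])
```

Unknown placeholders are left in place rather than raising, which matches the forgiving behavior a prompt formatter usually wants.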

Steps to Reproduce the Bug

Ubuntu 22.04, Conda environment with Python 3.10, no Docker.

Launch the backend with python app.py and the frontend with yarn run dev.

Create a new agent, Oobabooga, and launch a task.

Expected Behavior

No error

Actual Behavior

==Output==

Executing task 1: Develop a task list.

Exception in thread Thread-2 (run_task):
Traceback (most recent call last):
File "/home/brice/miniconda3/envs/agent/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/home/brice/miniconda3/envs/agent/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/brice/Agent-LLM/AgentLLM.py", line 364, in run_task
result = self.run(task=task["task_name"], prompt="execute")
File "/home/brice/Agent-LLM/AgentLLM.py", line 149, in run
formatted_prompt, unformatted_prompt = self.format_prompt(
File "/home/brice/Agent-LLM/AgentLLM.py", line 128, in format_prompt
formatted_prompt = self.custom_format(
File "/home/brice/Agent-LLM/AgentLLM.py", line 101, in custom_format
return re.sub(pattern, replace, "".join(string))
File "/home/brice/miniconda3/envs/agent/lib/python3.10/re.py", line 209, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: sequence item 3: expected str instance, list found
INFO: 127.0.0.1:55720 - "GET /api/agent/Vicuna/task/status HTTP/1.1" 200 OK

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Yarn install and dev missing

Description

00h00m00s 0/0: : ERROR: [Errno 2] No such file or directory: 'dev'

Steps to Reproduce the Bug

Follow the install guide

Expected Behavior

yarn install and yarn dev complete and the frontend runs.

Actual Behavior

yarn fails with ERROR: [Errno 2] No such file or directory: 'dev'.

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Text appears in the agent text file but not in the gui.

Description

When you assign a task to the agent, the results do not show in the GUI, but the output is written to a text file in the agent folder.

(screenshot attachment)
Find 5 latest news headlines on May, 05, 2023 related to Technology..txt

Steps to Reproduce the Bug

Assign a task to the agent; the results will not show in the GUI, but the output is written to a text file in the agent folder.

Expected Behavior

For the output of the agent to appear in the GUI under the task agent.

Actual Behavior

(screenshot attachment)

Additional Context / Screenshots

There is output generated by the agent but not appearing in the GUI.
127.0.0.1 - - [07/May/2023 18:05:40] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 105.28 seconds (0.87 tokens/s, 92 tokens, context 24, seed 200956049)
127.0.0.1 - - [07/May/2023 18:08:28] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 496.88 seconds (0.64 tokens/s, 318 tokens, context 398, seed 763687500)
127.0.0.1 - - [07/May/2023 18:16:45] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 166.63 seconds (0.00 tokens/s, 0 tokens, context 409, seed 469111486)
127.0.0.1 - - [07/May/2023 18:19:32] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 100.66 seconds (0.59 tokens/s, 59 tokens, context 102, seed 4115488)
127.0.0.1 - - [07/May/2023 18:21:12] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 867.37 seconds (0.78 tokens/s, 673 tokens, context 416, seed 948141595)
127.0.0.1 - - [07/May/2023 18:35:40] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 357.99 seconds (0.00 tokens/s, 0 tokens, context 855, seed 588820203)
127.0.0.1 - - [07/May/2023 18:41:38] "POST /api/v1/generate HTTP/1.1" 200 -
Output generated in 116.96 seconds (0.50 tokens/s, 59 tokens, context 128, seed 755310477)
127.0.0.1 - - [07/May/2023 18:43:35] "POST /api/v1/generate HTTP/1.1" 200 -

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Can access the webUI, and make a new agent, but can't have the agent do anything with openAI API key

Description

I've edited the .env file with my OpenAI API key and configured the agent via the web UI. I've tried the default gpt-3.5-turbo too, but nothing happens when I try any of the Chat, Instruct, or Task Agent commands.

Any ideas? :3

Also, I do have access to the GPT-4 API; I can use it successfully with Auto-GPT.

Steps to Reproduce the Bug

Install the latest alpha version or the main version and follow the docker install instructions.

Expected Behavior

Expecting some type of response back, but get nothing after trying the chat, instruct, or task agent commands.

Actual Behavior

I click on "Start Pursuing Task", for example, and nothing happens.

Additional Context / Screenshots

(screenshot attachment)

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.

Error ECONNREFUSED

Description

When I start the frontend, this error appears:
error - unhandledRejection: Error: connect ECONNREFUSED ::1:7437
at AxiosError.from (file:///Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/axios/lib/core/AxiosError.js:89:14)
at RedirectableRequest.handleRequestError (file:///Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/axios/lib/adapters/http.js:591:25)
at RedirectableRequest.emit (node:events:513:28)
at eventHandlers. (/Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/follow-redirects/index.js:14:24)
at ClientRequest.emit (node:events:513:28)
at Socket.socketErrorListener (node:_http_client:502:9)
at Socket.emit (node:events:513:28)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
digest: undefined
}
Promise { }
{
mutate: [Function (anonymous)],
data: [Getter],
error: [Getter],
isValidating: [Getter],
isLoading: [Getter]
}
error - unhandledRejection: Error: connect ECONNREFUSED ::1:7437
at AxiosError.from (file:///Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/axios/lib/core/AxiosError.js:89:14)
at RedirectableRequest.handleRequestError (file:///Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/axios/lib/adapters/http.js:591:25)
at RedirectableRequest.emit (node:events:513:28)
at eventHandlers. (/Users/timur/Downloads/Agent-LLM-main/frontend/node_modules/follow-redirects/index.js:14:24)
at ClientRequest.emit (node:events:513:28)
at Socket.socketErrorListener (node:_http_client:502:9)
at Socket.emit (node:events:513:28)
at emitErrorNT (node:internal/streams/destroy:151:8)
at emitErrorCloseNT (node:internal/streams/destroy:116:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
digest: undefined
}
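ECONNREFUSED on ::1:7437 suggests the frontend resolved localhost to the IPv6 loopback while the backend was only listening on IPv4, a common mismatch on newer Node versions. A small diagnostic sketch (the port 7437 default is taken from the error above; everything else is illustrative):

```python
import socket

def probe(host: str, port: int = 7437, timeout: float = 2.0) -> bool:
    """Return True if something is accepting TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check both loopback addresses "localhost" might resolve to.
print("IPv4 127.0.0.1:7437 reachable:", probe("127.0.0.1"))
print("IPv6 ::1:7437 reachable:", probe("::1"))
```

If only the IPv4 probe succeeds, pointing the frontend at http://127.0.0.1:7437 instead of http://localhost:7437 is a plausible workaround.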

Steps to Reproduce the Bug

Start backend
Start frontend
Go to the agent page

Expected Behavior

No errors

Actual Behavior

This error occurs.

Additional Context / Screenshots

No response

Operating System

  • Microsoft Windows
  • Apple MacOS
  • Linux
  • Android
  • iOS
  • Other

Python Version

  • Python <= 3.9
  • Python 3.10
  • Python 3.11

Environment Type - Connection

  • Local
  • Remote

Environment Type - Container

  • Using Docker
  • Not Using Docker

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this bug has not been reported yet.
  • I am using the latest version of Agent-LLM.
  • I have provided enough information for the maintainers to reproduce and diagnose the issue.
