farama-foundation / chatarena

ChatArena (or Chat Arena) is a library of multi-agent language game environments for LLMs. The goal is to develop the communication and collaboration capabilities of AIs.

Home Page: https://www.chatarena.org/

License: Apache License 2.0

Languages: Python 98.92%, Jupyter Notebook 1.08%
Topics: large-language-models, multi-agent, natural-language-processing, multi-agent-reinforcement-learning, multi-agent-simulation, ai, python, artificial-intelligence, chatgpt, gpt-4

chatarena's Introduction

๐ŸŸ ChatArena

Multi-Agent Language Game Environments for LLMs

License: Apache2 | PyPI | Python 3.9+ | Twitter | Discord | Open In Colab | HuggingFace Space


ChatArena is a library that provides multi-agent language game environments and facilitates research into autonomous LLM agents and their social interactions. It provides the following features:

  • Abstraction: a flexible framework, based on the Markov decision process, for defining multiple players, environments, and the interactions between them.
  • Language Game Environments: a set of environments that help with understanding, benchmarking, or training agent LLMs.
  • User-friendly Interfaces: both a Web UI and a CLI for developing and prompt-engineering your LLM agents to act in environments.

ChatArena Architecture

Getting Started

Try our online demo: demo | Demo video

Installation

Requirements:

  • Python >= 3.7
  • OpenAI API key (optional, for using GPT-3.5-turbo or GPT-4 as an LLM agent)

Install with pip:

pip install chatarena

or install from source:

pip install git+https://github.com/chatarena/chatarena

To use an OpenAI model (e.g., GPT-3.5-turbo or GPT-4) as an LLM agent, set your OpenAI API key:

export OPENAI_API_KEY="your_api_key_here"
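
If you prefer, you can also set the key from inside Python before creating any LLM-backed players; a tiny sketch (the value is just a placeholder):

import os

os.environ["OPENAI_API_KEY"] = "your_api_key_here"  # placeholder; substitute your real key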

Optional Dependencies

By default, pip install chatarena only installs the dependencies necessary for ChatArena's core functionality. You can install optional dependencies with the following commands:

pip install chatarena[all_backends] # install dependencies for all supported backends: anthropic, cohere, huggingface, etc.
pip install chatarena[all_envs]     # install dependencies for all environments, such as pettingzoo
pip install chatarena[all]          # install all optional dependencies for full functionality

Launch the Demo Locally

The quickest way to see ChatArena in action is via the demo Web UI. To launch the demo on your local machine, first pip install chatarena with the extra gradio dependency, then git clone this repository, and finally run app.py from the root directory of the repository:

pip install chatarena[gradio]
git clone https://github.com/chatarena/chatarena.git
cd chatarena
gradio app.py

This will launch a demo server for ChatArena, and you can access it from your browser (port 8080).

Check out this video to learn how to use the Web UI: Web UI demo video

For Developers

For an introduction to the ChatArena framework, please refer to this document. For a walkthrough of building a new environment, check out the Colab notebook (Open In Colab).

Here we provide a compact guide on minimal setup to run the game and some general advice on customization.

Key Concepts

  1. Arena: an Arena encapsulates an environment and a collection of players. It drives the main loop of the game and provides HCI utilities like the web UI, the CLI, configuration loading, and data storage.
  2. Environment: the environment stores the game state and executes the game logic to transition between game states. It also renders observations for the players; the observations are in natural language.
    1. The game state is not directly visible to the players. Players can only see the observations.
  3. Language Backend: language backends are the source of language intelligence. They take text (or a collection of texts) as input and return text in response.
  4. Player: the player is an agent that plays the game. In RL terminology, it's a policy: a stateless function mapping observations to actions.

Run the Game with Python API

Load an Arena from a config file -- here we use examples/nlp-classroom-3players.json in this repository as an example:

from chatarena.arena import Arena

arena = Arena.from_config("examples/nlp-classroom-3players.json")
arena.run(num_steps=10)
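
If you prefer to construct an arena programmatically instead of loading a config file, a minimal sketch looks roughly like this (assuming the Player, OpenAIChat, Conversation, and Arena classes documented by the project; the role descriptions and prompts here are made up):

from chatarena.agent import Player
from chatarena.backends import OpenAIChat
from chatarena.environments.conversation import Conversation
from chatarena.arena import Arena

# A scenario description shared by all players
environment_description = "It is a university classroom; a professor and a student discuss NLP."

professor = Player(name="Professor", backend=OpenAIChat(),
                   role_desc="You are a professor teaching an NLP course.",
                   global_prompt=environment_description)
student = Player(name="Student", backend=OpenAIChat(),
                 role_desc="You are a student curious about language models.",
                 global_prompt=environment_description)

# A plain conversation environment over the two player names
env = Conversation(player_names=[p.name for p in (professor, student)])

arena = Arena(players=[professor, student], environment=env,
              global_prompt=environment_description)
arena.run(num_steps=10)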

Run the game in an interactive CLI interface:

arena.launch_cli()

Check out this video to learn how to use the CLI: CLI demo video. A more detailed guide on how to run the main interaction loop with finer-grained control can be found here.
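
As a rough illustration of that finer-grained control, a manual loop could look like the sketch below (assuming Arena exposes reset() and step() returning a TimeStep with a terminal flag, plus a save_history helper as part of the data-storage utilities mentioned above; the filename is hypothetical):

arena = Arena.from_config("examples/nlp-classroom-3players.json")
arena.reset()                       # clear the message history and environment state

for _ in range(10):                 # upper bound on the number of steps
    timestep = arena.step()         # the next player acts once
    if timestep.terminal:           # stop early if the environment declares the game over
        break

arena.save_history("history.csv")   # hypothetical path; persists the conversation for later analysis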

General Customization Advice

  1. Arena: overriding Arena basically means writing your own main loop. This allows different interaction interfaces or driving games in a more automated manner, for example running an online RL training loop.
  2. Environment: a new environment corresponds to a new game. You can define the game dynamics here with hard-coded rules or with a mixture of rules and a language backend.
  3. Backend: if you need to change how observations (in terms of messages) are formatted into queries for the language model, override the backend.
  4. Player: by default, when a new observation is fed in, players query the language backend and return the response as their action. You can also customize the way players interact with the language backend (see the sketch after this list).
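
As an illustration of item 4, one possible customization is a player that post-processes the backend's reply before returning it as an action. This is only a sketch under the assumption that Player exposes an act(observation) method returning the response text; the subclass name is made up:

import re

from chatarena.agent import Player

class CleanNamePlayer(Player):
    """A player that strips a leading "Name:" prefix echoed by the model."""

    def act(self, observation):
        response = super().act(observation)   # query the language backend as usual
        # Remove the agent name if the model repeated it at the start of the reply
        return re.sub(rf"^\s*{re.escape(self.name)}\s*:", "", response).strip()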

Creating your Custom Environment

You can define your own environment by extending the Environment class. Here are the general steps (a minimal skeleton follows the list):

  1. Define the class by inheriting from the Environment base class and setting type_name, then add the class to ALL_ENVIRONMENTS.
  2. Initialize the class by defining the __init__ method (its arguments define the corresponding config) and initializing the class attributes.
  3. Implement the game mechanics in the step method.
  4. Handle game states and rewards by implementing methods such as reset, get_observation, is_terminal, and get_rewards.
  5. Develop role-description prompts (and a global prompt if necessary) for the players using the CLI or Web UI, and save them to a config file.
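
Putting these steps together, a skeleton might look like the sketch below. It is illustrative only: the exact base-class signatures should be checked against the Environment class and the Chameleon tutorial, and anything not named in the steps above (for example get_next_player and the round-robin/termination rules) is an assumption:

from chatarena.environments import Environment, TimeStep, ALL_ENVIRONMENTS

class ExampleGame(Environment):                      # hypothetical environment, for illustration
    type_name = "example_game"                       # step 1: identifier used in config files

    def __init__(self, player_names, **kwargs):
        super().__init__(player_names=player_names, **kwargs)  # step 2: args define the config
        self.turn = 0
        self.history = []                            # game state lives inside the environment

    def reset(self):                                 # step 4: back to the initial state
        self.turn = 0
        self.history = []
        return TimeStep(observation=self.get_observation(),
                        reward=self.get_rewards(),
                        terminal=False)

    def get_observation(self, player_name=None):     # players only ever see this rendering
        return self.history

    def get_next_player(self):                       # assumed helper: simple round-robin order
        return self.player_names[self.turn % len(self.player_names)]

    def step(self, player_name, action):             # step 3: game mechanics
        self.history.append((player_name, action))
        self.turn += 1
        return TimeStep(observation=self.get_observation(player_name),
                        reward=self.get_rewards(),
                        terminal=self.is_terminal())

    def get_rewards(self):                           # step 4: zero reward until scoring is added
        return {name: 0 for name in self.player_names}

    def is_terminal(self):                           # step 4: made-up termination rule
        return self.turn >= 10

ALL_ENVIRONMENTS.append(ExampleGame)                 # step 1: register the new environment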

We provide a detailed tutorial to demonstrate how to define a custom environment, using the Chameleon environment as an example.

If you want to port an existing library's environment to ChatArena, check out the PettingzooChess environment as an example.

List of Environments

Conversation: a multi-player language game environment that simulates a conversation.

  • NLP Classroom: a 3-player language game environment that simulates a classroom setting. The game is played in turns, and each turn a player can either ask a question or answer a question. The game ends when all players have asked and answered all questions.

Moderator Conversation: based on Conversation, but with a moderator that controls the game dynamics.

  • Rock-paper-scissors: a 2-player language game environment that simulates a rock-paper-scissors game with moderator conversation. Both players act in parallel, and the game ends when one player wins 2 rounds.
  • Tic-tac-toe: a 2-player language game environment that simulates a tic-tac-toe game with moderator conversation. The game is played in turns, and each turn a player can either ask for a move or make a move. The game ends when one player wins or the board is full.

Chameleon: a multi-player social deduction game. There are two roles in the game: chameleon and non-chameleon. The topic of the secret word is first revealed to all the players; then the secret word itself is revealed only to the non-chameleons. The chameleon does not know the secret word. The objective in the game depends on the role of the player:

  • If you are not a chameleon, your goal is to reveal the chameleon without exposing the secret word.
  • If you are a chameleon, your aim is to blend in with the other players, avoid being caught, and figure out the secret word.

There are three stages in the game:

  1. The giving-clues stage: each player describes clues about the secret word.
  2. The accusation stage: each player votes for another player who is most likely the chameleon. The chameleon should vote for other players.
  3. The guess stage: if the accusation is correct, the chameleon guesses the secret word given the clues revealed by the other players.

PettingZoo Chess: a two-player chess game environment that uses the PettingZoo Chess environment.

PettingZoo TicTacToe: a two-player tic-tac-toe game environment that uses the PettingZoo TicTacToe environment. Unlike the Moderator Conversation environment, this environment is driven by hard-coded rules rather than an LLM moderator.

Contributing

We welcome contributions to improve and extend ChatArena. Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes to the new branch.
  4. Create a pull request describing your changes.
  5. We will review your pull request and provide feedback or merge your changes.

Please ensure your code follows the existing style and structure.

Citation

If you find ChatArena useful for your research, please cite our repository (our arXiv paper is coming soon):

@software{ChatArena,
  author = {Yuxiang Wu and Zhengyao Jiang and Akbir Khan and Yao Fu and Laura Ruis and Edward Grefenstette and Tim Rocktäschel},
  title = {ChatArena: Multi-Agent Language Game Environments for Large Language Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  version = {0.1},
  howpublished = {\url{https://github.com/chatarena/chatarena}},
}

Contact

If you have any questions or suggestions, feel free to open an issue or submit a pull request. You can also contact us on the Farama Discord server: https://discord.gg/Vrtdmu9Y8Q

Happy chatting!

Sponsors

We would like to thank our sponsors for supporting this project:

chatarena's People

Contributors

aidandos, andrewtanjs, crazyofapple, davidcincotta, dependabot[bot], dexhunter, edmundmills, elliottower, eunjiinkim, jkterry1, kwinkunks, pminervini, yuxiang-wu, zhengyaojiang


chatarena's Issues

Got a "TypeError: Tab.__init__() got an unexpected keyword argument 'visible'" when running 'gradio app'

Traceback (most recent call last):
  File "/home/jason/codes/chatarena/app.py", line 124, in <module>
    with gr.Tab("All", visible=True):
  File "/home/jason/anaconda3/lib/python3.11/site-packages/gradio/component_meta.py", line 146, in wrapper
    return fn(self, **kwargs)
TypeError: Tab.__init__() got an unexpected keyword argument 'visible'

Does it mean an incorrect version of gradio (4.1.1) was installed?

Update Anthropic Client

Anthropic changed their Python SDK, making this line of code outdated.

https://github.com/chatarena/chatarena/blob/fa6b374bb62fa7070454962eec6a9c88bc584d63/chatarena/backends/anthropic.py#L41

Would love to know if this might help - https://github.com/BerriAI/litellm

It's ~100 lines of code that standardizes all the LLM API calls to the OpenAI call format.

import os
from litellm import completion

## set ENV variables
# ENV variables can be set in a .env file, too. Example in .env.example
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["ANTHROPIC_API_KEY"] = "anthropic key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# anthropic call
response = completion("claude-v-2", messages)

Support Langchain Agents

Hi, I would love to try out ChatArena with the ability to simply import any existing LangChain agents.

Fix OpenAI errors

Hi @yuxiang-wu,

Tried to do something similar and hit a ton of OpenAI errors (Context window limitations, rate limiting with gpt-4, etc.).

Would recommend using something like reliableGPT for error handling - it'll do retries/model switching/etc. for you

import openai
from reliablegpt import reliableGPT

openai.ChatCompletion.create = reliableGPT(openai.ChatCompletion.create, ...)

Source: https://github.com/BerriAI/reliableGPT

AssertionError: openai package is not installed or the API key is not set

I am running into an error: AssertionError: openai package is not installed or the API key is not set.

I am certain that both the openai package is installed and the API key is set as the environment variable OPENAI_API_KEY.

openai version 1.11.1
chatarena version 0.1.18

When I run the following code in the same environment 'is_openai_available' resolves to True.

import os

try:
    import openai
except ImportError:
    is_openai_available = False
    # logging.warning("openai package is not installed")
else:
    try:
        client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
        is_openai_available = True
    except openai.OpenAIError:
        # logging.warning("OpenAI API key is not set. Please set the environment variable OPENAI_API_KEY")
        is_openai_available = False

Any ideas what could be causing this?

OpenAI latency in gradio app (5+ seconds to generate a response)

I'm not sure what this is a problem with, but I'm going to post it here in case anyone else has the same issue or any idea how to fix it.

When testing locally, I was able to get a response in maybe 10-20 seconds, which is slow but acceptable at least.

When testing in HuggingFace spaces, I found it didn't generate a response at all and gave me this information in the logs:

2023-11-22 23:23:14,237:INFO - HTTP Request: POST http://localhost:7860/reset "HTTP/1.1 200 OK"
2023-11-22 23:23:16,315:INFO - HTTP Request: POST http://localhost:7860/api/predict "HTTP/1.1 200 OK"
2023-11-22 23:23:45,772:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:24:21,570:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:24:52,416:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:25:30,880:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:26:14,191:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:26:49,628:INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2023-11-22 23:26:49,629:WARNING - Agent Lex Fridman failed to generate a response. Error: 'Choice' object is not subscriptable. Sending signal to end the conversation.

It may be due to the Hugging Face space using a different version of gradio; for some reason it runs this (note the version specified by our pyproject is gradio 3.34):

--> RUN pip install --no-cache-dir 	gradio[oauth]==3.23.0 	"uvicorn>=0.14.0" 	spaces==0.18.0 gradio_client==0.0.2

I have no idea how to change this part of the Hugging Face build section; it looks like a Dockerfile, but I don't see one anywhere, so maybe I'm missing something.

Postprocessing of agent name removal

Hi, thanks for your excellent work! I am really inspired by this repo. I noticed that in chatarena/backends/openai.py there is a postprocessing step that removes the agent name if the response starts with it. However, I observed that sometimes gpt-3.5-turbo still yields responses starting with "${agent_name}: {response}". I suggest adding one more processing step, as below:

import re

# Remove the agent name if the response starts with it
response = re.sub(rf"^\s*\[.*]:", "", response).strip()
response = re.sub(rf"^\s*{re.escape(agent_name)}\s*:", "", response).strip()

Thanks.

Conflicting dependencies

When doing pip install -e .[all] on the dev branch (it's the same on main, IIRC), I get:

ERROR: Cannot install chatarena[all]==0.1.12.6 and pettingzoo[classic]==1.23.1 because these package versions have conflicting dependencies.

The conflict is caused by:
    chatarena[all] 0.1.12.6 depends on chess==1.9.4; extra == "all"
    pettingzoo[classic] 1.23.1 depends on chess==1.7.0; extra == "classic"

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

Upcoming Features and Future Directions of Chat Arena

Firstly, we would like to express our gratitude to all the contributors and users of Chat Arena. Over the last month, our community has grown significantly and we have received numerous feature requests, bug reports, and contributions. It is exciting to see the community thriving and passionate about our project.

Given the influx of feature requests, we believe it would be beneficial to list the most common ones we've received and share some insight about our future directions.

Common Feature Requests

Here are some of the most frequently requested features that we are considering:

  • OpenAI and Anthropic async query API support
  • Batch experiment support to run multiple games concurrently
  • Langchain backend support
  • Bard backend support

Future Directions

We're focusing on the following directions:

  • More game environments
  • Improved stability and reliability
  • Extensibility to more backends
  • Benchmarking different LLMs in game environments
  • Improved documentation (homepage, notebook example)
  • Improved Web UI

We encourage you to share your thoughts on these upcoming features and future directions. It's through your feedback and contributions that Chat Arena can continue to grow and evolve.

Thank you again for your support and for being part of our growing community.

Best,
Chat Arena Team

Last updated: May 2023

orjson package rust requirement

I'm not familiar with this package, but it might be worth looking into alternatives that don't require Rust, since the Rust requirement makes it harder to install on something like Google Colab (for example, I use the fish shell on a Mac, which needed some manual installation steps to get Rust working properly). Otherwise, the requirement should probably be noted in the documentation.

Future Direction and Requests

Nice to meet you; ChatArena is an exciting project and I have great expectations for it.
There are several things I would like to achieve with ChatArena:

  1. Hold a tournament in which AIs, prompted by users, compete at logical thinking and debate.
  2. Have an AI act as the GM of a tabletop RPG, with multiple human players enjoying the scenario.
  3. Create an AI version of "The Sims" with transplanted character behaviors.

For 1 and 2, multiple humans would need to participate remotely, and a public viewing mechanism is required.
For 3, we would like to recreate life on a stage by storing data on the tone of voice, settings, and worldview of multiple characters
(e.g. https://arxiv.org/abs/2304.03442).

All of these would require extensions, but is this a direction in which ChatArena plans to evolve?

Add disclaimer

Disclaimer: The architecture as described in ChatArena may only be used to
create a benevolent AGI. Any implementation must comply with Asimov's rules
and the United Nations Charter.

Feature request: RL libraries integration

Hi, I was wondering if it would be possible to integrate ChatArena with something like Gymnasium or PettingZoo (the multi-agent version of Gym). It would be very interesting to see LLMs used both for language envs and for regular envs like poker, Connect Four, etc.

Always get same error: ERROR: [Errno 10048] error while attempting to bind on address ('127.0.0.1', 7860)

My env is Win11:
(chatarena) E:\code\ai\chatarena>conda list

packages in environment at C:\Users\zphu.conda\envs\chatarena:

Name Version Build Channel

aiofiles 23.1.0 pypi_0 pypi
aiohttp 3.8.4 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
altair 4.2.2 pypi_0 pypi
anyio 3.6.2 pypi_0 pypi
async-timeout 4.0.2 pypi_0 pypi
attrs 22.2.0 pypi_0 pypi
backoff 2.2.1 pypi_0 pypi
bzip2 1.0.8 he774522_0
ca-certificates 2023.01.10 haa95532_0
certifi 2022.12.7 py310haa95532_0
charset-normalizer 3.1.0 pypi_0 pypi
chatarena 0.1.7 pypi_0 pypi
chess 1.9.4 pypi_0 pypi
click 8.1.3 pypi_0 pypi
cloudpickle 2.2.1 pypi_0 pypi
cohere 4.1.4 pypi_0 pypi
colorama 0.4.6 pypi_0 pypi
contourpy 1.0.7 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
entrypoints 0.4 pypi_0 pypi
farama-notifications 0.0.4 pypi_0 pypi
fastapi 0.95.0 pypi_0 pypi
ffmpy 0.3.0 pypi_0 pypi
filelock 3.11.0 pypi_0 pypi
fonttools 4.39.3 pypi_0 pypi
frozenlist 1.3.3 pypi_0 pypi
fsspec 2023.4.0 pypi_0 pypi
gradio 3.24.1 pypi_0 pypi
gradio-client 0.0.8 pypi_0 pypi
gymnasium 0.28.1 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
httpcore 0.16.3 pypi_0 pypi
httpx 0.23.3 pypi_0 pypi
huggingface-hub 0.13.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
jax-jumpy 1.0.0 pypi_0 pypi
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.17.3 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
libffi 3.4.2 hd77b12b_6
linkify-it-py 2.0.0 pypi_0 pypi
markdown-it-py 2.2.0 pypi_0 pypi
markupsafe 2.1.2 pypi_0 pypi
matplotlib 3.7.1 pypi_0 pypi
mdit-py-plugins 0.3.3 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
numpy 1.24.2 pypi_0 pypi
openai 0.27.4 pypi_0 pypi
openssl 1.1.1t h2bbff1b_0
orjson 3.8.10 pypi_0 pypi
packaging 23.0 pypi_0 pypi
pandas 2.0.0 pypi_0 pypi
pettingzoo 1.22.3 pypi_0 pypi
pillow 9.5.0 pypi_0 pypi
pip 23.0.1 py310haa95532_0
prompt-toolkit 3.0.38 pypi_0 pypi
pydantic 1.10.7 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.14.0 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
pyrsistent 0.19.3 pypi_0 pypi
python 3.10.10 h966fe2a_2
python-dateutil 2.8.2 pypi_0 pypi
python-multipart 0.0.6 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
regex 2023.3.23 pypi_0 pypi
requests 2.28.2 pypi_0 pypi
rfc3986 1.5.0 pypi_0 pypi
rich 13.3.3 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
setuptools 65.6.3 py310haa95532_0
six 1.16.0 pypi_0 pypi
sniffio 1.3.0 pypi_0 pypi
sqlite 3.41.1 h2bbff1b_0
starlette 0.26.1 pypi_0 pypi
tenacity 8.2.2 pypi_0 pypi
tk 8.6.12 h2bbff1b_0
tokenizers 0.13.3 pypi_0 pypi
toolz 0.12.0 pypi_0 pypi
tqdm 4.65.0 pypi_0 pypi
transformers 4.27.4 pypi_0 pypi
typing-extensions 4.5.0 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
uc-micro-py 1.0.1 pypi_0 pypi
urllib3 1.26.15 pypi_0 pypi
uvicorn 0.21.1 pypi_0 pypi
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wcwidth 0.2.6 pypi_0 pypi
websockets 11.0.1 pypi_0 pypi
wheel 0.38.4 py310haa95532_0
wincertstore 0.2 py310haa95532_2
xz 5.2.10 h8cc25b3_1
yarl 1.8.2 pypi_0 pypi
zlib 1.2.13 h8cc25b3_0

Installation issues/circular imports

I have installed locally (pip install -e .) and via the requirements.txt, and manually installed the chardet package, but I still get this error.

ModuleNotFoundError Traceback (most recent call last)
File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/requests/compat.py:11
10 try:
---> 11 import chardet
12 except ImportError:

ModuleNotFoundError: No module named 'chardet'

During handling of the above exception, another exception occurred:

AttributeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from chatarena.arena import Arena
2 arena = Arena.from_config("examples/nlp-classroom-3players.json")

File ~/Documents/GitHub/chatarena/chatarena/arena.py:7
4 import csv
5 import logging
----> 7 from .agent import Player
8 from .environments import Environment, TimeStep, load_environment
9 from .backends import Human

File ~/Documents/GitHub/chatarena/chatarena/agent.py:8
5 import uuid
6 from abc import abstractmethod
----> 8 from .backends import IntelligenceBackend, load_backend
9 from .message import Message, SYSTEM_NAME
10 from .config import AgentConfig, Configurable, BackendConfig

File ~/Documents/GitHub/chatarena/chatarena/backends/__init__.py:5
3 from .base import IntelligenceBackend
4 from .openai import OpenAIChat
----> 5 from .cohere import CohereAIChat
6 from .human import Human
7 from .hf_transformers import TransformersConversational

File ~/Documents/GitHub/chatarena/chatarena/backends/cohere.py:10
8 # Try to import the cohere package and check whether the API key is set
9 try:
---> 10 import cohere
11 except ImportError:
12 is_cohere_available = False

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/cohere/__init__.py:1
----> 1 from cohere.client import Client
2 from cohere.client_async import AsyncClient
3 from cohere.error import CohereAPIError, CohereConnectionError, CohereError

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/cohere/client.py:8
5 from concurrent.futures import ThreadPoolExecutor
6 from typing import Any, Dict, List, Optional, Union
----> 8 import requests
9 from requests.adapters import HTTPAdapter
10 from urllib3 import Retry

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/requests/__init__.py:45
41 import warnings
43 import urllib3
---> 45 from .exceptions import RequestsDependencyWarning
47 try:
48 from charset_normalizer import version as charset_normalizer_version

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/requests/exceptions.py:9
1 """
2 requests.exceptions
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module contains the set of Requests' exceptions.
6 """
7 from urllib3.exceptions import HTTPError as BaseHTTPError
----> 9 from .compat import JSONDecodeError as CompatJSONDecodeError
12 class RequestException(IOError):
13 """There was an ambiguous exception that occurred while handling your
14 request.
15 """

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/requests/compat.py:13
11 import chardet
12 except ImportError:
---> 13 import charset_normalizer as chardet
15 import sys
17 # -------
18 # Pythons
19 # -------
20
21 # Syntax sugar.

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/charset_normalizer/__init__.py:23
1 """
2 Charset-Normalizer
3 ~~~~~~~~~~~~~~
(...)
21 :license: MIT, see LICENSE for more details.
22 """
---> 23 from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
24 from charset_normalizer.legacy import detect
25 from charset_normalizer.version import version, VERSION

File ~/anaconda3/envs/umshini/lib/python3.9/site-packages/charset_normalizer/api.py:10
7 PathLike = Union[str, 'os.PathLike[str]'] # type: ignore
9 from charset_normalizer.constant import TOO_SMALL_SEQUENCE, TOO_BIG_SEQUENCE, IANA_SUPPORTED
---> 10 from charset_normalizer.md import mess_ratio
11 from charset_normalizer.models import CharsetMatches, CharsetMatch
12 from warnings import warn

AttributeError: partially initialized module 'charset_normalizer' has no attribute 'md__mypyc' (most likely due to a circular import)

Message Pool

Design a message pool to manage the messages. This allows a unified treatment of the visibility of the messages.
Draft design:
The message pool is a list of (named) tuples, where each tuple has (turn, role, message).

There should be two potential configurations for the step definition; for example, multiple players can act in the same turn (rock-paper-scissors).
The agent can only see the messages that are:

  1. before the current turn
  2. visible to the current role

Method format_message:
Takes the message pool and self_role, environment_roles as inputs, and returns a formatted ChatGPT API input list (see the sketch below).
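
A minimal sketch of this draft design might look like the following. Everything here is illustrative: the named tuple, the visibility rule, and format_message mirror the description above rather than whatever implementation landed in chatarena.

from collections import namedtuple

# Each entry records who said what, when, and who may see it
PoolMessage = namedtuple("PoolMessage", ["turn", "role", "message", "visible_to"])

class MessagePool:
    def __init__(self):
        self._messages = []

    def append(self, turn, role, message, visible_to=("all",)):
        self._messages.append(PoolMessage(turn, role, message, tuple(visible_to)))

    def visible_messages(self, self_role, current_turn):
        # An agent only sees messages (1) from before the current turn and
        # (2) visible to its role.
        return [m for m in self._messages
                if m.turn < current_turn
                and ("all" in m.visible_to or self_role in m.visible_to)]

def format_message(pool, self_role, environment_roles, current_turn):
    """Turn the visible part of the pool into a ChatGPT-style API input list."""
    history = pool.visible_messages(self_role, current_turn)
    return [{"role": "system" if m.role in environment_roles else "user",
             "content": f"[{m.role}]: {m.message}"}
            for m in history]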

Optional dependencies required even for environments which do not use them

I believe this is a problem with the __init__.py importing all of the sub-environments at the same time, meaning that to import one environment you need the modules required to import all of them. For example, for the Umshini environments all you need is PettingZoo with no optional dependencies, but for the PettingZoo chess example you need pettingzoo[classic].

The simplest solution would be to make PettingZoo a core requirement, since importing any environment goes through the same __init__ file that imports all of them (https://github.com/chatarena/chatarena/blob/fa6b374bb62fa7070454962eec6a9c88bc584d63/chatarena/environments/__init__.py#L4); otherwise the __init__ could be changed and the environments could be put in separate subfolders. I put the Umshini environments in a separate folder because I wanted it to be clear that they are separate and don't require the same dependencies.

An alternative is to put the existing environments into a new directory called core or something like that. That would break backwards compatibility, though, as it would require people to do from chatarena.environments.core import PettingZooChess or something. But it's probably good to categorize the environments in the long run if others are going to be added.
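
For illustration, the guarded-import idea could look roughly like the sketch below. This is not the repository's actual __init__.py; the module paths and the ALL_ENVIRONMENTS registry name are assumed from the README, and the point is only that an environment gets registered when its optional dependency is importable.

# chatarena/environments/__init__.py (illustrative sketch only)
ALL_ENVIRONMENTS = []

from .conversation import Conversation               # core environment, no extras needed
ALL_ENVIRONMENTS.append(Conversation)

try:
    from .pettingzoo_chess import PettingzooChess    # needs pettingzoo[classic]
except ImportError:
    PettingzooChess = None                           # chess env simply unavailable without the extra
else:
    ALL_ENVIRONMENTS.append(PettingzooChess)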
