
animal-ai's Introduction

Deprecated - New Home Here.

‼️ ⛔ THIS REPOSITORY IS NO LONGER BEING MAINTAINED OR MONITORED. ⛔ ‼️

We have relocated this repository to a new home. Further updates and releases will be published there.


AnimalAI 3

AAI supports interdisciplinary research to help better understand human, animal, and artificial cognition. It aims to support AI research towards unlocking cognitive capabilities and better understanding the space of possible minds. It is designed to facilitate testing across animals, humans, and AI.

This Repo

This repo contains the AnimalAI environment, some introductory Python scripts for interacting with it, and the 900 tasks used in the original Animal-AI Olympics competition (plus some others for demonstration purposes). Details of the tasks can be found on the AAI website, where they can also be played and competition entries watched.
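Tasks are defined as YAML arena configurations. As a rough illustration only (the tag and field names below follow the v2 config format and may differ in v3 — the authoritative schema is in this repo's docs and on the AAI website), a minimal task might look like:

```yaml
# Hypothetical minimal arena: one agent and one food goal.
# Tag names (!ArenaConfig, !Arena, !Item, !Vector3) are based on the v2
# config format -- check the configs shipped with this repo before use.
!ArenaConfig
arenas:
  0: !Arena
    t: 250                 # episode length in steps
    items:
      - !Item
        name: Agent
        positions:
          - !Vector3 {x: 10, y: 0, z: 10}
      - !Item
        name: GoodGoal
        positions:
          - !Vector3 {x: 20, y: 0, z: 20}
```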

The environment is built using Unity ml-agents release 2.1.0-exp.1 (python version 0.27.0).

The AnimalAI environment and packages are currently tested only on Linux (Ubuntu 20.04.2 LTS) with Python 3.8, but have been reported to work with Python 3.6+, other Linux distros, Windows, and Mac.

The Unity Project for the environment is available here.

Quick Install

See here for a more detailed installation guide, including information on Python/pip/conda and using the command line during installation.

To get started you will need to:

  1. Clone this repo.
  2. Install the animalai python package and requirements by running pip install -e animalai from the root folder.
  3. Download the environment for your system:
| OS      | Environment link |
| ------- | ---------------- |
| Linux   | v3.0.1           |
| Mac     | v3.0.1           |
| Windows | v3.0.1           |

(Old v2.x versions can be found here)

Unzip the entire contents of the archive into the (initially empty) env folder. On Linux you may have to make the file executable by running chmod +x env/AnimalAI.x86_64. Note that the env folder should contain the AnimalAI.exe/.x86_64/.app (depending on your system) along with any other folders that were in the same directory in the zip file.
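On Linux, steps 2–3 above might look like the following from the command line (the archive and binary names are illustrative — use the names of the files you actually downloaded):

```shell
# Run from the root of the cloned repo. File names are assumptions.
pip install -e animalai                               # step 2: install the animalai package
unzip ~/Downloads/AnimalAI_linux_v3.0.1.zip -d env    # step 3: extract the build into env/
chmod +x env/AnimalAI.x86_64                          # Linux only: make the binary executable
```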

Tutorials and Examples

Some example scripts to get started can be found in the examples folder. The following docs provide information for some common uses of the environment.

Manual Control

If you launch the environment directly from the executable or through the play.py script it will launch in player mode. Here you can control the agent with the following:

| Keyboard Key | Action               |
| ------------ | -------------------- |
| W            | move agent forwards  |
| S            | move agent backwards |
| A            | turn agent left      |
| D            | turn agent right     |
| C            | switch camera        |
| R            | reset environment    |
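Player mode can also be launched from Python. The sketch below shows the general shape; the keyword arguments (file_name, play) are assumptions based on the examples folder — consult examples/play.py and the AnimalAIEnvironment docstring for the actual signature.

```python
# Sketch only: launching the environment in player mode from Python.
# Keyword argument names are assumptions; see examples/play.py for the
# real invocation in this repo.

ENV_PATH = "env/AnimalAI.x86_64"  # Linux binary; use .exe / .app on Windows / Mac


def launch_player_mode(env_path: str = ENV_PATH):
    # Imported lazily so this file can be read/inspected without animalai installed.
    from animalai.envs.environment import AnimalAIEnvironment

    # play=True hands control to the keyboard (W/A/S/D, C, R) instead of Python.
    return AnimalAIEnvironment(file_name=env_path, play=True)


if __name__ == "__main__":
    launch_player_mode()
```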

Citing

If you use the Animal-AI environment in your work you can cite the environment paper:

Crosby, M., Beyret, B., Shanahan, M., Hernández-Orallo, J., Cheke, L. & Halina, M. (2020). The Animal-AI Testbed and Competition. Proceedings of the NeurIPS 2019 Competition and Demonstration Track, Proceedings of Machine Learning Research 123:164-176. Available here.

 @InProceedings{pmlr-v123-crosby20a, 
    title = {The Animal-AI Testbed and Competition}, 
    author = {Crosby, Matthew and Beyret, Benjamin and Shanahan, Murray and Hern\'{a}ndez-Orallo, Jos\'{e} and Cheke, Lucy and Halina, Marta}, 
    booktitle = {Proceedings of the NeurIPS 2019 Competition and Demonstration Track}, 
    pages = {164--176}, 
    year = {2020}, 
    editor = {Hugo Jair Escalante and Raia Hadsell}, 
    volume = {123}, 
    series = {Proceedings of Machine Learning Research}, 
    month = {08--14 Dec}, 
    publisher = {PMLR}, 
} 

Unity ML-Agents

The Animal-AI Olympics was built using Unity's ML-Agents Toolkit.

Juliani, A., Berges, V., Vckay, E., Gao, Y., Henry, H., Mattar, M., Lange, D. (2018). Unity: A General Platform for Intelligent Agents. arXiv preprint arXiv:1809.02627

Further, the documentation for ml-agents should be consulted if you want to make any changes.

Version History

  • v3.0.1
    • Added Agent Freezing Parameter, which freezes the agent (with no reward decrement) at the start of an episode while other objects continue to move around.
  • v3.0 Note that, due to the changes to controls and graphics, agents trained on previous versions might not perform the same
    • Updated agent handling. The agent now comes to a stop more quickly when not moving forwards or backwards and accelerates slightly faster.
    • Added new objects, spawners, signs, goal types (see doc)
    • Added 3 animal skins to the player character.
    • Updated graphics for many objects. Default shading on many previously plain objects makes it easier to determine location(s)/velocity.
    • Made the Unity Environment available (see link on main page).
    • Many improvements to documentation and examples.
    • Upgraded to Mlagents 2.1.0-exp.1 (ml-agents python version 0.27.0)
    • Fixed various bugs.
  • v2.2.3
    • You can now specify multiple different arenas in a single yml config file and the environment will cycle through them each time it resets
  • v2.2.2
    • Low-quality version with improved fps. (Further improvements to graphics & fps will follow later.)
  • v2.2.1
    • Improved UI scaling with respect to screen size
    • Fixed an issue with cardboard box objects spawning at the wrong sizes
    • Fixed an issue where the environment would time out after the set time period even when health > 0 (this behaviour is no longer intended)
    • Improved the Death Zone shader for unusual zone sizes
  • v2.2.0 Health and Basic Scripts
    • Switched to health-based system (rewards remain the same).
    • Updated overlay in play mode.
    • Allow 3D hot zones and death zones and make them 3D by default in old configs.
    • Added rewards that grow/decay (currently not configurable but will be added in next update).
    • Added basic Gym Wrapper.
    • Added basic heuristic agent for benchmarking and testing.
    • Improved all other python scripts.
    • Fixed a reset environment bug when resetting during training.
    • Added the ability to set the DecisionPeriod (frameskip) when instantiating an environment.
  • v2.1.1 bugfix
    • Fixed raycast length being less than the diagonal length of the standard arena
  • v2.1 beta release
    • Upgraded to ML-Agents release 2 (0.26.0)
    • New features
      • Added raycast observations
      • Added agent global position to observations

Notice

Copyright 2022 Matthew Crosby

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

animal-ai's People

Contributors

aidan-curtis, kozzy97, mdcrosby, shenweizhou, thanksphil


animal-ai's Issues

Change arena with reset function not working

Hi,

After upgrading from version 2.2.1 to 3.0.1 I can no longer reset with a new arena config; it only resets to the previous environment. I've tried a clean install of Animal-AI but the issue persists. I can still create an environment without an arena config and then provide one via a reset later on.

Is this intended behaviour or a bug?

I’m using the Linux version of the environment.

Curriculum learning over AAI 3.0?

Hi,

I have another technical question rather than an issue, as I'm planning to use AAI 3.0 for some experiments for my master's thesis. I have already played a bit with AAI 2.0, in which curriculum learning was easily implemented and inherited from ML-Agents, with some variations (external yml files for the curriculum, etc.). As I have not seen a similar example for AAI 3.0, I wonder how I could implement this following a similar philosophy.

Many thanks!

Environment timed out

Hi,

Thank you for improving the Animal-AI testbed. While testing gymwrapper.py I received the error below. Could you help me work out how to deal with it? The version of my mlagents_envs is 0.27.0.

Thank you so much!

[WARNING] Environment timed out shutting down. Killing...
Traceback (most recent call last):
  File "gymwrapper.py", line 68, in <module>
    train_agent_single_config(configuration_file=configuration_file)
  File "gymwrapper.py", line 31, in train_agent_single_config
    captureFrameRate = captureFrameRate, #Set this so the output on screen is visible - set to 0 for faster training but no visual updates
  File "/home/jdhwang/animal-ai/animalai/envs/environment.py", line 95, in __init__
    log_folder=log_folder,
  File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/environment.py", line 223, in __init__
    aca_output = self._send_academy_parameters(rl_init_parameters_in)
  File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/environment.py", line 477, in _send_academy_parameters
    return self._communicator.initialize(inputs, self._poll_process)
  File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/rpc_communicator.py", line 121, in initialize
    self.poll_for_timeout(poll_callback)
  File "/om2/user/jdhwang/anaconda3/envs/myenv/lib/python3.7/site-packages/mlagents_envs/rpc_communicator.py", line 112, in poll_for_timeout
    "The Unity environment took too long to respond. Make sure that :\n"
mlagents_envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
         The environment does not need user interaction to launch
         The Agents' Behavior Parameters > Behavior Type is set to "Default"
         The environment and the Python interface have compatible versions.

Missing module in configuration_tutorial.ipynb

Hi,

I was going through the notebook configuration_tutorial.ipynb, and I got a missing module error when trying to run the environment in play mode example:

from animalai.envs.arena_config import ArenaConfig

ModuleNotFoundError: No module named 'animalai.envs.arena_config'

Could it be that this notebook needs to be updated? I cannot see any envs/arena_config.py either.

Thanks!

Questions about inference camera views

Hi!

Figured this would be the most effective place to ask.

I'm doing inference on a trained agent (separately trained in PyTorch using just the AnimalAIEnvironment class, so moving the model back into the Unity editor is a bit difficult), and I wish to view the environment from a bird's-eye or third-person perspective rather than through the agent. Is it possible to switch the camera during inference as you can in play mode? This might also be nice to have available in the environment state space in the future.

I see it's possible to switch camera views in the 'Play and Watch' demo on the competition website. Is the code for that available?

Thank you in advance!

How is the competition preparation going on?

I'm posting this here because there is no other contact listed.
On the page there's a competition coming in 2021, though 2021 is almost gone.
Would you kindly share any updates on this environment and the competition?

I also hope for comprehensive documentation for AnimalAI; it is hard to use version 3 without referring to version 2.

Documentation for AnimalAIEnvironment Class

I am trying to spawn an environment and test/implement my own networks.

Following this, there are many parameters for AnimalAIEnvironment.
I have no idea what each parameter does, and I cannot find any detailed documentation.
It would be a great help if we had it.
