
Comments (11)

terryzhao127 avatar terryzhao127 commented on June 12, 2024 1

...
Running the code above works fine, you can even remove the env.render() line and you will still get observations and rewards. Does that answer your question?

Thanks for your answer and now I know what to do.

from animalai-olympics.

beyretb avatar beyretb commented on June 12, 2024

Hello,

Are you running one of the examples provided or your own implementation? Also, please provide your hardware/OS setup?

Keep in mind that the environment will freeze between actions, therefore it may have to do with the time it takes for training steps for example. The PPO example provided in examples/trainMLAgents.py takes a training step after most observations are received, which explains the jerky rendering.
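The stalling between actions can be sketched with a toy loop (pure illustration; the sleeps stand in for the real environment step and PPO update, and none of these function names come from the repo):

```python
import time

def fake_env_step():
    # Stand-in for env.step(); the environment freezes while awaiting the next action.
    time.sleep(0.005)

def fake_training_update():
    # Stand-in for a PPO gradient update that runs between environment steps.
    time.sleep(0.05)

frame_intervals = []
last = time.perf_counter()
for _ in range(10):
    fake_env_step()
    fake_training_update()  # the window cannot refresh until this returns
    now = time.perf_counter()
    frame_intervals.append(now - last)
    last = now

# Every on-screen frame interval includes the training update, so the
# rendered frame rate is much lower than the raw simulation rate.
print(f"min interval: {min(frame_intervals):.3f}s")
```

Because the update dominates each iteration, the window only refreshes every ~55 ms here even though the simulated step itself takes ~5 ms, which is the jerkiness described above.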


yding5 avatar yding5 commented on June 12, 2024

Hello,

Yes, I was running the PPO example; sorry for forgetting to mention that. The setup is an Intel X5460, a GTX 980 Ti, and Ubuntu 16.04 LTS. I tried the Rainbow example, which has much smoother rendering. I haven't fully understood what "most observations" means, but I will look into it further when trying my own implementation, to see whether there are any problems. Thanks.


beyretb avatar beyretb commented on June 12, 2024

Hi, I used "most" because, technically speaking, training may not start right away: you may need to collect a certain number of observations to fill the replay buffer first. But that's a technicality really...

Also, be aware that all the code provided, even the environment, barely makes any use of the GPU (hence the low GPU utilisation). Switching to tensorflow-gpu for Rainbow should speed things up and make use of the GPU.


yding5 avatar yding5 commented on June 12, 2024

Hi, I did some research on this. When I run the PPO example, although the frame rate of the window showing the top view is very low, as described previously, the training speed is about 30 steps/s. The same thing happens with visualizeLightsOff.py: the update rate of the display window is poor and cannot be controlled by changing the interval in

anim = animation.FuncAnimation(fig, run_step_imshow, init_func=initialize_animation, frames=100, interval=50)

Is it possible that the reason is that the rendering of the window is independent of, or at least not synchronized with, the simulation steps? Ref: Unity-Technologies/ml-agents#299

Overall, we are trying to get the top view at each step, along with the agent view, for analysis. One suggestion is to add an additional agent or observation and get it from the visual observation interface (Unity-Technologies/ml-agents#134). However, it seems to us that this is not possible because we are using the executable environment you provided. Is that right? If so, one workaround we are considering is taking screenshots of the window, but that brings back the refresh-rate issue.


beyretb avatar beyretb commented on June 12, 2024

Hello,

Is it possible that the reason is that the rendering of the window is independent of, or at least not synchronized with, the simulation steps?

Yes, that's very likely part of the explanation as well: we maximise the number of frames for the agent independently of the actual screen rendering.

However, it seems to us that this is not possible because we are using the executable environment you provided. Is that right?

Correct, there is no brain attached to the top-down camera at the moment.

One suggestion is to add an additional agent or observation and get it from the visual observation interface

This is feasible. I will look into adding one camera per arena (at present there is only one for the whole environment) and attaching these to an extra brain. I will need to run some tests to see by how much this slows down training, and will decide how to proceed later on. We are working on having the competition ready by the end of the month, so I will look into this suggestion once that is done.

Keep in mind, however, that these extra observations would be for information purposes only and would not be provided at test time, as that would break comparability with the actual experiments from the animal cognition literature.


terryzhao127 avatar terryzhao127 commented on June 12, 2024

It is very weird that the env.render() function does not work at all.

The code (in a file named test.py at the root of the repo) is:

from animalai.envs.gym.environment import AnimalAIEnv
from animalai.envs.arena_config import ArenaConfig

import random

env_path = 'env/AnimalAI'
worker_id = random.randint(1, 100)
arena_config_in = ArenaConfig('examples/configs/1-Food.yaml')
gin_files = ['examples/configs/rainbow.gin']  # unused in this snippet

env = AnimalAIEnv(environment_filename=env_path,
                  worker_id=worker_id,
                  n_arenas=1,
                  arenas_configurations=arena_config_in,
                  retro=True)
env.reset()

done = False
while not done:
    env.render()
    action = env.action_space.sample()  # your agent here (this takes random actions)
    observation, reward, done, _ = env.step(action)

env.close()

My computer runs Ubuntu 18.10, but I don't think this is system-related.

When I debug with PyCharm, if I stop at a statement in the while loop and then press Resume Program, render() works. Is it because the Python process steps too fast for the render system to keep up?


beyretb avatar beyretb commented on June 12, 2024

The gym environment is purely a wrapper around the Unity ML-Agents environment and allows you to plug in libraries such as Dopamine directly without changing the code too much. Therefore, the logic behind it is a bit different from OpenAI Gym and follows the Unity way of doing things.

You can see in the source that the env.render() function actually only returns the visual observations from the agent:

def render(self, mode='rgb_array'):
    return self.visual_obs

As mentioned above (and detailed in this issue), the rendered Unity window is not synchronised with the agent. If you wish to have a step-by-step visualisation of the environment, you can reuse the visualisation from examples/visualizeLightsOff.py, replace the ML-Agents environment with the Gym one, and display the output of env.render().
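A minimal step-by-step display loop along those lines might look like the sketch below. StubEnv is a hypothetical stand-in for the AnimalAI gym wrapper so the snippet is self-contained; the headless Agg backend is used only so it runs anywhere:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for an interactive window
import matplotlib.pyplot as plt

class StubEnv:
    """Hypothetical stand-in for the AnimalAIEnv gym wrapper (retro=True)."""
    def reset(self):
        return self._frame()
    def step(self, action):
        return self._frame(), 0.0, False, {}
    def render(self, mode="rgb_array"):
        # Like the real wrapper, render() just returns the agent's visual observation.
        return self._frame()
    def _frame(self):
        return np.random.randint(0, 255, (84, 84, 3), dtype=np.uint8)

env = StubEnv()
env.reset()
fig, ax = plt.subplots()
image = ax.imshow(env.render())
for _ in range(5):
    obs, reward, done, _ = env.step(0)
    image.set_data(env.render())  # redraw the agent's view after each step
    fig.canvas.draw()
```

With the real environment you would replace StubEnv with AnimalAIEnv and, in an interactive session, call plt.pause(0.01) after each set_data so the window actually refreshes between steps.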

As a side note, retrieving the outcome of a single step works the same as in Gym; the env.info attribute also contains the ML-Agents brain.


terryzhao127 avatar terryzhao127 commented on June 12, 2024

@beyretb
I'm sorry, I don't know ML-Agents at all. Does this file use matplotlib to render the game?

Another question, please: if I don't use any rendering functionality, can the game be run purely through the gym wrapper I used in the code above?


beyretb avatar beyretb commented on June 12, 2024

Hey, no worries, you don't need knowledge of ML-Agents; the documentation should be enough to get started with the packages we provide.

The visualizeLightsOff.py script is purely a visualisation tool for you to see what the agent sees, and yes, matplotlib is used for that purpose. It is not needed for training, though.

Running the code above works fine, you can even remove the env.render() line and you will still get observations and rewards. Does that answer your question?


beyretb avatar beyretb commented on June 12, 2024

Closing this issue, as better rendering of the agent view is now available.

