beyretb / animalai-olympics
Code repository for the Animal AI Olympics competition
License: Apache License 2.0
I am trying to figure out how to submit an entry. In my current code, each time I want to take an action, I use this line (where I have put the desired move into the variable action):
info = env.step(vector_action=action, memory=None, text_action=None)
and then I get the observations and rewards with these lines:
brainInfo = info['Learner']
reward = brainInfo.rewards[0]
etc.
As I try to decipher the docs on submitting with Docker, it looks like I need to use a class named Agent. In the example agent.py, the Agent class has a method:
def step(self, obs, reward, done, info):
that returns an action. Does the evaluation software call this method and in that way get the submitted action? Is that the place where I get the observations and reward?
Also, I am unclear on how the Agent class is supposed to interface with my code. Is it basically a bridge for data to pass across? In that case, how does the simulation get started? The Dockerfile does not have a CMD at the end to get things going, so there must be an entry point elsewhere.
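Based on the description above, here is a minimal sketch of what such an Agent class might look like. The step signature is the one quoted from agent.py; the reset method and the trivial constant policy are assumptions for illustration only:

```python
class Agent(object):
    """Sketch of the submission interface.

    The evaluation harness (not shown) constructs this class, then
    repeatedly calls step() with the latest observation, reward, done
    flag and info, and uses the returned value as the agent's action.
    So step() is where your code both receives observations/rewards
    and hands back moves.
    """

    def __init__(self):
        # In a real submission, load your trained model here.
        pass

    def reset(self, t=250):
        # Assumed hook: called at the start of each new arena,
        # with t the episode length.
        pass

    def step(self, obs, reward, done, info):
        # Placeholder policy: always move forward, never turn.
        # A real agent would feed obs through its model here.
        forward, turn = 1, 0
        return [forward, turn]
```

In this reading, your existing env.step loop is replaced by the harness's own loop: the Agent class is indeed a bridge, and the entry point lives in the evaluation code rather than in the Dockerfile's CMD.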
Starting the AnimalAI executable on Ubuntu with Mesa drivers: the screen loads and renders, but then it freezes (the window is still responsive, e.g. it can be closed).
Here is the log:
Desktop is 1920 x 1080 @ 60 Hz
Initialize engine version: 2018.3.13f1 (06548a9e9582)
GfxDevice: creating device client; threaded=1
Renderer: Mesa DRI Intel(R) Sandybridge Desktop
Vendor: Intel Open Source Technology Center
Version: 3.3 (Core Profile) Mesa 18.2.8
GLES: 0
(full list of supported GL extensions omitted from the log)
OPENGL LOG: Creating OpenGL 3.3 graphics device ; Context level <OpenGL 3.3> ; Context handle 87480496
Begin MonoManager ReloadAssembly
- Completed reload, in 0.056 seconds
Default vsync count 0
requesting resize 1280 x 720
resizing window to 1280 x 720
Desktop is 1920 x 1080 @ 60 Hz
UnloadTime: 0.697000 ms
configuration missing for arena 0
(Filename: ./Runtime/Export/Debug.bindings.h Line: 45)
Setting up 4 worker threads for Enlighten.
Thread -> id: 7f646ddf3700 -> priority: 1
Thread -> id: 7f646d5f2700 -> priority: 1
Thread -> id: 7f646cdf1700 -> priority: 1
Thread -> id: 7f6457fff700 -> priority: 1
requesting resize 1280 x 720
resizing window to 1280 x 720
Desktop is 1920 x 1080 @ 60 Hz
I am unable to get any of the example agents or the simulation environment to open on my machine and am not entirely sure why. I am running Ubuntu 16.04 and I have attempted to create a Dockerfile with all of the necessary components (per the Installation Instructions) as follows:
FROM tensorflow/tensorflow:1.12.3-gpu
MAINTAINER Justin VanHouten ()
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get update
RUN apt-get install -y nano
RUN apt-get install -y wget
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y xorg openbox
RUN apt-get install -y git-core
RUN apt-get install -y python3.6
WORKDIR ..
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.6 get-pip.py
RUN pip install opencv-python
RUN git clone https://github.com/beyretb/AnimalAI-Olympics.git
WORKDIR AnimalAI-Olympics
WORKDIR animalai
RUN pip install -e .
WORKDIR ..
WORKDIR examples/animalai_train
RUN pip install -e .
WORKDIR ..
WORKDIR ..
RUN wget https://www.doc.ic.ac.uk/~bb1010/animalAI/env_linux_v1.0.0.zip
RUN unzip ./env_linux_v1.0.0.zip -d env
If I then run a container based on this Dockerfile, cd into /examples and use the command "python3.6 [any agent python script] configs/[any premade YAML]" I get the error "The Unity environment took too long to respond...".
If I cd into /env and enter command "./AnimalAI.x86_64" it appears to run but nothing happens; I can Ctrl-Z out of it but it doesn't throw any errors.
When I open the log file, the only line says "Desktop is 0 x 0 @ 0 Hz", which indicates to me that it isn't detecting a display.
Thank you in advance for any help or guidance you can provide; I've been over the Installation and Quick Start guides a dozen times and am not really sure what I'm missing.
Hi!
I am delighted to work on this env and thank you for making this!
I have one little Feature request, which is about rendering an Env during training.
Since the original ML-Agents toolkit supports disabling the rendering of the env, I would like to have that on AnimalAI as well, because we might want to train the model on a remote server that has no GUI access.
Reference: Unity-Technologies/ml-agents#1413
Thanks,
Observations return velocities in meters/sec. To calculate the distance moved, I multiplied the wall-clock time between calls to env.step by the velocity, but the resulting value is far too small.
How do we determine the time used by the sim between two calls to env.step, so we can calculate the distance moved?
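A plausible explanation (an assumption, not confirmed by the docs): Unity advances physics on a fixed timestep (0.02 s by default), and the environment may run several physics frames per env.step decision, so the simulated time per step is a fixed constant rather than the wall-clock time between Python calls. A sketch, with the per-step timestep as a hypothetical parameter:

```python
# Sketch: integrate speed over the *simulated* time per step instead of
# wall-clock time. SIM_DT_PER_STEP is a hypothetical value: Unity's
# default fixed timestep is 0.02 s, and the number of physics frames
# executed per env.step() depends on the build's decision interval.
SIM_DT_PER_STEP = 0.06  # assumption, e.g. 3 physics frames x 0.02 s

def distance_moved(velocities, dt=SIM_DT_PER_STEP):
    """Total distance from a sequence of per-step (vx, vy, vz) velocities."""
    total = 0.0
    for vx, vy, vz in velocities:
        speed = (vx * vx + vy * vy + vz * vz) ** 0.5
        total += speed * dt
    return total
```

Calibrating dt empirically (e.g. by driving the agent a known distance in a configured arena) would be more reliable than trusting any assumed constant.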
I would like some clarification on https://mdcrosby.com/blog/animalaieval.html
Category N1 says "Allowed objects: all goals". Since red balls are called BadGoals, does that mean the first category can also contain red balls? Also, the example has 1 green ball, but is this representative of the real distribution in the test, or could there be 100 green balls?
Category N2 says "Allowed objects: All except zones". Does that mean barriers can be present? If so, what is the difference from category N3?
Are there more precisely defined docs?
From what I can tell, it would currently be possible for an agent to know (based on the number of arena resets) exactly which category is being tested at the moment. This information can provide an unnatural advantage for evaluation: a simple strategy to (ab)use it is to count resets and switch to a policy specialized for the current category.†
†For the final evaluation, the number of tests per category will supposedly be higher than 30. But even if the agent for the final submissions has to also pass the 30-per-category somehow, inferring whether the 31st test is in category 1 or 2 could probably be accomplished relatively easily.
Depending on the variations in agent performance in each category, this could yield a sizeable score boost and it would be sub-optimal not to do (or at least attempt) it. And I don't see how counting the number of resets would violate the competition rules either.
To avoid a Prisoner's Dilemma scenario of everyone having to invest effort into this strategy, the tests should probably not be carried out sequentially for each category as appears to be the case right now. To avoid unfairness in the evaluation procedure in the face of timeouts, interleaving the test categories in an arbitrary (but fixed!) order could be a good solution. (Attempting to extract information about this order through overly clever submissions - such as timing out on each test in sequence and noting which categories get score increments - would be quite clearly violating the competition rules.) This change might however slightly alter/worsen the scores of submissions that do not finish all tests in time (they might end up skipping some easier tests instead of only the later, harder ones).
Some level of hand-inference will of course always be possible (such as "we are not in the first category because the agent is standing on a solid colored object" or "we are in the generalization category because there are several teal pixels in view"), and even trained agents themselves may "overfit" on the test category differences (to the extent that hidden tests allow this).
But currently the category information is trivial to supply to the agent and there is an obvious incentive to do so. This should be alleviated in my opinion.
Hi
During training, the rendering is done in a tiny window (regardless of the resolution param).
E.g. running trainDopamine.py and trainMLAgents.py does not render the top view the same way as visualizeArena.py does.
What controls this? It's hard to see what's happening this way.
Thanks
The following exception occurs when I run trainMLAgents.py with the provided configuration:
(venv) D:\Repos\AnimalAI-Olympics\examples>python trainMLAgents.py
trainMLAgents.py:35: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
trainer_config = yaml.load(data_file)
INFO:mlagents.envs:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of Training Brains : 1
2019-07-02 22:04:20.888000: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain Learner:
batch_size: 64
beta: 0.01
buffer_size: 2024
epsilon: 0.2
gamma: 0.99
hidden_units: 256
lambd: 0.95
learning_rate: 0.0003
max_steps: 5.0e6
normalize: False
num_epoch: 3
num_layers: 1
time_horizon: 128
sequence_length: 64
summary_freq: 1000
use_recurrent: False
summary_path: ./summaries/train_example-1_Learner
memory_size: 256
use_curiosity: True
curiosity_strength: 0.01
curiosity_enc_size: 256
model_path: ./models/train_example/Learner
INFO:mlagents.trainers: train_example-1: Learner: Step: 1000. Mean Reward: -1.001. Std of Reward: 0.000. Training.
INFO:mlagents.trainers: train_example-1: Learner: Step: 2000. No episode was completed since last summary. Training.
Traceback (most recent call last):
File "trainMLAgents.py", line 86, in
tc.start_learning(env, trainer_config)
File "D:\Repos\AnimalAI-Olympics\venv\lib\site-packages\animalai_train\trainers\trainer_controller.py", line 217, in start_learning
new_info = self.take_step(env, curr_info)
File "D:\Repos\AnimalAI-Olympics\venv\lib\site-packages\animalai_train\trainers\trainer_controller.py", line 288, in take_step
trainer.update_policy()
File "D:\Repos\AnimalAI-Olympics\venv\lib\site-packages\animalai_train\trainers\ppo\trainer.py", line 343, in update_policy
run_out = self.policy.update(buffer.make_mini_batch(start, end), n_sequences)
File "D:\Repos\AnimalAI-Olympics\venv\lib\site-packages\animalai_train\trainers\buffer.py", line 197, in make_mini_batch
mini_batch[key] = np.array(self[key][start:end])
ValueError: could not broadcast input array from shape (129) into shape (1)
When training I can choose any resolution I want.
Can I change it for inference?
In testDocker.py I see this code:
env = AnimalAIEnv(
    environment_filename='/aaio/test/env/AnimalAI',
    seed=0,
    retro=False,
    n_arenas=1,
    worker_id=1,
    docker_training=True,
)
Does it mean that only possible resolution is 84x84?
This would be very useful for participants who do not have access to huge computing clusters (it shouldn't be the point that whoever has the most compute wins) and aren't necessarily used to setting them up; my measly CPU isn't quite enough to train any models here.
I'm actually trying to figure this out right now with Docker, but a premade container with instructions for deploying it on GCP would be very helpful.
After some debugging we have seen an interesting problem and we are curious if you can help us or possibly make a different Unity build that addresses some problems we have. On some configurations we have observed that any version later than v0.1 will produce a consistently black screen right after the Unity logo shows. This includes v0.2, v0.3, v0.4 and v0.5 we have tested.
On real hardware using KDE it runs all versions correctly never producing a black screen.
On VMWare Player and VirtualBox with 3D acceleration enabled or disabled it always produces a black screen when running anything but v0.1. Testing was done both with python and separately with just running the AnimalAI.x86_64 binary manually. This was tested with Ubuntu 18.04.2 LTS, 19.04 as well as KDE Neon User.
Firstly, it seems strange that v0.1 would work and the others wouldn't, which is why we are bothering to report this.
Second, we tested other things made with Unity and games that are more complex than AnimalAI and they work fine.
Here is an attached diff of the output of which OpenGL features are supported, with the red/removed portions being ones that aren't present on the systems where black screens are produced: https://gist.github.com/krisives/84e67c6983c11902c3bbda8cfcf0992e
Lastly, an attempt was made to create a Frankenstein combination of executables and AnimalAI_Data directories that produced results other than black screens. This was "successful" to some degree: if you take the v0.5 files and replace the AnimalAI_Data/Managed directory with the files from v0.1, it no longer produces a black screen, but instead shows a bunch of pink missing textures.
Please let us know if you need any additional details or have any suggestions.
Is it possible to configure agent starting positions? Via the config file, I see no possible option to do so at the moment.
I am testing with the following code testMovement.py:
from animalai.envs.environment import UnityEnvironment
from animalai.envs.ArenaConfig import ArenaConfig
import matplotlib.pyplot as plt
env = UnityEnvironment(
    file_name='env/AnimalAI',  # Path to the environment
    worker_id=6,               # Unique ID for running the environment (used for connection)
    seed=0,                    # The random seed
    docker_training=False,     # Whether or not you are training inside a docker container
    no_graphics=False,         # Always set to False
    n_arenas=1,                # Number of arenas in your environment
    play=False                 # Set to False for training
)
arena_config_in = ArenaConfig('configs/justFood.yaml')
env.reset(config=arena_config_in, train_mode=True)
plt.ion()
def step(action):
    info = env.step(action)
    b1 = info['Learner']
    img = b1.visual_observations
    plt.imshow(img[0][0])
    print(f'Rewards: {b1.rewards}')
And I execute it in interactive mode:
$ python -i testMovement.py
Then, I send actions manually, but I always get rewards=[-inf] (even if I reach the green food):
>>> step([0,0])
Rewards: [-inf]
>>> step([1,0])
Rewards: [-inf]
>>> step([2,0])
Rewards: [-inf]
>>> step([1,1])
Rewards: [-inf]
>>> step([1,2])
Rewards: [-inf]
>>> step([1,0])
Rewards: [-inf]
>>> step([2,0])
Rewards: [-inf]
Changing the reset line to:
env.reset(config=arena_config_in, train_mode=False)
I get the same rewards, but it is easier to test.
In the description of objects, the bouncing yellow ball is labelled as bad. Is this correct?
I am trying to make a submission on Windows, with Docker proxy settings:
export "HTTP_PROXY=http://proxy.xxx:8080" and "HTTPS_PROXY=http://proxy.xxx:8080"
AnimalAI-Olympics\examples\submission>docker build --tag=submission .
Only the first step executed (downloaded) successfully:
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
Here is the log:
AnimalAI-Olympics\examples\submission>docker build --tag=submission .
Sending build context to Docker daemon 29.88MB
Step 1/20 : FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04
---> fa42893c355d
Step 2/20 : RUN apt-get clean && apt-get update && apt-get install -y locales
---> Running in 827ac1f47f87
Err:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64 InRelease
Failed to connect to developer.download.nvidia.com port 443: Connection refused
Err:2 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 InRelease
Failed to connect to developer.download.nvidia.com port 443: Connection refused
Err:3 http://archive.ubuntu.com/ubuntu xenial InRelease
Could not connect to archive.ubuntu.com:80 (91.189.88.162). - connect (111: Connection refused) [IP: 91.189.88.162 80]
Err:4 http://archive.ubuntu.com/ubuntu xenial-updates InRelease
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
Err:5 http://archive.ubuntu.com/ubuntu xenial-backports InRelease
Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
Err:6 http://security.ubuntu.com/ubuntu xenial-security InRelease
Could not connect to security.ubuntu.com:80 (91.189.88.24). - connect (111: Connection refused) [IP: 91.189.88.24 80]
Reading package lists...
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease Could not connect to archive.ubuntu.com:80 (91.189.88.162). - connect (111: Connection refused) [IP: 91.189.88.162 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-updates/InRelease Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/xenial-backports/InRelease Unable to connect to archive.ubuntu.com:http: [IP: 91.189.88.162 80]
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/xenial-security/InRelease Could not connect to security.ubuntu.com:80 (91.189.88.24). - connect (111: Connection refused) [IP: 91.189.88.24 80]
W: Failed to fetch https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/InRelease Failed to connect to developer.download.nvidia.com port 443: Connection refused
W: Failed to fetch https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/InRelease Failed to connect to developer.download.nvidia.com port 443: Connection refused
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
Package locales is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'locales' has no installation candidate
The command '/bin/sh -c apt-get clean && apt-get update && apt-get install -y locales' returned a non-zero code: 100
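A possible cause (an assumption, not confirmed by the log alone): variables exported in the host shell are not visible inside the build containers, so every RUN step runs without the proxy. Docker can inject them into each build step via --build-arg, e.g.:

```shell
docker build \
  --build-arg HTTP_PROXY=http://proxy.xxx:8080 \
  --build-arg HTTPS_PROXY=http://proxy.xxx:8080 \
  --build-arg http_proxy=http://proxy.xxx:8080 \
  --build-arg https_proxy=http://proxy.xxx:8080 \
  --tag=submission .
```

Both upper- and lower-case spellings are passed because apt and other tools disagree on which one they read.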
Hi,
With v0.4, the python program below returns visual observations of all zeros after the env.step call (with adjustments for the config yaml).
It returns visual_observations full of floats (correct) under v0.3.
--------------------------------cut here for code----------------------------
from animalai.envs import UnityEnvironment
from animalai.envs.exception import UnityEnvironmentException
from animalai.envs.ArenaConfig import ArenaConfig
import sys
arena_config_in = ArenaConfig('configs/obstacles.yaml')
env_name = 'env/AnimalAI'
train_mode = True
env = UnityEnvironment(file_name=env_name)
default_brain = env.brain_names[0]
brain = env.brains[default_brain]
env_info = env.reset(config=arena_config_in, train_mode=train_mode)[default_brain]
action = [[0,0]]
env_info = env.step(action)[default_brain]
print("VISUAL OBS: ",env_info.visual_observations)
input("HIT ME: ")
env.close()
I got back the scores from a submission, but would like to know more about how they are computed. For example, in C1, which I assume is like the sample trial 1-Food.yaml, I am able to get 29 or 30 of the food targets when doing 30 sequential runs of 1-Food.yaml. However, in the submission, I got a score of less than 15. From the docs, I think that the score in any one category is the number of runs that received a reward out of a total of 30 runs. Is that correct? If so, is it possible to provide some more examples from that category to help us figure out why we are missing so many more with the submission than with the 1-Food.yaml example?
Also, in the avoidance category, the example seems to have a target that is contained within the red avoidance area. In that case, my agent just runs around outside the area, which seems like correct behavior. However, this results in the simulation either never ending or timing out without a reward. How is that scored?
Also, when there are gold rewards, ones that do not reset the sim until they are all captured, along with green rewards (e.g., 2-Preferences example), what is the success criteria? It is not possible to get all the rewards because the sim resets after either collecting the green or after collecting all the gold ones.
It seems to require tensorflow==1.12.2, but that version is not available.
Could you help me with this?
ERROR: Could not find a version that satisfies the requirement tensorflow==1.12.2 (from animalai-train) (from versions: 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.14.0rc0, 1.14.0rc1, 1.14.0, 2.0.0a0, 2.0.0b0, 2.0.0b1) ERROR: No matching distribution found for tensorflow==1.12.2 (from animalai-train)
I have tried using numpy 1.16.1 and it works. Is there a reason for that specific set of versions?
Thanks
Dear AAO,
I am running
Mac OS X 10.14.4
Python 3.6.6
numpy '1.14.5'
When I do:
$ python3 train.py
After a few steps, during which the little blue guy runs around bumping into things, I get the error warning below and then the blue guy doesn't run around anymore.
Any ideas?
Best
Phil Neal
curiosity_enc_size: 256
model_path: ./models/train_example/Learner
INFO:mlagents.envs:Saved Model
INFO:mlagents.trainers: train_example-1: Learner: Step: 1000. No episode was completed since last summary. Training.
INFO:mlagents.envs:Saved Model
INFO:mlagents.trainers: train_example-1: Learner: Step: 2000. No episode was completed since last summary. Training.
/Users/pneal/aao/may2/animalai/trainers/ppo/trainer.py:335: RuntimeWarning: invalid value encountered in subtract
(advantages - advantages.mean()) / (advantages.std() + 1e-10))
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/numpy/core/_methods.py:112: RuntimeWarning: invalid value encountered in subtract
x = asanyarray(arr - arrmean)
INFO:mlagents.envs:Saved Model
INFO:mlagents.trainers: train_example-1: Learner: Step: 3000. No episode was completed since last summary. Training.
INFO:mlagents.envs:Saved Model
While running directly, AnimalAI-Olympics\env\AnimalAI.exe correctly displays all random objects, but using the python API:
AnimalAI-Olympics\examples>python visualizeArena.py configs/5-SpatialReasoning.yaml
displays an empty arena with a blue ball which cannot be controlled.
The same is true with the other *.yaml files.
Something is going wrong with the yaml loader.
I have installed:
pyyaml 5.1.1 pypi_0 pypi
yaml 0.1.7 hc54c509_2
I put two print() calls in arena_config.py:
class ArenaConfig(yaml.YAMLObject):
    yaml_tag = u'!ArenaConfig'

    def __init__(self, yaml_path=None):
        if yaml_path is not None:
            self.arenas = yaml.load(open(yaml_path, 'r'), Loader=yaml.Loader).arenas
            print("AAA 2 " + yaml_path)
        else:
            print("AAA 3")
            self.arenas = {}
Here is the output:
AnimalAI-Olympics\examples>python visualizeArena.py configs/5-SpatialReasoning.yaml
AAA configs/5-SpatialReasoning.yaml
AAA 2 configs/5-SpatialReasoning.yaml
AAA 3
It seems ArenaConfig() is initialized twice and the second time there is no *.yaml parameter?!
Hi,
I just built a new v0.5 following the directions.
However, I have this problem: under v0.4 this line worked:
env_info = env.reset(arenas_configurations_input=arena_config_in,train_mode=train_mode)[default_brain]
But under v0.5 I have to change it to:
env_info = env.reset(arenas_configurations=arena_config_in,train_mode=train_mode)[default_brain]
I had to lop off the "_input".
It could be a version mismatch on my end, I dunno.
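The keyword really was renamed between releases. A small compatibility shim (a hypothetical helper, not part of animalai) can paper over the rename so one script runs against either version:

```python
def reset_env(env, arena_config, train_mode=True):
    """Call env.reset() with whichever keyword this animalai version accepts.

    Hypothetical helper: tries the newer 'arenas_configurations' keyword
    first, then the older 'arenas_configurations_input', then 'config'.
    """
    for kwarg in ("arenas_configurations", "arenas_configurations_input", "config"):
        try:
            return env.reset(**{kwarg: arena_config, "train_mode": train_mode})
        except TypeError:
            # Wrong keyword for this version; try the next spelling.
            continue
    raise TypeError("no known reset() keyword accepted by this env")
```

One caveat of this sketch: a TypeError raised inside a matching reset() would also be swallowed, so it is a convenience for interactive use rather than production error handling.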
testDocker.py runs 5 episodes but only resets the agent once.
I assume this is just a mistake and not the case for the real submission evaluation script?
I created a MetaCurriculm and added it to the TrainerController. I got an error on the environment reset.
I modified the trainMLAgents.py:
# ...
maybe_meta_curriculum = MetaCurriculum('configs/curricula/')
# ...
tc = TrainerController(model_path, summaries_dir, run_id + '-' + str(sub_id),
save_freq, maybe_meta_curriculum,
load_model, train_model,
keep_checkpoints, lesson, external_brains, run_seed, arena_config_in)
tc.start_learning(env, trainer_config)
And this is the error:
Traceback (most recent call last):
File "trainMLAgents.py", line 87, in <module>
tc.start_learning(env, trainer_config)
File "/Applications/anaconda3/envs/animalai/lib/python3.6/site-packages/animalai_train/trainers/trainer_controller.py", line 205, in start_learning
curr_info = self._reset_env(env)
File "/Applications/anaconda3/envs/animalai/lib/python3.6/site-packages/animalai_train/trainers/trainer_controller.py", line 183, in _reset_env
return env.reset(config=self.meta_curriculum.get_config())
TypeError: reset() got an unexpected keyword argument 'config'
I must have misunderstood something. When I send an action to turn right [0.0, 2.0], the image returned looks like the agent has turned to the left; vice versa, with an action to turn left [0.0, 1.0], the agent appears to be turning to the right. Forward and back appear correct.
In standalone mode, WASD works correctly.
Anyone else seeing this?
Is it possible to get the console output or perhaps some log files from the runs? That would help a lot with debugging. The stdout and stderr text files have very little info in them.
The agent's velocity is 3-dimensional but the actions are 2-dimensional only. Does this imply that the Z component of velocity is always zero and the agent's location is restricted to the XY plane?
Experimentation seems to indicate that the velocity observations are in the arena reference frame (moving forward while facing a corner yields a change in both the x (index 0) and z (index 2) velocity values). Is that correct? Are there any units associated with the velocity values, such as rows per second?
I would have thought that an agent in the wild would be able to observe its velocity in its own frame but not in the world frame.
Also, just to be sure: actions such as forward and backward are implemented as momentary accelerations in the agent's frame, is that correct? I surmised that by observing the effects of multiple forward inputs: the velocity increases with each one and then gradually slows to zero after the inputs end.
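If the observations really are arena-frame (vx, vy, vz), recovering the agent-frame velocity is a 2-D rotation of the planar components about the vertical axis, given a heading you track yourself (e.g. by integrating turn actions). The sign and zero conventions below are assumptions:

```python
import math

def world_to_agent_frame(vx, vz, heading_rad):
    """Rotate an arena-frame (x, z) velocity into the agent's local frame.

    heading_rad is the agent's assumed yaw: 0 means facing +z, and
    positive values turn toward +x. Returns (v_right, v_forward).
    """
    cos_h, sin_h = math.cos(heading_rad), math.sin(heading_rad)
    v_forward = vz * cos_h + vx * sin_h  # component along the facing direction
    v_right = vx * cos_h - vz * sin_h    # component to the agent's right
    return v_right, v_forward
```

For example, an agent facing +x (heading pi/2) that moves with arena-frame velocity (1, 0, 0) would see all of it as forward motion in its own frame.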
Hi,
I just upgraded to version 0.4 from 0.3.
I get this error message under v0.4:
"env_info = env.reset(arenas_configurations_input=arena_config_in,train_mode=train_mode)[default_brain]
TypeError: reset() got an unexpected keyword argument 'arenas_configurations_input'"
env.reset works in v0.4 when I use:
env_info = env.reset(config=arena_config_in,train_mode=train_mode)[default_brain]
('config' as the keyword vs 'arenas_configurations_input')
Best regards 8-)
I am running the environment on a HPC cluster where Docker containers are not allowed, but Singularity containers are.
I followed your Docker instructions and then created a singularity image which runs the environment on headless nodes using offscreen rendering.
However, this was only successful on CPU-only nodes; when I use a CPU+GPU node and launch Singularity with the --nv flag (for GPU support), the environment hangs with the typical error message "The Unity environment took too long to respond. Make sure that ..."
I was wondering if you, or anyone else, has had a similar issue and are aware of the solution?
Many thanks for any assistance.
Hi
A sense of depth is quite crucial for many real-world tasks.
Would it make sense to include stereo vision for the agent?
I have the environment up and running well. However, when I run into a "BadGoal" it returns a positive reward, just like a "GoodGoal" (0.5 to 5 depending on the size). It shows up with the correct color (red) in the arena and "GoodGoal" shows up green, as expected.
Bit of a rant here. When the competition rules were announced, they stated that the resolution would be 84x84 for submissions, fixed and not changing. Also, a time limit was set. To decide halfway through the competition to change such a fundamental parameter is unfair. Huge amounts of time have gone into training at 84x84, along with a lot of testing and other aspects of the code. Similarly, the time constraint was a consideration right from the start. Now that the basic aspects of the code are working, the second half of the competition time was planned to be spent exploring interesting avenues using that basic code. Instead, a tremendous amount of time and resources will have to be spent redoing that initial work at various resolutions. Furthermore, this benefits groups with large computing resources and budgets. At 256x256, the network training needed for the existing code is very hard to do on our machines. I understand that we may get $500 for Amazon AWS, but that funding was going to be used to explore interesting new areas rather than just re-training at higher resolution. Furthermore, at 256x256, we will burn through that $500 pretty quickly.
Unhappy here...........
I just want to get the first-person viewpoint image and the third-person viewpoint image simultaneously.
If I set play=True, it's hard to collect many (2k+) images.
If possible, could you release the Unity environment code, except for configurations 8-10?
Any other ideas?
Hi,
Yesterday I started playing with the sample script trainMLAgents.py. I noticed that the simulation runs on a single core. Is there an easy way to make it multicore to speed up training?
Thanks
ironbar
Hi, I found the window showing the environment during training very helpful for observing agent behavior. However, the FPS on my machine is very low (at most 4 frames/s) and it stalls from time to time, while both CPU and GPU utilization stay low.
I have read that this might be related to the setup Unity uses to speed up training, but I don't know whether it is an issue with my machine or the same for everyone. Is there any way to improve it, or do you have any suggestions for observing agent behavior conveniently? Thanks!
I cannot seem to find the right syntax in the documentation for controlling the agent's spawning location.
Is there some way to end a run early? If we think we have all the food we are going to get, can we tell the run to stop so that we do not continue to lose 1/T reward each step? This would be equivalent to an animal deciding to go to sleep rather than wander around with predators about.
If so, how do we do it?
Thanks!
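The environment has no documented early-termination action, so the closest workaround I can see is a wrapper (hypothetical, not part of the animalai API) that, once the agent decides to "sleep", replaces every subsequent action with a no-op until the episode times out on its own. A stub env makes the sketch runnable without the AnimalAI binary.

```python
class SleepWrapper:
    """Once asleep, ignore the policy's actions and stand still."""
    def __init__(self, env, noop_action=(0.0, 0.0)):
        self.env = env
        self.noop = list(noop_action)
        self.sleeping = False

    def sleep(self):
        self.sleeping = True

    def step(self, action):
        # Substitute the no-op action after the agent has gone to sleep.
        return self.env.step(self.noop if self.sleeping else action)

# Stub env standing in for the real UnityEnvironment.
class StubEnv:
    def __init__(self):
        self.actions = []
    def step(self, action):
        self.actions.append(list(action))
        return {}

env = SleepWrapper(StubEnv())
env.step([1.0, 0.0])
env.sleep()
env.step([1.0, 0.0])    # ignored: the agent stays put
print(env.env.actions)  # [[1.0, 0.0], [0.0, 0.0]]
```

Note this only keeps the agent from taking risky actions; the per-step reward loss still accrues, so a true early stop would need support in the environment itself.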
Maybe it is already there, but to eliminate possible confusion on my part, would it be possible to embed the animalai version in the brain object, or in its own function callable from Python?
Hi
I was wondering if including the orientation or rotational speed in the observations would make sense.
Currently it can be inferred from the velocity, but when the agent is just rotating in place there is only the visual information.
Thanks
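The workaround described above can be sketched numerically: while the agent is moving, the angle between its facing direction and its motion follows from the local-frame velocity observation (x lateral, z forward). The helper `drift_angle_deg` is hypothetical, not part of the animalai API.

```python
import math

# Angle between the agent's facing direction and its velocity vector,
# from the local-frame velocity observation (vx lateral, vz forward).
def drift_angle_deg(vx, vz):
    return math.degrees(math.atan2(vx, vz))

# Using the first turning step from the velocity trace elsewhere in this
# thread: x = 1.82, z = 17.32 after one rotate action.
print(round(drift_angle_deg(1.82, 17.32), 1))  # ~6.0 degrees per turn step
```

As the post notes, this breaks down when the agent rotates in place, since the velocity observation is then zero.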
I cannot install this package on macOS: building grpcio fails. I did some digging and found a project that solved the problem (https://github.com/jeongyoonlee/Kaggler/blob/master/setup.py) by passing compile args. Relevant SO threads: https://stackoverflow.com/questions/52460913/compiling-cython-with-xcode-10/52466939 and https://stackoverflow.com/questions/1676384/how-to-pass-flag-to-gcc-in-python-setup-py-script
I tried hacking animalai's setup.py with no success. Here's the error I'm hitting:
Building wheels for collected packages: grpcio
Running setup.py bdist_wheel for grpcio ... error
Complete output from command /anaconda3/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-wheel-q8xajt41 --python-tag cp37:
Found cython-generated files...
running bdist_wheel
running build
running build_py
running build_project_metadata
creating python_build
creating python_build/lib.macosx-10.7-x86_64-3.7
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_channel.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_common.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_utilities.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_plugin_wrapping.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_interceptor.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_grpcio_metadata.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_server.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
copying src/python/grpcio/grpc/_auth.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/_server_adaptations.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/interfaces.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/_metadata.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/utilities.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/implementations.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
copying src/python/grpcio/grpc/beta/_client_adaptations.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/beta
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/experimental
copying src/python/grpcio/grpc/experimental/gevent.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/experimental
copying src/python/grpcio/grpc/experimental/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/experimental
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework
copying src/python/grpcio/grpc/framework/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython
copying src/python/grpcio/grpc/_cython/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/callable_util.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/abandonment.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/stream.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/stream_util.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/future.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
copying src/python/grpcio/grpc/framework/foundation/logging_pool.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/foundation
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/common
copying src/python/grpcio/grpc/framework/common/style.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/common
copying src/python/grpcio/grpc/framework/common/cardinality.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/common
copying src/python/grpcio/grpc/framework/common/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/common
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces
copying src/python/grpcio/grpc/framework/interfaces/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/face
copying src/python/grpcio/grpc/framework/interfaces/face/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/face
copying src/python/grpcio/grpc/framework/interfaces/face/utilities.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/face
copying src/python/grpcio/grpc/framework/interfaces/face/face.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/face
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/base
copying src/python/grpcio/grpc/framework/interfaces/base/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/base
copying src/python/grpcio/grpc/framework/interfaces/base/utilities.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/base
copying src/python/grpcio/grpc/framework/interfaces/base/base.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/framework/interfaces/base
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython/_cygrpc
copying src/python/grpcio/grpc/_cython/_cygrpc/__init__.py -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython/_cygrpc
creating python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython/_credentials
copying src/python/grpcio/grpc/_cython/_credentials/roots.pem -> python_build/lib.macosx-10.7-x86_64-3.7/grpc/_cython/_credentials
running build_ext
b'make: Circular /private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/libs/opt/libaddress_sorting.a <- /private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/libs/opt/libares.a dependency dropped.\n'
Found cython-generated files...
building 'grpc._cython.cygrpc' extension
creating python_build/temp.macosx-10.7-x86_64-3.7
creating python_build/temp.macosx-10.7-x86_64-3.7/src
creating python_build/temp.macosx-10.7-x86_64-3.7/src/python
creating python_build/temp.macosx-10.7-x86_64-3.7/src/python/grpcio
creating python_build/temp.macosx-10.7-x86_64-3.7/src/python/grpcio/grpc
creating python_build/temp.macosx-10.7-x86_64-3.7/src/python/grpcio/grpc/_cython
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/anaconda3/include -I/anaconda3/include -arch x86_64 -DOPENSSL_NO_ASM=1 -D_WIN32_WINNT=1536 -DGPR_BACKWARDS_COMPATIBILITY_MODE=1 -DHAVE_CONFIG_H=1 -DGRPC_ENABLE_FORK_SUPPORT=1 -DPyMODINIT_FUNC=extern "C" __attribute__((visibility ("default"))) PyObject* -DGRPC_POSIX_FORK_ALLOW_PTHREAD_ATFORK=1 -Isrc/python/grpcio -Iinclude -I. -Ithird_party/boringssl/include -Ithird_party/zlib -Ithird_party/cares -Ithird_party/cares/cares -Ithird_party/cares/config_darwin -Ithird_party/address_sorting/include -I/anaconda3/include/python3.7m -c src/python/grpcio/grpc/_cython/cygrpc.cpp -o python_build/temp.macosx-10.7-x86_64-3.7/src/python/grpcio/grpc/_cython/cygrpc.o -std=c++11 -fvisibility=hidden -fno-wrapv -fno-exceptions -DPB_FIELD_16BIT -pthread
warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]
src/python/grpcio/grpc/_cython/cygrpc.cpp:1343:14: fatal error: 'cstdlib' file not found
#include <cstdlib>
         ^~~~~~~~~
1 warning and 1 error generated.
creating var
creating var/folders
creating var/folders/39
creating var/folders/39/glh0vc5x3qq23jgtd82s68200000gn
creating var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T
creating var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/tmpwp44atj9
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/anaconda3/include -I/anaconda3/include -arch x86_64 -I/anaconda3/include/python3.7m -c /var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/tmpwp44atj9/a.c -o var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/tmpwp44atj9/a.o
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
_classic_spawn(self, command)
File "/anaconda3/lib/python3.7/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/anaconda3/lib/python3.7/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/anaconda3/lib/python3.7/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/src/python/grpcio/commands.py", line 292, in build_extensions
build_ext.build_ext.build_extensions(self)
File "/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
self.build_extension(ext)
File "/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 205, in build_extension
_build_ext.build_extension(self, ext)
File "/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/anaconda3/lib/python3.7/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/anaconda3/lib/python3.7/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/setup.py", line 313, in <module>
cmdclass=COMMAND_CLASS,
File "/anaconda3/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/anaconda3/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/anaconda3/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/home/.local/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 192, in run
self.run_command('build')
File "/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/anaconda3/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/src/python/grpcio/commands.py", line 297, in build_extensions
"Failed `build_ext` step:\n{}".format(formatted_exception))
commands.CommandError: Failed `build_ext` step:
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/distutils/unixccompiler.py", line 118, in _compile
extra_postargs)
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/src/python/grpcio/_spawn_patch.py", line 54, in _commandfile_spawn
_classic_spawn(self, command)
File "/anaconda3/lib/python3.7/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/anaconda3/lib/python3.7/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/anaconda3/lib/python3.7/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/private/var/folders/39/glh0vc5x3qq23jgtd82s68200000gn/T/pip-install-v_y2xwtl/grpcio/src/python/grpcio/commands.py", line 292, in build_extensions
build_ext.build_ext.build_extensions(self)
File "/anaconda3/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 194, in build_extensions
self.build_extension(ext)
File "/anaconda3/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 205, in build_extension
_build_ext.build_extension(self, ext)
File "/anaconda3/lib/python3.7/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/anaconda3/lib/python3.7/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/anaconda3/lib/python3.7/distutils/unixccompiler.py", line 120, in _compile
raise CompileError(msg)
distutils.errors.CompileError: command 'gcc' failed with exit status 1
Failed building wheel for grpcio
Running setup.py clean for grpcio
Failed to build grpcio
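One workaround consistent with the compiler warning in the log above (an assumption on my part, matching the linked SO threads for Xcode 10, not something I can confirm on every setup) is to force clang to use libc++ when pip builds grpcio:

```shell
# The warning says clang cannot find the C++ standard headers; forcing
# libc++ via CFLAGS is the fix suggested by the warning itself.
CFLAGS="-stdlib=libc++" pip install --no-cache-dir grpcio
```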
There is something odd about the movement model. It does not apply a constant force with each forward step: the velocity change per forward action diminishes as the velocity increases.
Can you provide more insight into this aspect of the movement model?
Also, establishing a high velocity moving forward and then doing a series of turns without a forward or back component would be expected to yield a negative z velocity once the turn exceeds 90 degrees, but it does not.
See these data:
enter next action: 0
sending next action: [0.0, 0.0]
Velocity observed: x = 0.00, z = 0.00
We are stopped in one corner of the arena, facing the opposite corner.
Now we move forward:
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 2.44
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 4.60
At first, our velocity increases by about 2.4 m/s with each push
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 6.51
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 8.20
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 9.70
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 11.02
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 12.19
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 13.23
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 14.14
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 14.95
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 15.67
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 16.30
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 16.87
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 17.36
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 17.80
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 18.19
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 18.53
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 18.84
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 19.11
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 19.35
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 19.56
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 19.74
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = -0.00, z = 19.91
enter next action: 1
sending next action: [1.0, 0.0]
Velocity observed: x = 0.00, z = 20.05
The final forward push only increases our velocity by 0.14 m/s.
We are moving fast toward the opposite corner of the arena. Now we will turn in place, with no additional forward or backward inputs.
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 1.82, z = 17.32
The x velocity increases, as expected, since we are now moving in a different direction than we are facing.
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 3.14, z = 14.75
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.02, z = 12.38
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.55, z = 10.23
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.79, z = 8.29
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.79, z = 6.59
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.61, z = 5.12
Friction is slowing us down, as expected.
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 4.28, z = 3.86
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 3.86, z = 2.81
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 3.38, z = 1.95
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 2.85, z = 1.27
At this point we have turned more than 90 degrees, so we should have a negative z velocity, but it is still positive
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 2.32, z = 0.75
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 1.79, z = 0.38
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 1.29, z = 0.14
Even though we have turned about 140 degrees, our z velocity is still positive.
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 0.82, z = -0.00
At last z velocity is 0. This should have happened when we were turned 90 degrees, but at this point we have turned about 150 degrees.
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 0.40, z = -0.04
enter next action: 3
sending next action: [0.0, 2.0]
Velocity observed: x = 0.00, z = 0.00
Eventually, friction makes us stop.
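The forward-push trace above is consistent with a simple linear-drag update applied once per action. This is a hypothesis about the data, not the engine's actual code; the constants k and a are eyeballed from the first few steps.

```python
# Hypothetical fit: per-step update v' = k*v + a (linear drag),
# with k ~ 0.885 and a ~ 2.44 estimated from the trace above.
k, a = 0.885, 2.44
v, trace = 0.0, []
for _ in range(24):
    v = k * v + a
    trace.append(round(v, 2))

print(trace[:5])              # close to the observed 2.44, 4.60, 6.51, 8.20, 9.70
print(round(a / (1 - k), 1))  # implied terminal velocity ~ 21.2
```

This would explain the diminishing returns per push: the velocity decays geometrically toward a terminal value of a/(1-k), which matches the observed plateau near 20.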
OTCPreprocessing is an undefined name in this context. Should this be AAIPreprocessing instead?
flake8 testing of https://github.com/beyretb/AnimalAI-Olympics on Python 3.7.1
$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
./examples/animalai_train/animalai_train/dopamine/animalai_lib.py:53:11: F821 undefined name 'OTCPreprocessing'
env = OTCPreprocessing(env)
^
1 F821 undefined name 'OTCPreprocessing'
1
E901,E999,F821,F822,F823 are the "showstopper" flake8 issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues, which are merely "style violations" -- useful for readability but they do not affect runtime safety.
F821: undefined name `name`
F822: undefined name `name` in `__all__`
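A minimal illustration of why F821 is a "showstopper": a name flake8 flags as undefined raises NameError the moment that code path runs. Here `OTCPreprocessing` stands in for the undefined name reported above.

```python
# flake8 would flag the body of wrap() with F821; at runtime the same
# line raises NameError as soon as wrap() is called.
def wrap(env):
    return OTCPreprocessing(env)  # noqa: F821

try:
    wrap(object())
except NameError as err:
    print(err)  # name 'OTCPreprocessing' is not defined
```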
Would it be possible to have a training example that doesn't use TensorFlow and/or the Unity reinforcement agents?
Something really simple that calls the AnimalAI training environment with a good brain and prints all the observation data of all kinds: visual, text, float.
python visualizeArena.py configs/exampleCondig.yaml
Should that be "Config.yaml" ?
The following code used to work (more or less) on v0.2 but not on v0.3.
I get an error:
python3 testMovement.py
Traceback (most recent call last):
File "testMovement.py", line 2, in <module>
from animalai.envs.ArenaConfig import ArenaConfig
ModuleNotFoundError: No module named 'animalai.envs.ArenaConfig'
When I change the line to:
from animalai.envs.arena_config import ArenaConfig
(lowercase arena_config), things start to run, but then I get:
CrashReporter: initialized
Mono path[0] = '/Users/pneal/aao/may26/env/AnimalAI.app/Contents/Resources/Data/Managed'
Mono config path = '/Users/pneal/aao/may26/env/AnimalAI.app/Contents/MonoBleedingEdge/etc'
INFO:mlagents.envs:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of Training Brains : 1
Traceback (most recent call last):
File "testMovement.py", line 16, in <module>
env.reset(config=arena_config_in, train_mode=True)
TypeError: reset() got an unexpected keyword argument 'config'
When I change the reset line to:
env.reset(train_mode=True)
then I don't get the "justFood.yaml" environment; I get the huge default
environment with lots of obstacles, etc.
Best regards,
Phil Neal
+++++++++++++++++++++++cut here for original code ++++++++++++++++=
from animalai.envs.environment import UnityEnvironment
from animalai.envs.ArenaConfig import ArenaConfig
import matplotlib.pyplot as plt
env = UnityEnvironment(
    file_name='env/AnimalAI',   # Path to the environment
    worker_id=6,                # Unique ID for running the environment (used for connection)
    seed=0,                     # The random seed
    docker_training=False,      # Whether or not you are training inside a docker
    no_graphics=False,          # Always set to False
    n_arenas=1,                 # Number of arenas in your environment
    play=True                   # Set to False for training
)
arena_config_in = ArenaConfig('configs/justFood.yaml')
env.reset(config=arena_config_in, train_mode=True)
#plt.ion()

def step(action):
    info = env.step(action)
    b1 = info['Learner']
    img = b1.visual_observations
    plt.imshow(img[0][0])
    plt.close()