Comments (5)
Cool idea, trying to use just the meta graph in a standalone program!
I think the general idea should work, so some detail must be wrong. The action sampling looks suspect:
action_tf = sampled_action.argmax()
It's not clear where the nondeterminism comes from. If you change the policy from stochastic to deterministic, it may or may not work. For example, the agent may have learned to keep the paddle in one place by setting (center=0%, up=50%, down=50%) rather than (center=100%, up=0%, down=0%).
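To illustrate why a stochastic policy has to be sampled rather than argmax'd, here's a minimal numpy sketch (the probabilities and helper name are made up for illustration): with a learned policy like (center=0%, up=50%, down=50%), argmax would always return the same action, while sampling alternates up/down as intended.

```python
import numpy as np

def sample_action(probs, rng):
    """Draw an action index from a categorical distribution.

    A policy like (center=0%, up=50%, down=50%) only behaves correctly
    if you sample from it; argmax would always pick one action and
    collapse the learned behaviour.
    """
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
probs = [0.0, 0.5, 0.5]  # center, up, down -- the hypothetical paddle policy
actions = [sample_action(probs, rng) for _ in range(1000)]
# 'center' (index 0) is never chosen; up/down come out roughly equally
```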
About speed: calling env.render() syncs to the monitor's frame rate. Usually that's 60 Hz, but some combinations of high-res monitors and MacBooks change it to 30 Hz.
from universe-starter-agent.
action_tf is already nondeterministic since it comes from get_sample_op, which refers to self.sample from the LSTMPolicy in model.py. But the TF op gives back a one-hot vector while env.step() needs a numerical argument, which is why I use .argmax() (as is done in the a3c.py code).
But yeah, I had also thought about this at first; that's why you can see a commented-out line #action = ... where I implemented the sampling myself using numpy (see the functions at the top of the script), but the result is just as bad with those functions.
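For what it's worth, argmax on a one-hot sample shouldn't itself destroy the randomness: the one-hot vector already encodes the random draw, and argmax only converts it to an index. A minimal numpy sketch (the helper mimics, but is not, the actual TF sampling op):

```python
import numpy as np

def one_hot_sample(probs, rng):
    """Stand-in for a TF multinomial sampling op that returns a one-hot vector."""
    idx = rng.choice(len(probs), p=probs)
    one_hot = np.zeros(len(probs))
    one_hot[idx] = 1.0
    return one_hot

rng = np.random.default_rng(1)
probs = [0.1, 0.6, 0.3]
# argmax recovers the sampled index, so the result is still stochastic:
action = one_hot_sample(probs, rng).argmax()  # numeric arg for env.step()
```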
So basically I have no idea why the trained agent could be performing differently from what TensorBoard is claiming...
And for speed I was actually referring to training (where I'm obviously not rendering the environment), and I have never changed devices.
Determinism
I see, that makes sense.
I don't think your code resets the LSTM state at the end of episodes. The trainer does, here: https://github.com/openai/universe-starter-agent/blob/master/a3c.py#L147. That might make a big difference.
If that doesn't fix it, maybe print out the values of observation, lstm_c, lstm_h, action_logits, and action both in the trainer and your code, and see if you see differences. The environment is only slightly randomized, so you could hope to see identical observations at the start of episodes.
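The reset at episode boundaries can be sketched like this (ToyPolicy and its method names are illustrative stand-ins, not the real LSTMPolicy API):

```python
import numpy as np

class ToyPolicy:
    """Toy stand-in for LSTMPolicy: carries a recurrent (c, h) state."""
    def get_initial_features(self):
        return np.zeros(4), np.zeros(4)

    def act(self, obs, c, h):
        return 0, c + 1, h + 1  # pretend recurrent update

def run_episodes(policy, episode_lengths):
    """Reset (c, h) at every episode boundary, as the trainer in a3c.py does."""
    states_at_start = []
    for length in episode_lengths:
        c, h = policy.get_initial_features()  # the reset that may be missing
        states_at_start.append(c.copy())
        for _ in range(length):
            _, c, h = policy.act(None, c, h)
    return states_at_start

starts = run_episodes(ToyPolicy(), [3, 5])
# every episode begins from the initial state, not stale carried-over state
```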
Performance
On my MacBook Pro (Retina, 15-inch, Mid 2015), I'm running:
python train.py --num-workers 6 --env-id Pong-v0 --log-dir /tmp/pong
This is with Anaconda Python 3.5, TensorFlow 0.11.0, and OpenCV 3.1.0; I get similar results with TF 0.12.1. OpenCV uses a lot of the CPU time, and different versions use the hardware in different ways. If you're using the install from the starter-agent README, it should be fine. If you're using another install, see if there are installation options around how it parallelizes.
Whuuuuut, I just found my error loool!
I trained the Agent on "PongDeterministic-v3" but in my visualization script I used "Pong-v0"...
So there was nothing wrong with the code, but apparently some settings differ slightly between those two Gym Pong versions, completely wrecking the agent's behavior.
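(If I remember right, Pong-v0 uses a randomized frame skip while PongDeterministic-v3 uses a fixed one, so the dynamics the agent sees are genuinely different.) A trivial guard like this in my own scripts would have caught the mismatch (check_env_ids is my own hypothetical helper, not starter-agent code):

```python
def check_env_ids(train_id, eval_id):
    """Fail fast if training and visualization scripts disagree on the env id."""
    if train_id != eval_id:
        raise ValueError(
            "env mismatch: trained on %r but evaluating on %r" % (train_id, eval_id))
    return train_id

# The id both scripts should have agreed on:
check_env_ids("PongDeterministic-v3", "PongDeterministic-v3")
```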
Well, anyway, thanks a lot for your fast feedback; it's much appreciated.
Now I can finally start trying some actual changes to the model :)
Are there any good forums where people discuss the approaches they're trying, or should I stick to reading papers?
best regards,
I guess this topic can be closed 👍
There's some good discussion at https://discuss.openai.com/. Other than that: NIPS, ICLR, ICML, and arXiv papers.