Comments (3)
In my experience it was just faster than reusing the states.
Why is it faster to recompute the states? Presumably, the already computed states could just be left in GPU memory? You can still batch the N steps together in the evaluate_actions computation, even if you are reusing the already computed states.
Recurrent policies would be significantly slower for KFAC since they require writing custom RNN/LSTM kernels
If I were to write a recurrent policy that uses FC layers instead of an RNN, would it work out of the box with your KFAC optimizer? The optimizer doesn't handle GRU/RNN cells, but can it handle recurrent policies without those?
from pytorch-a2c-ppo-acktr-gail.
-
In my experience it was just faster than reusing the states. The current approach uses batched computation on the GPU more efficiently (there should be no difference when computations are performed on the CPU).
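To illustrate the batching point: recomputing the hidden states from the rollout's initial state lets the whole sequence go through the RNN in one call (which cuDNN executes efficiently on GPU), whereas reusing per-step stored states forces a Python-level loop over timesteps. A minimal sketch with hypothetical sizes (T timesteps, B parallel environments; names not from the repo):

```python
import torch
import torch.nn as nn

# Hypothetical sizes: T timesteps, B parallel environments.
T, B, obs_dim, hidden = 5, 4, 8, 16

rnn = nn.GRU(obs_dim, hidden)
obs = torch.randn(T, B, obs_dim)   # rollout observations
h0 = torch.zeros(1, B, hidden)     # hidden state at the start of the rollout

# (a) Reusing stored per-step states implies a step-by-step loop:
h = h0
stepwise = []
for t in range(T):
    out, h = rnn(obs[t:t + 1], h)
    stepwise.append(out)
stepwise = torch.cat(stepwise)

# (b) Recomputing from h0 processes the whole sequence in one batched call:
batched, _ = rnn(obs, h0)

# Both paths produce the same hidden-state sequence.
assert torch.allclose(stepwise, batched, atol=1e-5)
```

The two paths are mathematically identical; the difference is purely in how well the computation maps onto the GPU, which matches the observation above that the speedup disappears on CPU.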
-
Recurrent policies would be significantly slower for KFAC since they require writing custom RNN/LSTM kernels (instead of using the ones provided by cuDNN) and then using specific approximations for them: https://openreview.net/forum?id=HyMTkQZAb&noteId=HkIsQkpSG This approximation hadn't been published yet when I implemented this code.
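On the question of an FC-only recurrent policy: a recurrent cell built purely from nn.Linear modules would expose only layer types a Linear/Conv2d-aware KFAC optimizer recognizes. A hypothetical sketch (class and attribute names are illustrative, not from the repo):

```python
import torch
import torch.nn as nn

class LinearRecurrentCell(nn.Module):
    """Hypothetical recurrent cell built only from nn.Linear modules,
    so every parameterized layer is a type KFAC-style optimizers
    typically register (Linear/Conv2d)."""

    def __init__(self, obs_dim, hidden):
        super().__init__()
        self.input_fc = nn.Linear(obs_dim, hidden)   # input -> hidden
        self.hidden_fc = nn.Linear(hidden, hidden)   # hidden -> hidden

    def forward(self, x, h):
        return torch.tanh(self.input_fc(x) + self.hidden_fc(h))

cell = LinearRecurrentCell(8, 16)
h = torch.zeros(4, 16)  # batch of 4 environments
for t in range(3):      # unroll a few timesteps
    h = cell(torch.randn(4, 8), h)

# Every submodule with trainable parameters is an nn.Linear:
assert all(isinstance(m, nn.Linear)
           for m in cell.modules() if list(m.parameters(recurse=False)))
```

The caveat, and the reason the specialized approximations linked above exist: KFAC's Kronecker-factored curvature estimate assumes each layer is applied once per backward pass, while a recurrent cell reuses the same Linear weights at every timestep. So the layers may be mechanically registrable, but the resulting approximation is not necessarily well-founded without RNN-specific treatment.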
That's just what happened in practice. It may well be possible to make state reuse faster, but the backward pass is more expensive anyway, so it would not make this code significantly faster overall.