Comments (6)
The reason for clipping the std is mainly to prevent NaNs during the initial transients; the bounds are typically not active afterwards. You could try different ways to implement the clipping, but my intuition is that there won't be much difference in performance or stability.
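For illustration, a minimal numpy sketch of this kind of clipping (the bound values here are an assumption for the example, not necessarily the repository's constants):

```python
import numpy as np

# Hypothetical bounds -- the exact values are an assumption for illustration.
LOG_STD_MIN, LOG_STD_MAX = -20.0, 2.0

def clipped_std(log_std):
    """Clip the network's log-std output before exponentiating.

    Without the clip, an extreme log-std during the initial transient can
    produce std = 0 or std = inf, and the log-probability becomes NaN.
    """
    return np.exp(np.clip(log_std, LOG_STD_MIN, LOG_STD_MAX))
```

Once training settles, typical log-std outputs fall well inside the bounds, so the clip is a no-op afterwards.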
from softlearning.
Thanks @haarnoja for your reply. I have one further question: it seems the 'pre-tanh' Gaussian actions are sampled via an affine transformation of a unit Gaussian. Is that for better numerical stability, or is it identical to directly using the mean and diagonal std with MultivariateNormalDiag?
Good question! You're right, this is identical to directly using the mean and std with MultivariateNormalDiag. The reason I implemented it this way is that, at the time of writing, I wasn't sure what the best way would be to fit different distributions into the Keras models. I also wanted to fit the RealNVPPolicy (which unfortunately doesn't yet exist in this code base) and the GaussianPolicy into a similar pattern, to make them more readable and usable.
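To make the equivalence concrete, here is a small plain-numpy sketch (not the repository's TensorFlow code) showing that the affine-transform view and the direct diagonal-Gaussian view assign the same log-probability to a sample:

```python
import numpy as np

rng = np.random.default_rng(0)
mean, std = np.array([0.5, -1.0]), np.array([0.3, 2.0])

# Reparameterized / affine form: x = mean + std * eps, with eps ~ N(0, I)
eps = rng.standard_normal(2)
x = mean + std * eps

# Direct diagonal-Gaussian log-prob of the same x
log_prob_direct = -0.5 * np.sum(
    ((x - mean) / std) ** 2 + 2.0 * np.log(std) + np.log(2.0 * np.pi))

# Affine-transform view: unit-Gaussian log-prob of eps, minus the
# log|det| of the diagonal scale (sum of log std)
log_prob_base = -0.5 * np.sum(eps ** 2 + np.log(2.0 * np.pi))
log_prob_affine = log_prob_base - np.sum(np.log(std))
```

The two quantities agree term by term, since (x - mean) / std recovers eps exactly.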
I don't actually think the policies we have right now are implemented in the best possible way. A better approach would be to use the new TensorFlow Probability layers for these transformations. I'm planning to refactor them once we upgrade to TF 2.
@hartikainen Thank you so much for the detailed explanation. I have a quick follow-up question: in PG methods, we often stop the gradients of a normally sampled action (say, one drawn with the reparameterization trick) when computing its log-probability, since otherwise the mean head would get zero gradient. However, in SAC (with the tanh-transformed Gaussian), it seems to me that we need the gradient to flow through the log-abs-det part; should we also allow gradients to flow into the base distribution (i.e., the original Gaussian)?
I tried stopping the gradient of the tanh-transformed sampled action, and it breaks the algorithm: the gradient explodes (the gradient norm can grow to e+20).
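As a sanity check on why the gradient has to flow through the sample, here is a hedged one-dimensional numpy sketch (the function names and the toy objective f(a) = a² are mine, not SAC's): the pathwise gradient of f(tanh(mu + sigma·eps)) with respect to mu is exactly the term that a stop-gradient on the action would remove.

```python
import numpy as np

def pathwise_grad_mu(f_prime, mu, sigma, eps):
    """Reparameterized gradient d f(a) / d mu, with a = tanh(mu + sigma * eps).

    Chain rule: df/da * da/du * du/dmu = f'(a) * (1 - a**2) * 1.
    Stopping the gradient on the sampled action zeroes out this entire
    path, leaving mu with no learning signal from f.
    """
    a = np.tanh(mu + sigma * eps)
    return f_prime(a) * (1.0 - a ** 2)

# Finite-difference check with a toy objective f(a) = a**2 (so f'(a) = 2a)
mu, sigma, eps, h = 0.3, 0.5, 0.7, 1e-6
f = lambda m: np.tanh(m + sigma * eps) ** 2
fd = (f(mu + h) - f(mu - h)) / (2.0 * h)
```

In SAC the reparameterized policy objective relies on exactly this pathwise term, which is consistent with training breaking when the sample's gradient is stopped.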
Thanks @hartikainen @haarnoja for your kind replies. It is much clearer to me now.
By the way, the numerically stable formula for the log-abs-det proposed in this repo is really elegant; compared with clamping the original formula, it also leads to slightly better final performance.
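For reference, the stable form commonly used for the tanh log-abs-det-Jacobian rewrites log(1 − tanh(u)²) as 2·(log 2 − u − softplus(−2u)); I believe this is the expression being referenced, but treat the sketch below (plain numpy, my own naming) as a reconstruction rather than the repository's exact code:

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def log_abs_det_tanh(u):
    """Stable log|d tanh(u)/du| = log(1 - tanh(u)**2).

    Uses the identity 1 - tanh(u)**2 = 4 / (e**u + e**-u)**2, which gives
    2 * (log 2 - u - softplus(-2u)). Unlike the naive formula, this never
    evaluates log(0): for large |u|, tanh(u)**2 rounds to exactly 1 in
    floating point and log(1 - tanh(u)**2) would return -inf.
    """
    return 2.0 * (np.log(2.0) - u - softplus(-2.0 * u))

u = np.array([-3.0, 0.0, 3.0])
naive = np.log(1.0 - np.tanh(u) ** 2)   # still finite at these moderate u
```

The stable form matches the naive one wherever the naive one is finite, and remains finite (≈ 2·(log 2 − u) for large u) where the naive one underflows.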
Thanks! This was actually originally proposed by @gjtucker, and we have indeed found it to work really well.