
Comments (6)

haarnoja commented on May 19, 2024

The reason for clipping the std is mainly to prevent nans during initial transients, and the bounds are not typically active afterwards. You could try out different ways to implement clipping, but my intuition is that there isn't going to be much difference in performance and stability.

from softlearning.
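For illustration, a minimal sketch of this kind of clipping, done in log-space so the std stays strictly positive. The bounds [-20, 2] are a common choice in SAC implementations and are assumed here, not taken from this repo:

```python
import math

# Hypothetical bounds; many SAC implementations clip log-std to roughly this range.
LOG_STD_MIN, LOG_STD_MAX = -20.0, 2.0

def clipped_std(log_std):
    """Clip the policy network's log-std output, then exponentiate.

    Clipping in log-space keeps std bounded away from 0 and infinity,
    which prevents NaNs early in training when the raw network outputs
    are poorly scaled.
    """
    log_std = max(LOG_STD_MIN, min(LOG_STD_MAX, log_std))
    return math.exp(log_std)
```

After the initial transient the network output typically stays inside the bounds, so the clip becomes a no-op, matching the observation that the bounds are not active afterwards.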

zuoxingdong commented on May 19, 2024

Thanks @haarnoja for your reply. I have one small follow-up question: it seems the 'pre-tanh' Gaussian-sampled actions are implemented as an affine transformation of a unit Gaussian. Is that for better numerical stability, or is it identical to directly using the mean and diagonal std with MultivariateNormalDiag?


hartikainen commented on May 19, 2024

it seems the 'pre-tanh' Gaussian-sampled actions are implemented as an affine transformation of a unit Gaussian. Is that for better numerical stability, or is it identical to directly using the mean and diagonal std with MultivariateNormalDiag?

Good question! You're right, this is identical to directly using the mean and std for the MultivariateNormalDiag. The reason I implemented it this way was that, at the time of writing, I wasn't sure of the best way to fit different distributions into the Keras models. Also, I wanted to fit the RealNVPPolicy (which unfortunately doesn't yet exist in this code base) and the GaussianPolicy into a similar pattern to make them more readable and usable.

I don't actually think that the policies we have right now are implemented in the best possible way. A better way would be to use the new TensorFlow Probability layers to do these transformations. I'm planning to refactor them once we upgrade to TF 2.

from softlearning.
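The equivalence can be checked numerically: sampling eps ~ N(0, I) and returning mean + std * eps assigns the same log-probability as a diagonal Gaussian parameterized directly by mean and std. A small sketch (illustrative only, not the repo's actual code):

```python
import math

def diag_gaussian_log_prob(x, mean, std):
    """Log-density of a diagonal Gaussian, evaluated per-dimension and summed."""
    return sum(
        -0.5 * ((xi - mi) / si) ** 2 - math.log(si) - 0.5 * math.log(2 * math.pi)
        for xi, mi, si in zip(x, mean, std)
    )

def affine_log_prob(eps, std):
    """Log-density of x = mean + std * eps via change of variables:
    log p(x) = log N(eps; 0, I) - sum(log std)."""
    unit = sum(-0.5 * e ** 2 - 0.5 * math.log(2 * math.pi) for e in eps)
    return unit - sum(math.log(s) for s in std)

mean, std = [0.3, -1.2], [0.5, 2.0]
eps = [0.7, -0.1]
x = [m + s * e for m, s, e in zip(mean, std, eps)]
# Both parameterizations assign the same log-probability to the same sample.
assert abs(diag_gaussian_log_prob(x, mean, std) - affine_log_prob(eps, std)) < 1e-12
```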

zuoxingdong commented on May 19, 2024

@hartikainen Thank you so much for your detailed explanation. I have a quick follow-up question: in PG methods, we often stop gradients of a normally sampled action (say, one drawn with the reparameterization trick) when computing the log-probability, because otherwise the gradient for the mean head would be zero. However, in SAC (tanh-transformed Gaussian), it seems to me that we need the gradient to flow through the log-abs-det part, but should we also allow the gradient to flow into the base distribution (i.e., the original Gaussian)?

I've tried stopping the gradient of the tanh-transformed sampled action, but it breaks the algorithm: the gradient explodes (the grad norm can grow to around 1e+20).

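The zero-gradient observation for a plain Gaussian can be verified with finite differences: with the reparameterized sample a(mu) = mu + sigma * eps (eps held fixed), the log-probability log N(a(mu); mu, sigma) is constant in mu, so its gradient w.r.t. the mean vanishes unless the sample is detached. A sketch, assuming a 1-D Gaussian:

```python
import math

def log_prob(a, mu, sigma):
    """Log-density of a scalar Gaussian N(mu, sigma)."""
    return -0.5 * ((a - mu) / sigma) ** 2 - math.log(sigma) - 0.5 * math.log(2 * math.pi)

mu, sigma, eps, h = 0.4, 0.8, 1.3, 1e-6

# Gradient w.r.t. mu WITH the sample reparameterized as a(mu) = mu + sigma * eps:
# the mu-dependence cancels inside the quadratic, so the gradient is ~0.
grad_reparam = (
    log_prob(mu + h + sigma * eps, mu + h, sigma)
    - log_prob(mu - h + sigma * eps, mu - h, sigma)
) / (2 * h)

# Gradient w.r.t. mu with the sample DETACHED (stop-gradient): a is a constant,
# and the analytic gradient is (a - mu) / sigma**2 = eps / sigma.
a = mu + sigma * eps
grad_detached = (log_prob(a, mu + h, sigma) - log_prob(a, mu - h, sigma)) / (2 * h)

assert abs(grad_reparam) < 1e-6               # reparameterized: gradient vanishes
assert abs(grad_detached - eps / sigma) < 1e-4  # detached: nonzero gradient
```

The tanh-transformed case differs because the log-abs-det correction depends on the pre-tanh sample, so the gradient paths no longer cancel in the same way.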

zuoxingdong commented on May 19, 2024

Thanks @hartikainen @haarnoja for your kind replies. It is much clearer to me now.

By the way, the numerically stable formula for the log-abs-det proposed in this repo is really beautiful, and compared with clamping the original formula, it also leads to slightly better final performance.

[image: screenshot of the numerically stable log-abs-det formula for the tanh transform]

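Assuming the screenshot shows the identity log(1 - tanh(x)^2) = 2 * (log 2 - x - softplus(-2x)) (the stable form used in softlearning), a quick numerical check illustrates why it is more stable than the naive expression:

```python
import math

def softplus(z):
    """Numerically stable softplus: log(1 + exp(z))."""
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))

def log_one_minus_tanh_sq_naive(x):
    # 1 - tanh(x)**2 underflows to exactly 0.0 for large |x| in float64.
    return math.log(1.0 - math.tanh(x) ** 2)

def log_one_minus_tanh_sq_stable(x):
    # Identity: log(1 - tanh(x)^2) = 2 * (log 2 - x - softplus(-2x))
    return 2.0 * (math.log(2.0) - x - softplus(-2.0 * x))

# The two agree where the naive form is well-behaved...
assert abs(log_one_minus_tanh_sq_naive(1.0) - log_one_minus_tanh_sq_stable(1.0)) < 1e-9

# ...but for large pre-tanh actions the naive form hits log(0),
# while the stable form stays finite.
try:
    log_one_minus_tanh_sq_naive(20.0)
    naive_ok = True
except ValueError:
    naive_ok = False
assert not naive_ok
assert math.isfinite(log_one_minus_tanh_sq_stable(20.0))
```

The identity follows from 1 - tanh(x)^2 = 4 e^{-2x} / (1 + e^{-2x})^2; taking logs gives 2 log 2 - 2x - 2 log(1 + e^{-2x}), i.e. 2(log 2 - x - softplus(-2x)).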

haarnoja commented on May 19, 2024

Thanks! This was actually originally proposed by @gjtucker and we have indeed found it to be really great.

