
Comments (9)

boris-il-forte commented on May 17, 2024

Actually, you cannot use the Boltzmann policy for policy gradient methods, as the interface lacks the gradient of the log-probability. I will put it on the to-do list.
For (deep) actor-critic, there is a Boltzmann policy that is not based on Q-functions: the BoltzmannTorchPolicy.
In principle, you could use that. However, you would need to handle the discrete state space yourself, which may not be easy (I have never tried to use a deep actor-critic on a grid world, for obvious reasons... but I understand the curiosity to try things).
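For example, one way to handle the discrete state space yourself is to one-hot encode the integer state inside the network wrapped by the policy. The snippet below is only an illustrative sketch (plain PyTorch, not mushroom-rl code); the class and parameter names are made up:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative sketch: a tiny network that accepts integer state indices by
    # one-hot encoding them and maps them to one logit per action. Names and
    # shapes are assumptions, not part of the mushroom-rl API.
    class OneHotNet(nn.Module):
        def __init__(self, n_states, n_actions):
            super().__init__()
            self.n_states = n_states
            self.linear = nn.Linear(n_states, n_actions)

        def forward(self, state):
            # state: tensor of integer state indices, shape (batch,) or (batch, 1)
            idx = state.long().view(-1)
            one_hot = F.one_hot(idx, num_classes=self.n_states).float()
            return self.linear(one_hot)  # action logits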

RylanSchaeffer commented on May 17, 2024

Actually, you cannot use the Boltzmann policy for policy gradient methods, as the interface lacks the gradient of the log-probability.

OK, thanks for clarifying! When I tried last night, I discovered that the Boltzmann policy has no ._approximator attribute and wasn't working.

I need this categorical policy for discrete actions and discrete state spaces for my research and I'm happy to implement it myself. How would you recommend doing so?

To be clear, I don't think anything should need to be deep in gridworld. Tabular PG and Tabular AC methods should (at least in principle) be applicable to gridworld, right?

RylanSchaeffer commented on May 17, 2024

As a workaround, is the following a possible way to obtain a PG agent in a discrete state space and discrete action space?

Use a BoltzmannTorchPolicy with a torch approximator that is an S x A matrix. Then, in each state s, the policy would slice the corresponding row from the matrix, softmax it, and sample from a Categorical distribution?
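For concreteness, here is a rough sketch (plain PyTorch, not the mushroom-rl API; names and sizes are made up) of the S x A approximator and the sampling step I have in mind:

    import torch
    import torch.nn as nn

    # Sketch of a "tabular" approximator: an S x A table of logits, one row per
    # discrete state. Purely illustrative; not wired into BoltzmannTorchPolicy.
    class TabularLogits(nn.Module):
        def __init__(self, n_states, n_actions):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(n_states, n_actions))

        def forward(self, state_idx):
            # slice the row(s) of logits for the given state index/indices
            return self.logits[state_idx]

    # Sampling as described: softmax the selected row and draw from a Categorical.
    table = TabularLogits(n_states=25, n_actions=4)
    probs = torch.softmax(table(torch.tensor(3)), dim=-1)
    action = torch.distributions.Categorical(probs=probs).sample()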

RylanSchaeffer commented on May 17, 2024

To explain why: for my research, I want to test policy gradient and actor-critic methods against value-based approaches in tabular domains with discrete action spaces. Is there a way to do this using mushroom-rl?

I'm happy to implement whatever I need to myself, if you give me an outline of what needs to change where (and what pitfalls to watch out for)!

RylanSchaeffer commented on May 17, 2024

I just tried this myself, and hit the following error inside REINFORCE:

    self.sum_d_log_pi = np.zeros(self.policy.weights_size)
AttributeError: 'BoltzmannTorchPolicy' object has no attribute 'weights_size'

Specifically, the error stems from this method:

    def _init_update(self):
        self.sum_d_log_pi = np.zeros(self.policy.weights_size)

boris-il-forte commented on May 17, 2024

The simplest approach is to implement the ParametricPolicy interface with an appropriate policy. As far as I know, this will allow standard policy gradient methods to work. If it doesn't, you may need to modify the policy gradient implementations to support your setting, or implement another approximator that supports integer inputs.
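For illustration, here is an untested sketch of such a tabular Boltzmann policy. The ParametricPolicy method names and signatures below are my reading of the interface and should be checked against the mushroom-rl source; the weights_size property, in particular, is the attribute the REINFORCE code above was failing to find:

    import numpy as np

    from mushroom_rl.policy import ParametricPolicy


    # Sketch only: a softmax (Boltzmann) policy over an S x A table of logits.
    class TabularBoltzmannPolicy(ParametricPolicy):
        def __init__(self, n_states, n_actions):
            super().__init__()
            self._n_states = n_states
            self._n_actions = n_actions
            self._theta = np.zeros((n_states, n_actions))  # one logit per (s, a)

        def _probs(self, s):
            logits = self._theta[s] - self._theta[s].max()  # numerical stability
            e = np.exp(logits)
            return e / e.sum()

        def __call__(self, state, action):
            # probability pi(a | s)
            s = int(np.asarray(state).ravel()[0])
            a = int(np.asarray(action).ravel()[0])
            return self._probs(s)[a]

        def draw_action(self, state):
            s = int(np.asarray(state).ravel()[0])
            a = np.random.choice(self._n_actions, p=self._probs(s))
            return np.array([a])

        def diff_log(self, state, action):
            # gradient of log pi(a | s) w.r.t. the flattened table: row s gets
            # (one_hot(a) - pi(. | s)); all other rows are zero
            s = int(np.asarray(state).ravel()[0])
            a = int(np.asarray(action).ravel()[0])
            grad = np.zeros_like(self._theta)
            grad[s] = -self._probs(s)
            grad[s, a] += 1.0
            return grad.ravel()

        def set_weights(self, weights):
            self._theta = np.asarray(weights).reshape(self._n_states, self._n_actions)

        def get_weights(self):
            return self._theta.ravel()

        @property
        def weights_size(self):
            return self._n_states * self._n_actions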

I want to remark that you can define the policy however you want; there is no need to use any of the mushroom tools (though they can be helpful for more complex scenarios).

For deep actor-critic, you can use the torch Boltzmann policy and define an appropriate network that makes sense for an integer input. In general, this does not seem like a very good idea; however, I will not comment on this point further, as it is out of the scope of mushroom and a very particular setting. You probably cannot expect a deep actor-critic approach to have amazing results on grid worlds...

RylanSchaeffer commented on May 17, 2024

You probably cannot expect a deep actor-critic approach to have amazing results on grid worlds...

I think you're misunderstanding what I want to do.

The goal is simple: REINFORCE in Gridworld with a Categorical policy. No deep learning required. This is perhaps the simplest application of REINFORCE, and I'm finding it surprisingly difficult to implement.

boris-il-forte commented on May 17, 2024

The solution for this is described in the post above: implement a Boltzmann policy using the ParametricPolicy interface.
In general, we don't support policy search approaches for finite state spaces; there are many reasons for this choice. You can try to adapt the existing code following the solution above, but I cannot guarantee it will work.

My point about deep actor-critic is that these approaches, even without deep networks, are unlikely to work. They would also be fairly complex to implement in this setting, requiring many complicated assumptions.

Classical actor-critic, on the other hand, can be ported in a similar way once you get standard policy search working.

RylanSchaeffer commented on May 17, 2024

OK, thank you.
