Comments (5)

norci commented on June 12, 2024

See also: https://arxiv.org/abs/1808.09940
"We also conduct intensive experiments in China Stock market and show that PG is more desirable in financial market than DDPG and PPO, although both of them are more advanced. "

I think PG is useful in some circumstances, even though it's simpler than DDPG.

I tried to implement PG in this framework, but it is too complex for me.

findmyway commented on June 12, 2024

I've implemented it before, but that was a long time ago. I'll take a look at it this weekend.

findmyway commented on June 12, 2024

I'm sorry, but I may not have enough time to work on this issue until next month due to some personal matters.

I'll explain the implementation details here in case you'd like to try it yourself:

First, we may need an ElasticCompact*Trajectory-like structure for efficiency. (Added in RLCore.)

  • We can't use a Circular*Trajectory here because its capacity is fixed. In the REINFORCE algorithm, we need to collect a complete trajectory and split it into batches to update the policy.
  • We can use a Vector as the container instead of an ElasticArray; I guess there isn't much performance difference between them for the REINFORCE algorithm. Unfortunately, I haven't exported some aliases like Vector*Trajectory from RLCore. (You also need to wrap it in an EpisodicTrajectory.) A rough, framework-independent sketch of such a buffer follows this list.
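For illustration, here is a minimal, framework-independent sketch of such a grow-with-the-episode buffer built on plain Vectors. The names EpisodeBuffer, push_step!, and discounted_returns are made up for this sketch and are not part of RLCore's API; a real implementation would use the trajectory types mentioned above.

```julia
# Hypothetical episodic buffer: it simply grows with the episode, which is why a
# fixed-capacity circular buffer would not work for REINFORCE.
struct EpisodeBuffer
    states::Vector{Vector{Float32}}
    actions::Vector{Int}
    rewards::Vector{Float32}
end

EpisodeBuffer() = EpisodeBuffer(Vector{Vector{Float32}}(), Int[], Float32[])

function push_step!(buf::EpisodeBuffer, s, a, r)
    push!(buf.states, s)
    push!(buf.actions, a)
    push!(buf.rewards, r)
    return buf
end

# Discounted returns G_t = r_t + γ r_{t+1} + γ² r_{t+2} + …, computed once the episode ends.
function discounted_returns(rewards::Vector{Float32}, γ::Float32 = 0.99f0)
    G = similar(rewards)
    acc = 0.0f0
    for t in length(rewards):-1:1
        acc = rewards[t] + γ * acc
        G[t] = acc
    end
    return G
end
```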

Then the rest is simple: we update the policy only at the end of an episode, randomly sample batches, and update the inner approximator. You are encouraged to read the implementation of PPO, which follows a similar pattern.
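To make that concrete, here is a rough sketch of such an end-of-episode update. It calls Flux directly instead of going through RLCore's approximator wrapper; model, opt, and batchsize are placeholder names, and returns is assumed to hold the discounted returns of the finished episode (e.g. from the discounted_returns helper sketched above).

```julia
using Flux, Random

function reinforce_update!(model, opt,
                           states::Vector{Vector{Float32}},
                           actions::Vector{Int},
                           returns::Vector{Float32};
                           batchsize = 32)
    ps = Flux.params(model)
    for batch in Iterators.partition(shuffle(1:length(returns)), batchsize)
        s = reduce(hcat, states[batch])                # one column per step
        g = returns[batch]
        na = size(model(s), 1)                         # number of discrete actions
        mask = Flux.onehotbatch(actions[batch], 1:na)  # selects log π(aₜ|sₜ)
        gs = Flux.gradient(ps) do
            logp = logsoftmax(model(s))                # log-probabilities of every action
            -sum(mask .* logp .* g') / length(g)       # minimizing this ascends the return
        end
        Flux.Optimise.update!(opt, ps, gs)
    end
end
```

With a small policy network, it could be called once per finished episode, e.g. reinforce_update!(Chain(Dense(4, 32, relu), Dense(32, 2)), Descent(1e-2), buf.states, buf.actions, discounted_returns(buf.rewards)).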

Ideally, the REINFORCE algorithm should also support tabular cases (see https://github.com/JuliaReinforcementLearning/ReinforcementLearningAnIntroduction.jl/blob/master/src/extensions/policies/reinforce_policy.jl), but we can leave that for another PR.

By the way, if you want an implementation similar to the one in openai/spinningup, you will need to dispatch the update! method on the type of the inner approximator.
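As an illustration of that multiple-dispatch idea, here is a sketch with two made-up stand-in approximator types; they are not RLCore's actual type names, and the real update! signatures may differ.

```julia
using Flux

struct TabularPolicyApprox
    prefs::Matrix{Float64}   # action preferences H(a, s): one row per action, one column per state
    α::Float64               # step size
end

struct NeuralPolicyApprox{M,O}
    model::M
    opt::O
end

# Tabular case: softmax-in-preferences REINFORCE update,
# H(b, s) += α * G * ((b == a ? 1 : 0) - π(b | s)); the γ^t factor is omitted for brevity.
function policy_update!(app::TabularPolicyApprox, s::Int, a::Int, G::Real)
    h = app.prefs[:, s]
    probs = exp.(h .- maximum(h))
    probs ./= sum(probs)
    for b in axes(app.prefs, 1)
        app.prefs[b, s] += app.α * G * ((b == a) - probs[b])
    end
    return app
end

# Neural case: a single minibatch gradient step, as in the update sketch above.
function policy_update!(app::NeuralPolicyApprox, states, actions, returns)
    na = size(app.model(states), 1)
    mask = Flux.onehotbatch(actions, 1:na)
    ps = Flux.params(app.model)
    gs = Flux.gradient(ps) do
        logp = logsoftmax(app.model(states))
        -sum(mask .* logp .* returns') / length(returns)
    end
    Flux.Optimise.update!(app.opt, ps, gs)
    return app
end
```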

norci commented on June 12, 2024

Thanks for your guidance.
I'd like to implement this algorithm; this project is awesome and I really like it.
But I'm a beginner in Julia and RL, so please give me some time.

findmyway commented on June 12, 2024

Cool! No hurry.

I just tagged a new release of [email protected]. Now you should have all the necessary components to implement it.
