Comments (1)
I think what you mean is that it is OK to use SARSA instead of Q-Learning in this specific example.
I left the code there to demonstrate that SARSA and Q-Learning are almost the same, except for a small change in the code.
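For readers who want to see that small change spelled out, here is a minimal tabular sketch (illustrative names, not the notebook's exact code); the only line that differs between the two algorithms is the bootstrap term of the TD target:

```python
import numpy as np

def td_target(Q, reward, next_state, next_action, discount=0.99, use_sarsa=True):
    # Q is a 2-D array of action values, indexed as Q[state, action].
    if use_sarsa:
        # SARSA (on-policy): bootstrap from the action the behavior
        # policy actually took in next_state.
        bootstrap = Q[next_state, next_action]
    else:
        # Q-Learning (off-policy): bootstrap from the greedy action,
        # regardless of what the behavior policy will do next.
        bootstrap = np.max(Q[next_state])
    return reward + discount * bootstrap
```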
However, in the case of a nonlinear approximator (a neural network, as in DQN), we usually use a replay memory to break the correlation between samples. Using a replay memory limits us to off-policy algorithms (such as Q-Learning); we cannot/should not use on-policy algorithms (such as SARSA) with a replay memory. (I believe the reason is that the transitions sampled from the replay memory were actually generated by an old version of the policy, not the current version.)
Yes, I believe this is correct. You cannot use a replay memory with on-policy algorithms. However, you can use asynchronous methods (like A3C) that break correlations while remaining on-policy.
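To make the "old version of the policy" point concrete, here is a minimal replay-memory sketch (illustrative names, not code from this repo):

```python
import random
from collections import deque

class ReplayMemory:
    # Fixed-size buffer of (state, action, reward, next_state, done) tuples.
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        # Transitions are stored under whatever policy is current
        # at the moment they are collected.
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling breaks temporal correlation, but the sampled
        # transitions may have been generated by older versions of the
        # policy -- which is exactly why the learning target must be
        # off-policy (Q-Learning) rather than on-policy (SARSA).
        return random.sample(self.buffer, batch_size)
```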
Then the commented code may work only because, in this specific code, you are using a simple linear approximator and no replay memory, correct?
Yes, SARSA still works because I am not using a replay memory.
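For completeness, a sketch of the online, on-policy semi-gradient SARSA update with a linear approximator (illustrative names; each transition is consumed once as it arrives, so no replay memory is needed):

```python
import numpy as np

def sarsa_linear_update(w, features, action, reward,
                        next_features, next_action,
                        alpha=0.01, discount=0.99):
    # Linear action values: q(s, a) = w[a] @ features(s),
    # where w has shape (n_actions, n_features).
    td_target = reward + discount * np.dot(w[next_action], next_features)
    td_error = td_target - np.dot(w[action], features)
    w[action] += alpha * td_error * features  # semi-gradient step
    return w
```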
from reinforcement-learning.
Related Issues (20)
- Why CliffWalkingEnv returns 'is_done=True' when reaching cliff?
- Is a line missing in 'MC Control with Epsilon-Greedy Policies Solution.ipynb'?
- Why is Chapter 11 excluded?
- why DQN use kernel size 8 ?
- Gambler's Problem: 0 Stake Allowed?
- Some question in MC Control with Epsilon-Greedy Policies Solution.ipynb
- DQL size error
- Policy Evaluation Exercise Solution Is Wrong
- Monte Carlo AssertionError: defaultdict(<function mc_control_importance_sampling.<locals>.<lambda> at 0x7f31699ffe18>, {}) (<class 'collections.defaultdict'>)
- Lecture Slides need an update
- Clarification on DQN testing rewards on Atari games
- DQN Testing Rewards on Atari Games
- Reinforcement learning policy
- Minor Link fix
- A small correction in "MDPs and Bellman Equations" section
- Typo in: "Model-Free Prediction & Control with Monte Carlo (MC)" section -> "Blackjack Playground.ipynb" file:
- Issue in: reinforcement-learning/MC/MC Prediction Solution.ipynb
- please provide requirements.txt or mention the exact version of packages used.
- demystifying-deep-reinforcement-learning link is broken
- MC Control with Epsilon-Greedy Policies ---Epsilon Value and Best Action prob error