Comments (18)
What does "sate-action" mean? Do you mean state-action pair?
The replay buffer has an add function. You can click the hyperlink to dig into it.
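For reference, here is a minimal sketch of adding transitions to a buffer by hand (it assumes the keyword-style ReplayBuffer.add signature tianshou used at that time; newer releases may expect a Batch instead):
from tianshou.data import ReplayBuffer

buf = ReplayBuffer(size=20)
for i in range(3):
    # store one (s, a, r, done, s') transition per call
    buf.add(obs=i, act=i, rew=i, done=False, obs_next=i + 1, info={})
batch, indice = buf.sample(batch_size=2)  # sample a Batch of stored transitions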
Yes, but I found that all the state-action pairs are stored in the buffer, which is updated in the collector.
I don't know how to get them from the collector or add them to it.
I also want to compare the reward of a new (s, a, r, s') transition with the mean reward in the buffer.
Could you please give me some advice?
"add one compare the new reward of (s, a, r, s')" where is it from? E.g. computed in policy.forward()?
For example, when an episode has finished, I want to compare the reward of this episode with the mean reward of the other episodes in the buffer.
So, do you think another function, buffer.stat(), would help you do this better? Or maybe we can add some other APIs if you want. We can discuss it here.
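For context, a rough sketch of what such a stat could compute by reading the buffer's stored arrays directly (buffer_reward_stat is a hypothetical helper, not an existing tianshou API, and it assumes the buffer exposes its rewards as a plain numpy array named rew):
import numpy as np

def buffer_reward_stat(buf):
    # hypothetical helper: summarize the rewards currently stored in the buffer
    n = len(buf)  # number of transitions currently stored
    if n == 0:
        return {'mean': 0.0, 'std': 0.0}
    rew = buf.rew[:n]  # assumed attribute holding the stored rewards
    return {'mean': float(np.mean(rew)), 'std': float(np.std(rew))}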
Yes, it may help. But I want to know how to use the buffer API inside the collector, because I want to control each interaction with the env.
Could you please help me?
Thanks.
Aha, does that mean a function hook before the buffer.add operation would meet your expectations?
batch_data = preprocess_fn(batch_data)
buffer.add(batch_data)
You can store and maintain the related variables in your own processor.
@ChenyangRan I added a draft implementation. Now you can pass your preprocess_fn to the collector.
This function typically receives 7 keys, as listed here, and returns the modified part as a dict or a Batch. For example, you can write your hook as:
import numpy as np
from collections import deque
from tianshou.data import Batch

class MyProcessor:
    def __init__(self, size=100):
        self.episode_log = None
        self.main_log = deque(maxlen=size)
        self.main_log.append(0)
        self.baseline = 0

    def preprocess_fn(self, **kwargs):
        """change reward to zero mean"""
        if 'rew' not in kwargs:
            # it is called after env.reset(), so it can only process the obs
            return {}  # none of the variables need to be updated
        else:
            n = len(kwargs['rew'])  # the number of envs in the collector
            if self.episode_log is None:
                self.episode_log = [[] for _ in range(n)]
            for i in range(n):
                self.episode_log[i].append(kwargs['rew'][i])
                kwargs['rew'][i] -= self.baseline
            for i in range(n):
                if kwargs['done'][i]:
                    self.main_log.append(np.mean(self.episode_log[i]))
                    self.episode_log[i] = []
                    self.baseline = np.mean(self.main_log)
            return Batch(rew=kwargs['rew'])
            # you can also return {'rew': kwargs['rew']}
And finally,
test_processor = MyProcessor(size=100)
collector = Collector(policy, env, buffer, test_processor.preprocess_fn)
Some examples are in test/base/test_collector.py.
This could be achieved via an environment wrapper. I hope the framework stays simple and clean.
@edieson Yes, I know. In #25 and #27 I mention gym.Wrapper as a universal solution.
My motivation for adding this feature is that gym.Wrapper cannot easily handle data across different environments.
It's just that I don't think the preprocess_fn could (or should) do much; I don't have much experience :)
Thanks, I will try it. You've helped me a lot.
It can do more things. Besides preprocessing batched images (for Atari), it can also save logs to TensorBoard, modify the reward using info given by the env before inserting into the buffer, or gather stats on both training and test data (the same preprocess_fn in different collectors). It just depends on what you want.
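For instance, a minimal sketch of a processor that only logs the collected reward to TensorBoard and leaves the data untouched (RewardLogger is a hypothetical name; it uses torch's SummaryWriter and the same **kwargs convention as the example above):
import numpy as np
from torch.utils.tensorboard import SummaryWriter

class RewardLogger:
    def __init__(self, logdir='log/collect'):
        self.writer = SummaryWriter(logdir)
        self.step = 0

    def preprocess_fn(self, **kwargs):
        # 'rew' is absent right after env.reset(), so only log after real steps
        if 'rew' in kwargs:
            self.writer.add_scalar('collect/mean_rew',
                                   float(np.mean(kwargs['rew'])), self.step)
            self.step += 1
        return {}  # nothing in the collected data is modified
It can be passed to a collector in the same way as test_processor.preprocess_fn above.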
Do you have an example of re-training using Batch or Collector?
For example, I trained a policy A and now my train_collector contains the replay. How do I use the replay to train another policy B such that B only sees the transitions collected by A, without interacting with the env?
It is very similar to imitation learning (behavior cloning). I implemented DAgger (an advanced form of behavior cloning) in test/discrete/test_a2c_with_il.py and test/continuous/test_sac_with_il.py. You can simply play with them.
Here is a simpler version:
collector_A = Collector(policy_A, env, buf)
... # policy A is well-trained
for i in range(num_B_step):
    batch, indice = buf.sample(batch_size)
    policy_B.learn(batch)
# test policy_B
collector_B = Collector(policy_B, env)
collector_B.collect(n_episode=1)
Is learning from a batch restricted to certain policies, or applicable to all of them? I got this error while using DQN:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
4 for j in range(num_G_step):
5 batch, indice = train_collectors[0].buffer.sample(batch_size)
----> 6 global_policy.learn(batch)
7
~/anaconda3/envs/ByzantineRL/lib/python3.6/site-packages/tianshou/policy/modelfree/dqn.py in learn(self, batch, **kwargs)
160 q = self(batch).logits
161 q = q[np.arange(len(q)), batch.act]
--> 162 r = to_torch_as(batch.returns, q)
163 if hasattr(batch, 'update_weight'):
164 td = r - q
AttributeError: 'Batch' object has no attribute 'returns'
I was trying to use tianshou for my research experiments, where I want to do something similar to DAgger but not exactly: basically, I want to collect experiences from some agents, augment those experiences, and use the augmented experiences to train a new agent.
# take collector 0's buffer for training the new policy
num_G_step = 1000
batch_size = 128
for j in range(num_G_step):
    batch, indice = train_collectors[0].buffer.sample(batch_size)
    # a new policy initialized the same way
    global_policy.learn(batch)
I would appreciate a working example for demonstration.
Oh, I forgot to add a line before learn. Sorry.
batch, indice = buf.sample(bsz)
batch = policy.process_fn(batch, buf, indice) # <--
policy.learn(batch)
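Putting this correction together with the loop above, a minimal sketch (reusing the names train_collectors, global_policy, num_G_step, and batch_size from the earlier comment):
buf = train_collectors[0].buffer
for j in range(num_G_step):
    batch, indice = buf.sample(batch_size)
    # process_fn fills in e.g. batch.returns, which DQN's learn() expects
    batch = global_policy.process_fn(batch, buf, indice)
    global_policy.learn(batch)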
cool, thanks