
Trinkle23897 commented:

What does "sate-action" mean? Do you mean a state-action pair?
The replay buffer has an add function; you can click the hyperlink to dig into it.
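
For reference, a minimal sketch of the add call (assuming the 0.2-era ReplayBuffer API; keyword names may differ in other versions):

```python
import numpy as np
from tianshou.data import ReplayBuffer

buf = ReplayBuffer(size=20)
# store one (s, a, r, s') transition; keywords follow the 0.2-era signature
buf.add(obs=np.zeros(4), act=0, rew=1.0, done=False,
        obs_next=np.ones(4), info={})
```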

ChenyangRan commented:

> What does "sate-action" mean? Do you mean a state-action pair?
> The replay buffer has an add function; you can click the hyperlink to dig into it.

Yes, but I found that all the state-action pairs are stored in the buffer, which is updated in the collector.
I don't know how to get them from the collector or add them to it.
And I want to compare the new reward of (s, a, r, s') with the mean reward in the buffer.
Could you please give me some advice?

Trinkle23897 commented:

"add one compare the new reward of (s, a, r, s')" where is it from? E.g. computed in policy.forward()?

ChenyangRan commented:

"add one compare the new reward of (s, a, r, s')" where is it from? E.g. computed in policy.forward()?

For example, when one episode is done, I want to compare the reward of that episode with the mean reward of the other episodes in the buffer.

Trinkle23897 commented:

So, do you think another function buffer.stat() would help you do this better? Or we could add some other APIs if you want. We can discuss it here.
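
To make the proposal concrete, a hypothetical buffer.stat() might compute something like this (a sketch only; stat() is not an existing API, and it assumes the buffer exposes its rew array as in the 0.2-era implementation):

```python
import numpy as np

def buffer_stat(buf):
    """Hypothetical stat(): mean/std of the rewards currently stored."""
    rew = buf.rew[:len(buf)]  # look at the filled part of the ring buffer only
    return {'rew_mean': float(np.mean(rew)), 'rew_std': float(np.std(rew))}
```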

ChenyangRan commented:

> So, do you think another function buffer.stat() would help you do this better? Or we could add some other APIs if you want. We can discuss it here.

Yes, that may help. But I also want to know how to use the buffer's API inside the collector, because I want to control each interaction with the env.
Could you please help me?
Thanks.

Trinkle23897 commented:

Aha, does that mean a function hook before the buffer.add operation would meet your expectations?

```python
batch_data = preprocess_fn(batch_data)
buffer.add(batch_data)
```

You can store and maintain the related variables in your own processor.

Trinkle23897 commented:

@ChenyangRan I added a draft implementation. Now you can pass your preprocess_fn to the collector.
This function typically receives 7 keys, as listed here, and returns the modified part as a dict or a Batch. For example, you can write your hook as:

```python
import numpy as np
from collections import deque
from tianshou.data import Batch

class MyProcessor:
    def __init__(self, size=100):
        self.episode_log = None
        self.main_log = deque(maxlen=size)
        self.main_log.append(0)
        self.baseline = 0

    def preprocess_fn(self, **kwargs):
        """change reward to zero mean"""
        if 'rew' not in kwargs:
            # means that it is called after env.reset(); it can only process the obs
            return {}  # none of the variables need to be updated
        else:
            n = len(kwargs['rew'])  # the number of envs in the collector
            if self.episode_log is None:
                self.episode_log = [[] for _ in range(n)]
            for i in range(n):
                self.episode_log[i].append(kwargs['rew'][i])
                kwargs['rew'][i] -= self.baseline
            for i in range(n):
                if kwargs['done'][i]:
                    self.main_log.append(np.mean(self.episode_log[i]))
                    self.episode_log[i] = []
                    self.baseline = np.mean(self.main_log)
            return Batch(rew=kwargs['rew'])
            # you can also return {'rew': kwargs['rew']}
```

And finally,

```python
test_processor = MyProcessor(size=100)
collector = Collector(policy, env, buffer, test_processor.preprocess_fn)
```

Some examples are in test/base/test_collector.py.

mtaohuang commented:

This could be achieved via an environment wrapper. I hope the framework stays simple and clean.

Trinkle23897 commented:

@edieson Yes, I know. In #25 and #27 I mentioned gym.Wrapper as a universal solution.
The motivation for adding this feature is that gym.Wrapper cannot easily handle data across different environments.
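
For instance, a per-env reward baseline as a wrapper would look roughly like this (a sketch using the old 4-tuple gym step API; the class name is made up). Note that each wrapper instance only sees its own env, so a baseline shared across vectorized envs is awkward to maintain:

```python
import gym
import numpy as np
from collections import deque

class RewardBaselineWrapper(gym.Wrapper):
    """Subtract a running reward baseline -- but only for this single env."""
    def __init__(self, env, size=100):
        super().__init__(env)
        self.log = deque([0], maxlen=size)

    def step(self, action):
        obs, rew, done, info = self.env.step(action)
        self.log.append(rew)
        # the baseline is computed from this env's history only;
        # the other envs in a vectorized setup are invisible here
        return obs, rew - np.mean(self.log), done, info
```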

mtaohuang commented:

It's just that I don't think preprocess_fn could (or should) do much; I don't have much experience :)

ChenyangRan commented:

> @ChenyangRan I added a draft implementation. Now you can pass your preprocess_fn to the collector. [...]

Thanks, I will try it. You've helped me a lot.

Trinkle23897 commented:

> It's just that I don't think preprocess_fn could (or should) do much; I don't have much experience :)

It can do more than that. Besides processing batches of images (for Atari), it can save logs to TensorBoard, modify the reward using the info returned by the env before insertion into the buffer, or gather statistics on both training and test data (the same preprocess_fn in different collectors). It just depends on what you want.
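
As an example of the logging use case, here is a sketch of a preprocess_fn that writes the mean reward to TensorBoard (assuming torch's SummaryWriter; the class name and the tag are made up):

```python
import numpy as np
from torch.utils.tensorboard import SummaryWriter

class TensorboardLogger:
    def __init__(self, logdir='log'):
        self.writer = SummaryWriter(logdir)
        self.step = 0

    def preprocess_fn(self, **kwargs):
        if 'rew' in kwargs:  # skip the reset-time call, which has no reward
            self.writer.add_scalar('train/rew', np.mean(kwargs['rew']), self.step)
            self.step += 1
        return {}  # log only; leave the collected data unchanged
```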

flint-xf-fan commented:

Do you have an example of re-training using Batch or Collector?

For example, I trained a policy A and now my train_collector contains the replay. How do I use the replay to train another policy B such that B only sees the transition pairs of A, without interacting with the env?

Trinkle23897 commented:

> Do you have an example of re-training using Batch or Collector?
>
> For example, I trained a policy A and now my train_collector contains the replay. How do I use the replay to train another policy B such that B only sees the transition pairs of A, without interacting with the env?

It is very similar to imitation learning (behavior cloning). I implemented DAgger (an advanced form of behavior cloning) in test/discrete/test_a2c_with_il.py and test/continuous/test_sac_with_il.py. You can simply play with it.

Here is a simpler version:

```python
collector_A = Collector(policy_A, env, buf)
...  # policy A is well-trained
for i in range(num_B_step):
    batch, indice = buf.sample(batch_size)
    policy_B.learn(batch)
# test policy_B
collector_B = Collector(policy_B, env)
collector_B.collect(n_episode=1)
```

flint-xf-fan commented:

Is learning from a batch restricted to certain policies, or applicable to all of them? I got this issue while using DQN:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
      4 for j in range(num_G_step):
      5     batch, indice = train_collectors[0].buffer.sample(batch_size)
----> 6     global_policy.learn(batch)
      7

~/anaconda3/envs/ByzantineRL/lib/python3.6/site-packages/tianshou/policy/modelfree/dqn.py in learn(self, batch, **kwargs)
    160         q = self(batch).logits
    161         q = q[np.arange(len(q)), batch.act]
--> 162         r = to_torch_as(batch.returns, q)
    163         if hasattr(batch, 'update_weight'):
    164             td = r - q

AttributeError: 'Batch' object has no attribute 'returns'
```

I was trying to use tianshou for my research experiments, where I want to do something similar to DAgger but not exactly: basically, I want to collect experiences from some agents, augment those experiences, and use the augmented experiences to train a new agent.

```python
# take collector 0's buffer for training the new policy
num_G_step = 1000
batch_size = 128
for j in range(num_G_step):
    batch, indice = train_collectors[0].buffer.sample(batch_size)
    # a new policy initialized the same way
    global_policy.learn(batch)
```

I would appreciate a working example for demonstration.

Trinkle23897 commented:

Oh, I forgot to add a line before learn. Sorry.

```python
batch, indice = buf.sample(bsz)
batch = policy.process_fn(batch, buf, indice)  # <--
policy.learn(batch)
```
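
Putting it together, the earlier retraining loop becomes (a sketch, reusing the variable names from above):

```python
num_B_step, batch_size = 1000, 128
for i in range(num_B_step):
    batch, indice = buf.sample(batch_size)
    # process_fn fills in the fields learn() expects (e.g. returns for DQN)
    batch = policy_B.process_fn(batch, buf, indice)
    policy_B.learn(batch)
```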

flint-xf-fan commented:

> Oh, I forgot to add a line before learn. Sorry.
>
> ```python
> batch, indice = buf.sample(bsz)
> batch = policy.process_fn(batch, buf, indice)  # <--
> policy.learn(batch)
> ```

Cool, thanks.
