
rainbow-is-all-you-need's Introduction


Do you want an RL agent moving nicely on Atari?

Rainbow is all you need!

This is a step-by-step tutorial from DQN to Rainbow. Every chapter contains both a theoretical background and an object-oriented implementation. Just pick any topic you are interested in and start learning! You can run every notebook right away in Colab, even on your smartphone.

Please feel free to open an issue or a pull request if you have any ideas for making it better. :)

If you want a tutorial for policy gradient methods, please see PG is All You Need.

Contents

  1. DQN [NBViewer] [Colab]
  2. Double DQN [NBViewer] [Colab]
  3. Prioritized Experience Replay [NBViewer] [Colab]
  4. Dueling Network [NBViewer] [Colab]
  5. NoisyNet [NBViewer] [Colab]
  6. Categorical DQN [NBViewer] [Colab]
  7. N-step Learning [NBViewer] [Colab]
  8. Rainbow [NBViewer] [Colab]

Prerequisites

This repository is tested on Python 3.8+.

git clone https://github.com/Curt-Park/rainbow-is-all-you-need.git
cd rainbow-is-all-you-need
make setup

How to Run

jupyter lab

Related Papers

  1. V. Mnih et al., "Human-level control through deep reinforcement learning." Nature, 518 (7540):529–533, 2015.
  2. H. van Hasselt et al., "Deep Reinforcement Learning with Double Q-learning." arXiv preprint arXiv:1509.06461, 2015.
  3. T. Schaul et al., "Prioritized Experience Replay." arXiv preprint arXiv:1511.05952, 2015.
  4. Z. Wang et al., "Dueling Network Architectures for Deep Reinforcement Learning." arXiv preprint arXiv:1511.06581, 2015.
  5. M. Fortunato et al., "Noisy Networks for Exploration." arXiv preprint arXiv:1706.10295, 2017.
  6. M. G. Bellemare et al., "A Distributional Perspective on Reinforcement Learning." arXiv preprint arXiv:1707.06887, 2017.
  7. R. S. Sutton, "Learning to predict by the methods of temporal differences." Machine learning, 3(1):9–44, 1988.
  8. M. Hessel et al., "Rainbow: Combining Improvements in Deep Reinforcement Learning." arXiv preprint arXiv:1710.02298, 2017.

Contributors

Thanks goes to these wonderful people (emoji key):

Jinwoo Park (Curt): 💻 📖
Kyunghwan Kim: 💻
Wei Chen: 🚧
WANG Lei: 🚧
leeyaf: 💻
ahmadF: 📖
Roberto Schiavone: 💻
David Yuan: 💻
dhanushka2001: 💻

This project follows the all-contributors specification. Contributions of any kind welcome!


rainbow-is-all-you-need's Issues

Assertion error when calculating loss

tl;dr: an error occurs in 08.rainbow.ipynb while training to 100,000 steps (sometimes < 20k steps, sometimes more), here's my copy of the Colab

Hi and thanks for sharing this wonderful learning resource, I really like your implementation!

Testing with CartPole and LunarLander inside Colab, the network encounters NaN inside update_priorities(self, indices, priorities), which is called by loss = self.update_model() inside the train() function. For CartPole, the error appears just after 17,500 steps.


<ipython-input-20-92e9bbd87b9f> in train(self, num_frames, plotting_interval)
    222             # if training is ready
    223             if len(self.memory) >= self.batch_size:
--> 224                 loss = self.update_model()
    225                 losses.append(loss)
    226                 update_cnt += 1

<ipython-input-20-92e9bbd87b9f> in update_model(self)
    183         loss_for_prior = elementwise_loss.detach().cpu().numpy()
    184         new_priorities = loss_for_prior + self.prior_eps
--> 185         self.memory.update_priorities(indices, new_priorities)
    186 
    187         # NoisyNet: reset noise

<ipython-input-17-06b7969c0015> in update_priorities(self, indices, priorities)
     84 
     85         for idx, priority in zip(indices, priorities):
---> 86             assert priority > 0
     87             assert 0 <= idx < len(self)
     88 

AssertionError: 

You can trace this up the stack from the line loss_for_prior = elementwise_loss.detach().cpu().numpy() back to elementwise_loss = self._compute_dqn_loss(samples, self.gamma).

So _compute_dqn_loss holds the clue, because neither samples nor self.gamma contains NaN values.

I've left a copy of this in Colab, which is a copy of 08.rainbow.ipynb configured for LunarLander-v2. In other experiments I've played with memory_size and set n_step to 1, but eventually the priorities variable receives a NaN. I'd love to hear any ideas about why this might be happening.

For completeness, here's a copy of the current definition of that function _compute_dqn_loss:

    def _compute_dqn_loss(self, samples: Dict[str, np.ndarray], gamma: float) -> torch.Tensor:
        """Return categorical dqn loss."""
        device = self.device  # for shortening the following lines
        state = torch.FloatTensor(samples["obs"]).to(device)
        next_state = torch.FloatTensor(samples["next_obs"]).to(device)
        action = torch.LongTensor(samples["acts"]).to(device)
        reward = torch.FloatTensor(samples["rews"].reshape(-1, 1)).to(device)
        done = torch.FloatTensor(samples["done"].reshape(-1, 1)).to(device)
        
        # Categorical DQN algorithm
        delta_z = float(self.v_max - self.v_min) / (self.atom_size - 1)

        with torch.no_grad():
            # Double DQN
            next_action = self.dqn(next_state).argmax(1)
            next_dist = self.dqn_target.dist(next_state)
            next_dist = next_dist[range(self.batch_size), next_action]

            t_z = reward + (1 - done) * gamma * self.support
            t_z = t_z.clamp(min=self.v_min, max=self.v_max)
            b = (t_z - self.v_min) / delta_z
            l = b.floor().long()
            u = b.ceil().long()

            offset = (
                torch.linspace(
                    0, (self.batch_size - 1) * self.atom_size, self.batch_size
                ).long()
                .unsqueeze(1)
                .expand(self.batch_size, self.atom_size)
                .to(self.device)
            )

            proj_dist = torch.zeros(next_dist.size(), device=self.device)
            proj_dist.view(-1).index_add_(
                0, (l + offset).view(-1), (next_dist * (u.float() - b)).view(-1)
            )
            proj_dist.view(-1).index_add_(
                0, (u + offset).view(-1), (next_dist * (b - l.float())).view(-1)
            )

        dist = self.dqn.dist(state)
        log_p = torch.log(dist[range(self.batch_size), action])
        elementwise_loss = -(proj_dist * log_p).sum(1)

        return elementwise_loss
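
One way to narrow this down is a debugging guard (a sketch, not part of the notebook) that checks the element-wise loss is finite before it is turned into priorities inside update_model():

    import numpy as np

    # inside update_model(), right after the loss is computed (hypothetical guard)
    loss_for_prior = elementwise_loss.detach().cpu().numpy()
    if not np.isfinite(loss_for_prior).all():
        # fail early with a useful message instead of hitting `assert priority > 0` later
        raise RuntimeError(f"non-finite element-wise loss: {loss_for_prior}")
    new_priorities = loss_for_prior + self.prior_eps
    self.memory.update_priorities(indices, new_priorities)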

Thanks,
Greg

Atari env

Nice work! I hope you can give more Atari game examples!

"indices" in the N-step ReplayBuffer undefined

In the sample_batch function in "08.rainbow.ipynb", the returned variable "indices" is not defined:
def sample_batch(self) -> Dict[str, np.ndarray]:
    idxs = np.random.choice(self.size, size=self.batch_size, replace=False)

    return dict(
        obs=self.obs_buf[idxs],
        next_obs=self.next_obs_buf[idxs],
        acts=self.acts_buf[idxs],
        rews=self.rews_buf[idxs],
        done=self.done_buf[idxs],
        # for N-step Learning
        indices=indices,
    )
Should it be "indices=idxs" instead? Did I miss something?

redundant max in double dqn

In Double DQN, I found that the target is written as max Q(~~~, argmax Q(~~~)).

Do we need the max even though we already take the argmax inside Q?

I think the max is redundant.

Would you kindly check this for reducing confusion?
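
For reference, the Double DQN target is usually written with a gather over the online network's argmax, so no outer max is needed (a sketch with illustrative tensor names, not the notebook's exact code):

    # the online network selects the action, the target network evaluates it
    next_action = dqn(next_state).argmax(dim=1, keepdim=True)
    next_q_value = dqn_target(next_state).gather(1, next_action).detach()
    target = reward + gamma * next_q_value * (1 - done)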

Save memory checkpoints

Hello,

I read the tutorial on rainbow 08.rainbow.ipynb and I really liked it. I need some help coding a method for efficiently saving the memory of the DQN (the ReplayBuffer object and the PrioritizedReplayBuffer object). I used numpy's savez method to save the ReplayBuffer like this:

import os
from collections import deque
from pathlib import Path
from typing import Deque, Dict, Tuple

import numpy


class ReplayBuffer:
    """A simple numpy replay buffer."""

    def __init__(
        self, 
        obs_dim: tuple, 
        size: int, 
        save_dir: Path,
        batch_size: int = 32, 
        n_step: int = 1, 
        gamma: float = 0.99
    ):
        self.obs_buf = numpy.zeros((size, *obs_dim), dtype=numpy.float32)
        self.next_obs_buf = numpy.zeros((size, *obs_dim), dtype=numpy.float32)
        self.acts_buf = numpy.zeros(size, dtype=numpy.int64)
        self.rews_buf = numpy.zeros(size, dtype=numpy.float32)
        self.done_buf = numpy.zeros(size, dtype=numpy.uint8)
        self.max_size, self.batch_size = size, batch_size
        self.ptr, self.size, = 0, 0

        self.save_dir = save_dir

        # for N-step Learning
        self.n_step_buffer = deque(maxlen=n_step)
        self.n_step = n_step
        self.gamma = gamma

    def store(
        self, 
        obs: numpy.ndarray, 
        act: numpy.ndarray, 
        rew: float, 
        next_obs: numpy.ndarray, 
        done: bool,
    ) -> Tuple[numpy.ndarray, numpy.ndarray, float, numpy.ndarray, bool]:
        transition = (obs, act, rew, next_obs, done)
        self.n_step_buffer.append(transition)

        # single step transition is not ready
        if len(self.n_step_buffer) < self.n_step:
            return ()
        
        # make a n-step transition
        rew, next_obs, done = self._get_n_step_info(self.n_step_buffer, self.gamma)
        obs, act = self.n_step_buffer[0][:2]
        
        self.obs_buf[self.ptr] = obs
        self.next_obs_buf[self.ptr] = next_obs
        self.acts_buf[self.ptr] = act
        self.rews_buf[self.ptr] = rew
        self.done_buf[self.ptr] = done
        self.ptr = (self.ptr + 1) % self.max_size
        self.size = min(self.size + 1, self.max_size)
        
        return self.n_step_buffer[0]

    def sample_batch(self) -> Dict[str, numpy.ndarray]:
        idxs = numpy.random.choice(self.size, size=self.batch_size, replace=False)

        return dict(
            obs=self.obs_buf[idxs],
            next_obs=self.next_obs_buf[idxs],
            acts=self.acts_buf[idxs],
            rews=self.rews_buf[idxs],
            done=self.done_buf[idxs],
            # for N-step Learning
            indices=idxs,
        )
    
    def sample_batch_from_idxs(
        self, idxs: numpy.ndarray
    ) -> Dict[str, numpy.ndarray]:
        # for N-step Learning
        return dict(
            obs=self.obs_buf[idxs],
            next_obs=self.next_obs_buf[idxs],
            acts=self.acts_buf[idxs],
            rews=self.rews_buf[idxs],
            done=self.done_buf[idxs],
        )
    
    def _get_n_step_info(
        self, n_step_buffer: Deque, gamma: float
    ) -> Tuple[numpy.int64, numpy.ndarray, bool]:
        """Return n step rew, next_obs, and done."""
        # info of the last transition
        rew, next_obs, done = n_step_buffer[-1][-3:]

        for transition in reversed(list(n_step_buffer)[:-1]):
            r, n_o, d = transition[-3:]

            rew = r + gamma * rew * (1 - d)
            next_obs, done = (n_o, d) if d else (next_obs, done)

        return rew, next_obs, done
    
    def save_buffer(self):
        save_path = self.save_dir / f"xp_buffer-{self.ptr}-{self.max_size}.npz"
        numpy.savez_compressed(save_path, state_memory=self.obs_buf, next_state_memory=self.next_obs_buf,
                               action_memory=self.acts_buf, reward_memory=self.rews_buf, terminal_memory=self.done_buf)
        print(f"Memory Buffer saved to {save_path} of size {self.ptr} and total capacity {self.max_size}")

    def load_buffer(self, chkpt_path: Path):
        if not chkpt_path.exists():
            raise ValueError(f"{chkpt_path} does not exist")

        path = os.path.normpath(chkpt_path)
        chkpt = numpy.load(chkpt_path)
        tokens = path.split('-')

        self.obs_buf = chkpt['state_memory']
        self.acts_buf = chkpt['action_memory']
        self.rews_buf = chkpt['reward_memory']
        self.done_buf = chkpt['terminal_memory']
        self.next_obs_buf = chkpt['next_state_memory']
        self.ptr = int(tokens[1])
        self.max_size = int(os.path.splitext(tokens[2])[0])
        
        print(f"Loading buffer at {chkpt_path} of size {self.ptr} and total capacity {self.max_size}")

    def __len__(self) -> int:
        return self.size

I don't know how to save the object created from PrioritizedReplayBuffer class. Any help is appreciated :)
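
One simple option (an assumption on my side, not something from the tutorial) is to serialize the whole PrioritizedReplayBuffer object with pickle, since it only holds numpy arrays, segment trees backed by plain Python lists, and scalars:

    import pickle

    def save_per_buffer(buffer, path: str) -> None:
        """Serialize the whole buffer object, segment trees included."""
        with open(path, "wb") as f:
            pickle.dump(buffer, f)

    def load_per_buffer(path: str):
        """Restore a previously pickled buffer."""
        with open(path, "rb") as f:
            return pickle.load(f)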

input state-action pair into Rainbow DQN

Hi, I was thinking of incorporating the action (in addition to the state) as a state-action pair input to the Rainbow DQN model, but I am unsure where to insert it. The code below shows 4 places where I am thinking of adding the actions (as input to the model), but I am unsure whether it is appropriate to add them there or not. (Please see the "<----" markers.)

def _compute_dqn_loss(self, samples: Dict[str, np.ndarray], gamma: float) -> torch.Tensor:
        """Return categorical dqn loss."""
        device = self.device  # for shortening the following lines
        state = torch.FloatTensor(samples["obs"]).to(device)
        next_state = torch.FloatTensor(samples["next_obs"]).to(device)
        action = torch.LongTensor(samples["acts"]).to(device)
        reward = torch.FloatTensor(samples["rews"].reshape(-1, 1)).to(device)
        done = torch.FloatTensor(samples["done"].reshape(-1, 1)).to(device)
        
        # Categorical DQN algorithm
        delta_z = float(self.v_max - self.v_min) / (self.atom_size - 1)

        with torch.no_grad():
            # Double DQN
            next_state_EDIT = np.concatenate([next_state, action])   <---- concat action
            next_action = self.dqn(next_state_EDIT).argmax(1)        <---- edited state as input
            next_dist = self.dqn_target.dist(next_state_EDIT)        <---- edited state as input
            next_dist = next_dist[range(self.batch_size), next_action]

            t_z = reward + (1 - done) * gamma * self.support
            t_z = t_z.clamp(min=self.v_min, max=self.v_max)
            b = (t_z - self.v_min) / delta_z
            l = b.floor().long()
            u = b.ceil().long()

            offset = (
                torch.linspace(
                    0, (self.batch_size - 1) * self.atom_size, self.batch_size
                ).long()
                .unsqueeze(1)
                .expand(self.batch_size, self.atom_size)
                .to(self.device)
            )

            proj_dist = torch.zeros(next_dist.size(), device=self.device)
            proj_dist.view(-1).index_add_(
                0, (l + offset).view(-1), (next_dist * (u.float() - b)).view(-1)
            )
            proj_dist.view(-1).index_add_(
                0, (u + offset).view(-1), (next_dist * (b - l.float())).view(-1)
            )
        state_EDIT = np.concatenate([state, action])                  <---- concat action
        dist = self.dqn.dist(state_EDIT)                              <---- edited state as input
        log_p = torch.log(dist[range(self.batch_size), action])
        elementwise_loss = -(proj_dist * log_p).sum(1)

        return elementwise_loss
def select_action(self, state: np.ndarray) -> np.ndarray:
        """Select an action from the input state."""
        # NoisyNet: no epsilon greedy action selection
        state_EDIT = np.concatenate([state, action])               <---- concat action
        selected_action = self.dqn(
            torch.FloatTensor(state_EDIT).to(self.device)          <---- edited state as input
        ).argmax()
        selected_action = selected_action.detach().cpu().numpy()
        
        if not self.is_test:
            self.transition = [state, selected_action]
        
        return selected_action

I have seen state-action pairs used as input to the Q function of soft actor-critic before, but not in DQN, so I am unsure whether it is logical to do this, especially in self.dqn.dist(state_EDIT) and selected_action = self.dqn(torch.FloatTensor(state_EDIT).to(self.device)).argmax().

Any ideas on this? thanks :)
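
If you do want to feed a state-action pair, the concatenation has to happen on torch tensors along the feature dimension rather than via np.concatenate as in the snippet above; a minimal sketch with made-up sizes:

    import torch

    batch_size, obs_dim, n_actions = 32, 8, 4            # illustrative sizes
    state = torch.randn(batch_size, obs_dim)              # (B, obs_dim)
    action = torch.randint(0, n_actions, (batch_size,))   # (B,)

    # concatenate along dim=1; the network's first layer then needs in_features = obs_dim + 1
    state_action = torch.cat([state, action.float().unsqueeze(1)], dim=1)  # (B, obs_dim + 1)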

Atari

Your Rainbow implementation here is awesome! So good.
Please consider adapting it for Atari games for beginners.

bias_sigma initialization in noisy net

According to Sec. 3.2 of the paper "Noisy Networks for Exploration", the sigmas are initialized to the constant sigma_0 / sqrt(p), where p is the number of inputs.
However, in this implementation, self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.out_features)) is used. Is this a typo?
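
For comparison, a sketch of reset_parameters() following the paper's factorized-noise initialization, with p = in_features throughout (whether bias_sigma should really use in_features or out_features is exactly the question above):

    import math

    def reset_parameters(self):
        # paper, Sec. 3.2: mu ~ U(-1/sqrt(p), 1/sqrt(p)), sigma = sigma_0 / sqrt(p), p = number of inputs
        mu_range = 1 / math.sqrt(self.in_features)
        self.weight_mu.data.uniform_(-mu_range, mu_range)
        self.weight_sigma.data.fill_(self.std_init / math.sqrt(self.in_features))
        self.bias_mu.data.uniform_(-mu_range, mu_range)
        self.bias_sigma.data.fill_(self.std_init / math.sqrt(self.in_features))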

Some questions on the N-step ReplayBuffer

Maybe some dumb questions about the N-step ReplayBuffer:

  1. In update_model()
        if self.use_n_step:
            samples = self.memory_n.sample_batch_from_idxs(indices)
            gamma = self.gamma ** self.n_step
            n_loss = self._compute_dqn_loss(samples, gamma)

Here the assumption is that, in samples, the next_obs of each transition is n_step steps away from its obs, so self.gamma ** self.n_step makes sense. However, the way _get_n_step_info() is implemented, there is no guarantee that this assumption holds. For example, when there is a terminal state right after obs, the next_obs of that transition will be just one step away from obs, right?

  2. In the same code snippet above,
    samples = self.memory_n.sample_batch_from_idxs(indices)
    Here self.memory_n is sampled with the same indices sampled from self.memory. However, there seem to be two issues here:
    2.1 For the same index, the two transitions sampled from self.memory and self.memory_n do not have the same obs (they have the same next_obs if there is no terminal-state issue, as described above).
    2.2 If there is a terminal state after obs but before the n-th step, then the transitions sampled from self.memory and self.memory_n have neither the same obs nor the same next_obs.

For example, the following is from a trace (the sample dict from the N-step buffer is renamed to samples2 for clarity):

(Pdb) l
138 # prevent high-variance.
139 if self.use_n_step:
140 samples2 = self.memory_n.sample_batch_from_idxs(indices)
141 if (samples2['next_obs'][0][0] != samples['next_obs'][0][0]):
142 pdb.set_trace()
143 -> gamma = self.gamma ** self.n_step
144 n_loss = self._compute_dqn_loss(samples2, gamma)
145 loss += n_loss
146
147 self.optimizer.zero_grad()
148 loss.backward()
(Pdb) p samples
{'obs': array([[-0.03751984, 0.00060021, 0.00848503, 0.02023765]],
dtype=float32), 'next_obs': array([[-0.03750784, -0.1946424 , 0.00888978, 0.31558558]],
dtype=float32), 'acts': array([0.], dtype=float32), 'rews': array([1.], dtype=float32), 'done': array([0.], dtype=float32), 'indices': array([17])}
(Pdb) samples2
{'obs': array([[-0.04211658, -0.7778139 , 0.17969099, 1.5349392 ]],
dtype=float32), 'next_obs': array([[-0.05767286, -0.974587 , 0.21038976, 1.8778918 ]],
dtype=float32), 'acts': array([0.], dtype=float32), 'rews': array([1.], dtype=float32), 'done': array([1.], dtype=float32)}

So my question is: if the samples from the 1-step and N-step buffers are not really synchronized, what is the benefit of trying? I guess even if we simply use self.memory_n.sample_batch the result is probably going to be fine, since the 1-step loss is just used to lower the variance, and that does not necessarily require the two samples to have the same indices?

Thanks!

Atari

Your Rainbow implementation here is awesome!
So good. Please consider adapting it for Atari games.

Modify the description of N-step buffer

The N-step buffer code has been changed, but the description hasn't.

You can see that it doesn't actually store a transition in the buffer unless n_step_buffer is full:

    # in store method
    if len(self.n_step_buffer) < self.n_step:
        return False

The code has since been changed from return False to return ().

Update torch, numpy version

The versions of the packages (torch, numpy) should be updated, and all algorithms should then be re-tested.
PG-is-all-you-need has already been updated to torch==1.6.0 and numpy==1.18.0.

Not handling time limits

In the DQNAgent, particularly in the step method, there seems to be a potential issue in properly distinguishing between termination and truncation, as suggested by the Gymnasium documentation available at https://gymnasium.farama.org/tutorials/gymnasium_basics/handling_time_limits/.

The following line of code, done = terminated or truncated, treats both termination and truncation equally.

Furthermore, in the _compute_dqn_loss method, the code lines

    mask = 1 - done
    target = (reward + self.gamma * next_q_value * mask).to(self.device)

do not seem to account specifically for truncation.
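
A common way to handle this (a sketch, assuming Gymnasium's five-value step API) is to reset on either flag but let only a true termination zero out the bootstrap term:

    # fragment of the agent's step method (illustrative, not the notebook's exact code)
    next_state, reward, terminated, truncated, _ = self.env.step(action)
    done = terminated or truncated  # controls episode reset and logging
    if not self.is_test:
        # store `terminated` rather than `done`, so mask = 1 - done in _compute_dqn_loss
        # only cuts off bootstrapping at real terminal states
        self.transition += [reward, next_state, terminated]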

V_min and V_max - Rainbow DQN

Hello,

I'm studying your Rainbow DQN tutorial and trying to understand why V_min is set to 0 and V_max to 200. Most Rainbow DQN implementations set those variables to -10 and 10, respectively.
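
One plausible reading (my note, not from this thread): the tutorial trains on CartPole, where the reward is +1 per step and episodes are capped at 200 steps, so the undiscounted return lies in [0, 200]; the [-10, 10] range comes from Atari setups where rewards are clipped to [-1, 1].

    import torch

    # support of the categorical value distribution for the tutorial's CartPole setting
    v_min, v_max, atom_size = 0.0, 200.0, 51
    support = torch.linspace(v_min, v_max, atom_size)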

There is a typo in N-step ReplayBuffer

In 08.rainbow.ipynb, there is a value, indices, that is never used.

def sample_batch(self) -> Dict[str, np.ndarray]:
        idxs = np.random.choice(self.size, size=self.batch_size, replace=False)
        return dict(
            obs=self.obs_buf[idxs],
            next_obs=self.next_obs_buf[idxs],
            acts=self.acts_buf[idxs],
            rews=self.rews_buf[idxs],
            done=self.done_buf[idxs],
            # for N-step Learning 
            # MC Check This Function is not used
            indices=indices,
        )

Update frequency/method and warm-up period

Hey!

First: Thanks a lot for the awesome tutorial! It is really great :)

I am currently building a Rainbow RL agent for the board game of Abalone based on your tutorial/implementation and gym-abalone. You can have a look at it here.

While reading the Rainbow paper, I found some discrepancies regarding when a learning step takes place and when the target network is updated:

  1. Training start: the paper waits for a warm-up period before training; this repo starts training as soon as the first batch is available.
  2. Update frequency: the paper performs a model update every 4th agent step; this repo performs an update at every agent step.
  3. Target network: the paper appears to soft-update the target net at every training step(?); this repo hard-updates the target net every target_update steps.

With issue 2, I understand that in the paper each action selected by the agent is repeated four times and that every state is a concatenation of four frames. So from that perspective updating at every agent step might make sense.
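
For concreteness, the paper-style schedule could be approximated in the training loop with two extra guards (a sketch; WARMUP_TRANSITIONS and UPDATE_EVERY are illustrative names, and frame_idx is the loop counter from the tutorial's train()):

    WARMUP_TRANSITIONS = 10_000  # illustrative warm-up length before learning starts
    UPDATE_EVERY = 4             # one gradient step per four agent steps

    if len(self.memory) >= max(self.batch_size, WARMUP_TRANSITIONS) and frame_idx % UPDATE_EVERY == 0:
        loss = self.update_model()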

With issue 3, I am not quite sure how it is done in the paper. It seems to be done like that in this implementation, which you build upon (right?).

Is there a reason for these differences?
I would be very happy to hear from you! :)

Best,
Max

Type error on Prioritized DQN ( unpacking list with star operation)


TypeError Traceback (most recent call last)
in ()
----> 1 agent.train(num_frames)

1 frames
in step(self, action)
134 if not self.is_test:
135 self.transition += [reward, next_state, done]
--> 136 self.memory.store(self.transition)
137
138 return next_state, reward, done

TypeError: store() missing 4 required positional arguments: 'act', 'rew', 'next_obs', and 'done'

The platform is Google Colab, by the way.
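
As the title suggests, the traceback points to the transition list needing to be unpacked into store()'s positional arguments, e.g.:

    # instead of self.memory.store(self.transition)
    self.memory.store(*self.transition)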

clear memory during n-step learning

In the training loop, it might make sense to call self.memory_n_step.n_step_buffer.clear() when an episode is done to avoid (final->initial) transitions.
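
A sketch of where such a call could go (the attribute name memory_n follows the notebook; adjust it to whatever your buffer is called):

    # at the end of an episode, before the environment is reset,
    # drop the partially filled n-step window so no (final -> initial) transition is formed
    if done:
        self.memory_n.n_step_buffer.clear()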

Google Drive: Saving, Loading, Resuming Features

I need these saving, loading, and resuming features for model states. I tried to write them, but I am confused somewhere between GPU weights and CPU weights, and about whether we need to create the network class before loading, or whether the checkpoint already contains the network definitions, etc.

Save/Load capabilities

Hey,
If I may ask for the addition of "Load" and "Save" capabilities to the Rainbow model, it would help me a lot.
If such capabilities already exist, please let me know how.

thanks in advance for your help
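
A minimal save/load sketch based on PyTorch state_dicts (function and path names are hypothetical; the attributes dqn, dqn_target, and optimizer follow the tutorial's naming). The map_location argument also covers the GPU-vs-CPU weight confusion mentioned in the previous issue:

    import torch

    def save_checkpoint(agent, path: str = "rainbow_checkpoint.pt") -> None:
        torch.save(
            {
                "dqn": agent.dqn.state_dict(),
                "dqn_target": agent.dqn_target.state_dict(),
                "optimizer": agent.optimizer.state_dict(),
            },
            path,
        )

    def load_checkpoint(agent, path: str = "rainbow_checkpoint.pt", device: str = "cpu") -> None:
        checkpoint = torch.load(path, map_location=device)  # handles GPU -> CPU moves
        agent.dqn.load_state_dict(checkpoint["dqn"])
        agent.dqn_target.load_state_dict(checkpoint["dqn_target"])
        agent.optimizer.load_state_dict(checkpoint["optimizer"])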
