
DI-star's Introduction

Overview

DI-star: a large-scale distributed game-AI training platform developed specifically for StarCraft II. We have already trained a grandmaster-level AI! This project contains:

  • Play demo and test code (try and play with our agent!)

  • First version of pre-trained SL and RL agent (only Zerg vs Zerg)

  • Training code for supervised learning and reinforcement learning (updated 2022-01-31)

  • A training baseline with limited resources (one PC) and training guidance here (New! updated 2022-04-24)

  • Agents that played against Harstem (YouTube) (updated 2022-04-01)

  • Stronger pre-trained RL agents (WIP)

Usage

Testing software on Windows | Battle software download

Please star us (click the star button in the top-right of this page) to help DI-star agents grow faster :)

Installation

Environment requirements:

  • Python: 3.6-3.8

1. Install StarCraft II

Note: There is no retail version on Linux; please follow the instructions here.

  • Add the SC2 installation path to the environment variable SC2PATH (skip this if you use the default installation path on MacOS or Windows, which is C:\Program Files (x86)\StarCraft II or /Applications/StarCraft II). A quick verification sketch follows the platform-specific steps below:

    • On MacOS or Linux, input this in terminal:

      export SC2PATH=<sc2/installation/path>
    • On Windows:

      1. Right-click the Computer icon and choose Properties, or in Windows Control Panel, choose System.
      2. Choose Advanced system settings.
      3. On the Advanced tab, click Environment Variables.
      4. Click New to create a new environment variable; set SC2PATH to the SC2 installation location.
      5. After creating or modifying the environment variable, click Apply and then OK to have the change take effect.
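
Either way, a quick Python check can confirm the variable is picked up (a minimal sketch; the Versions subdirectory is assumed from the standard SC2 install layout):

import os

# Sanity check: SC2PATH should point at a StarCraft II installation.
# A standard install contains a "Versions" subdirectory with the game binaries.
sc2_path = os.environ.get("SC2PATH", r"C:\Program Files (x86)\StarCraft II")
print("SC2PATH:", sc2_path)
print("Looks like an SC2 install:", os.path.isdir(os.path.join(sc2_path, "Versions")))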

2. Install distar:

git clone https://github.com/opendilab/DI-star.git
cd DI-star
pip install -e .

3. Install PyTorch:

PyTorch 1.7.1 with CUDA is recommended. Follow the instructions on the PyTorch official site.

Note: A GPU is necessary for decent performance in real-time agent tests. You can also use PyTorch without CUDA, but performance is not guaranteed due to inference latency on CPU. Make sure you set SC2 to the lowest picture quality before testing.
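
Before testing, a one-off check like the following confirms whether your PyTorch build can actually see a GPU (a minimal sketch using standard PyTorch calls):

import torch

# True means CUDA-enabled PyTorch found a usable GPU; False means the agent
# will fall back to CPU inference, with noticeably higher latency per step.
print(torch.__version__, torch.cuda.is_available())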

Play with pretrained agent

1. Download StarCraft II version 4.10.0

Double-click the file data/replays/replay_4.10.0.SC2Replay, and StarCraft II version 4.10.0 will be downloaded automatically.

Note: We trained our models with versions 4.8.2 through 4.9.3. Patch 5.0.9 came out on March 15, 2022; some of its changes have a huge impact on performance, so we fix the version at 4.10.0 for evaluation.

2. Download models:

python -m distar.bin.download_model --name rl_model

Note: Specify rl_model or sl_model after --name to download the reinforcement learning model or the supervised model.

Model list:

  • sl_model: trained on human replays; skill comparable to Diamond players.
  • rl_model: used by default; trained with reinforcement learning; skill comparable to Master or Grandmaster players.
  • Abathur: one of the reinforcement learning models; likes playing mutalisks.
  • Brakk: one of the reinforcement learning models; likes zergling-baneling rushes.
  • Dehaka: one of the reinforcement learning models; likes playing roach-ravager.
  • Zagara: one of the reinforcement learning models; likes roach rushes.
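
After downloading, you can sanity-check that a checkpoint file is intact before launching a game (a hedged sketch; the file name and location are assumptions, so substitute wherever download_model saved the model):

import torch

# A healthy checkpoint loads without error; a truncated download raises
# "PytorchStreamReader failed reading zip archive" instead.
state_dict = torch.load("rl_model.pth", map_location="cpu")
print("checkpoint loaded, top-level entries:", len(state_dict))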

3. Agent test

With the given models, we provide multiple ways to test our agent.

Play against Agent
python -m distar.bin.play

This runs two StarCraft II instances. The first is controlled by our RL agent; a human player plays on the second in full screen, like a normal game.

Note:

  • A GPU and CUDA are required by default; add --cpu if you don't have them.
  • Download the RL model first, or specify another model (such as the supervised model) with the argument --model1 <model_name>.
  • In rare cases, the two StarCraft II instances may lose their connection and the agent will stop issuing actions. Please restart when this happens.
Agent vs Agent
python -m distar.bin.play --game_type agent_vs_agent

This runs two StarCraft II instances, both controlled by our RL agent. Specify other model paths with the arguments --model1 <model_name> --model2 <model_name>.

Agent vs Bot
python -m distar.bin.play --game_type agent_vs_bot

The RL agent plays against the built-in elite bot.

Building your own agent with our framework

It is necessary to be able to build different agents within one code base and still have them play against each other. We implement this by making the actor and the environment common components and putting everything related to an agent into one directory. The agent called default under distar/agent is an example of this. Every script under default uses relative imports, which makes the whole directory portable anywhere; see the sketch below.
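
For illustration, the convention looks like this (hypothetical file and module names, not actual DI-star sources):

# distar/agent/my_agent/agent.py -- hypothetical agent script
# Relative imports only reference siblings inside the agent's own package,
# so the whole directory can be copied or renamed without editing any code.
from .model.model import Model     # not: from distar.agent.default.model.model import Model
from .lib.features import Features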

If you want to create a new agent based on (or independent of) our default agent, follow the instructions here.

If you want to train a new agent with our framework, follow the instructions below; here is a guide with more details of the whole training pipeline.

Supervised Learning

The StarCraft II client is required for replay decoding; follow the installation instructions above.

python -m distar.bin.sl_train --data <path>

path can be either a directory containing replays or a file listing one replay path per line.
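
If your replays are scattered across folders, a small helper can build such a list file (a minimal sketch; the directory and output names are placeholders):

import os

# Collect every .SC2Replay under a directory into one path-per-line file,
# which sl_train accepts via --data just like a plain replay directory.
replay_dir = "./trainingreplays"          # placeholder
with open("replay_list.txt", "w") as f:
    for root, _, files in os.walk(replay_dir):
        for name in files:
            if name.endswith(".SC2Replay"):
                f.write(os.path.abspath(os.path.join(root, name)) + "\n")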

Optionally, separating replay decoding from model training can be more efficient; run the following three scripts in different terminals:

python -m distar.bin.sl_train --type coordinator
python -m distar.bin.sl_train --type learner --remote
python -m distar.bin.sl_train --type replay_actor --data <path>

For distributed training:

python -m distar.bin.sl_train --init_method <init_method> --rank <rank> --world_size <world_size>
or
python -m distar.bin.sl_train --type coordinator
python -m distar.bin.sl_train --type learner --remote --init_method <init_method> --rank <rank> --world_size <world_size>
python -m distar.bin.sl_train --type replay_actor --data <path>

Here is an example of training on a machine with 4 GPUs in remote mode:

# Run the following scripts in different terminals (windows).
python -m distar.bin.sl_train --type coordinator
# Assume 4 GPUs are on the same machine. 
# If your GPUs are on different machines, you need to configure the init_method IP for each machine.
python -m distar.bin.sl_train --type learner --remote --init_method tcp://127.0.0.1 --rank 0 --world_size 4
python -m distar.bin.sl_train --type learner --remote --init_method tcp://127.0.0.1 --rank 1 --world_size 4
python -m distar.bin.sl_train --type learner --remote --init_method tcp://127.0.0.1 --rank 2 --world_size 4
python -m distar.bin.sl_train --type learner --remote --init_method tcp://127.0.0.1 --rank 3 --world_size 4
python -m distar.bin.sl_train --type replay_actor --data <path>
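
The --init_method, --rank and --world_size flags follow PyTorch's standard distributed-launch convention. Conceptually they map onto the process-group setup below (a sketch under that assumption, not DI-star's actual wiring; note that a TCP init method normally also carries a port):

import torch.distributed as dist

# Each learner process joins one process group: rank identifies the process,
# world_size is the total number of learners across all machines.
dist.init_process_group(
    backend="nccl",                       # common choice for multi-GPU training
    init_method="tcp://127.0.0.1:23456",  # --init_method (port assumed here)
    rank=0,                               # --rank of this process
    world_size=4,                         # --world_size
)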

Reinforcement Learning

Reinforcement learning uses the supervised model as its initial model; please download it first. The StarCraft II client is also required.

1. Training against bots in StarCraft II:
python -m distar.bin.rl_train
2. Training with self-play:
python -m distar.bin.rl_train --task selfplay

Four components are used for RL training; just as with SL training, they can be run as separate processes:

python -m distar.bin.rl_train --type league --task selfplay
python -m distar.bin.rl_train --type coordinator
python -m distar.bin.rl_train --type learner
python -m distar.bin.rl_train --type actor

Distributed training is also supported, as with SL training.

Chat group

Slack: link

Discord server: link

Citation

@misc{distar,
    title={DI-star: An Open-source Reinforcement Learning Framework for StarCraft II},
    author={DI-star Contributors},
    publisher={GitHub},
    howpublished = {\url{https://github.com/opendilab/DI-star}},
    year={2021},
}

License

DI-star is released under the Apache 2.0 license.

DI-star's People

Contributors

ain-soph, liuyuisanai, lkwargs, paparazz1, rangilyu, upia99


DI-star's Issues

Doesn't run with numpy version >= 1.24

Getting this error when trying to run: module 'numpy' has no attribute 'int'. Did you mean: 'inf'?

It looks like np.int was removed in the latest version of numpy; it's just int now, or potentially np.int32 and np.int64. I don't know enough about the codebase to do a PR with the updates, but a simple fix for now is to pin the numpy version with numpy==1.23.5 in setup.py.
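
Besides pinning, a stopgap is to restore the removed alias before importing distar (an untested workaround, not a fix from the maintainers):

import numpy as np

# np.int was removed in numpy 1.24; plain int is the documented replacement.
# Monkey-patching it back lets legacy code run until the codebase is updated.
if not hasattr(np, "int"):
    np.int = int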

Error when running python -m distar.bin.play

I got an error when initializing the AI:

ImportError: cannot import name 'container_abcs' from 'torch._six' (C:\ProgramData\Miniconda3\lib\site-packages\torch\_six.py)

PyTorch version (CUDA v11):

torch 1.9.0
torchaudio 0.9.0
torchvision 0.10.0

Did some searching and I think this is related to the torch version. What version are the devs using? Thanks!
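
For reference, container_abcs was removed from torch._six in PyTorch 1.9, so torch 1.9.0 with code written for 1.7.1 triggers exactly this error. A common compatibility shim (an assumption about the failing import site, not a confirmed DI-star patch):

# Replace the failing `from torch._six import container_abcs` with:
try:
    from torch._six import container_abcs     # PyTorch <= 1.8
except ImportError:
    import collections.abc as container_abcs  # PyTorch >= 1.9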

Are the released models distilled or pruned? The performance feels very poor

Judging by their size, these are all small models of around 150 MB. After a few games, their reactions feel poor, their build orders are very rigid, and they barely scout. They die to simple tactics: for example, against a two-base wall-off into fast mutalisks, the AI only adds anti-air after seeing the mutalisks, and only at whichever base is being hit; after one expansion was wiped out and the mutalisks flew to the main, it still hadn't added defense there. After seeing the wall-off it had no response at all: no third base, no scouting, no anticipation of the mutalisks. Is the original large model like this too? (PS: I'm bad at ZvZ; I can't win a standard game, so I can only cheese.)

Making external Google code a dependency (and some suggestions)

Hi!
After checking this repo, it seems quite a lot of the code comes from pysc2. That code might be better used as an installed package rather than cloned into this repo.

I also hope there could be an abstract illustrating the contributions of this repo, such as the code structure and figures for the algorithms and mechanisms. Docs and CI test workflows are also important. Currently it's a little difficult to catch up without clear guidance, but I'm quite willing to contribute in the future.

Btw, is IA himself the maintainer of this repo? Quite amazing.

Add support for training Starcraft 1 Broodwar AIs

Hey! Awesome work so far!
It would be really cool if (a fork of) the DI-star environment also supported training StarCraft 1 Brood War AIs using BWAPI (https://github.com/bwapi/bwapi).
The Brood War AI community has been very competitive for over a decade now, and the current top bots are already at or above the playing strength of average human players.
Some more references if you are interested:
https://sscaitournament.com/
https://www.basil-ladder.net/

Keep up the good work! :)

[Feature Request] Docs Support

I'm glad to see that this repo has been continuously maintained this year, and I still hope to contribute.
But I find it quite difficult to get started, and good docs/tutorials are in strong demand.

If you can write a short introduction to the code architecture, I can help construct a doc template (like this: trojanzoo), so that we can work on it together.

Of course, this assumes the API architecture is already fixed.

About the hotkey issue in HVA (human vs. agent) mode

Hello IA, I found a problem when running the Windows version of DI-star locally and trying to play against the AI:
the StarCraft II instance launched in this environment cannot correctly load the custom hotkey file associated with my account.
Is this a problem with my settings, or is it a feature that hasn't been implemented yet?

Looking forward to your reply.

Abandon support for Python 2

I see there are many from __future__ import XXX statements, which support Python 2.

Since PyTorch has terminated its support for Python 2, we might remove those compatibility statements.

Invalid Discord Link

The link in the last update is invalid now.

I'm a new fan(SC2 newbie). Wanna learn more. -v-

About replay data

Could you tell me whether there is replay data for the latest version, 5.0.9, and how to obtain such replay data?

Running into errors trying to train

Hi, I am trying to train the AI to play ZvT, but am encountering some errors.

  1. I made a trainingreplays folder and put in some replays from Patch 4.9. The command I ran was python -m distar.bin.sl_train --data ./trainingreplays:
Start decoding replay with player 1, path: ./trainingreplays\reynorvssoul1.SC2Replay
12380 [ERROR] parse replay error SC2APIProtocol.ResponseReplayInfo.Error.InvalidReplayPath: ''
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\agent\default\replay_decoder.py", line 380, in run
    self._replay_info = self._parse_replay_info()
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\agent\default\replay_decoder.py", line 343, in _parse_replay_info
    replay_info = self._controller.replay_info(replay_path=self._replay_path)
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\pysc2\lib\remote_controller.py", line 69, in _check_error
    return check_error(func(*args, **kwargs), error_enum)
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\pysc2\lib\remote_controller.py", line 60, in check_error
    raise RequestError("%s.%s: '%s'" % (enum_name, error_name, details))
  2. I tried the reinforcement learning self-play command python -m distar.bin.rl_train --task selfplay. Am I understanding correctly that self-play is when it trains by playing against me?
train !!!!!!!!!!!!!!!!!!!
Traceback (most recent call last):
  File "C:\Users\Steve Lu\Downloads\miniconda\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\Steve Lu\Downloads\miniconda\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
TypeError: learner_run() takes 2 positional arguments but 5 were given
{'checkpoint_paths': ['C:\\Users\\Steve Lu\\Downloads\\di-star\\di-star\\experiments\\test\\league_models\\MP0H1_MP0_ckpt.pth.tar', 'C:\\Users\\Steve Lu\\Downloads\\di-star\\di-star\\experiments\\test\\league_models\\MP0_ckpt.pth.tar'], 'env_info': {'map_name': 'random', 'player_ids': ['MP0H1', 'MP0'], 'side_id': [0, 1]}, 'frac_ids': [1, 1], 'pipelines': ['default', 'default'], 'player_ids': ['MP0H1', 'MP0'], 'send_data_players': ['MP0'], 'side_ids': [0, 1], 'successive_ids': ['none', 'MP0'], 'teacher_checkpoint_paths': ['none', 'C:\\Users\\Steve Lu\\Downloads\\di-star\\di-star\\distar\\bin\\sl_model.pth'], 'teacher_player_ids': ['none', 'model'], 'update_players': ['MP0'], 'z_path': ['3map.json', '3map.json'], 'z_prob': [0.25, 0.25]}
run worker for token: MP0model at 127.0.0.1: 18992
 * Serving Flask app 'distar.ctools.worker.coordinator.coordinator' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
z_type 1 cum False bo False
Map: NewRepugnancy Race: zerg, Born location: (16, 101), loop: 4840, idx: None
Building order:
  Build_Extractor_unit, (0, 0)
  Build_SpawningPool_pt, (21, 106)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Research_ZerglingMetabolicBoost_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Build_BanelingNest_pt, (16, 111)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Build_Hatchery_pt, (16, 78)
  Train_Zergling_quick, (0, 0)
  Train_Baneling_quick, (0, 0)
  Train_Baneling_quick, (0, 0)
  Train_Queen_quick, (0, 0)
Cumulative stat:
  Build_BanelingNest_pt
  Build_Extractor_unit
  Build_Hatchery_pt
  Build_SpawningPool_pt
  Research_ZerglingMetabolicBoost_quick
  Train_Baneling_quick
  Train_Queen_quick
  Train_Zergling_quick

[EPISODE LOOP ERROR] 'terran'
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\actor\actor.py", line 140, in _inference_loop
    self.agents[idx].reset(map_name, race, game_info[idx], observations[idx])
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\agent\default\agent.py", line 191, in reset
    z = random.choice(z_data[self._map_name][self._race][born_location_str])

z_type 2 cum False bo False
Map: NewRepugnancy Race: zerg, Born location: (135, 18), loop: 9473, idx: None
Building order:
  Build_Hatchery_pt, (135, 43)
  Build_Extractor_unit, (0, 0)
  Build_SpawningPool_pt, (134, 30)
  Train_Queen_quick, (0, 0)
  Train_Queen_quick, (0, 0)
  Research_ZerglingMetabolicBoost_quick, (0, 0)
  Build_EvolutionChamber_pt, (126, 43)
  Train_Queen_quick, (0, 0)
  Research_ZergMeleeWeapons_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Train_Zergling_quick, (0, 0)
  Build_RoachWarren_pt, (125, 40)
  Build_Hatchery_pt, (104, 27)
  Build_Extractor_unit, (0, 0)
Cumulative stat:
  Build_EvolutionChamber_pt
  Build_Extractor_unit
  Build_Hatchery_pt
  Build_RoachWarren_pt
  Build_SpawningPool_pt
  Morph_Ravager_quick
  Research_ZergGroundArmor_quick
  Research_ZerglingMetabolicBoost_quick
  Research_ZergMeleeWeapons_quick
  Train_Queen_quick
  Train_Roach_quick
  Train_Zergling_quick

[EPISODE LOOP ERROR] 'terran'
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\actor\actor.py", line 140, in _inference_loop
    self.agents[idx].reset(map_name, race, game_info[idx], observations[idx])
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\agent\default\agent.py", line 191, in reset
    z = random.choice(z_data[self._map_name][self._race][born_location_str])

Process SpawnProcess-4:
Traceback (most recent call last):
  File "C:\Users\Steve Lu\Downloads\miniconda\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Steve Lu\Downloads\miniconda\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\bin\rl_train.py", line 153, in <module>
    actor_run(config, args)
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\bin\rl_train.py", line 65, in actor_run
Traceback (most recent call last):
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\actor\actor.py", line 140, in _inference_loop
    self.agents[idx].reset(map_name, race, game_info[idx], observations[idx])
  File "C:\Users\Steve Lu\Downloads\di-star\di-star\distar\agent\default\agent.py", line 191, in reset
    z = random.choice(z_data[self._map_name][self._race][born_location_str])
    KeyError: 'terran'
actor.run()

Question about contributing

Li Peinan is awesome! @upia99
When will development be finished? 🤔 Chinese StarCraft is about to get its perfect ending.
I'm not very clear on the current development progress. Is there anything I can do as an external contributor?

Running into exception when testing SL training with cuda = False

SL training works fine if I turn on cuda = True, but I am running a little short on budget, so I wanted to see whether running on CPU works.

If I turn off cuda, here is the exception I get. Any idea how to fix it? I tried changing that part of the code, but got the same issue.

if torch.cuda.is_available():
    embedding = torch.zeros(bs * shape_y * shape_x, device=x[k].device)
    bias = torch.arange(bs, device=x[k].device).unsqueeze(dim=1) * shape_y * shape_x
else:
    embedding = torch.zeros(bs * shape_y * shape_x)
    bias = torch.arange(bs).unsqueeze(dim=1) * shape_y * shape_x

[2022-12-04 14:37:58,901][checkpoint_helper.py][line: 140][ INFO] save checkpoint in /home/mark/projects/DI-star/experiments/sl_train_protoss/checkpoint/sl_train_protoss_iteration_1.pth.tar
[2022-12-04 14:37:58,902][ log_helper.py][line: 161][ INFO] BaseLearner139853483483344 save checkpoint in /home/mark/projects/DI-star/experiments/sl_train_protoss/checkpoint/sl_train_protoss_iteration_1.pth.tar
Traceback (most recent call last):
  File "/home/mark/projects/DI-star/distar/ctools/torch_utils/checkpoint_helper.py", line 364, in wrapper
    return func(*args, **kwargs)
  File "/home/mark/projects/DI-star/distar/ctools/worker/learner/base_learner.py", line 266, in run
    self._train(data)
  File "/home/mark/projects/DI-star/distar/ctools/worker/learner/base_learner.py", line 129, in wrapper
    ret = fn(*args, **kwargs)
  File "/home/mark/projects/DI-star/distar/agent/default/sl_learner.py", line 50, in _train
    logits, infer_action_info, hidden_state = self._model.sl_train(**data, hidden_state=self.hidden_state)
  File "/home/mark/projects/DI-star/distar/agent/default/model/model.py", line 182, in sl_train
    self.encoder(spatial_info, entity_info, scalar_info, entity_num)
  File "/home/mark/anaconda3/envs/distar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/mark/projects/DI-star/distar/agent/default/model/encoder.py", line 43, in forward
    embedded_spatial, map_skip = self.spatial_encoder(spatial_info, scatter_map)
  File "/home/mark/anaconda3/envs/distar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/mark/projects/DI-star/distar/agent/default/model/obs_encoder/spatial_encoder.py", line 74, in forward
    embedding[x[k].long()] = 1.
IndexError: index 9728000 is out of bounds for dimension 0 with size 9728000

Unable to download models

Hi, when running python -m distar.bin.download_model --name rl_model, I am getting an "Operation timed out" error. Can you please check whether your servers are up?

Trouble running

Hey! First of all thanks for the great project!
I just tried to run it to play against the bot, but I'm getting an error. Here's what I did:

  • Followed the GitHub tutorial and was able to:
  • Install Python 3.10.4
  • Install CUDA 11
  • Install PyTorch 1.7.1
  • I think I successfully downloaded the models
  • Downloaded the old SC2 version with the replay

Here's what I'm getting when I run:

python -m distar.bin.play

Traceback (most recent call last):
  File "C:\Users\yurim\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\yurim\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "c:\users\yurim\projects\di-star\distar\bin\play.py", line 86, in <module>
    actor = Actor(user_config)
  File "c:\users\yurim\projects\di-star\distar\actor\actor.py", line 38, in __init__
    self._setup_agents()
  File "c:\users\yurim\projects\di-star\distar\actor\actor.py", line 65, in _setup_agents
    state_dict = torch.load(self._cfg.model_paths[player_id], map_location='cpu')
  File "C:\Users\yurim\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 705, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\yurim\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\serialization.py", line 243, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

Sorry, I'm not familiar with Python at all. Is there anything I'm missing?
Thanks!
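
For what it's worth, this error usually means the checkpoint file on disk is truncated or corrupted (e.g. an interrupted download). A quick way to check, since PyTorch checkpoints are zip archives (the path below is a placeholder):

import zipfile

# False almost always means an incomplete file; delete it and re-download.
print(zipfile.is_zipfile("path/to/rl_model.pth"))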

About model training

Hello, if I want to do supervised learning with version 4.10.0 replays, where can I find more replays for that version (4.10.0)? I have already tried downloading 4.10.0 replays via the terminal commands in s2client-proto, but they cannot be played back with the 4.10.0 client (it reports that the module or map content referenced by the game is no longer available; screenshot omitted). It seems that replays from international servers cannot be opened with the CN-server client?
