
sextant's People

Contributors

gongweibao, jiamliang, junfeiyang, lestdou, lipeng-unisound, lupan2015, lupan92, pineking, putcn, typhoonzero, vienlee, wangkuiyi, xuerq, yancey1989, zh794390558


sextant's Issues

Automatically get the available disk for Ceph OSD

Currently I just set /dev/sdb as the OSD disk. In the real world the device name may vary, and more than one disk may be available for OSDs. So we should make the script automatically find all the available disks (except the one the system is installed on) and run an OSD daemon for each of them.
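A hedged sketch of the proposed discovery step, assuming lsblk with the PKNAME column is available; the "used disk" heuristic (any mounted partition marks its parent disk as used) is simplified and illustrative:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// availableDisks returns /dev paths of whole disks with no mounted
// filesystem, i.e. candidates for hosting a Ceph OSD.
func availableDisks() ([]string, error) {
    // -r: raw space-separated output; -n: no header line.
    out, err := exec.Command("lsblk", "-rn", "-o", "NAME,TYPE,MOUNTPOINT,PKNAME").Output()
    if err != nil {
        return nil, err
    }
    used := map[string]bool{} // disks that carry a mounted filesystem
    var all []string
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        f := strings.Split(line, " ")
        if len(f) < 4 {
            continue
        }
        name, typ, mnt, parent := f[0], f[1], f[2], f[3]
        if typ == "disk" {
            all = append(all, name)
        }
        if mnt != "" { // a mounted partition marks its parent disk as used
            if parent != "" {
                used[parent] = true
            } else {
                used[name] = true
            }
        }
    }
    var free []string
    for _, d := range all {
        if !used[d] {
            free = append(free, "/dev/"+d)
        }
    }
    return free, nil
}

func main() {
    disks, err := availableDisks()
    if err != nil {
        panic(err)
    }
    fmt.Println("OSD candidates:", disks)
}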

vmtest/run does not clean up after itself

Every run of vmtest/run may leave one or two virtual machines behind. After using it for a while, the VirtualBox application shows quite a few leftover VMs.
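A hedged cleanup sketch: enumerate the registered VirtualBox VMs and delete the leftovers. The name pattern "vmtest" is an assumption about how these VMs are named:

package main

import (
    "fmt"
    "os/exec"
    "regexp"
    "strings"
)

func main() {
    out, err := exec.Command("VBoxManage", "list", "vms").Output()
    if err != nil {
        fmt.Println("VBoxManage list vms failed:", err)
        return
    }
    // Each line looks like: "vmname" {uuid}
    re := regexp.MustCompile(`^"([^"]+)" \{([0-9a-f-]+)\}$`)
    for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
        m := re.FindStringSubmatch(strings.TrimSpace(line))
        if m == nil || !strings.Contains(m[1], "vmtest") {
            continue
        }
        // --delete also removes the VM's files from disk.
        if err := exec.Command("VBoxManage", "unregistervm", m[2], "--delete").Run(); err != nil {
            fmt.Println("failed to delete", m[1], ":", err)
        }
    }
}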

Necessary DHCP config items

@lipeng-unisound Are the following DHCP config items necessary?

default-lease-time 600;
max-lease-time 7200;
authoritative;

option rfc3442-classless-static-routes code 121 = array of integer 8;

subnet 10.10.10.0 netmask 255.255.255.0 {
    option rfc3442-classless-static-routes 24, 192,168,6, 10,10,10,254, 24, 192,169,100, 10,10,10,254, 16, 10,200, 10,10,10,254;
}
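For context: option 121 (RFC 3442) encodes each classless static route as (prefix length, the significant octets of the destination, the gateway). So 24, 192,168,6, 10,10,10,254 means "route 192.168.6.0/24 via 10.10.10.254", and 16, 10,200, 10,10,10,254 means "route 10.200.0.0/16 via 10.10.10.254". default-lease-time and max-lease-time are lease durations in seconds, and authoritative lets the server send DHCPNAKs to clients requesting stale leases; as far as I know, none of the three is strictly required for PXE booting.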

Extract the common parts of bootstrapper/*

  • in_docker_test.bash is really shared code; it could move into bootstrapper/testutil/, and each subdirectory could then ln -s ../testutil/in_docker_test.bash
  • Move the installation of each Linux distribution's packages into the unit tests (e.g. into dhcp/dhcp_test.go rather than dhcp/dhcp.go), and leave a reminder comment in the source file, so that in the future a single apt-get update can be followed by installing all the needed packages.

Accessing Ceph RBD from Docker

[by @wangkuiyi] For the background of this discussion, see #109

https://ceph.com/planet/getting-started-with-the-docker-rbd-volume-plugin/
http://www.sebastien-han.fr/blog/2015/08/17/getting-started-with-the-docker-rbd-volume-plugin/
http://www.atwop.com/archives/852.html

Compiling this plugin fails:

go get github.com/yp-engineering/rbd-docker-plugin

# github.com/yp-engineering/rbd-docker-plugin
../../../../yp-engineering/rbd-docker-plugin/main.go:83: cannot use d (type cephRBDVolumeDriver) as type volume.Driver in argument to volume.NewHandler:
        cephRBDVolumeDriver does not implement volume.Driver (missing Capabilities method)
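The message means the plugin predates the Capabilities method that a newer go-plugins-helpers volume.Driver interface requires. A minimal sketch of the missing method, to be added to the plugin's main.go, assuming the volume package API of that era; the "local" scope is an assumption:

// Capabilities makes cephRBDVolumeDriver satisfy volume.Driver again.
// "local" means volumes are node-local; a cluster-wide RBD driver might
// report "global" instead.
func (d cephRBDVolumeDriver) Capabilities(r volume.Request) volume.Response {
    return volume.Response{Capabilities: volume.Capability{Scope: "local"}}
}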

The DHCP config file

The DHCP config file on the Unisound cluster's PXE server currently binds too many fixed IP addresses.

#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp*/dhcpd.conf.example
#   see dhcpd.conf(5) man page
#
# create new
# specify domain name
# option domain-name "ai-labs.unisound.com";
# specify name server's hostname or IP address
# option domain-name-servers 10.10.10.1;
next-server     10.10.10.192;
filename "pxelinux.0";
# default lease time
default-lease-time 600;
# max lease time
max-lease-time 7200;
# this DHCP server to be declared valid
authoritative;

option rfc3442-classless-static-routes code 121 = array of integer 8;
# option ms-classless-static-routes code 249 = array of integer 8;

# specify network address and subnet mask
subnet 10.10.10.0 netmask 255.255.255.0 {
# specify the range of lease IP address
# range dynamic-bootp 10.10.10.206 10.10.10.212;
# specify broadcast address
    option broadcast-address 10.10.10.255;
# specify default gateway
    option routers 10.10.10.192;
# specify domain name
    option domain-name "ailab.unisound.com";
# specify name servers
    option domain-name-servers 10.10.10.192, 8.8.8.8, 8.8.4.4;
    option rfc3442-classless-static-routes 24, 192,168,6, 10,10,10,254, 24, 192,169,100, 10,10,10,254, 16, 10,200, 10,10,10,254;
#    option ms-classless-static-routes 32, 111, 111, 111, 254, 0, 0, 0, 0, 111, 111, 111, 254;

        host zodiac-01 {
                hardware ethernet 00:25:90:C0:F7:80;
                fixed-address 10.10.10.201;
        }
        host zodiac-02 {
                hardware ethernet 00:25:90:C0:F6:EE;
                fixed-address 10.10.10.202;
        }
        host zodiac-03 {
                hardware ethernet 00:25:90:C0:F6:D6;
                fixed-address 10.10.10.203;
        }
        host zodiac-04 {
                hardware ethernet 00:25:90:C0:F7:AC;
                fixed-address 10.10.10.204;
        }
        host zodiac-05 {
                hardware ethernet 00:25:90:C0:F7:7E;
                fixed-address 10.10.10.205;
        }
        host zodiac-06{
                hardware ethernet 00:25:90:c0:f7:62;
                fixed-address 10.10.10.206;
        }
        host zodiac-07{
                hardware ethernet 00:25:90:c0:f7:68;
                fixed-address 10.10.10.207;
        }
        host zodiac-08{
                hardware ethernet 00:25:90:c0:f7:7a;
                fixed-address 10.10.10.208;
        }
        host zodiac-09{
                hardware ethernet 00:25:90:c0:f7:c8;
                fixed-address 10.10.10.209;
        }
        host zodiac-10{
                hardware ethernet 00:25:90:c0:f7:88;
                fixed-address 10.10.10.210;
        }
        host zodiac-11{
                hardware ethernet 00:25:90:c0:f7:7c;
                fixed-address 10.10.10.211;
        }
        host zodiac-12{
                hardware ethernet 00:25:90:c0:f7:86;
                fixed-address 10.10.10.212;
        }
        host coreos-191 {
                hardware ethernet 00:e0:81:ee:82:c4;
                fixed-address 10.10.10.191;
        }
}

Unused code in bootstrapper/skydns

skydns.go:98:1:warning: getSkyDNSFile is unused (deadcode)
skydns_test.go:17:2:warning: unused global variable systemdContent (varcheck)
skydns_test.go:30:2:warning: unused global variable upstartContent (varcheck)

docker-in-docker for Go unit testing

While writing the bootstrapper, testing is challenging: we need to verify that our code works correctly on both Ubuntu and CentOS.

A straightforward idea is to run it in Docker: for example, create a Dockerfile with FROM ubuntu:14.04, run our program inside it, and check whether it behaves as we expect.

Technically, we can indeed write a Go unit test that builds and runs such a Docker image, because there is a very handy Go Docker API library: https://github.com/fsouza/go-dockerclient . I tried calling this library in unit tests, and they run fine locally (on my Mac and Linux machines): https://github.com/wangkuiyi/learn-ci-docker/blob/master/main_test.go

However, this approach fails in the GitHub + Travis CI environment, because Travis CI runs our Go unit tests inside a Docker container, and when our program tries to connect to the Docker daemon, it finds that the Unix socket /var/run/docker.sock does not exist:

$ go test -v ./...
=== RUN   TestDockerAPI
--- SKIP: TestDockerAPI (0.00s)
    main_test.go:18: Cannot list iamges: Get http://unix.sock/images/json: dial unix /var/run/docker.sock: connect: no such file or directory

Actually, if we had more control over Travis CI, we could pull the following trick:

  1. Travis CI can start Docker containers because the VM it rents has a Docker daemon, and therefore /var/run/docker.sock.

  2. If Travis CI started the Docker container the way that article describes:

    docker run -v /var/run/docker.sock:/var/run/docker.sock ...
    

    then the container would have /var/run/docker.sock too, and our Go unit tests could run.

Unfortunately, as users of Travis CI we cannot control how it starts containers. But if one day we build our own CI system, e.g. on Jenkins, we can use this trick.

Since we are on Travis CI for now, I use _test.bash scripts to start the Docker containers that test the correctness of our code.
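For reference, a minimal sketch of such a test, using github.com/fsouza/go-dockerclient and skipping (as in the log above) when the daemon socket is unreachable:

package main

import (
    "testing"

    docker "github.com/fsouza/go-dockerclient"
)

func TestDockerAPI(t *testing.T) {
    client, err := docker.NewClient("unix:///var/run/docker.sock")
    if err != nil {
        t.Skipf("Cannot create Docker client: %v", err)
    }
    // Fails inside Travis CI containers, where /var/run/docker.sock is absent.
    if _, err := client.ListImages(docker.ListImagesOptions{}); err != nil {
        t.Skipf("Cannot list images: %v", err)
    }
}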

A PXE server configuration script is needed

There is currently no script (bash or ansible) that configures the PXE server, so its configuration is not automated.

vmtest/run cannot log into the CentOS guest with a public key

The error message is as follows:

default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...

In the end it prompts for a username and password. Entering vagrant for both logs you in, and the unit tests can then be run.

Install and check nginx on the bootstrapper server

I tried installing Nginx on an Ubuntu VM and on a CentOS 7 VM.

CentOS 7

Installation on CentOS 7 takes two steps:

  1. sudo yum -y install epel-release
  2. sudo yum -y install nginx

where -y answers Yes on the user's behalf.

Ubuntu

Installing nginx on Ubuntu also takes two steps:

  1. sudo apt-get update
  2. sudo apt-get install -y nginx

Check

On both Ubuntu and CentOS 7, the directory /etc/nginx does not exist before installation and does exist afterwards, which confirms the check @pineking suggested.
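Since the bootstrapper is written in Go, a hedged sketch of how it might wrap these steps plus the directory check; the function and the distro switch are illustrative, not the repo's actual code:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// installNginx runs the distro-specific install commands listed above,
// then verifies the result by checking that /etc/nginx exists.
func installNginx(distro string) error {
    var cmds [][]string
    switch distro {
    case "centos":
        cmds = [][]string{
            {"yum", "-y", "install", "epel-release"},
            {"yum", "-y", "install", "nginx"},
        }
    case "ubuntu":
        cmds = [][]string{
            {"apt-get", "update"},
            {"apt-get", "install", "-y", "nginx"},
        }
    default:
        return fmt.Errorf("unsupported distro: %s", distro)
    }
    for _, c := range cmds {
        if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
            return fmt.Errorf("%v failed: %v\n%s", c, err, out)
        }
    }
    // /etc/nginx should exist only after a successful installation.
    if _, err := os.Stat("/etc/nginx"); err != nil {
        return fmt.Errorf("/etc/nginx not found after install: %v", err)
    }
    return nil
}

func main() {
    if err := installNginx("ubuntu"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}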

Failed to get D-Bus connection: Operation not permitted - CentOS 7

According to moby/moby#7459:

  1. Problem: running a systemctl command inside a CentOS 7 Docker container reports Failed to get D-Bus connection: No connection to service manager. Taking systemctl status systemd-journald as an example:
liuqs@BlackTurtle:~>docker run -ti centos:7 /bin/bash
[root@7430895f5868 /]# systemctl status systemd-journald
Failed to get D-Bus connection: Operation not permitted

Solution: use /usr/sbin/init so that the dbus daemon starts automatically:

$ docker run --privileged -d -ti -e "container=docker"  -v /sys/fs/cgroup:/sys/fs/cgroup  centos:7  /usr/sbin/init
6dd3234f6c9d3475fd56c2996ab25269d646aca4bde219166b0d4f6c9570046e

$ docker exec -it 6dd323 /bin/bash

[root@6dd3234f6c9d /]# systemctl status systemd-journald
● systemd-journald.service - Journal Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static; vendor preset: disabled)
   Active: active (running) since Sat 2016-07-30 11:35:10 UTC; 2min 22s ago
     Docs: man:systemd-journald.service(8)
           man:journald.conf(5)
 Main PID: 21 (systemd-journal)
   Status: "Processing requests..."
   CGroup: /docker/6dd3234f6c9d3475fd56c2996ab25269d646aca4bde219166b0d4f6c9570046e/system.slice/systemd-journald.service
           └─21 /usr/lib/systemd/systemd-journald

References:

  1. https://github.com/DatawiseIO/c7-systemd-dbus

go get from a private GitHub repo

The Problem

If Go code lives in a private GitHub repo, we cannot run go get github.com/account/repo to retrieve it. There are workarounds, such as using GitHub SSH keys, but they introduce other problems, e.g. go get -u no longer works.

The Solution

The easiest solution is to:

  1. generate a personal access token for your GitHub account that has the right to access the private repo, and
  2. tell git, which go get invokes, to access the repo via https://<token>:x-oauth-basic@github.com/account/repo instead of via https://github.com/account/repo.

Generate Personal Token

  1. Go to the GitHub website, log in, and open the GitHub settings page. On the "Personal access tokens" tab, click the "Generate personal access token" button.
  2. Select "repo" under "scopes".
  3. Copy the generated token and save it somewhere you won't lose it. An unsafe but workable place is your email inbox.

Tell Git About Your Token

Run the following command

git config --global url."https://c61axxxxxxxxxxxxxxx:x-oauth-basic@github.com/".insteadOf "https://github.com/"

where c61axxxxxxxxxxxxxxx is your GitHub personal access token.

Then you will find that something new has been added to your ~/.gitconfig file:

[url "https://c61axxxxxxxxxxxxxxx:x-oauth-basic@github.com/"]
    insteadOf = https://github.com/

Test It!

Now, run go get github.com/k8sp/auto-install/cloud-config-server.

Proposal: configs are stored statically on GitHub, so initializing a new cluster requires changing the config URL in the code

We can borrow the design of etcd's discovery service (a sketch follows the list):

  1. Create a new cluster config: use ccts_client to parse the cluster's config file into JSON, then send a POST request to /new_cluster carrying the JSON config in the body.
  2. CCTS receives the JSON, generates a uuid4 as its unique identifier, and stores the uuid together with the JSON config in the etcd cluster.
  3. Configure the cloud-config file on the cluster's PXE server to request this cluster's config from CCTS using the uuid.

This way CCTS can act as a global config server.
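A minimal sketch of the /new_cluster endpoint from steps 1 and 2; the in-memory map stands in for the etcd cluster, and all names are illustrative:

package main

import (
    "crypto/rand"
    "fmt"
    "io/ioutil"
    "net/http"
    "sync"
)

var (
    mu      sync.Mutex
    configs = map[string][]byte{} // stand-in for the etcd cluster
)

// newUUID returns a random RFC 4122 version-4 UUID string.
func newUUID() string {
    b := make([]byte, 16)
    if _, err := rand.Read(b); err != nil {
        panic(err)
    }
    b[6] = (b[6] & 0x0f) | 0x40 // version 4
    b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
    return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// newCluster stores the POSTed JSON config under a fresh uuid and returns
// the uuid, which the PXE server's cloud-config then uses to fetch it.
func newCluster(w http.ResponseWriter, r *http.Request) {
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    id := newUUID()
    mu.Lock()
    configs[id] = body // in CCTS this would be a write to etcd
    mu.Unlock()
    fmt.Fprintln(w, id)
}

func main() {
    http.HandleFunc("/new_cluster", newCluster)
    http.ListenAndServe(":8080", nil)
}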

The description in k8s-install-systemd-unit/README.md is unclear

Step 1: generate the keypair files for the k8s cluster
Edit the environment file so that KUBERNETES_MASTER_IPV4 is the IP of the cluster's master node

Several subdirectories under k8s-install-systemd-unit contain an environment file. Which one does this refer to?

Automatically generate the TLS certificates on the PXE server

Given our current requirement that each PXE server serves exactly one cluster, the certificates can be generated once when CCTS starts. I think the flow can be:

  1. On startup, generate the two root CA files, ca.pem and ca-key.pem.
  2. When a node boots, CCTS decides from the reported MAC address whether it is a master or a worker, and returns different certificate material:
    • for a kube-master node, write the three files ca.pem, apiserver.pem, and apiserver-key.pem into the returned cloud-config file;
    • for a kube-worker node, generate worker-key.pem and worker.pem from the locally stored ca.pem and ca-key.pem, and write them into the returned cloud-config file.

The pseudocode is roughly:

  1. Generate the root CA at startup:
func generateRootCA() bool {
    if fileExist("./tls/ca.pem") && fileExist("./tls/ca-key.pem") {
        fmt.Print("ca.pem already exists.")
        return false
    }
    // Generate ca.pem and ca-key.pem under ./tls.
    out, err := exec.Command("/bin/bash", "./script/generate_root_ca.sh").Output()
    if err != nil {
        fmt.Printf("Generating the root CA failed: %s\n", out)
        return false
    }
    return true
}

  2. If the requesting MAC address belongs to a kube-master node:
func processKubeMasterCert(ip string) bool {
    // Generate apiserver.pem and apiserver-key.pem under ./tls/master-${master_ip}/.
    out, err := exec.Command("/bin/bash", "./script/generate_master_ca.sh", ip).Output()
    if err != nil {
        fmt.Printf("Generating the master cert failed: %s\n", out)
        return false
    }
    return true
}
  3. If the requesting MAC address belongs to a kube-worker node:
func processKubeNodeCert(ip string) bool {
    // Generate work.pem and work-key.pem under ./tls/work-${work_ip}/.
    out, err := exec.Command("/bin/bash", "./script/generate_work_ca.sh", ip).Output()
    if err != nil {
        fmt.Printf("Generating the worker cert failed: %s\n", out)
        return false
    }
    return true
}

For the certificate-generation scripts, see https://github.com/k8sp/bigdata/tree/master/install/tls

How to auto-install a k8s cluster with its own DHCP service when the network already has one

As the title says, k8s clusters will sometimes be deployed into environments that already run a DHCP service, such as Unisound's and Baifendian's. We therefore need a way to keep the IPs auto-assigned during k8s installation from conflicting with the existing environment. Current options and ideas:

  1. Physically separate the k8s cluster from the existing cluster.
  2. Use DHCP, and always re-assign to a MAC address the IP it was first given.
  3. Use DHCP only to assign IPs while booting the installer; during installation, have the DHCP client or a custom IP-assignment program pick an IP and configure it as static, so subsequent reboots never contact the DHCP service again.

How do we use an internal ACI image registry?

When starting the Kubernetes processes, rkt has to launch ACI images such as hyperkube and flannel. By default these images come from the public quay.io, which causes two problems:

  1. Downloading over the public Internet is too slow.
  2. For security reasons, in production we don't want every machine to have Internet access anyway.

Is there a way to run a private ACI registry, or some other solution to this problem?

Fully automatic installation and configuration of a Kubernetes cluster

The flow and architecture I have in mind are described below. Please check whether our understandings agree and whether there are problems with this design.

Cluster and network

Real network environments vary enormously. To keep the goal clear and executable, I suggest we consider only the following network setup:

  1. A number of nodes are cascaded into a tree through switches.
  2. The top-level switch's upstream link connects to a router that can reach the Internet (across the firewall).
  3. One node in the cluster has Ubuntu or CentOS pre-installed, with a static IP hard-coded in /etc/network/interfaces. This machine will be our bootstrapper server.
  4. The bootstrapper server runs a DHCP service that assigns IP addresses to all machines in the cluster except the bootstrapper server itself.

[Discussion] How do our laptops connect to the cluster? If we want to ssh from a laptop into the bootstrapper server, does the laptop also have to be a member of the cluster?

IP addresses, hostnames, and TLS certs

  1. Besides the bootstrapper server, a few nodes in the cluster are etcd cluster members. These machines also obtain their IPs from DHCP, but the bootstrapper configures DHCP to always assign them the same fixed IP. That way the etcd proxy processes on the other machines know the etcd members' IP addresses and can join the etcd cluster.
  2. Apart from the etcd member nodes, all other nodes get fully dynamic IPs.
  3. Each node runs a service we configure that, when the node boots, writes the node's IP address and hostname into SkyDNS on the bootstrapper server, so that we can reach nodes by hostname (a sketch of this service follows the discussion below).
    1. The hostname is simply the MAC address of the machine's primary NIC.
    2. Registering with SkyDNS matters, because each node has its own TLS key and cert, and generating a cert requires the machine's domain name (hostname).

[Discussion] Since every machine registers its IP and hostname with SkyDNS, do the etcd cluster members still need fixed IPs at all? Could we just use their hostnames?
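A minimal sketch of the registration service from item 3 above, assuming SkyDNS's etcd-backed record layout (/skydns/<reversed domain path>/<host>) and etcd's v2 HTTP API; the endpoint, domain, and example values are all assumptions:

package main

import (
    "fmt"
    "net/http"
    "net/url"
    "strings"
)

// register writes a SkyDNS A record for hostname.domain into etcd via the
// v2 HTTP API, which SkyDNS then serves.
func register(etcd, domain, hostname, ip string) error {
    // Reverse the domain labels: cluster.local -> local/cluster.
    parts := strings.Split(domain, ".")
    for i, j := 0, len(parts)-1; i < j; i, j = i+1, j-1 {
        parts[i], parts[j] = parts[j], parts[i]
    }
    key := fmt.Sprintf("%s/v2/keys/skydns/%s/%s", etcd, strings.Join(parts, "/"), hostname)
    val := url.Values{"value": {fmt.Sprintf(`{"host":%q}`, ip)}}
    req, err := http.NewRequest("PUT", key, strings.NewReader(val.Encode()))
    if err != nil {
        return err
    }
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    return resp.Body.Close()
}

func main() {
    // The hostname is the primary NIC's MAC address, here with ":" replaced
    // by "-"; all of these values are illustrative.
    if err := register("http://bootstrapper:2379", "cluster.local", "00-25-90-c0-f7-80", "10.10.10.201"); err != nil {
        fmt.Println("register failed:", err)
    }
}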

Configuration flow

On the laptop:

  1. Obtain or generate ca.key and ca.crt.
  2. scp ca* bootstrapper-server:/
  3. scp $GOPATH/bin/bootstrapper bootstrapper-server:/
  4. ssh bootstrapper-server -c "sudo /bootstrapper -ca-key=/ca.key -ca-crt=/ca.crt"

On the bootstrapper server:

The bootstrapper program configures and starts the various services, including cloud-config-server:

cloud-config-server -ca-key=/ca.key -ca-crt=/ca.crt

This way cloud-config-server also holds the CA and can generate a cert for each node.

A node joins the cluster

Machines ship with a default BIOS boot order of local disk first, then network. This default needs no change, which gives the following flow:

  1. Plug in power and network cables, and press the power button.
  2. Since there is no OS on disk, the machine boots from the network and broadcasts a DHCP request.
  3. The DHCP service on the bootstrapper server responds with an IP address (fixed or dynamic) and the URL of the pxelinux boot image served over TFTP.
  4. The new node fetches the image, boots, retrieves the CoreOS boot image and the first-stage cloud-config file from TFTP, boots into CoreOS, and executes the first-stage cloud-config file.
  5. The first-stage cloud-config file is in fact a very short shell script: install.sh in https://github.com/k8sp/bootstrapper
  6. install.sh contacts cloud-config-server on the bootstrapper server, reporting the node's primary NIC MAC address, and receives the node's cloud-config file (the second-stage cloud-config), a YAML configuration file (see the sketch after this list).
    1. cloud-config-server uses the node's MAC address as its hostname to generate the node's cert. This cert and its key are embedded in the returned second-stage cloud-config file.
  7. install.sh then runs the coreos-install script to install CoreOS onto the local disk, and configures the second-stage cloud-config file to be executed when the local CoreOS boots.
  8. Finally, install.sh reboots the node.
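An illustrative sketch of the cloud-config-server behavior in step 6: take the node's MAC address from the URL, use it as the hostname, and render the second-stage cloud-config from a template. The URL layout and template text are assumptions, not the repo's actual code:

package main

import (
    "net/http"
    "strings"
    "text/template"
)

// The real server would also embed the node's TLS key and cert here.
var cc = template.Must(template.New("cc").Parse(`#cloud-config
hostname: {{.Hostname}}
`))

func main() {
    http.HandleFunc("/cloud-config/", func(w http.ResponseWriter, r *http.Request) {
        mac := strings.TrimPrefix(r.URL.Path, "/cloud-config/")
        // Use the MAC (colons replaced) as the node's hostname.
        cc.Execute(w, struct{ Hostname string }{strings.Replace(mac, ":", "-", -1)})
    })
    http.ListenAndServe(":8080", nil)
}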

The node reboots

  1. Since CoreOS is now on disk, this boot goes straight to disk rather than the network.
  2. After booting, the node executes the second-stage cloud-config file.
  3. The second-stage cloud-config file contains all the needed configuration:
    1. configure etcd
    2. configure Ceph
    3. configure Kubernetes (including flannel, etc.)
    4. configure a service that, on every boot, registers the DHCP-assigned IP address and the hostname (primary NIC MAC address) with SkyDNS.

The auto-install approach has changed, so the template content needs updating

Some parts of https://github.com/k8sp/auto-install/blob/master/cloud-config-server/template/cloud-config.template currently install software through shell scripts that wget zip packages from the public Internet. We would rather load such things from configuration files and run them as services, and serve the remaining commands and binary packages directly from the nginx server on the bootstrapper.
This change also needs some supporting features in the bootstrapper, plus some modifications and new fields in https://github.com/k8sp/auto-install/blob/master/cloud-config-server/template/unisound-ailab/build_config.yml and https://github.com/k8sp/auto-install/blob/master/cloud-config-server/template/template.go
#53

Keep only one OSD per disk

Currently, every time the system boots, the script that starts the Ceph OSD daemon checks whether there is an existing Ceph OSD container that once ran for a disk. If so, it executes docker start on it; otherwise it uses docker run to start a new container, which creates a new OSD.
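A sketch of this start-or-run logic; the container name, image name, and arguments are placeholders, not the script's actual values:

package main

import "os/exec"

// startOSD starts the existing OSD container for a disk if there is one,
// otherwise runs a new container, which creates a new OSD.
func startOSD(name, disk string) error {
    // docker start fails if no container with this name exists yet.
    if err := exec.Command("docker", "start", name).Run(); err == nil {
        return nil
    }
    return exec.Command("docker", "run", "-d", "--name", name,
        "--privileged", "-v", "/dev:/dev", "ceph/osd", disk).Run()
}

func main() {
    if err := startOSD("ceph-osd-sdb", "/dev/sdb"); err != nil {
        panic(err)
    }
}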

However, we observed that this method fails after the system is updated: ceph osd tree then shows two OSDs, one down and one up. My guess is that the Docker containers that ran on the old system are lost after the update.

We need to find a more robust way to deal with this.

The DHCP server picks the NIC to listen on by matching the subnets in dhcpd.conf

In #48 I introduced an approach: to verify that the bootstrapper correctly installs, configures, and starts the DHCP service on Ubuntu, I wrote in_docker_test.bash to run the Go unit tests inside a Docker container such as ubuntu:14.04.

But we hit a problem: service isc-dhcp-server start fails, and the DHCP service won't come up. @pineking ran into the same problem on a physical machine, and I did in a VM.

According to https://github.com/k8sp/bare-metal-coreos/tree/master/pxe-on-rasppi , the DHCP service is actually started via /usr/sbin/dhcpd. Running /usr/sbin/dhcpd by hand prints the error message to the screen. @pineking's result on a physical machine:

(screenshot: dhcpd complains that it cannot find a matching subnet)

So the DHCP service is complaining that no matching subnet was found. Based on this, @pineking changed the subnet configuration used in the unit test to match the physical machine's, and DHCP started fine.

I then found that https://docs.docker.com/v1.7/articles/networking/ explains Docker's default subnet settings, and changed the subnet configuration in the Go unit test accordingly, so that the DHCP service can start inside a Docker container.
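For reference, dhcpd refuses to start unless dhcpd.conf declares a subnet matching an address of a NIC it listens on. Inside a container on Docker's default bridge of that era, a matching declaration would look roughly like this (the 172.17.0.0/16 subnet and the range are assumptions about the container's address):

subnet 172.17.0.0 netmask 255.255.0.0 {
    range 172.17.0.100 172.17.0.200;
}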
