kubevirt / kubevirt-ansible
Set of Ansible roles & playbooks for KubeVirt deployment
License: Apache License 2.0
Ansible Playbook Bundles can be used to containerize playbooks for easier deployment.
This issue is about tracking this progress.
Dependencies:
Since kubevirt v0.1.0, the release includes a manifest YAML that needs fewer hacks.
The playbook should use it.
It will always work with the first host in the group.
It currently links to https://github.com/kubevirt/kubevirt-ansible/blob/master/docker-storage-setup-defaults
, but I think it should link to https://github.com/openshift/openshift-ansible-contrib/blob/master/roles/docker-storage-setup/defaults/main.yaml
I don't think we need to maintain a flow that deploys KubeVirt from source.
@gbenhaim you have an awesome demo; I think it's worth including in our README,
under the section https://github.com/kubevirt/kubevirt-ansible#deploy-a-new-kubernetes-or-openshift-cluster-and-kubevirt-with-lago
While trying to prepare a Kubernetes cluster and deploy KubeVirt on my Fedora 27 workstation, my system crashed because the playbook removes NetworkManager and all its dependencies. I know it's better to use a server to run the playbooks, but it would be better to be able to run them on a user's workstation, especially for those who are new to the project.
Documentation for setting up Docker storage should be added to https://github.com/kubevirt/kubevirt-ansible/blob/master/playbooks/README.md .
Or even a playbook for it.
In addition, please review the installation steps so they are copy-and-paste ready, assuming the user has CentOS 7 minimal installed and wants to deploy OpenShift 3.9 with the latest KubeVirt.
Right now the playbook runs inside of mock as part of the Lago environment; I would like it to also be able to run outside of mock, under a regular user.
The openshift/roles/kubevirt
role requires a kubeconfig file, and in addition credentials are required - I think these should be converged.
When trying to run the playbook with Ansible 2.4 and above (I tried 2.4.1 and 2.5.0), it fails.
The last version that worked for me was ansible-2.4.0; it also works with 2.3.1.
[lbednar@lbednar kubevirt-ansible]$ ansible-playbook -i inventory.my -e "openshift_ansible_dir=openshift-ansible" deploy-openshift.yml
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in 'deploy-openshift.yml': line 13, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
name: "{{ openshift_ansible_dir }}/roles/openshift_facts"
- name: Load openshift facts
^ here
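A possible fix, sketched under the assumption that the failing task loaded the role via a bare `name:` entry: use include_role, which Ansible 2.4+ treats as a proper action:

```yaml
# Hypothetical rewrite of the offending task: wrap the role path in
# include_role instead of a bare "name:" line, which newer Ansible
# rejects with "no action detected in task".
- name: Load openshift facts
  include_role:
    name: "{{ openshift_ansible_dir }}/roles/openshift_facts"
```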
The goal of a top level variables file is to provide the user with a single file that lists the top level variables of kubevirt-ansible. The variables are mostly playbook level variables that will have a significant impact on how playbooks will run. Some variables listed here can also be variables that users are likely to change or variables we want a user to be aware of.
### Cluster ###
cluster: openshift
namespace: kube-system
### KubeVirt ###
manifest_version: release
docker_tag: latest
### Storage ###
enable_cinder_ceph: false
enable_gluster: false
TASK [Configure oc admin user for testing] *******************************************************************************************************
Friday 02 March 2018 18:42:33 +0800 (0:00:00.793) 0:08:26.062 **********
fatal: [dhcp-14-107.nay.redhat.com]: FAILED! => {
"changed": true,
"cmd": "user_name=\"test_admin\"\n oc login -u system:admin\n oc get user \"$user_name\" || oc create user \"$user_name\"\n oc adm policy add-cluster-role-to-user cluster-admin \"$user_name\"",
"delta": "0:00:00.715857",
"end": "2018-03-02 18:42:33.996095",
"rc": 1,
"start": "2018-03-02 18:42:33.280238"
}
STDERR:
error: The server uses a certificate signed by unknown authority. You may need to use the --certificate-authority flag to provide the path to a certificate file for the certificate authority, or --insecure-skip-tls-verify to bypass the certificate check and use insecure connections.
Unable to connect to the server: x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority
MSG:
non-zero return code
Adding --config /etc/origin/master/admin.kubeconfig to every command, or copying /etc/origin/master/admin.kubeconfig to /root/.kube/config, fixes the problem:
oc login -u system:admin --config /etc/origin/master/admin.kubeconfig
oc get user "$user_name" --config /etc/origin/master/admin.kubeconfig || oc create user "$user_name" --config /etc/origin/master/admin.kubeconfig
oc adm policy add-cluster-role-to-user cluster-admin "$user_name" --config /etc/origin/master/admin.kubeconfig
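Equivalently, the playbook task could set KUBECONFIG once via `environment:` instead of repeating --config on every command; a sketch (the task body is reconstructed from the log above):

```yaml
# Sketch: point oc at the admin kubeconfig for the whole task.
- name: Configure oc admin user for testing
  shell: |
    user_name="test_admin"
    oc login -u system:admin
    oc get user "$user_name" || oc create user "$user_name"
    oc adm policy add-cluster-role-to-user cluster-admin "$user_name"
  environment:
    KUBECONFIG: /etc/origin/master/admin.kubeconfig
```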
After running the deployment playbook, I would expect to see KubeVirt's pods for virtualization populated.
on behalf of @cynepco3hahue
@lukas-bednar this part is currently problematic because we need to configure the security context for the libvirt and virt-handler daemon sets; the patch that should fix it is kubevirt/kubevirt#418.
In addition, OpenShift does not support CustomResourceDefinition; with this patch we will get rid of it:
kubevirt/kubevirt#355
copied from https://github.com/cynepco3hahue/kubevirt-ansible/issues/6
There are playbooks to deploy kubevirt to a k8s or openshift cluster, but we also need to cover the use-case where a cluster already exists.
There are KubeVirt features which don't work on 3.7, and we need to give people the option to deploy KubeVirt on top of 3.9.
Be aware that the byo/config.yml
playbook doesn't exist in 3.9: openshift/openshift-ansible#6503
We need to think how to test our Ansible playbooks.
There were suggestions to go with Vagrant, but feel free to propose other approaches.
We have:
I think that we should have only one of them, and I prefer the name vars,
since it's part of the Ansible jargon.
In addition, we have all.yaml
and global_vars.yml
- do you think that we should keep both of them?
I think that global_vars.yml
should contain miscellaneous vars, while all.yaml
should contain vars that are related to KubeVirt's deployment, and it should be renamed to a meaningful name.
kubevirt-ansible/deploy-openshift.yml
Line 27 in 1274f08
The playbook is now at playbooks/deploy_cluster.yml
When you have a clean system without any RPM keys installed, the rpm_key
Ansible module fails to add any keys.
It is not really a problem with any playbook here, but I want to track it since it affects us: ansible/ansible#31483
The STDCI functionality for this project is associated with the older project name; since the project was moved, the configuration needs to be updated for the CI to work.
Will accelerate the installation process
The Kubernetes repository doesn't provide the required docker
package for CentOS 7.4:
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
TASK [/home/lbednar/work/kubevirt-org/kubevirt-ansible/kubernetes/roles/prerequisites : install all kubernetes packages] **********************************************************************
failed: [vm-69-21.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}
failed: [vm-69-15.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}
failed: [vm-69-1.qa.lab.tlv.redhat.com] (item=[u'jq', u'sshpass', u'bind-utils', u'net-tools', u'docker', u'kubeadm', u'kubelet', u'kubectl', u'kubernetes-cni']) => {"changed": false, "failed": true, "item": ["jq", "sshpass", "bind-utils", "net-tools", "docker", "kubeadm", "kubelet", "kubectl", "kubernetes-cni"], "msg": "No package matching 'docker' found available, installed or updated", "rc": 126, "results": ["No package matching 'docker' found available, installed or updated"]}
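One way around it, assuming the docker package should come from the distro repos rather than the Kubernetes repo, is to split the install; a sketch (package names taken from the failing item list above):

```yaml
# Sketch: install docker from the CentOS repos first, then the
# Kubernetes packages from the kubernetes yum repo.
- name: Install docker from CentOS repos
  yum:
    name: docker
    state: present

- name: Install kubernetes packages
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - kubeadm
    - kubelet
    - kubectl
    - kubernetes-cni
```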
After the repo reorganization (issue #32), the current repo documentation can be organized as follows:
/kubevirt-ansible
README.md (1)
CONTRIBUTING.md (2)
/playbooks
README.md (3)
(1) main Readme.md may have the following content:
See [1] as an example
[1] https://github.com/RedHatQE/rhui3-automation/blob/master/README.md
(2) Contributing.md: as it is in PR #74, plus
(3) Readme.md in playbooks: most of PR #71, plus more details which will follow with the repo reorganization, writing more playbooks, and introducing tests.
As part of adding a CNS storage flavor, I would like to add some things to the inventory file to conditionally deploy CNS. If cluster == openshift and storage_role == storage_cns, I'd want to make the following changes to the inventory. What is the best way to achieve this?
[OSEv3:children]
...
glusterfs
[OSEv3:vars]
...
# Namespace for CNS pods (will be created)
openshift_storage_glusterfs_namespace=app-storage
# Automatically create a StorageClass referencing this CNS cluster
openshift_storage_glusterfs_storageclass=true
# glusterblock functionality is not supported outside of Logging/Metrics
openshift_storage_glusterfs_block_deploy=false
# Disable any other default StorageClass
openshift_storageclass_default=false
[glusterfs]
<master> glusterfs_devices='[ "/dev/vdd" ]'
<node0> glusterfs_devices='[ "/dev/vdd" ]'
<node1> glusterfs_devices='[ "/dev/vdd" ]'
...
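One possible approach - an assumption on my part, not an established pattern in this repo - is to keep the CNS-specific sections in a Jinja2 template and render them only when both conditions hold:

```yaml
# Hypothetical task: append the [glusterfs] sections to a generated
# inventory only for the openshift + storage_cns combination.
# "inventory-cns.j2" and "inventory_out" are assumed names.
- name: Add CNS sections to the generated inventory
  template:
    src: inventory-cns.j2
    dest: "{{ inventory_out }}"
  when: cluster == "openshift" and storage_role == "storage_cns"
```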
In install-kubevirt-release.yml
, there is no need to install andrewrothstein.go-dev
from Ansible Galaxy.
Subject says it all. Is this a redundancy that needs to be cleared out? Where should new roles be added?
Installing OpenShift 3.7 cluster fails with "Couldn't find test_admin user".
>> ansible-playbook -i inventory \
-e "openshift_ansible_dir=openshift-ansible/ \
openshift_playbook_path=playbooks/byo/config.yml \
openshift_ver=3.7" playbooks/cluster/openshift/config.yml
<...>
TASK [Configure oc admin user for testing] *************************************************************
Tuesday 27 February 2018 20:55:36 +0100 (0:00:10.566) 1:44:28.987 ******
changed: [10.8.241.23] => {
"changed": true,
"cmd": "user_name=\"test_admin\"\n oc login -u system:admin\n oc get user \"$user_name\" || oc create user \"$user_name\"\n oc adm policy add-cluster-role-to-user cluster-admin \"$user_name\"",
"delta": "0:00:03.117986",
"end": "2018-02-27 19:55:54.181482",
"rc": 0,
"start": "2018-02-27 19:55:51.063496"
}
STDOUT:
Logged into "https://172.16.216.21:8443" as "system:admin" using existing credentials.
You have access to the following projects and can switch between them with 'oc project <projectname>':
* default
kube-public
kube-service-catalog
kube-system
logging
openshift
openshift-ansible-service-broker
openshift-infra
openshift-node
Using project "default".
user "test_admin" created
cluster role "cluster-admin" added: "test_admin"
STDERR:
Error from server (NotFound): users "test_admin" not found
PLAY RECAP *********************************************************************************************
10.8.241.23 : ok=633 changed=125 unreachable=0 failed=0
localhost : ok=15 changed=0 unreachable=0 failed=0
INSTALLER STATUS ***************************************************************************************
Initialization : Complete
Health Check : Complete
etcd Install : Complete
NFS Install : Complete
Master Install : Complete
Master Additional Install : Complete
Node Install : Complete
Hosted Install : Complete
Service Catalog Install : Complete
Tuesday 27 February 2018 20:55:54 +0100 (0:00:18.364) 1:44:47.351 ******
===============================================================================
openshift_master : Update journald setup ------------------------------------------------------ 105.99s
openshift_master_certificates : Check status of master certificates ---------------------------- 80.80s
openshift_storage_nfs : remove exports from /etc/exports --------------------------------------- 70.62s
openshift_storage_nfs : Ensure export directories exist ---------------------------------------- 70.59s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 69.45s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 66.85s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 63.54s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 62.90s
openshift_hosted_facts : Set hosted facts ------------------------------------------------------ 62.62s
tuned : Ensure files are populated from templates ---------------------------------------------- 58.77s
openshift_node_certificates : Check status of node certificates -------------------------------- 52.92s
Ensure openshift-ansible installer package deps are installed ---------------------------------- 47.25s
openshift_hosted : Create default projects ----------------------------------------------------- 41.72s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ---------------- 36.10s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ---------------- 36.06s
openshift_master : Add iptables allow rules ---------------------------------------------------- 35.40s
openshift_node : Add iptables allow rules ------------------------------------------------------ 35.21s
openshift_master : Create the ha systemd unit files -------------------------------------------- 29.49s
Run health checks (install) - EL --------------------------------------------------------------- 27.59s
etcd : file ------------------------------------------------------------------------------------ 27.15s
In order to support a CNS deployment, the hosts need an extra, unused disk that can be given to Gluster. Can we update the automation so that VMs are created with this added disk? The exact size is negotiable, but I'd rather have too much (since it is thin provisioned) than see random failures when we run out of space.
Following Readme instructions:
>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>
TASK [kubevirt : Check for kubevirt.yaml template in {{ kubevirt_template_dir }}] *****************************************************************************************************************************
Wednesday 28 February 2018 14:59:04 +0100 (0:00:00.885) 0:00:04.578 ****
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'kubevirt_template_dir' is undefined
The error appears to have been in '/home/igulina/git_projects/kubevirt-ansible/roles/kubevirt/tasks/provision.yaml': line 28, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Check for kubevirt.yaml template in {{ kubevirt_template_dir }}
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
exception type: <class 'ansible.errors.AnsibleUndefinedVariable'>
exception: 'kubevirt_template_dir' is undefined
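A minimal workaround sketch, assuming the role only needs a directory containing kubevirt.yaml: give the variable a role-level default so the task no longer hits an undefined variable:

```yaml
# Hypothetical roles/kubevirt/defaults/main.yml entry; the actual
# template location in the repo may differ.
kubevirt_template_dir: "{{ role_path }}/templates"
```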
Currently the README states that the supported platforms are:
CentOS Linux release 7.3.1611 (Core), OpenShift 3.7, and Ansible 2.3.1
We should probably create a new testing-matrix table that includes the currently supported versions
and the ones which are in progress, e.g.:
Supported | In Development
---|---
CentOS 7.4 | N/A
OpenShift 3.7 | OpenShift 3.9
Ansible 2.3.1 | Ansible 2.4.2
OpenShift needs Python 3 support, so ansible_python_interpreter=/usr/bin/python3
needs to be added to the host variables. However, running the playbook hits the error below on Fedora 27:
TASK [Install openshift_facts requirements] ******************************************************************************************************
Saturday 03 March 2018 13:41:10 +0800 (0:00:12.931) 0:00:20.304 ********
failed: [dhcp-xxxx] (item=['python-yaml', 'python-ipaddress', 'wget', 'git', 'net-tools', 'bind-utils', 'iptables-services', 'bridge-utils', 'bash-completion', 'kexec-tools', 'sos', 'psacct', 'docker']) => {
"changed": false,
"item": [
"python-yaml",
"python-ipaddress",
"wget",
"git",
"net-tools",
"bind-utils",
"iptables-services",
"bridge-utils",
"bash-completion",
"kexec-tools",
"sos",
"psacct",
"docker"
]
}
MSG:
python2 yum module is needed for this module
Using the package
module instead of yum
fixes the problem:
# git diff
diff --git a/playbooks/cluster/openshift/config.yml b/playbooks/cluster/openshift/config.yml
index 49c04f0..5fd78f9 100644
--- a/playbooks/cluster/openshift/config.yml
+++ b/playbooks/cluster/openshift/config.yml
@@ -33,9 +33,10 @@
yum:
name: "{{ epel_release_rpm_url }}"
state: present
+ when: ansible_distribution in ["CentOS","RedHat"]
- name: Install openshift_facts requirements
- yum:
+ package:
name: "{{ item }}"
with_items:
- python-yaml
The README should be extended to explain, step by step, how to deploy KubeVirt to OpenShift.
Please add some human-readable descriptions to tell the user what this repo is about.
Merge the following two repositories:
https://github.com/petrkotas/openshift-env
https://github.com/cynepco3hahue/kubevirt-ansible
As a result, this repository will contain Ansible roles & playbooks to deploy KubeVirt on
We need to have some discussion about how to proceed with the merging process.
# ansible-playbook -i inventory1 playbooks/cluster/kubernetes/config.yml
ERROR! the role '/root/git/kubevirt-ansible/playbooks/cluster/kubernetes/roles/node' was not found in /root/git/kubevirt-ansible/playbooks/cluster/kubernetes/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/root/git/kubevirt-ansible/playbooks/cluster/kubernetes
#105 should fix the issue.
CI should ignore commits containing [ci skip] or [skip ci] in their title or description; these markers can be used for documentation and minor changes which don't require a CI check.
In order to deploy KubeVirt pods on the master node, we need to set nodeSelector in the manifests.
Right now the playbook takes the FQDN of the master node and uses it as the selector.
On the other hand, openshift-ansible allows you to change the name of a node in the inventory file, and if that happens the assumption above will be false.
So we need to make sure that we have the proper label assigned to the node, or take the openshift_host
variable into consideration.
In addition, I would change this line in the inventory file:
https://github.com/kubevirt-incubator/kubevirt-ansible/blob/34e790e827f130909b652c9ef76130f08e237113/inventory#L22
since the master node is required to be schedulable.
Without that, the KubeVirt pods will not be able to run.
copied from https://github.com/cynepco3hahue/kubevirt-ansible/issues/8
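The idea can be sketched as follows (the label key is an assumption): label the master explicitly and have the manifests select on that label rather than on the FQDN:

```yaml
# Hypothetical manifest fragment: schedule onto nodes carrying a
# master label instead of matching a specific hostname.
spec:
  nodeSelector:
    node-role.kubernetes.io/master: "true"
```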
Since the content of the repository got reorganized, our installation demo is outdated.
#97 (comment)
We need to create a new one and publish it as part of the README.
In several locations some plays are run against hosts: all. This presents a problem when more than just the k8s/openshift cluster is managed in the inventory, e.g. when there are ceph or gluster nodes in place or when the setup is being run on a local hypervisor and the hypervisor is in the inventory as "hypervisor" and should not be altered like the k8s cluster nodes.
kubevirt-ansible/deploy-openshift.yml
Line 2 in 2ef2c8b
kubevirt-ansible/deploy-with-lago.yml
Line 39 in b9f4b40
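A possible direction, sketched with assumed names: target a dedicated inventory group (with a configurable fallback) so hypervisor, ceph, or gluster hosts sharing the inventory are left alone:

```yaml
# Hypothetical play header; "cluster_hosts" is an assumed variable,
# "masters:nodes" an assumed default group pattern, and the role name
# is illustrative only.
- hosts: "{{ cluster_hosts | default('masters:nodes') }}"
  roles:
    - kubevirt
```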
>> ansible-playbook -i inventory deploy-kubernetes.yml
<...>
TASK [/home/igulina/git_projects/kubevirt-ansible/kubernetes/roles/node : deploy host as kubernetes node] *********************************************************************************************************
Friday 23 February 2018 12:32:44 +0100 (0:00:08.917) 0:07:25.616 *******
fatal: [XYZ]: FAILED! => {
"changed": true,
"cmd": [
"kubeadm",
"join",
"--token",
"abcdef.1234567890123456",
"XYZ:6443",
"--skip-preflight-checks"
],
"delta": "0:00:01.197376",
"end": "2018-02-23 11:32:52.838288",
"rc": 3,
"start": "2018-02-23 11:32:51.640912"
}
STDOUT:
[preflight] Running pre-flight checks.
STDERR:
Flag --skip-preflight-checks has been deprecated, it is now equivalent to --ignore-preflight-errors=all
[WARNING Hostname]: hostname "igulina-master" could not be reached
[WARNING Hostname]: hostname "igulina-master" lookup igulina-master on XYZ: no such host
[WARNING Port-10250]: Port 10250 is in use
[WARNING DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue
MSG:
non-zero return code
to retry, use: --limit @/home/igulina/git_projects/kubevirt-ansible/deploy-kubernetes.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
XYZ : ok=23 changed=15 unreachable=0 failed=1
>> ansible --version
ansible 2.4.3.0
config file = /home/igulina/git_projects/kubevirt-ansible/ansible.cfg
configured module search path = ['/home/igulina/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.3 (default, Oct 9 2017, 12:07:10) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]
Following the README to deploy OpenShift, I got the error below. It is looking for openshift-ansible under the directory kubevirt-ansible/playbooks/cluster/openshift
, however openshift-ansible is cloned to kubevirt-ansible
.
# ansible-playbook -i inventory \
> -e "openshift_ansible_dir=openshift-ansible/ \
> openshift_playbook_path=playbooks/byo/config.yml \
> openshift_ver=3.7" playbooks/cluster/openshift/config.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! Unable to retrieve file contents
Could not find or access '/root/kubevirt-ansible/playbooks/cluster/openshift/openshift-ansible/playbooks/prerequisites.yml'
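A workaround sketch, assuming the clone really lives at the repo root: pass an absolute path so openshift_ansible_dir is not resolved relative to the playbook's own directory:

```shell
# Hypothetical invocation; the absolute path avoids the relative lookup.
ansible-playbook -i inventory \
  -e "openshift_ansible_dir=$(pwd)/openshift-ansible \
      openshift_playbook_path=playbooks/byo/config.yml \
      openshift_ver=3.7" playbooks/cluster/openshift/config.yml
```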
playbooks/kubevirt.yml returns login failure.
>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>
TASK [kubevirt : Create kube-system namespace] *********************************************************
Wednesday 28 February 2018 10:09:05 +0100 (0:00:00.345) 0:00:00.453 ****
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
Found: ns.stdout != "{{ namespace }}"
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "kubectl create namespace kube-system",
"delta": "0:00:00.097699",
"end": "2018-02-28 10:09:05.327740",
"rc": 1,
"start": "2018-02-28 10:09:05.230041"
}
STDERR:
error: You must be logged in to the server (Unauthorized)
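As an aside, the warning about Jinja2 delimiters printed above can be fixed by writing the condition without {{ }}; a sketch of the relevant task (task name and register variable are assumptions):

```yaml
# "when:" is already a templated context, so the variable is referenced bare.
- name: Create kube-system namespace
  shell: "kubectl create namespace {{ namespace }}"
  when: ns.stdout != namespace
```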
In the CI, the entire automation runs inside a mock environment http://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_mock_runner/index.html
We should document how to use it in order to run the automation locally on a laptop.
After a successful installation of an OpenShift cluster (3.7), following the README, I ran into:
>> ansible-playbook -i localhost playbooks/kubevirt.yml
<...>
TASK [kubevirt : Create kube-system namespace] *********************************************************
Wednesday 28 February 2018 09:57:19 +0100 (0:00:00.254) 0:00:00.361 ****
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
Found: ns.stdout != "{{ namespace }}"
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "kubectl create namespace kube-system",
"delta": "0:00:00.001448",
"end": "2018-02-28 09:57:19.575954",
"rc": 127,
"start": "2018-02-28 09:57:19.574506"
}
STDERR:
/bin/sh: kubectl: command not found
MSG:
non-zero return code
Should I have kubectl
installed? If so, it should be documented, but the best option is to include it in the playbook. After installing origin-clients
with dnf
, the playbooks proceed normally.
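Making the playbook self-sufficient could look like this sketch (the package name is an assumption based on what worked in the report above):

```yaml
# Hypothetical pre-task: ensure the client binaries exist before any
# kubectl command runs; origin-clients provided kubectl/oc here.
- name: Ensure kubectl is available
  package:
    name: origin-clients
    state: present
```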
Hello,
I would like to help reorganize the playbooks and roles in this project, but before I commit work it would be great to come to a consensus on what it should look like. Here are my suggestions:
ansible.cfg to support path changes
Based on #27, the following gist can be used in order to make it work on OpenShift:
https://gist.github.com/karmab/d9e8346b9005891dc8e83cc54eed32f2
The playbooks should be using the commands - which are really clear!