
tarantool-operator's Introduction

Tarantool Kubernetes Operator CE


This is a Kubernetes operator that deploys Tarantool Cartridge-based clusters on Kubernetes.

If you are a Tarantool Enterprise customer, or need Enterprise features such as rolling updates, scaling down, and many others, you can use the Tarantool Operator Enterprise.

IMPORTANT NOTICE

Starting from v1.0.0-rc1, Tarantool Kubernetes Operator CE has been completely rewritten.

The API version was bumped and backward compatibility was dropped.

There is only one approved method to migrate from version 0.0.0 to versions >= 1.0.0-rc1; please follow the migration guide.

Table of contents

Getting started

Documentation

The documentation is a work in progress.

At the moment you can use the official Helm chart and find useful information in the comments of the default values.yaml file.

Contribute

Please follow the development guide.

tarantool-operator's People

Contributors

artembo, artur-barsegyan, d-enk, dependabot[bot], eugenepaniot, kluevandrew, lenkis, nickvolynkin, onvember, opomuc, timjohncurtis, vanyarock01, vasiliy-t, zwirec


tarantool-operator's Issues

Use namespace from release with helm install

Currently, to install the tarantool operator, you have to specify the namespace twice:

  1. For the chart template
  2. For the application release
$ helm install --set namespace=tarantool tarantool-operator tarantool/tarantool-operator --namespace tarantool --create-namespace --version 0.0.6

Make the chart template use the namespace from the release.
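
A possible fix, sketched below, is to rely on Helm's built-in release namespace in the chart templates instead of a separate namespace value (only the metadata block is shown; the rest of the Deployment is omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tarantool-operator
  namespace: {{ .Release.Namespace }}

Alternatively, the templates can omit the namespace field entirely and let Helm place all namespaced resources into the release namespace.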

Deprecation warnings when installing tarantool-operator with helm

$ helm install tarantool-operator tarantool/tarantool-operator --namespace tarantool --create-namespace --version 0.0.8
W0301 17:58:24.936399  737284 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
...
NAME: tarantool-operator
LAST DEPLOYED: Mon Mar  1 17:58:27 2021
NAMESPACE: tarantool
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
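
A likely fix is to serve the CRDs from apiextensions.k8s.io/v1 instead of the deprecated v1beta1. A minimal sketch for the Cluster CRD, with the schema collapsed to a permissive placeholder (the served version name is an assumption):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusters.tarantool.io
spec:
  group: tarantool.io
  names:
    kind: Cluster
    plural: clusters
    singular: cluster
  scope: Namespaced
  versions:
    - name: v1alpha1                 # assumption: the version currently served by the operator
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true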

Complete documentation

We have a README and the Tarantool Cartridge in K8s guide, but there is no complete documentation.
New content cannot always be put into a guide or a README.

I propose documentation with the following structure:

  • Quick start (deploy operator and cartridge with helm)
  • Design/Architecture (TODO)
  • Tools compatibility (kubectl, minikube, kind, helm)
  • Tarantool Operator
  • Tarantool Cartridge in K8s
    • Creating an application
    • Install
    • Access Cluster (TODO)
    • Cluster management
      • Adding a new replica
      • Adding a shard (replicaset)
      • Updating application version
      • Updating replicaset roles
      • Failover settings
      • Deleting a cluster
    • Customization
      • Sidecar containers
    • Troubleshooting
      • Insufficient CPU
      • Insufficient disk space
      • Recreating replicas (TODO)
    • Administration (TODO)
      • ???
    • Chart parameters table (example: https://github.com/helm/charts/tree/master/stable/postgresql#parameters) (TODO)
  • Guides
    • Contribution Guide (TODO)
    • Installation in an internal network
      • Delivery of tools
      • Installing the Tarantool Kubernetes operator
      • Installing the Tarantool Cartridge app
    • Running multiple Tarantool Cartridge clusters in different namespaces
    • Prometheus + Grafana Monitoring (TODO)
    • CRUD ready-to-use application in Kubernetes

Make RoleConfig mergeable in Cartridge helm chart

Currently, RoleConfig is an array of role objects. This does not allow merging custom-defined values.
For example:

RoleConfig:
- RoleName: firstRole
  params: values
  ...
- RoleName: secondRole
  params: values
  ...

I suggest using RoleName as a key to the role object:

RoleConfig:
  firstRole:
    params: values
    ...
  secondRole:
    params: values
  ...
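
With a map keyed by role name, a user could override a single role in their own values file and Helm would merge it with the chart defaults instead of replacing the whole list. A hypothetical override, assuming the proposed schema:

# values-override.yaml (hypothetical, assumes the proposed map-based RoleConfig)
RoleConfig:
  firstRole:
    params: customValues    # only this key changes; secondRole keeps its defaults

Passed with -f values-override.yaml, this deep-merges over the defaults, which is not possible with an array.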

Add a replicas restart guide to the troubleshooting section

There was a need to recreate a replica whose replication was broken.
Simply deleting the pod was not enough, because the pod belongs to a StatefulSet and the Tarantool .xlog and .snap files are not removed with it.
Instructions are needed on how to recreate replicas after such an accident.
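
A rough sketch of the kind of procedure such a guide could describe (the pod and PVC names below are placeholders, and expelling/rejoining the instance in Cartridge is a separate step not shown):

$ kubectl -n tarantool delete pvc <claim-template>-storage-0-1 --wait=false   # the PVC stays Terminating while the pod exists
$ kubectl -n tarantool delete pod storage-0-1                                 # the StatefulSet recreates the pod with a fresh volume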

cartridge chart: unable to install without service values

Problem

The cartridge chart's values.yaml includes a service key. Its value is used by the test container to check that Tarantool instances are accessible.
The chart fails if the user does not provide service.

Solution

Set the service port to 8081 by default (a sane default for every Cartridge cluster) and keep it configurable via service.
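
A minimal sketch of such a default in values.yaml (the exact key layout is an assumption):

# values.yaml (hypothetical layout)
service:
  port: 8081    # Cartridge HTTP port used by the test container; override if the app serves on another port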

Release automatically

Releases are currently made by hand, which is error-prone.

The release is done like this:

  • Build the tarantool-operator docker image (the command is available in the Makefile) and push it to Docker Hub.
  • Make a commit to master with the updated chart versions and changelog (remember the commit hash).
  • Pack charts to .cr-release-packages:
    $ mkdir -p .cr-release-packages
    $ helm package ci/helm-chart/ && mv tarantool-operator-X.Y.Z.tgz .cr-release-packages/
    $ helm package examples/kv/ && mv cartridge-X.Y.Z.tgz .cr-release-packages/
  • Use chart-releaser to make a release from that commit.
    $ cr upload -c <TARGET_COMMIT_HASH> -r tarantool-operator -o tarantool -t <ACCESS_TOKEN>
  • Check out the gh-pages branch
  • Update the helm repo index (check that the diff is correct):
    $ mkdir -p .cr-index/ && cr index -c https://tarantool.github.io/tarantool-operator -r tarantool-operator -o tarantool
    $ diff .cr-index/index.yaml index.yaml
    $ mv .cr-index/index.yaml .
    $ git add index.yaml && git commit ...

This needs to be automated.
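
One possible shape for the automation is a GitHub Actions workflow built around helm/chart-releaser-action; a sketch, assuming the chart directories stay where they are today:

# .github/workflows/release.yaml (sketch)
name: Release Charts
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0                                  # chart-releaser needs the full history and tags
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "${GITHUB_ACTOR}@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: ci                                  # assumption: the action expects a directory that contains chart folders
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

The examples/kv chart would either need to move next to ci/helm-chart or get its own workflow, since the action releases charts from a single directory.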

Refactor: rename Role.Spec.Replicas to some other

Problem statement

Replicas usually denotes a number of identical entities, but Role.Spec.Replicas specifies a number of different entities: the number of shards within a Role. This results in confusing expectations when scaling a role.

Possible solution

Rename .spec.replicas to size

Names must be predictable: replicas means a number of identical things, whereas number, size, or quantity can mean everything and nothing.
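
For illustration, the rename on a Role manifest (all field names except replicas, and the apiVersion, are assumptions used only for context):

# current: "replicas" actually means "number of replica sets (shards)"
apiVersion: tarantool.io/v1alpha1    # illustrative
kind: Role
metadata:
  name: storage
spec:
  replicas: 3

# proposed: a name that reads as "number of shards in this role"
spec:
  size: 3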

Failed to create pod: no IP addresses available in range set

I am trying to install a cartridge app:

Error:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "xxx" network for pod "api-0-0":
networkPlugin cni failed to set up pod "xxx" network: failed to allocate for range 0: no IP addresses available in range set: ...

k8s versions:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:10:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Join error

Hello.

Sometimes in the operator log I see errors like this:

{
   "level":"error",
   "ts":1612859086.4840052,
   "logger":"controller_cluster",
   "msg":"Join error",
   "Request.Namespace":"dmp-base-stage",
   "Request.Name":"dmp-cluster",
   "error":"Post http://x.x.x.x:8081/admin/api: dial tcp x.x.x.x:8081: connect: connection refused",
   "stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/tarantool/tarantool-operator/pkg/controller/cluster.(*ReconcileCluster).Reconcile\n\t/app/pkg/controller/cluster/cluster_controller.go:328\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:215\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88"
}

My RoleConfig:

RoleConfig:
  - RoleName: storage
    ReplicaCount: 2
    ReplicaSetCount: 1
    DiskSize: 5Gi
    CPUallocation: 0.25
    MemtxMemoryMB: 1024
    RolesToAssign:
      - vshard-router
      - vshard-storage
      - crud-router
      - crud-storage
      - metrics
      - migrator

Operator version: 0.0.8
Tarantool version in docker image: 2.6.2

Procedure 'box.execute' is not defined (0x21)

What is the possible cause of this error? Locally with Docker everything works fine, but when the app is deployed on Kubernetes with this tarantool operator, the call from the go-tarantool client fails with this error.

After restart minikube, tarantool cluster cannot be configured

Dependencies:

  • minikube version: v1.12.3
  • tarantool/tarantool-operator chart version: 0.0.5
  • tarantool/cartridge chart version: 0.0.5

Steps to reproduce:

  1. Run minikube: minikube start

  2. Install tarantool-operator and cartridge app.

  3. Enter any tarantool app container: kubectl -n tarantool exec -it routers-0-0 /bin/bash and check the volume:

    bash-4.4$ ls /var/lib/tarantool/test-app.routers-0-0/
    00000000000000000000.snap  00000000000000000000.xlog  config  config.backup
    bash-4.4$
    
  4. Restart minikube

    minikube stop
    minikube start
    
  5. Wait until the cluster is Ready. To check, execute kubectl -n tarantool describe clusters.tarantool.io app-name and find the line with State.

    ...
    Spec:
      Selector:
        Match Labels:
          tarantool.io/cluster-id:  test-app
    Status:
      State:  Ready
    Events:   <none>
    
  6. Open the cartridge web UI (screenshot omitted).

  7. Enter any tarantool app container: kubectl -n tarantool exec -it routers-0-0 /bin/bash and check the data directory:

    bash-4.4$ ls /var/lib/tarantool/test-app.routers-0-0/
    bash-4.4$
    

StatefulSet fields not updated

Changes to ReplicasetTemplate only update the image and the number of replicas in the StatefulSet; there is no way to change any other parameters via ReplicasetTemplate.

Update the k8s-guide intro

Update the introductory guide based on comments from #57 and #59:

  • minimal required version for kind is v0.6.0
  • use outputs from installing a clean build of minikube
  • freeze kubernetes and kubectl versions at 1.16.X
  • add a section about CrashLoopBackOff error in the Troubleshooting chapter

Ready to use applications

Tarantool CRUD and queue are popular Tarantool use cases. We could prepackage them into their own helm charts, including dependencies (the operator).

Configurable Role variables

Problem statement

Tarantool Cartridge roles can have configuration variables, e.g. the built-in vshard role has bucket_count.

Possible solution

Extend the Role resource definition with a .spec.config array of {Name, Value} objects, which will be passed to the cluster's upload_config endpoint.
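
A sketch of the proposed field on a Role manifest (everything except .spec.config is an assumption shown only for context):

apiVersion: tarantool.io/v1alpha1    # illustrative
kind: Role
metadata:
  name: storage
spec:
  config:                            # proposed: array of {Name, Value} passed to upload_config
    - name: bucket_count
      value: "30000"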

Getting started guide

We already have a nice getting started guide in Russian, which includes a complete developer walkthrough and several operations scenarios. We should:

  • translate it into English

  • publish it on tarantool.io

Panic on operator binary when adding custom annotations to Service

When attempting to add custom annotations to the Service in the example (to use Consul automatic service discovery), the operator binary panics.

Modified examples/kv/deployment.yaml to include the consul annotations as below:

apiVersion: v1
kind: Service
metadata:
  name: router
  labels:
    tarantool.io/role: "router"
  annotations:
    consul.hashicorp.com/service-tags: "kvstore,router"
spec:
  ports:
    - port: 8081
      name: web
      protocol: TCP
  selector:
    tarantool.io/role: "router"

the panic trace:

{"level":"info","ts":1576754904.0431921,"logger":"controller_role","msg":"Got selector","Request.Namespace":"default","Request.Name":"storage","selector":"tarantool.io/replicaset-template=storage-template"}
{"level":"error","ts":1576754904.0437276,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"cluster-controller","request":"default/examples-kv-cluster","error":"Operation cannot be fulfilled on endpoints \"examples-kv-cluster\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"info","ts":1576754904.6028752,"logger":"controller_role","msg":"Reconciling Role","Request.Namespace":"default","Request.Name":"routers"}
{"level":"info","ts":1576754904.6053317,"logger":"controller_role","msg":"Got selector","Request.Namespace":"default","Request.Name":"routers","selector":"tarantool.io/replicaset-template=router-template"}
{"level":"info","ts":1576754905.0440688,"logger":"controller_cluster","msg":"Reconciling Cluster","Request.Namespace":"default","Request.Name":"examples-kv-cluster"}
{"level":"info","ts":1576754905.0441573,"logger":"controller_cluster","msg":"Already owned","Request.Namespace":"default","Request.Name":"examples-kv-cluster","Role.Name":"storage"}
{"level":"info","ts":1576754905.0441651,"logger":"controller_cluster","msg":"Already owned","Request.Namespace":"default","Request.Name":"examples-kv-cluster","Role.Name":"routers"}
{"level":"info","ts":1576754905.0441687,"logger":"controller_cluster","msg":"Roles reconciled, moving to pod reconcile","Request.Namespace":"default","Request.Name":"examples-kv-cluster"}
E1219 11:28:25.044325       1 runtime.go:69] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/Cellar/go/1.12.5/libexec/src/runtime/panic.go:522
/usr/local/Cellar/go/1.12.5/libexec/src/runtime/map_faststr.go:204
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/pkg/controller/cluster/cluster_controller.go:232
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/Cellar/go/1.12.5/libexec/src/runtime/asm_amd64.s:1337
panic: assignment to entry in nil map [recovered]
	panic: assignment to entry in nil map

goroutine 274 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x13e8860, 0x177e000)
	/usr/local/Cellar/go/1.12.5/libexec/src/runtime/panic.go:522 +0x1b5
github.com/tarantool/tarantool-operator/pkg/controller/cluster.(*ReconcileCluster).Reconcile(0xc000594920, 0xc000c2d8a9, 0x7, 0xc000f189a0, 0x13, 0x23f3d00, 0xc00035fed0, 0x43dab3, 0xc00035ff30)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/pkg/controller/cluster/cluster_controller.go:232 +0x25e4
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000427900, 0x0)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215 +0x1cc
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1()
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158 +0x36
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000312290)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000312290, 0x3b9aca00, 0x0, 0xc000511a01, 0xc000085da0)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc000312290, 0x3b9aca00, 0xc000085da0)
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/Users/v.tyubek/Development/go/src/github.com/tarantool/tarantool-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x311

Endpoint leader annotation update problem

After recreating pods, the operator left the endpoint annotation tarantool.io/leader unchanged, pointing at the old IP address. As a result, the operator lost the ability to manage the cartridge configuration, because that node no longer exists.
The operator logs contain errors about connection timeouts while trying to connect to the old IP.

Flaky test

Some tests break intermittently. This happens because the test assertion checks for the appearance of annotations too early, without waiting for them to appear.

[Fail] role_controller unit testing role_controller should react to the sts-template change and update the sts update rolesToAssign annotation in sts-template [It] set rolesToAssign by creating sts

RAM reserve for the lua process

The problem is that the Lua process can take extra memory (up to 2 GB).
If we have a machine with 2 GB of RAM and we run a Tarantool instance with a 2 GB memtx requirement, a problem may arise when the Lua process takes up part of that memory. Namely, we cannot store 2 GB of data in Tarantool, since part of the memory will be occupied by the Lua process.
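
For illustration, a memory limit that leaves headroom for the Lua runtime on top of memtx (the numbers and the resources key are assumptions; only MemtxMemoryMB appears in the RoleConfig examples above):

RoleConfig:
  - RoleName: storage
    MemtxMemoryMB: 1024
    # hypothetical: reserve extra memory so Lua/runtime allocations do not compete with memtx data
    resources:
      limits:
        memory: 1536Mi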

Configurable cluster_name

Problem statement

We use services and DNS to communicate within the cluster. The instance DNS name contains the Kubernetes cluster domain (cluster.local), which is hardcoded in the operator. Users with a different cluster domain will not be able to bootstrap a cluster.

Possible solutions

  • remove the cluster domain from the DNS name

  • support a configurable cluster_name (whatever we call it) in the operator and charts (see the sketch below)
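
One possible shape for the second option is a chart value forwarded to the operator (the key name clusterDomain is an assumption):

# values.yaml (hypothetical)
clusterDomain: cluster.local    # used instead of the hardcoded value when building instance DNS names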

TARANTOOL_HTTP_PORT env variable not set

Problem

Cartridge allows setting configuration defaults in Lua. To override those defaults, one should pass environment variables or command-line flags. If the Cartridge HTTP port is set in Lua to a value different from 8081 (the port the operator needs to connect to instances), the operator cannot manage the cluster.

Solution

Pass TARANTOOL_HTTP_PORT to override any defaults set from within the code.
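
A sketch of forcing the port from the pod spec, which (per the problem statement above) takes precedence over defaults set in Lua; the container name and image are placeholders:

containers:
  - name: cartridge                  # placeholder
    image: my-app:latest             # placeholder
    env:
      - name: TARANTOOL_HTTP_PORT
        value: "8081"                # must match the port the operator uses to reach instances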

Test on kubernetes v1.15.12

The client is using this version of Kubernetes. We need to either test the operator on it or add this version to the testing matrix.

support helm --skip-crds option

Helm handles CRD installation in a special way, which causes errors when attempting to install the CRDs multiple times.

To reproduce, install the operator multiple times, e.g. in different namespaces:

  1. helm install tarantool-operator --namespace NS1 (this is OK)

  2. helm install tarantool-operator --namespace ANOTHER_NS (this causes an error)

How to avoid errors:

There is an option --skip-crds for helm that makes it skip CRD installation/upgrade. The option only takes effect when the CRDs are in the proper directory (chart-dir/crds), which is not our case (chart-dir/templates/crds). So we should move the CRDs to the appropriate directory.
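
A sketch of the move and of an install that then skips the CRDs (the paths follow the chart layout mentioned above):

$ git mv ci/helm-chart/templates/crds ci/helm-chart/crds
$ helm install tarantool-operator ./ci/helm-chart --namespace ANOTHER_NS --create-namespace --skip-crds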

Cannot verify user is non-root

I'm trying to deploy an application with non-privileged (any non-root user) containers:

Error: container has runAsNonRoot and image has non-numeric user (tarantool), cannot verify user is non-root
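
This check comes from Kubernetes itself: with runAsNonRoot set, the kubelet cannot verify a user that is given by name rather than by UID. A common workaround is to state the UID explicitly in the pod securityContext (the UID below is an assumption; use the numeric UID of the tarantool user in your image):

securityContext:
  runAsNonRoot: true
  runAsUser: 1000    # assumption: numeric UID of the "tarantool" user in the image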

Leader switch

Problem statement

The leader is elected only once, during the initial cluster bootstrap.

Possible solution

  1. Annotate the Cluster resource with a field that denotes the cluster-wide Tarantool Cartridge config generation

  2. In case of leader failure, cross-check the annotation against every instance's config generation

  3. Select any alive instance with a matching config generation as the new leader
