flink-on-k8s-operator's Introduction

This project has been deprecated. Please use Google Cloud Dataproc to create managed Apache Flink instances on Google Compute Engine, or the Apache Flink Kubernetes Operator to run self-managed Apache Flink on Google Kubernetes Engine.

Kubernetes Operator for Apache Flink

This is not an officially supported Google product.

Kubernetes Operator for Apache Flink is a control plane for running Apache Flink on Kubernetes.

Community

Project Status

Beta

The operator is under active development; backward compatibility of the APIs is not guaranteed for beta releases.

Prerequisites

  • Version >= 1.15 of Kubernetes
  • Version >= 1.15 of kubectl (with kustomize)
  • Version >= 1.7 of Apache Flink

Overview

The Kubernetes Operator for Apache Flink extends the vocabulary of Kubernetes (e.g., Pod, Service) with the custom resource definition FlinkCluster and runs a controller Pod that watches the custom resources. Once a FlinkCluster custom resource is created and detected by the controller, the controller creates the underlying Kubernetes resources (e.g., the JobManager Pod) based on the spec of the custom resource. With the operator installed in a cluster, users can then talk to the cluster through the Kubernetes API and Flink custom resources to manage their Flink clusters and jobs.
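For illustration, a minimal FlinkCluster manifest might look like the sketch below (field names follow the operator's v1beta1 API; exact names and versions may differ between releases, so treat this as indicative rather than authoritative):

  apiVersion: flinkoperator.k8s.io/v1beta1
  kind: FlinkCluster
  metadata:
    name: flinkjobcluster-sample
  spec:
    image:
      name: flink:1.8.2            # a custom Flink image can be used instead
    jobManager:
      accessScope: Cluster         # how the JobManager service is exposed
    taskManager:
      replicas: 2
    job:                           # omit this section to create a session cluster
      jarFile: /opt/flink/job/word-count.jar
      className: org.apache.flink.examples.WordCount
      parallelism: 2

Creating this resource (e.g., with kubectl apply) is all that is needed; the controller then creates the JobManager and TaskManager resources and, if a job section is present, submits the job.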

Features

  • Support for both Flink job cluster and session cluster depending on whether a job spec is provided
  • Custom Flink images
  • Flink and Hadoop configs and container environment variables
  • Init containers and sidecar containers
  • Remote job jar
  • Configurable namespace to run the operator in
  • Configurable namespace to watch custom resources in
  • Configurable access scope for JobManager service
  • Taking savepoints periodically
  • Taking savepoints on demand
  • Restarting failed job from the latest savepoint automatically
  • Cancelling job with savepoint
  • Cleanup policy on job success and failure
  • Updating cluster or job
  • Batch scheduling for JobManager and TaskManager Pods
  • GCP integration (service account, GCS connector, networking)
  • Support for Beam Python jobs

Installation

The operator is still under active development; there is no Helm chart available yet. You can follow either

  • User Guide to deploy a released operator image on gcr.io/flink-operator to your Kubernetes cluster or
  • Developer Guide to build an operator image first then deploy it to the cluster.

Documentation

Quickstart guides

API

How to

Tech talks

  • CNCF Webinar: Apache Flink on Kubernetes Operator (video, slides)

Contributing

Please check out CONTRIBUTING.md and the Developer Guide.

flink-on-k8s-operator's People

Contributors

ayush-singhal28, bnu0, chrismoos, deliangfan, elanv, enriquel8, functicons, guruprasatht, hgonggg, hongyegong, hzxuzhonghu, irvifa, jaredstehler, jeffwan, kinderyj, laughingman7743, marcomicera, mdolinin, medb, mrart, nicladas, renecouto, rpadovani, ryanmwright, shashken, syucream, thebalu, yalctay93, yanghui16355, yaron-idan

flink-on-k8s-operator's Issues

Define a new CRD to submit job to session cluster

Currently, after a session cluster is up and running, we have to submit jobs to the cluster through Flink's API endpoint, which usually means the user has to install the Flink CLI on their local machine.

We could consider adding a new CRD and controller for job submission: after a job CR is created, the controller gets notified and submits the job to the cluster automatically; it also polls the job status and updates the CR. We might migrate the current job submission for job clusters to this approach as well.
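As a sketch of the idea (the kind and fields below are hypothetical, proposed by this issue, and not part of the current API), a job CR targeting a session cluster might look like:

  apiVersion: flinkoperator.k8s.io/v1beta1
  kind: FlinkSessionJob                        # hypothetical new kind
  metadata:
    name: wordcount
  spec:
    clusterRef: flinksessioncluster-sample     # hypothetical reference to the session cluster CR
    jarFile: gs://my-bucket/jobs/word-count.jar
    className: org.apache.flink.examples.WordCount
    parallelism: 2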

Separate job client Docker image and Flink JobManager/TaskManager image

Currently, the Flink job client runs as a Kubernetes job, which uses the same container image as the Flink JobManager and TaskManager. There would be a benefit in separating the two: functionality such as downloading a remote job jar could be supported in the job client image, while the official Flink image is used for the JM/TM.

We could create a dedicated Docker image for the job client, or we could reuse the operator image (by adding job submission capability) as the job client image. The latter approach seems easier for end users to manage.

Generate a random ID suffix for child resources

Sometimes, after deleting and recreating a CR in quick succession, I notice that the job pod from the old CR is in a crash loop for a while, and that the Flink list-jobs API returns 2 jobs, which is unexpected. We need to avoid this race condition. The approach I am considering is to generate a random ID suffix for the child resources of the CR, similar to what a Deployment does for its Pods.

Support starting a job from a savepoints dir without specifying a savepoint ID

Currently, when taking a savepoint, users specify a savepoints (parent) dir; Flink automatically generates a savepoint ID and puts the data in the corresponding subdir. When starting a job from a savepoint, you have to specify the savepoint ID, which means that, when using the operator, the job args part of the CR has to be changed to point to the specific savepoint.

We want to allow users to specify only the savepoints dir in jobSpec and have the operator automatically select the (latest) savepoint in that dir. This will in turn allow us to update jobs seamlessly, because the savepoint ID becomes transparent.
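A hedged sketch of how the jobSpec might look under this proposal (the fromSavepoint line reflects today's behavior; picking the latest savepoint automatically is the proposed change):

  job:
    savepointsDir: gs://my-bucket/savepoints   # parent dir where savepoints are written
    # today: restoring requires the full path including the generated savepoint ID
    # fromSavepoint: gs://my-bucket/savepoints/savepoint-abc123
    # proposed: the operator selects the latest savepoint under savepointsDir automatically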

Support for various job submission types and job resource types

A Flink job can take various forms. It would be nice if the Flink job spec were extensible enough to support various submission types, for example Apache Beam Python as well as Flink jars.

Next, it would be nice if various types of resource staging were supported: downloading the resource if it is located remotely, or extracting it if it is archived. (A hypothetical spec sketch follows the summary below.)

In summary:

  • Support for various job submission types
         - Flink application jar
         - Apache Beam Python

  • Staging by job resource type
         - Remote file download
         - Archived file extraction
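A hypothetical spec sketch for these variants (all field names are illustrative, not part of the current API):

  job:
    # exactly one of the following resource types would be specified
    jarFile: gs://my-bucket/jobs/word-count.jar       # Flink application jar
    # pyFile: gs://my-bucket/jobs/wordcount.py        # Apache Beam Python entry point
    # archive: gs://my-bucket/jobs/job-bundle.tar.gz  # downloaded and extracted before submission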

Support remote job jar

Currently, the Flink job submission client runs as a Kubernetes job, which uses the same container image as the Flink JobManager and TaskManager. The job jar has to be locally available in the job container, which means users need to create a custom image based on the official Flink image and add their job jar to it.

We want to support remote job jars and let the job submission client automatically download the job jar before submitting it to Flink.
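With this feature, the jar could be referenced by URL instead of being baked into the image; a sketch, assuming remote schemes such as gs:// or http(s):// are supported:

  job:
    jarFile: gs://my-bucket/jobs/word-count.jar   # downloaded by the submission client before running 'flink run'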

Trigger reconcile periodically

Currently, reconcile is triggered only by status changes of watched resources; we want to trigger reconcile periodically as well, to make sure there are no missed events.

Add a Dockerfile for build environment

To make it easier to set up the development environment, we can add a Dockerfile which creates an image with the required build dependencies (Go 1.12+, Kustomize, Kubebuilder, ...). Developers can then simply mount the current source directory into the container and run builds and tests.

GCS submit job should use a provided service account

Looking at the Docker image, it downloads the provided jar file from GCS using gsutil, and it looks like it is using the default permissions of the Kubernetes node.
There should be a way to provide a key instead (as is already possible for the task and job managers).
It can be as simple as:
gcloud auth activate-service-account --key-file=...
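One standard Kubernetes pattern for this (a sketch of a possible approach, not necessarily how the operator would expose it) is to mount the key from a Secret and point GOOGLE_APPLICATION_CREDENTIALS at it:

  # kubectl create secret generic gcp-sa-key --from-file=key.json=<path-to-key>
  job:
    volumes:
      - name: gcp-sa-key
        secret:
          secretName: gcp-sa-key
    volumeMounts:
      - name: gcp-sa-key
        mountPath: /etc/gcp
        readOnly: true
    envVars:                                    # hypothetical field name for container env vars
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/gcp/key.json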

Options for handling the cluster when the job finishes

It would be nice to be given options for handling the Flink cluster when the job finishes. Three options seem possible (a hypothetical configuration sketch follows this list):

  1. Delete entire cluster
    If the cluster is no longer needed after the job finishes.

  2. Leave only the JobManager
    If all logs are being collected in a separate repository, only the JobManager may be left so that the dashboard can still be checked after the job finishes.

  3. Leave the entire cluster
    If you want to apply the task-local recovery feature when replaying a job or restarting a failed job, you need to preserve the cluster. You may also want to preserve the cluster to inspect the running environment and logs for debugging purposes.
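A hypothetical way to expose these choices in the CR (the field and value names below are illustrative only):

  job:
    cleanupPolicy:
      afterJobSucceeds: DeleteCluster      # option 1: delete the entire cluster
      afterJobFails: DeleteTaskManager     # option 2: keep only the JobManager for the dashboard
      # KeepCluster                        # option 3: leave the entire cluster running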

Upgrade streaming job cluster or session cluster

Currently, to upgrade a streaming job cluster or session cluster, you must delete the existing job or cluster and then create a new CR. The manual procedure is as follows.

  1. Manually cancel with savepoint using Flink API
  2. Delete the current flinkcluster CR
  3. Locate the saved savepoint.
  4. Create a new CR specifying the manually created savepoint

If the operator could automate the upgrade, the whole process would be hands-off (a hypothetical sketch follows the considerations below).

Considerations by upgrade types:

  • Flink image upgrade
    • Needed when the Flink version or Java version changes.
    • In this case, tear down the existing cluster and re-deploy it with the new image.
  • Job-only upgrade
    • Restart the job after replacing the jar or other job resources.
    • This can be done the same way as a cluster image upgrade: delete the cluster and deploy a new one.
    • Alternatively, leave the existing cluster as is, replace only the job, and resubmit it to the existing cluster. If you want to apply the task-local recovery feature, you may prefer the latter. This could be provided as an option for users to select as needed.
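For example, a Flink image upgrade could then be expressed as an in-place edit of the CR, with the operator handling savepoint, teardown, and re-deploy (a hypothetical flow per this proposal):

  spec:
    image:
      name: flink:1.9.1   # changed from flink:1.8.2; the operator would cancel with savepoint,
                          # tear down the cluster, and re-deploy the job from that savepoint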

More than one job were found

We found a case where the first job submission was successful but the job client somehow timed out, so the operator submitted the job again, which caused 2 jobs to run in a job cluster.

2019-11-01T21:49:18.413Z	INFO	controllers.FlinkCluster	Observed Flink job status list	{"cluster": "default/flinkjobcluster-sample", "jobs": [{"ID":"cd8dca05e24e8c8f55b56bb9e8b40e61","Status":"RUNNING"},{"ID":"8ba52ec4edb730bdcc340b1f07d30a05","Status":"RUNNING"}]}
2019-11-01T21:49:18.413Z	ERROR	controllers.FlinkCluster		{"cluster": "default/flinkjobcluster-sample", "jobs": [{"ID":"cd8dca05e24e8c8f55b56bb9e8b40e61","Status":"RUNNING"},{"ID":"8ba52ec4edb730bdcc340b1f07d30a05","Status":"RUNNING"}], "error": "more than one Flink job were found"}

Strange TaskManager deployment status update behavior

Sometimes I notice that the JM/TM deployment status changes from Ready to NotReady. This is unexpected; we need to investigate why.

Events:
  Type    Reason        Age                    From           Message
  ----    ------        ----                   ----           -------
  Normal  StatusUpdate  2m47s                  FlinkOperator  Cluster status: Creating
  Normal  StatusUpdate  2m45s                  FlinkOperator  JobManager deployment status: Ready
  Normal  StatusUpdate  2m45s                  FlinkOperator  JobManager service status: Ready
  Normal  StatusUpdate  2m45s                  FlinkOperator  TaskManager deployment status: Ready
  Normal  StatusUpdate  2m45s                  FlinkOperator  Cluster status changed: Creating -> Running
  Normal  StatusUpdate  2m45s                  FlinkOperator  JobManager deployment status changed: Ready -> NotReady
  Normal  StatusUpdate  2m45s                  FlinkOperator  Cluster status changed: Running -> Reconciling
  Normal  StatusUpdate  2m43s                  FlinkOperator  TaskManager deployment status changed: Ready -> NotReady
  Normal  StatusUpdate  2m26s                  FlinkOperator  JobManager deployment status changed: NotReady -> Ready
  Normal  StatusUpdate  2m23s (x2 over 2m23s)  FlinkOperator  (combined from similar events): Cluster status changed: Reconciling -> Running

Wait until cluster status is stable before taking reconciliation action

Currently, there are 4 steps in the Reconcile() method:

  1. observe the current state of all related resources (CR, JM deployment, ...);
  2. compute the desired state;
  3. update the status of the CR to match the actual status derived from the observed state, e.g., if CR.status is CREATING and all observed child resources (JM deployment, ...) are ready, update CR.status to RUNNING;
  4. take actions to drive the observed state towards the desired state.

The sequence is actually not quite right. A better sequence would be:

  1. observe the current state of all related resources (CR, JM deployment, ...);
  2. update the status of the CR to match the actual status derived from the observed state. If the status changed, do not proceed; instead, requeue a reconcile request after 5s. Only once the status is stable, i.e., there is no update in this step, go on to steps 3 and 4.
  3. compute the desired state;
  4. take actions to drive the observed state towards the desired state.

Support cancelling a job with a savepoint

We want to support cancelling a job with an automatic savepoint. The approach I am thinking about is to add a property cancelled in jobSpec and to support updating the CR with that property. After detecting that cancelled is set to true, the controller calls the Flink API to take a savepoint, then cancels the job. We might introduce another property to control whether to force-cancel the job if taking the savepoint fails.
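Under this proposal, cancelling becomes a declarative CR update; a sketch using the proposed properties:

  job:
    cancelled: true      # proposed property; triggers take-savepoint-then-cancel
    # forceCancel: true  # possible additional property: cancel even if the savepoint fails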

Failed to update status

Sometimes the controller fails to update the status with the error "the object has been modified; please apply your changes to the latest version and try again". We need to investigate the cause and fix it.

example:

2019-10-30T20:31:49.207Z	ERROR	controllers.FlinkCluster	Failed to update cluster status	{"cluster": "default/flinkjobcluster-sample", "error": "Operation cannot be fulfilled on flinkclusters.flinkoperator.k8s.io \"flinkjobcluster-sample\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
	/root/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/googlecloudplatform/flink-operator/controllers.(*FlinkClusterHandler).reconcile
	/workspace/controllers/flinkcluster_controller.go:141
github.com/googlecloudplatform/flink-operator/controllers.(*FlinkClusterReconciler).Reconcile
	/workspace/controllers/flinkcluster_controller.go:74
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:216
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
2019-10-30T20:31:49.208Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "flinkcluster", "request": "default/flinkjobcluster-sample", "error": "Operation cannot be fulfilled on flinkclusters.flinkoperator.k8s.io \"flinkjobcluster-sample\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
	/root/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:218
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:192
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/root/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:171
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/root/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88

Support checkpoint and savepoint to GCS

Provide a Dockerfile with GCS connector integration based on the official Flink image; the resulting image could then be used to support checkpoints and savepoints to GCS.
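With such an image, checkpoints and savepoints could then target gs:// paths through Flink configuration; a sketch, assuming the CR's flinkProperties passthrough and a hypothetical image name:

  spec:
    image:
      name: my-registry/flink-with-gcs:1.8.2    # hypothetical image built from the proposed Dockerfile
    flinkProperties:
      state.checkpoints.dir: gs://my-bucket/checkpoints
      state.savepoints.dir: gs://my-bucket/savepoints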

Cancel jobs and take savepoints when deleting a cluster

Currently, when the user issues a request to delete the session/job cluster CR, the controller simply deletes all the resources. We need to call the Flink API to cancel the jobs and (optionally) take savepoints before actually deleting the resources.

Job submission failed for akka.pattern.AskTimeoutException

Sometimes job submission fails with akka.pattern.AskTimeoutException, and then a retry succeeds.

+exec /docker-entrypoint.sh /opt/flink/bin/flink run --jobmanager flinkjobcluster-sample-jobmanager:8081 --class org.apache.beam.examples.WordCount --parallelism 2 /opt/flink/job/word-count-beam-bundled-0.1.jar --runner=FlinkRunner --inputFile=./README.txt --output=gs://dagang-test/beam/wordcount/output
Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR was set.
Starting execution of program

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Pipeline execution failed
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:593)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
	at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.lang.RuntimeException: Pipeline execution failed
	at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:116)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:315)
	at org.apache.beam.sdk.Pipeline.run(Pipeline.java:301)
	at org.apache.beam.examples.WordCount.runWordCount(WordCount.java:185)
	at org.apache.beam.examples.WordCount.main(WordCount.java:192)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
	... 12 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result. (JobID: 606c01ee993b6870c27a7e9a703e2f93)
	at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:255)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:326)
	at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
	at org.apache.beam.runners.flink.FlinkPipelineExecutionEnvironment.executePipeline(FlinkPipelineExecutionEnvironment.java:139)
	at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:113)
	... 21 more
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
	at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:382)
	at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
	at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
	at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:263)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.rest.util.RestClientException: [Internal server error., <Exception on server side:
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/dispatcher#-1594246162]] after [10000 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
	at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
	at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
	at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
	at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
	at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
	at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
	at java.lang.Thread.run(Thread.java:748)

End of exception on server side>]
	at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:389)
	at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:373)
	at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
	at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
	... 4 more

Resource leak after deleting JM service on GKE

When the JobManager service access scope is set to VPC, GKE creates a set of GCE resources (forwarding rule, target HTTP proxy, URL map, backend service) for the service. I noticed that after deleting the service (along with the cluster CR), these GCE resources are not deleted automatically. We need to investigate.

Delay deleting job cluster resources after the job finishes

Currently, when the job of a job cluster finishes or fails, the controller immediately starts deleting the JM/TM resources. We want to delay this for a while (e.g., 5 minutes by default) to allow checking logs and metrics, and perhaps make the delay configurable as a property of the CR.

JM service status not accurate

Currently, we determine the JM service status based on the service alone, so it can be considered ready even when its target pods are not, i.e., when there are no corresponding endpoints. It would be more accurate to verify that the corresponding endpoints are available.
