
autoscaler's Introduction


Autoscaler tool for Cloud Spanner

Autoscaler

An open source tool to autoscale Spanner instances
Home · Poller component · Scaler component · Forwarder component · Terraform configuration · Monitoring

Overview

The Autoscaler tool for Cloud Spanner is a companion tool to Cloud Spanner that allows you to automatically increase or reduce the number of nodes or processing units in one or more Spanner instances, based on their utilization.

When you create a Cloud Spanner instance, you choose the number of nodes or processing units that provide compute resources for the instance. As the instance's workload changes, Cloud Spanner does not automatically adjust the number of nodes or processing units in the instance.

The Autoscaler monitors your instances and automatically adds or removes compute capacity to ensure that they stay within the recommended maximums for CPU utilization and the recommended limit for storage per node, plus or minus an allowed margin. Note that the recommended thresholds differ depending on whether a Spanner instance is regional or multi-region.

Architecture

[Architecture diagram: architecture-abstract]

The diagram above shows the high level components of the Autoscaler and the interaction flow:

  1. The Autoscaler consists of two main decoupled components, the Poller and the Scaler.

    These can be deployed to either Cloud Functions or Google Kubernetes Engine (GKE), and configured so that the Autoscaler runs according to a user-defined schedule. In certain deployment topologies a third component, the Forwarder, is also deployed.

  2. At the specified time and frequency, the Poller component queries the Cloud Monitoring API to retrieve the utilization metrics for each Spanner instance.

  3. For each instance, the Poller component pushes one message to the Scaler component. The payload contains the utilization metrics for the specific Spanner instance, and some of its corresponding configuration parameters.

  4. Using the chosen scaling method, the Scaler compares the Spanner instance metrics against the recommended thresholds (plus or minus an allowed margin) and determines whether the instance should be scaled, and the number of nodes or processing units it should be scaled to. If the configured cooldown period has passed, the Scaler component then requests the Spanner instance to scale in or out.

Throughout the flow, the Autoscaler writes a step-by-step summary of its recommendations and actions to Cloud Logging for tracking and auditing.
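For illustration, a sketch of the kind of message the Poller might publish for a single instance is shown below. The field names are taken from configuration examples elsewhere in this document; the exact payload shape, including how the sampled metric values are attached, may differ between versions.

// Sketch only: illustrative shape of a Poller -> Scaler message for one instance.
// Field names come from configuration examples in this document; the sampled
// metric value shown here is a hypothetical addition.
const scalerMessage = {
  projectId: 'my-spanner-project',
  instanceId: 'spanner1',
  scalerPubSubTopic: 'projects/my-spanner-project/topics/spanner-scaling',
  units: 'PROCESSING_UNITS',
  minSize: 100,
  maxSize: 2000,
  scalingMethod: 'LINEAR',
  metrics: [
    { name: 'high_priority_cpu', value: 72.3, regional_threshold: 65 },
  ],
};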

Deployment

To deploy the Autoscaler, decide which of the two deployment strategies, Cloud Functions or Google Kubernetes Engine (GKE), best fits your technical and operational needs.

In both cases, the Google Cloud Platform resources are deployed using Terraform. Please see the Terraform instructions for more information on the available deployment options.

Monitoring

The Autoscaler publishes the following metrics to Cloud Monitoring, which can be used to monitor its behavior and to configure alerts.

Poller

  • Message processing counters:

    • cloudspannerecosystem/autoscaler/poller/requests-success - the number of polling request messages received and processed successfully.
    • cloudspannerecosystem/autoscaler/poller/requests-failed - the number of polling request messages which failed processing.
  • Spanner Instance polling counters:

    • cloudspannerecosystem/autoscaler/poller/polling-success - the number of successful polls of the Spanner instance metrics.
    • cloudspannerecosystem/autoscaler/poller/polling-failed - the number of failed polls of the Spanner instance metrics.
    • Both of these metrics have projectid and instanceid attributes to identify the Spanner instance.

Scaler

  • Message processing counters:
    • cloudspannerecosystem/autoscaler/scaler/requests-success - the number of scaling request messages received and processed successfully.
    • cloudspannerecosystem/autoscaler/scaler/requests-failed - the number of scaling request messages which failed processing.
  • Spanner Instance scaling counters:
    • cloudspannerecosystem/autoscaler/scaler/scaling-success - the number of successful rescales of the Spanner instance.

    • cloudspannerecosystem/autoscaler/scaler/scaling-denied - the number of Spanner instance rescale attempts that were denied by autoscaler configuration or policy.

    • cloudspannerecosystem/autoscaler/scaler/scaling-failed - the number of Spanner instance rescale attempts that failed.

    • These three metrics have the following attributes:

      • spanner_project_id - the Project ID of the affected Spanner instance
      • spanner_instance_id - the Instance ID of the affected Spanner instance
      • scaling_method - the scaling method used
      • scaling_direction - which can be SCALE_UP, SCALE_DOWN or SCALE_SAME (when the calculated rescale size is equal to the current size)
      • In addition, the scaling-denied counter has a scaling_denied_reason attribute containing the reason why the scaling was not performed, which can be:
        • SAME_SIZE - when the calculated rescale size is equal to the current instance size.
        • MAX_SIZE - when the instance has already been scaled up to the maximum configured size.
        • WITHIN_COOLDOWN - when the instance has been recently rescaled, and the autoscaler is waiting for the cooldown period to end.
        • IN_PROGRESS - when an instance scaling operation is still ongoing.
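As a sketch of how these counters could be inspected programmatically, the snippet below lists recent scaling-failed time series with the Cloud Monitoring client library. The metric type prefix (workload.googleapis.com/) is an assumption; verify it against the metric names actually exported by your deployment.

// Sketch: listing recent scaler counter time series with @google-cloud/monitoring.
// The metric type prefix and label names are assumptions; check the metrics
// exported by your deployment before relying on this filter.
const monitoring = require('@google-cloud/monitoring');

async function listScalingFailures(projectId) {
  const client = new monitoring.MetricServiceClient();
  const nowSeconds = Math.floor(Date.now() / 1000);
  const [series] = await client.listTimeSeries({
    name: client.projectPath(projectId),
    filter:
      'metric.type="workload.googleapis.com/cloudspannerecosystem/autoscaler/scaler/scaling-failed"',
    interval: {
      startTime: { seconds: nowSeconds - 3600 }, // last hour
      endTime: { seconds: nowSeconds },
    },
  });
  for (const ts of series) {
    console.log(ts.metric.labels, `${ts.points.length} points`);
  }
}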

Configuration

The parameters for configuring the Autoscaler are identical regardless of the chosen deployment type, but the mechanism for supplying the configuration differs slightly between the Cloud Functions and GKE deployments.

Licensing

Copyright 2020 Google LLC

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Getting Support

The Autoscaler is a Cloud Spanner Ecosystem project based on open source contributions. We'd love for you to report issues, file feature requests, and send pull requests (see Contributing). You may file bugs and feature requests using GitHub's issue tracker or using the existing Cloud Spanner support channels.

Contributing

autoscaler's People

Contributors

afarbos, akshaykalbhor, bgood, caddac, davidcueva, dependabot[bot], garupanojisan, henrybell, j143, kartikagrawal-tudip, klanmiko, nielm, rbrtwng, release-please[bot], renovate-bot, skuruppu, spidercensus


autoscaler's Issues

Add metrics margins

Add a "margin" parameter to each metric.

For example:
high_prio_cpu = 65%
high_prio_cpu_margin = 5%

With the margin, the autoscaler will only scale up if the current high-priority CPU measurement is over 70%, and will only scale down if it is under 60%.
The margin effectively creates a band around the threshold; the metric is permitted to move within that band without triggering an autoscale.

Why do this?
If we rely only on a single number, for example 65%, the autoscaler will ALWAYS scale up or down when the cooldown period runs out.

An alternative to a "margin" is to explicitly have min and max for each metric.
The advantage of having a margin instead of min/max is that the user only needs to think about a single number, and we can have a sensible default for the margin.
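A minimal sketch of the band check this would imply (the variable and function names here are hypothetical, not the Autoscaler's actual implementation):

// Sketch: margin-based scaling decision. Names are hypothetical.
function scalingDirection(value, threshold, margin) {
  if (value > threshold + margin) return 'SCALE_UP';
  if (value < threshold - margin) return 'SCALE_DOWN';
  return 'SCALE_SAME'; // within the band: no scaling triggered
}

// With threshold 65 and margin 5, only values above 70 or below 60 trigger scaling.
console.log(scalingDirection(72, 65, 5)); // SCALE_UP
console.log(scalingDirection(63, 65, 5)); // SCALE_SAME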

Scaler throwing Permission denied when run with spanner.admin role

Hi Team,
We recently deployed the Autoscaler components on our GCP GKE clusters to autoscale our regional (europe-west2) Spanner instance. All the deployments & cronJobs are working as expected, but the scaler deployment is not able to scale the Spanner nodes up or down - we are getting "Error 7 PERMISSION DENIED" in the pod logs.

We created the scaler & poller serviceAccounts with different names (not poller-sa & scaler-sa as suggested in the GKE Terraform documentation) and gave them predefined roles - roles/spanner.viewer to the poller service account and roles/spanner.admin to the scaler service account (using Workload Identity annotations with the GCP service accounts in the deployment YAML for both).

Please suggest if we are missing something, or is this a known issue?

autoscaler doesn't work with dataflow load (export/import)

Dataflow work is considered medium priority (https://cloud.google.com/spanner/docs/cpu-utilization), so when there is a high Dataflow load (import/export) and a low high-priority load, the autoscaler just doesn't work at all.

While it is possible to work around this with a custom metric:

      "metrics" = [
        {
          "name"                     = "cpu_utilization_total"
          "filter"                   = "metric.type=\"spanner.googleapis.com/instance/cpu/utilization\""
          "regional_threshold"       = 90
          "multi_regional_threshold" = 90
        },

It would probably be great if such a rule were added to the default set.

Custom thresholds defined in the instance configuration

It can be desirable to scale on thresholds that are lower than the defaults. Currently those custom thresholds need to be set in the code, and doing so changes the threshold for all Cloud Spanner instances being autoscaled by that deployment of the Poller. If the custom thresholds could be included in the configuration JSON objects, they could be set on a per-instance basis and code changes wouldn't be required.

Example configuration:

[
    {
        "projectId": "my-spanner-project",
        "instanceId": "spanner1",
        "scalerPubSubTopic": "projects/my-spanner-project/topics/spanner-scaling",
        "minNodes": 1,
        "maxNodes": 3,
        "metrics": [
            {
                "name": "high_priority_cpu",
                "regional_threshold": 30,
                "multi_regional_threshold": 20
            }
        ]
    }
]
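As a sketch, per-instance overrides like the one above could be merged over the Poller's built-in defaults roughly as follows (a hypothetical helper, not the current implementation; the default values shown are taken from thresholds mentioned elsewhere in this document):

// Sketch: merge per-instance threshold overrides over built-in defaults.
// Hypothetical helper; default values are illustrative.
const DEFAULT_METRICS = {
  high_priority_cpu: { regional_threshold: 65, multi_regional_threshold: 45 },
};

function resolveMetrics(instanceConfig) {
  return (instanceConfig.metrics || []).map((m) => ({
    ...DEFAULT_METRICS[m.name],
    ...m, // per-instance values win over the defaults
  }));
}

// For the example configuration above, high_priority_cpu resolves to a
// regional_threshold of 30 instead of the default 65.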

Autoscale on Cloud Alerting notifications

The autoscaler could be more responsive if it were able to scale a Spanner instance up or down when a Cloud Alerting threshold is triggered. That would allow the autoscaler to act outside of the polling interval, enabling it to scale up as soon as the scale-up threshold is passed.

Update Terraform module to create the Firestore DB through a terraform resource

Looks like the App Engine Terraform resource can now create a Firestore db, so we should update the Autoscaler Terraform. This will clean up some steps in the deployment process and simplify the installation process.

Reference:
https://cloud.google.com/firestore/docs/solutions/automate-database-create#create_a_database_with_terraform
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_application

Custom metrics defined in the instance configuration

It can be desirable to scale on metrics other than CPU and storage. Currently, new metrics to scale on need to be defined in the Poller function. If the custom metrics could be included in the configuration JSON objects, they could be set on a per-instance basis and code changes wouldn't be required.

Example configuration:

[
    {
        "projectId": "my-spanner-project",
        "instanceId": "spanner1",
        "scalerPubSubTopic": "projects/my-spanner-project/topics/spanner-scaling",
        "minNodes": 1,
        "maxNodes": 3,
        "metrics": [
            {
                "name": "high_priority_cpu",
                "filter": "metric.type=\"spanner.googleapis.com/instance/cpu/utilization_by_priority\" AND metric.label.priority=\"high\"",
                "reducer": "REDUCE_SUM",
                "aligner": "ALIGN_MAX",
                "period": 60,
                "regional_threshold": 30,
                "multi_regional_threshold": 20
            }
        ]
    }
]

Direct Cloud Scheduler configuration changes are reset

Changes made to the Cloud Scheduler configuration as per the documentation are reset by Terraform on the next run. To support idempotency it would be optimal to make this configurable, with either Terraform or the direct Cloud Scheduler configuration specified as the source of truth.

Dashboard variables without default stop Terraform

The following terraform variables:

  • dashboard_threshold_high_priority_cpu_percentage
  • dashboard_threshold_rolling_24_hr_percentage
  • dashboard_threshold_storage_percentage

only have defaults within the monitoring module, not in the root modules.
As a result, the Terraform scripts stop and prompt for values that most users will not know how to set.

Deployment script is not idempotent

I followed the steps to deploy a distributed Autoscaler tool for Cloud Spanner but forgot to enable the Cloud Scheduler API beforehand.

I retried after enabling the Cloud Scheduler API, and the deployment failed with the following error:

module.forwarder.google_storage_bucket.bucket_gcf_source: Creating...
╷
│ Error: googleapi: Error 409: You already own this bucket. Please select another name., conflict
│
│   with module.forwarder.google_storage_bucket.bucket_gcf_source,
│   on ../../modules/forwarder/main.tf line 40, in resource "google_storage_bucket" "bucket_gcf_source":
│   40: resource "google_storage_bucket" "bucket_gcf_source" {

This indicates that the deployment script is not idempotent.

Feature Request: More configurable thresholds per spanner instance

Currently, the following fields are hard coded into the code:

  • OVERLOAD_THRESHOLD for high priority CPU - currently at 90%
  • regional_threshold for high priority CPU - currently at 65%

There are a couple others, but these are the main ones that are likely to impact Spanner performance.
It would be good to have these configurable for each spanner instance.

Spanner is a P0 dependency for some of our services, and we already notice latency spikes at 65% high-priority CPU utilization. Based on this, 90% is too high an overload threshold for us.

We currently change these thresholds manually when we deploy the autoscaler, to a 55% target and a 75% overload threshold.

Distributed deployment with state in Spanner throws errors

When deploying the Autoscaler in distributed mode with its state stored in Spanner, logs show the following error:

The Cloud Firestore API is not available for Firestore in Datastore Mode

Because of this error, the scaling operation is not completed.

Improve Observability: add support for Metrics and Tracing

To get proper monitoring of the autoscaler, the tool should:

  • trace internal processes (scaling decisions, ...) and the different calls (HTTP, Pub/Sub, gRPC), with the instance tagged
    • this would have the benefit of providing error rates and latency distribution metrics for those
  • have metrics for scaling events per instance, with the related GCM metric (high CPU, storage, ...) as tags:
    • successful scaling count
    • scaling "abort" count, with the reason as a tag (bigger than max size, lower than min size)

I would suggest using an open source library like OpenTelemetry for this.
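As a minimal sketch of what such instrumentation could look like with the OpenTelemetry JavaScript metrics API (assuming a recent @opentelemetry/api that includes the metrics entry point; SDK and exporter setup are omitted, and the counter and attribute names simply mirror those listed in the Monitoring section above):

// Sketch: counting scaling events with the OpenTelemetry metrics API.
// SDK/exporter setup omitted; names mirror the Monitoring section above.
const { metrics } = require('@opentelemetry/api');

const meter = metrics.getMeter('spanner-autoscaler');
const scalingSuccess = meter.createCounter('scaling-success');
const scalingDenied = meter.createCounter('scaling-denied');

function recordScalingResult(spanner, direction, deniedReason) {
  const attributes = {
    spanner_project_id: spanner.projectId,
    spanner_instance_id: spanner.instanceId,
    scaling_direction: direction, // SCALE_UP | SCALE_DOWN | SCALE_SAME
  };
  if (deniedReason) {
    scalingDenied.add(1, { ...attributes, scaling_denied_reason: deniedReason });
  } else {
    scalingSuccess.add(1, attributes);
  }
}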

Increasing resource usage over time

Long-running components (such as when the autoscaler is deployed to Kubernetes, particularly the scaler component) are observed to consume an increasing amount of resources over time, primarily memory. A suspected memory leak is currently being investigated as the likely cause.

Overlapping IAM permissions for Poller component

Currently the IAM permissions for the Poller component service account are multiply assigned in different Terraform source files, and overlapping IAM roles are applied (roles/spanner.viewer and roles/monitoring.viewer). Using a custom role would enable use of a simplified and more granular set of permissions.

Long running operation tracking

The autoscaler currently only tracks the last time a scaling action was taken. It would also be helpful to capture the operation id of the last action, so the autoscaler could log the status of the operation as it continues to monitor the Spanner instance.

Instance configuration auditing

It would be great if there were tools that could collect all the schedules and configurations for a Spanner instance. This would be helpful in understanding the configuration where multiple methods are being used and in troubleshooting scenarios where the autoscaler isn’t working as intended.

Convert Terraform working directories to a Terraform module

It would be nicer if these were implemented as Terraform modules so they could be integrated into existing Terraform working directories. I've done so in a private fork for the per-project case and it was not very complex. Obviously, it might be more complex for the other working directories and then there's the matter of updating Google's docs, etc. so I'm not saying it'd be trivial; something to think about.

Aligner / Reducer functions incorrect for high priority CPU

Currently the high-priority CPU aligner is align_max and the reducer is reduce_sum. According to the documentation pointed to by the GCP monitoring alerts for Spanner (https://cloud.google.com/spanner/docs/monitoring-cloud#high-priority-cpu), the high-priority CPU aligner should be mean, while the aggregator (which I understand to mean the reducer) should be max. This is different from the rolling 24-hour version.

The impact of this is that (in my case) I am seeing scale-up events in my multi-region Spanner instance when two locations sum to greater than 45%, which I don't think is ideal, if I am understanding those numbers correctly.
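In the Poller's metric definition format (as shown in the custom-metrics example earlier in this document), the suggested correction would look roughly like the sketch below; the threshold values are illustrative, and the settings should be verified against the Spanner monitoring documentation:

// Sketch: high-priority CPU metric definition using the aligner/reducer
// suggested by the Spanner monitoring documentation referenced above.
const highPriorityCpu = {
  name: 'high_priority_cpu',
  filter:
    'metric.type="spanner.googleapis.com/instance/cpu/utilization_by_priority" AND metric.label.priority="high"',
  aligner: 'ALIGN_MEAN', // was ALIGN_MAX
  reducer: 'REDUCE_MAX', // was REDUCE_SUM
  period: 60,
  regional_threshold: 65,
  multi_regional_threshold: 45,
};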

Prefix for terraform

Hi!

First of all, this repository is great and has been very helpful to me. I currently work in a project shared across my company, where every new resource must declare a team prefix; however, the Autoscaler does not provide this in its Terraform configs.

So, would it be possible to provide a prefix variable for Terraform? If the answer is yes, I can implement this and send a PR.

Soft delete of custom roles can result in (re)build failure

For the custom roles created via Terraform:

  • spannerAutoscalerMetricsViewer
  • spannerAutoscalerCapacityManager

Because a delete operation actuated by terraform destroy is only a soft delete, this can result in an inability to (re)build during a certain time window (7-37 days):

"You can't create a role_id (spannerAutoscalerCapacityManager) which has been marked for deletion."

To work around this, a random string may be appended to the role name(s), circumventing the restriction.

Dependabot

Enable Dependabot for library update tracking.

pubsub_target.data in poller_job is no longer adjustable

Previously we used to adjust each parameter in the data for pubsub_target of poller_job (e.g. minSize, maxSize, scalingMethod, etc.).

However, the data is currently a fixed value.

data = base64encode(jsonencode([
  merge({
    "projectId" : "${var.project_id}",
    "instanceId" : "${var.spanner_name}",
    "scalerPubSubTopic" : "${var.target_pubsub_topic}",
    "units" : "PROCESSING_UNITS",
    "minSize" : 100,
    "maxSize" : 2000,
    "scalingMethod" : "LINEAR",
    "stateDatabase" : var.terraform_spanner_state ? {
      "name" : "spanner",
      "instanceId" : "${var.state_spanner_name}"
      "databaseId" : "spanner-autoscaler-state"
    } : {
      "name" : "firestore",
    }
  },
  var.state_project_id != null ? {
    "stateProjectId" : "${var.state_project_id}"
  } : {})
]))

We want to adjust and use each parameter of data, so could you please modify it to be configurable by variables as before?

Using a single table to persist multiple instances' state

When using Spanner for persistent state and a single scaler manages multiple instances, the spannerAutoscaler table has only one record. Would it be possible to keep a separate row per instance? The row id could be set to projects/${spanner.projectId}/instances/${spanner.instanceId}, e.g.:

id
projects/project-1/instances/spanner-1
projects/project-1/instances/spanner-2

Poller function encounters javascript error

The Poller function fails with the error below:

{
  insertId: "000000-c7f61764-5b66-477e-8c8e-2448fc1cddff"
  jsonPayload: {
    message: "An error occurred in the Autoscaler poller function
      TypeError: spanners is not iterable
        at checkSpanners (/workspace/index.js:348:25)
        at processTicksAndRejections (internal/process/task_queues.js:95:5)
        at async exports.checkSpannerScaleMetricsPubSub (/workspace/index.js:360:5)"
    payload: { }
  }
  labels: { execution_id: "e9blyz99v02m" }
  logName: "****"
  receiveTimestamp: "2021-10-19T04:47:20.303390670Z"
  resource: {
    labels: { function_name: "tf-poller-function" project_id: "****" region: "****" }
    type: "cloud_function"
  }
  severity: "ERROR"
  timestamp: "2021-10-19T04:47:09.633Z"
  trace: "projects/*****"
}

Notifications on scale up/down events

When the autoscaler scales a Cloud Spanner instance up or down, it would be nice if it could notify users that it made a change to the instance.

Useful notification channels could be:

  • Chat services such as Slack or Google Chat
  • Email

Useful information could be:

  • Spanner Instance
  • Observed Metric, value and threshold
  • Number of nodes added or removed

terraform config overrides permissions set via UI

https://github.com/cloudspannerecosystem/autoscaler/blob/master/terraform/modules/spanner/main.tf
uses authoritative resources, so any roles that have already been assigned will be silently overwritten.

The non-authoritative resources should be used instead (here, and probably in other places), i.e.:
google_spanner_instance_iam_member
google_project_iam_member

https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam

resource "google_spanner_instance_iam_binding" "spanner_metadata_get_binding" {
  instance = var.spanner_name
  role = "roles/spanner.viewer"
  project      = var.project_id

  members = [
    "serviceAccount:${var.poller_sa_email}",
  ]
  depends_on = [google_spanner_instance.main]
}

resource "google_spanner_instance_iam_binding" "spanner_admin_binding" {
  # Allows scaler to change the number of nodes of the Spanner instance
  instance = var.spanner_name
  role = "roles/spanner.admin"
  project      = var.project_id

  members = [
    "serviceAccount:${var.scaler_sa_email}",
  ]
  depends_on = [google_spanner_instance.main]
}

resource "google_project_iam_binding" "poller_sa_cloud_monitoring" {
  # Allows poller to get Spanner metrics
  role    = "roles/monitoring.viewer"
  project = var.project_id

  members = [
    "serviceAccount:${var.poller_sa_email}",
  ]
}

Logs can be challenging to interpret with large numbers of autoscaled Spanner instances

When the number of target Spanner instances being autoscaled grows large, it can be difficult to interpret the logs from the poller and scaler components. Possibilities for addressing this include some combination of:

  • Adding a logline prefix and/or other identifying field
  • Moving to more structured logging
  • Using Log Analytics to enable structured querying
  • Creating a custom dashboard to be able to drill down into logs

Add alternative scheduler for Cloud Scheduler

Cloud Scheduler is currently the only supported scheduler mechanism. While Cloud Scheduler is serverless, which has the benefit of a low maintenance overhead, there are some limitations:

  • Publishing Pub/Sub messages cross project
  • Lack of VPC service controls
  • Requires AppEngine

These could be worked around with an alternative scheduler, perhaps a Spring-based scheduler or a Kubernetes CronJob.

setMetadata promise not triggering an error upon quota limits

We are seeing an issue with a promise not triggering an error upon quota limits. Our calls all return without an error in the promise (we should see an error, since the call will fail due to insufficient quota). Our usage:

const spannerInstance = spannerClient.instance(spanner.instanceId);
spannerInstance.setMetadata(metadata)
  .catch(err => {
    console.log("----- " + spanner.projectId + "/" + spanner.instanceId +
      ": Unsuccessful scaling attempt.\n\tThis may be caused by a previous ongoing scaling operation.");
    console.debug(err);
  });

However, this approach will successfully show the error:

spannerInstance.setMetadata(metadata, function (err, operation, apiResponse) {
  if (err) {
    console.log('err on func');
    return;
  }

  operation
    .on('error', function (err) {
      console.log(err);
    })
    .on('complete', function () {
      console.log('we finished!');
    });
});

Output:

% node app.js
GoogleError: Project ************* cannot add 4 nodes in region us-central1.
    at Operation._unpackResponse (/Users/******/Desktop/projects/testjs/node_modules/google-gax/build/src/longRunningCalls/longrunning.js:133:31)
    at /Users/*******/Desktop/projects/testjs/node_modules/google-gax/build/src/longRunningCalls/longrunning.js:119:18 {
  code: 8
}
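A possible alternative is sketched below: await the returned long-running operation so that operation-level errors (such as quota failures) surface in the promise chain. This assumes setMetadata resolves to [operation, ...] and that the operation exposes a promise() helper, as google-gax long-running operations do; verify this against the client library version in use.

// Sketch: awaiting the long-running operation so quota errors reject the promise.
// Assumes the operation object exposes promise(), as google-gax operations do.
async function scaleInstance(spannerClient, spanner, metadata) {
  const spannerInstance = spannerClient.instance(spanner.instanceId);
  try {
    const [operation] = await spannerInstance.setMetadata(metadata);
    await operation.promise(); // rejects if the operation itself fails
    console.log(`${spanner.projectId}/${spanner.instanceId}: scaling succeeded`);
  } catch (err) {
    console.log(`${spanner.projectId}/${spanner.instanceId}: unsuccessful scaling attempt`);
    console.debug(err);
  }
}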

Using only units + minSize/maxSize results in default minNodes/maxNodes overriding provided minSize/maxSize

Given,

spanners[sIdx] = {...nodesDefaults, ...spanners[sIdx]};
if(spanners[sIdx].minNodes && spanners[sIdx].minSize != spanners[sIdx].minNodes) {
    log(`DEPRECATION: minNodes is deprecated, remove minNodes from your config and instead use: units = 'NODES' and minSize = ${spanners[sIdx].minNodes}`,
       'WARNING');
    spanners[sIdx].minSize = spanners[sIdx].minNodes;
}

Because this logic does not check whether minSize was specified, and because minNodes is supplied by nodesDefaults when a payload config does not provide it, the default minNodes overrides the provided minSize.

The same applies to maxSize/maxNodes.

Also note that the warning for the maxNodes case accidentally mentions minNodes.
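A sketch of a possible guard (hypothetical; the actual fix may differ) is to apply the deprecated minNodes/maxNodes values only when minSize/maxSize were not supplied in the payload:

// Sketch of a possible guard: only fall back to the deprecated minNodes/maxNodes
// when minSize/maxSize were not supplied by the user. Hypothetical fix.
const userConfig = spanners[sIdx]; // payload config, before defaults are merged
const merged = { ...nodesDefaults, ...userConfig };

if (merged.minNodes && userConfig.minSize == null) {
  log(`DEPRECATION: minNodes is deprecated, use units = 'NODES' and minSize = ${merged.minNodes}`,
      'WARNING');
  merged.minSize = merged.minNodes;
}
if (merged.maxNodes && userConfig.maxSize == null) {
  log(`DEPRECATION: maxNodes is deprecated, use units = 'NODES' and maxSize = ${merged.maxNodes}`,
      'WARNING');
  merged.maxSize = merged.maxNodes;
}
spanners[sIdx] = merged;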

Cloud Monitoring Dashboards

Many teams have a dashboard that tracks important Spanner metrics. It would be helpful if we had a starter dashboard that users could deploy to get started.

Good metrics to include on the dashboard would be the scaling metrics and node count grouped by instance:

  • High CPU Utilization
  • Rolling CPU Utilization
  • Storage Utilization
  • Node Count

Stepwise method not scaling-in when stepSize<1000

When a Spanner instance reaches a size greater than 1000 PUs, the STEPWISE method does not scale in the instance if stepSize < 1000.

For example:
An instance currently has 4000 PUs and a stepSize of 100.
The instance needs to be scaled in.
The STEPWISE method will suggest 3900 PUs. However, this number will be rounded up to 4000 PUs and the scale-in will not proceed.
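A short sketch of the arithmetic behind this, assuming that suggested sizes are rounded up to the nearest valid processing-unit size (multiples of 100 up to 1000 PUs, multiples of 1000 above that):

// Sketch of the rounding behaviour described above. Assumes suggestions are
// rounded up to the next valid processing-unit size.
function roundUpToValidSize(suggestedPu) {
  if (suggestedPu <= 1000) return Math.ceil(suggestedPu / 100) * 100;
  return Math.ceil(suggestedPu / 1000) * 1000;
}

const current = 4000;
const stepSize = 100;
const suggested = current - stepSize;       // 3900
console.log(roundUpToValidSize(suggested)); // 4000 -> no scale-in happens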

Support deployment to Kubernetes

Currently the Spanner Autoscaler has dependencies on a number of Google Cloud services adjacent to Cloud Spanner, such as Cloud Functions, Pub/Sub, Cloud Scheduler and Firestore.

For organisations/teams that cannot or do not desire to use these services, a deployment option to run the Autoscaler within a Google Kubernetes Engine (GKE) cluster could be introduced. This would enable customers that already run GKE to use the Autoscaler without taking on additional GCP service dependencies.
