
ha-sap-terraform-deployments's Introduction

Automated SAP/HA Deployments in Public and Private Clouds with Terraform

Build StatusπŸ”—

Supported terraform version 1.1.X


About

This project provides a highly configurable way to deploy the SAP HANA database and SAP S/4HANA (or SAP NetWeaver) on various cloud platforms.

Both public cloud and private cloud scenarios are possible. The major cloud providers Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS) are supported. Furthermore, OpenStack and libvirt/KVM can be used.

It aims to give an improved user experience to our SAP customers and partners: deployment takes minutes or hours instead of days. You can use it for PoC or production deployments.

Everything is powered by SUSE Linux Enterprise Server for SAP Applications.

Overview

Project Components

The diagram above shows the components of an example setup. Several features can be enabled or disabled through configuration options to control the behavior of the HA cluster, SAP HANA, and SAP S/4HANA or SAP NetWeaver; an illustrative terraform.tfvars fragment is shown after the component list below. The setup also depends on the cloud provider that is used.

Components Details

  • SAP HANA Database: HANA might be deployed as a single SAP HANA database instance or as a two-node configuration with system replication. Even HANA Scale-Out scenarios can be deployed, depending on the cloud provider (see Features section). In addition, a SUSE HA cluster can be set up on top of that. Please also have a look at Preparing SAP software.

  • SAP S/4HANA (or NetWeaver): S/4HANA can be deployed with a single PAS instance or as a full stack including ASCS, ERS, PAS and AAS (multiple) instances. In the latter case, a SUSE HA cluster is set up on top of ASCS/ERS. For more information see S/4HANA and NetWeaver and Preparing SAP software.

  • iSCSI server: This provides the STONITH Block Devices used by the SBD fencing mechanism; also see Fencing mechanism. Native fencing mechanisms are available for some cloud environments (see Features section).

  • Monitoring server: The monitoring solution is based on prometheusπŸ”— and grafanaπŸ”—. It provides informative and customizable dashboards to users and administrators. Every node has prometheus exporters installed which are used to collect the needed metrics. For more information see Monitoring of cluster.

  • DRBD cluster: It is used to provide a highly available NFS server for cloud providers that lack a native solution. It is used to mount the SAP NetWeaver shared files. For more information see DRBD. Some cloud providers have native solutions for highly available NFS (see Features section), which should be preferred over the DRBD solution.

  • Bastion server: A bastion server is used to have a single internet-facing entry point (SSH) for the administrator and the provisioning process. Security-wise, it is a best practice to access your machines this way. The availability of this solution again depends on the cloud provider used (see Features section).
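
Most of these components are toggled through Terraform variables in terraform.tfvars. The fragment below is a minimal sketch of such toggles; apart from monitoring_enabled, which is referenced elsewhere in this document, the variable names are illustrative assumptions, so check the provider's terraform.tfvars.example for the actual ones.

# Illustrative terraform.tfvars fragment (variable names other than
# monitoring_enabled are assumptions, not taken verbatim from this repository)
monitoring_enabled = true    # deploy the Prometheus/Grafana monitoring server
drbd_enabled       = false   # DRBD-based NFS cluster; skip if a native NFS service is available
bastion_enabled    = true    # single internet-facing SSH entry point
ha_enabled         = true    # SUSE HA cluster on top of HANA system replication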

For more on various topics, have a look at the following documentation:

Products

This repository supports deployment with the following products:

Vendor Product
SUSE SUSE Linux Enterprise Server for SAP Applications 12 SP5
Certification: SLES for SAPπŸ”— and SAP Process AutomationπŸ”—
SUSE SUSE Linux Enterprise Server for SAP Applications 15 SP4 (or older)
Certification: SLES for SAPπŸ”— and SAP Process AutomationπŸ”—
SAP SAP HANA 2.0 with SPS >= 02
SAP SAP NETWEAVER 7.5 (and later)
SAP SAP S/4HANA 1610
SAP SAP S/4HANA 1709
SAP SAP S/4HANA 1809
SAP SAP S/4HANA 1909
SAP SAP S/4HANA 2020
SAP SAP S/4HANA 2021

Cloud Providers

This repository supports deployment on the following SAP-certified cloud providers:

Vendor Product Certification
Amazon Amazon Web Services (AWS) SAP Hardware Directory for AWSπŸ”—
Microsoft Azure SAP Hardware Directory for AzureπŸ”—
Google Google Cloud Platform (GCP) SAP Hardware Directory for GCPπŸ”—
OpenInfra OpenStack Depends on deployed hardware; get an overview in SAP's Hardware DirectoryπŸ”—
libvirt.org Libvirt not certified

Features

The following features are implemented:

Feature AWS Azure GCP OpenStack Libvirt
SUSE saptune / SAP sapnotes
SUSE's saptune is applied with the correct solution template to configure the systems based on SAP sapnotes recommendations.
For additional information see Tuning Systems with saptuneπŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
HANA single node
Deployment of HANA on a single node.
For additional information see SAP Hardware Directory for AWSπŸ”—
β˜’ β˜’ β˜’ β˜’ β˜’
HANA Scale-Up - performance optimized
Deployment of HANA with system replication in a performance optimized setup.
For additional information see SAP HANA System Replication Scale-Up - Performance Optimized ScenarioπŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
HANA Scale-Up - cost optimized
Deployment of HANA with system replication in a cost optimized (additional tenant DB) setup.
For additional information see SAP HANA System Replication Scale-Up - Cost Optimized ScenarioπŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
HANA Scale-Out - performance optimized
Deployment of HANA Scale-Out (multi node) with system replication in a performance optimized setup.
For additional information see SAP HANA System Replication Scale-Out - Performance Optimized ScenarioπŸ”— and SAP HANA System Replication Scale-Out High Availability in Amazon Web ServicesπŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
HANA Scale-Out - with standby nodes (HANA Host-Auto-Failover)
Deployment of HANA Scale-Out (multi node) with system replication and Host-Auto-Failover via standby nodes.
For additional information see Setting Up Host Auto-FailoverπŸ”— and Azure: Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise ServerπŸ”—.
🚫 β˜’ 🚫 β˜’ ☐
SAP S/4HANA ENSA 1
Deployment of a SAP S/4HANA in Enqueue Replication (ENSA) 1 scenario.
For additional information see SAP NetWeaver Enqueue Replication 1 High Availability Cluster - Setup Guide for SAP NetWeaver 7.40 and 7.50 πŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
SAP S/4HANA ENSA 2
Deployment of a S/4HANA in Enqueue Replication (ENSA) 2 scenario.
For additional information see SAP S/4HANA - Enqueue Replication 2 High Availability Cluster - Setup Guide πŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
SAP S/4HANA single PAS
Deployment of a single S/4HANA PAS (primary instance).
For additional information see SAP S/4HANA - Enqueue Replication 2 High Availability Cluster - Setup Guide πŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
SAP S/4HANA High Availability Cluster
Deployment of a full SAP S/4HANA stack including ASCS, ERS, PAS and AAS (multiple) instances.
For additional information see SAP S/4HANA - Enqueue Replication 2 High Availability Cluster - Setup Guide πŸ”—.
β˜’ β˜’ β˜’ β˜’ β˜’
Deployment in different Availability Zones/Sets
Deployment of virtual instances in different Availability Zones/Sets for HA on hardware level.
β˜’ β˜’ β˜’ ☐ ☐

Legend:

Symbol Explanation
β˜’ feature implemented in this repository
☐ not implemented in this repository
🚫 not recommended by vendor

Project Structure

This project heavily uses terraformπŸ”— and saltπŸ”— for configuration and deployment.

Terraform is used to create the required infrastructure in the specified cloud.

The code is divided into subdirectories for each terraform provider and split into different terraform modules. There are also some abstracted generic_modules.

./ha-sap-terraform-deployments
β”œβ”€β”€ aws
│    └── modules
β”œβ”€β”€ azure
│    └── modules
β”œβ”€β”€ generic_modules
│    └── ...
β”œβ”€β”€ gcp
│    └── modules
β”œβ”€β”€ libvirt
│    └── modules
β”œβ”€β”€ openstack
│    └── modules
…

This makes the code modular and more maintainable.

Salt configures all virtual machine instances that are provisioned by terraform. This includes configuring the operating system, mounting filesystems, installing SAP software, and installing HA components. It does so by using pillars and grains, which are injected by terraform in a flexible and customizable way.

./ha-sap-terraform-deployments
β”œβ”€β”€ pillar_examples
│    └── automatic
│         β”œβ”€β”€ drbd
│         β”œβ”€β”€ hana
│         └── netweaver
β”œβ”€β”€ salt
│    β”œβ”€β”€ bastion
│    β”œβ”€β”€ cluster_node
│    └── ...
…

Terraform will first build up the infrastructure/machines and salt will do the actual provisioning.
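
As a simplified illustration of that injection step (this is not the repository's actual template; the resource name, file path and grains keys below are made up), terraform can render a grains-style file that salt then consumes:

# Sketch only: render an example grains file with the hashicorp/local provider.
# The keys and values below are placeholders, not the repository's real grains.
resource "local_file" "hana_node_grains" {
  filename = "${path.module}/grains.example"
  content  = <<-EOT
    provider: aws
    role: hana_node
    hana_sid: PRD
    monitoring_enabled: true
  EOT
}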

Under the hood, shaptoolsπŸ”— and salt-shaptoolsπŸ”— are used to provide a stable API for accessing SAP HANA and NetWeaver functionality.

The whole architecture stack can be seen here:

Architecture

This repository is intended to be configured and run from a local workstation, but should also be runnable from your cloud provider's cloud shell.

Each provider folder has its own provider-relevant documentation, modules and example configuration. Be sure to get familiar with these before trying it out.

Getting started

SUSE/SAP HA automation project

The SAP software media has to be available and prepared according to Preparing SAP software.

After you have prepared the SAP software, make sure terraform and salt are installed. Clone this repository and follow the quickstart guide of your preferred provider. The guides can be found in ./<provider>/README.md or linked below:

The SUSE SAP automation guides contain a lot more detailed explanations than the short quick start guides.

Each provider folder contains a minimal working configuration example terraform.tfvars.example.

Please be careful about which instance type you use! Selecting SAP-certified systems can lead to unexpectedly high costs.
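
For example, a hypothetical terraform.tfvars entry (the real variable name differs per provider, see the provider's terraform.tfvars.example) could pin a smaller, non-certified size for a PoC:

# Hypothetical AWS example value; certified HANA production sizes are much larger and costlier
hana_instancetype = "r5.xlarge"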

Troubleshooting

In case you have some issue, take a look at this troubleshooting guide.

ha-sap-terraform-deployments's People

Contributors

alvarocarvajald, angelabriel, arbulu89, ayoub-belarbi, clanig, cschneemann, diegoakechi, faust64, hsehic, jamesongithub, juadk, ldevulder, mallozup, melzer-b1, mfriesenegger, mr-stringer, nick-wang, pablomh, pirat013, ricardobranco777, rtorrero, simranpal, stefanotorresi, steffenv-msft, stephenmogg, varkoly, yeoldegrove

ha-sap-terraform-deployments's Issues

Documentation update about workspace name

We need to specify that the terraform workspace name must not contain - or _ characters. Otherwise, the HANA installation will fail with an "invalid hostname" message.

NEXT-RELEASE 2.2.0

Description:

Checklist of things that need to be done before the next release on GitHub.

Checklist

  • fix output.tf - useful for openQA tests (#217 and #220)
  • add monitoring for GCP (#219)
  • add monitoring for AWS (#203)
  • have the choice to disable monitoring on Azure (#182)
  • ntpd issue on SLE12+ (#208)
  • disable ha_exporter by default (#181)

Validation

Test runs should pass on all providers with the pillar_examples/automatic/*.sls files and with default values

Validation with background = true (recommended value):

  • Azure
  • AWS
  • GCP
  • Libvirt

Validation with background = false (default value, and used for automated QA tests with openQA):

  • Azure
  • AWS
  • GCP
  • Libvirt

AWS terraform todo

Descriptions:

TODOs don't belong in README.md files.

I'm moving the TODOs from AWS to a GitHub issue, because the TODOs will be removed from AWS.

To Do for aws

  • Investigate if it is possible to upload the images directly with terraform
  • Check AWS documentation for Hana setup and add required resources. Current configuration works for build validation of new images, but lacks certain resources that are probably needed (Load Balancer, for example) for a complete setup of Hana in AWS.
  • Find if it's possible to create more than one device with iscsi-formula.
  • Add SLES12 compatibility for iscsi-formula.

azure: fix hardcode image values

description:

I have left the monitoring image hardcoded to a specific image.

I think we could make this parameterizable, or at least we should research it.

I personally don't think it should change that often, so even a commit for updating it could be a valid solution.

  // TODO add variable later
  storage_image_reference {
    id        = azurerm_image.monitoring.0.id
    publisher = "SUSE"
    offer     = "SLES-SAP-BYOS"
    sku       = "15"
    version   = "2019.07.17"
  }
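
A possible parameterization, sketched under the assumption that only the SKU and version need to change regularly (the variable names below are made up for illustration, not taken from the repository):

variable "monitoring_image_sku" {
  description = "Image SKU for the monitoring VM (illustrative variable name)"
  default     = "15"
}

variable "monitoring_image_version" {
  description = "Image version for the monitoring VM (illustrative variable name)"
  default     = "2019.07.17"
}

# ...and inside the azurerm_virtual_machine resource:
#   storage_image_reference {
#     publisher = "SUSE"
#     offer     = "SLES-SAP-BYOS"
#     sku       = var.monitoring_image_sku
#     version   = var.monitoring_image_version
#   }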

remove variable running

variable "running" {
  description = "whether this host should be turned on or off"
  default     = true
}

this variable seems cosmetic to me. When we deploy a VM, we always want it running.

We should keep our codebase KISS and remove it.

@nick-wang @ldevulder @alvarocarvajald any thoughts on this? Do you want to keep it? I would be in favor of removing it, since nobody uses it.

[EPIC] Monitoring: monitor and alert promotion/demotion of secondary node to primary

descriptions:

This card will serve as an epic for organizing and tracking some related issues and ordering them.

Tasks

  • investigate how to gather data

Problems:

dashboard related:

  • if node1 reboots we lose all the HA metrics (hawk-apiserver), since the dashboard is pointing to node1.

resource migrations of exporters

  • if node1 reboots we should migrate the hanadb exporter to node2, where the HANA DB will live (since node1 will have no data)

Research how to improve salt error to be less generic

current situation:

Everyone has already encountered this. The problem is that this error doesn't say anything useful for debugging and can have really different root causes.

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/salt/utils/templates.py", line 392, in render_jinja_tmpl
    output = template.render(**decoded_context)
  File "/usr/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
    return original_render(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
    return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 23, in top-level template code
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'init'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/salt/utils/templates.py", line 169, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python3.6/site-packages/salt/utils/templates.py", line 402, in render_jinja_tmpl
    buf=tmplstr)
salt.exceptions.SaltRenderError: Jinja variable 'dict object' has no attribute 'init'
[CRITICAL] Rendering SLS 'base:cluster' failed: Jinja variable 'dict object' has no attribute 'init'
local:
    Data failed to compile:
----------
    Rendering SLS 'base:cluster' failed: Jinja variable 'dict object' has no attribute 'init'

expected results:

We should research and find a better error, and check whether we can error out earlier or in a more precise way, pinpointing why this happens with a smaller, more isolated error. This will help debugging and UX.

azure: cluster doesn't deploy on azure

this is followup from my last pr: #143

I found several issues with the cluster on azure.

hana_public_sku     = "15"
hana_public_version = "2019.03.30"

With these images, the nodes didn't deploy.

devel_mode grain and var should be removed

Description:

The devel_mode grain and variable should be removed in the future.

It adds too many behavioural differences, e.g. it can be really tricky to test.

In theory we should have just one way of doing things, avoiding every variable that can be spared.

Reorganize the terraform files into modules

Terraform provides the capability of creating modules for better code organization and to create reusable components. See: Creating Modules.

Moving some of the components to modules could help to create more complex deployments, like Scale-Out or HANA with multi-system replication, reusing the already existing code.

salt-state names need unique format

description:

we have states called foo-bar and foo_bar2. We need to stick to the suggested upstream format and unify them all under one convention.

Enhancement - Keeping the fileshare with HANA clean (by just uploading SAPCAR and the SAR file)

Consider just having SAPCAR.EXE (Linux edition) and the HANA SAR file in the upload.
Then get the automation to download them from cloud storage and extract them locally on the VM.

Pros.
Keeps the fileshare / S3 bucket nice and clean with just two files in it (or perhaps BLOB storage can be used).
Easy to upload (initially)
Easy to update with new/multiple versions (i.e. V1, V2, etc.).

Cons.
Inefficient: the automation will have to extract each time, which adds to the build time.

Improve monitoring exporter mechanism

description:

we need to improve the way we handle the monitoring exporters' ports/addresses.

Right now we add a default value to the hosts.

But we can improve this mechanism further, for example by adding a grain or other options to have more control over what we monitor (address) and on which port.

Follow-up discussion with @arbulu89

I think that we need to have a specific grains entry for the monitoring module, something like ['address1:port1', 'address2:port2', ...]

the monitoring dashboards should be able to work with different clusters

description:

Question regarding HA monitoring with Grafana. Are we already considering the dashboards to work with several clusters? I'm saying that because when NW is deployed together with HANA, we will have 2 HA clusters. Just wondering....

This issue is a follow-up to that question, which might be legitimate to keep in mind.

In the future we can close this issue or move it elsewhere.

find out a more effective way to deploy to libvirt/kvm when we have low bandwidth

Problem:

I don't have much bandwidth for deploying to the central deployment server.

I run terraform apply from France against the NUE KVM server where we can deploy.

Deploying this way takes time, especially for copying the images from France to NUE.

(There is no way we can do it in terraform-libvirt-kvm; I have already researched this.)

Currently, if the base image already exists, we error out with a message that the resource already exists.

We need to research a way to create a global shared image that is created once and then used in a cached way.

Technical implementation:

This problem can be solved in many ways.

  1. create a terraform null_resource which executes a script that checks whether the image already exists and is the latest one, and otherwise recreates/downloads the latest (see the sketch after this list).

  2. it could also be a crontab job running on the server which downloads and creates the shared image, with the terraform files depending explicitly on it.

For solution 2, we would create something like a special 'golden image' directory for the terraform files, stating that we depend on that image being created by some custom script.

  3. other implementations ...
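
A rough sketch of option 1, assuming a null_resource with a local-exec provisioner (the image path and download URL below are placeholders):

# Sketch only: re-download the shared base image when missing or when the pinned version changes.
resource "null_resource" "base_image_cache" {
  triggers = {
    image_version = "15-SP4-latest"   # bump to force a re-download
  }

  provisioner "local-exec" {
    # Download the base image only if it is not already present on the host.
    command = "test -f /srv/images/sles4sap-base.qcow2 || curl -fLo /srv/images/sles4sap-base.qcow2 https://example.com/sles4sap-base.qcow2"
  }
}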

Jinja variable 'dict object' has no attribute 'monitoring_enabled'

Applying the GCP salted configuration with init_type = "skip-hana", the deployment fails with the following error:

local:
    Data failed to compile:
----------
    Rendering SLS 'base:hana_node' failed: Jinja variable 'dict object' has no attribute 'monitoring_enabled'

Checked also with init_type = "all", and the same issue is observed.

Terraform variable monitoring_enabled usage

I set the variable monitoring_enabled to false in my terraform.tfvars, and terraform still continues to create the monitoring module. It also asks me to specify "monitoring_srv_ip".
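
One common way to wire such a flag up, sketched here with an illustrative module path and assuming a Terraform version that supports count on modules (0.13 or later; on older versions the conditional has to live on each resource inside the module):

variable "monitoring_enabled" {
  type    = bool
  default = false
}

module "monitoring" {
  source = "./modules/monitoring"        # illustrative path
  count  = var.monitoring_enabled ? 1 : 0
}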

Configure Chrony instead of NTP for SLE-15.

NTP was replaced by Chrony on SLE-15 and higher code streams, so the deployments should be adjusted to configure Chrony instead of NTP.

NTP is still available on the sle-module-legacy and it is supported, but as the legacy module is not supported for the entire product lifecycle, this switch should be done ASAP.

README.md in salt folder doesn't make sense

I have noticed that the README in the salt folder doesn't make any sense.

We should update it to show the relevant information about the folder.
Otherwise, it would actually make more sense to delete the doc file.

research how to remove hcl weirdness [later]

description:

This issue is a follow-up.

We should perhaps research how to improve the HCL syntax. I will try to see if I can do it, but I have already dug quite a bit and the current syntax is the cleanest of the ugly ones :grin:

so it can be tricky to find a better one

details

What would happen here if we have 2 disks in the additional_disk list?
The solution looks quite strange (maybe there is no better one).
Anyway, we could improve it in the future as well; I don't really mind if it works

Originally posted by @arbulu89 in #116

Consider renaming (or link) AWS and GCP directories

I'm doing some tests to be able to execute our Public Cloud tests within openQA (to have tests automatically triggered for each new build), and a minor change may be needed to be able to reuse some code from the qa-kernel team: they are using 'EC2' for AWS and 'GCE' for 'GCP' (these acronyms are valid), and in our code we use 'aws' and 'gcp' for the directory names.

So we could either rename the directories or add links?

Documentation

It might be worth commenting in the 'How to use' section that the image.tf file needs to be removed if using public images.

(There is a section on renaming virtualmachines.tf-publicimg, but not on the fact that image.tf is not needed either.)

If future versions will be designed for 'public images first', then you can ignore this.

grains cleanup and minimalisation

We have several grains which are not coherent across providers and the codebase.

We can remove some of them or merge them in order to have as few variables as possible.

research:

propose and research which grains can be removed or merged together

Tasks: (ordered by prio)

Also unrelated, but part of this refactoring:

#105 (use a consistent syntax: - vs _, etc.)

Provide token mechanism for AUTH scc registration

Problem Description

In our terraform.tfvars we ask for REG_CODE and REG_email

This is kind of tedious since it might force people/newbies to create the registration credentials, etc.

Having a token mechanism would be more straightforward, and we could in theory re-use the same token everywhere, also for CI.

Technical considerations:

(SUSE/connect#416)

  • Consider whether we want to keep both methods or have only one replacing the other.

ssh keys create step missing in README quickstart

At least in the libvirt provider README file, the SSH key creation step is not mentioned in the quickstart chapter.

The user would need to run:

cd salt/hana_node/files
mkdir sshkeys
cd sshkeys
ssh-keygen -t rsa -f id_rsa_cluster -N ""   # create the key pair id_rsa_cluster without a passphrase

Bring back maps for azure config

desc:

We are using variables instead of a map for input parameters.

Originally the problem was that terraform had problems with the map; I suspect it was a terraform internal issue.

We might need to investigate this and bring the maps back if needed.
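
A rough sketch of what bringing the map back could look like, grouping the flat hana_public_* values mentioned in the other Azure issue (the variable name and defaults are illustrative, and the syntax assumes Terraform 0.12 or later):

variable "hana_public_image" {
  type = map(string)
  default = {
    publisher = "SUSE"
    offer     = "SLES-SAP-BYOS"
    sku       = "15"
    version   = "2019.03.30"
  }
}

# referenced as var.hana_public_image["sku"], var.hana_public_image["version"], ...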

historic:

Honestly, I preferred the map kind of data for this variable. But I don't know how it behaves with terraform 0.12.
Anyway, it's just a comment.

Originally posted by @arbulu89 in #143

ha_exporter installed by default

Right now, the ha_exporter is installed by default, without considering if the monitoring stack is created or not.
In order to align all the components, the ha_exporter should only be installed together with the whole monitoring stack.
What do you think?
For that, we should just set this value to False in the cloud_providers (aws and gcp don't even have a monitoring stack) and, in the automatic pillar example, make it depend on the monitoring_enabled parameter.

Examples:
https://github.com/SUSE/ha-sap-terraform-deployments/blob/master/pillar_examples/aws/cluster.sls#L13
https://github.com/SUSE/ha-sap-terraform-deployments/blob/master/pillar_examples/automatic/cluster.sls#L29

In fact, we should take advantage and do the same with the hanadb_exporter (only enable it according to this monitoring_enabled parameter).

Release a new snapshot before terraform v0.12 and its refactoring

Problem/Description:

We need to publish a new GitHub release before we migrate to terraform v0.12 and refactor for the new HCL language.

This way we would have two distinct releases and events, keeping a historical track.

tasks:

  • release with github.

notes:

I don't think we need much work on the release itself. Anyone could pick this up.

I wrote this card as a reminder.

AZURE cloud provider improvements

description:

This card is more of a meta-tracker for issues I found on Azure, to keep track of them.

I will refine the card step by step and update it.

tasks

improve doc for Google Cloud terraform for the gcp_credentials_file JSON creation

The GCP documentation is not that bad, but we need more hints about the creation of the gcp_credentials_file for newbies.

We need to improve the doc; especially, we need to provide some hints and clear steps for a new user who wants to create a new GCP credentials file.

https://github.com/SUSE/ha-sap-terraform-deployments/tree/master/gcp/terraform_salt#prerequisites

# Credentials file for GCP
gcp_credentials_file = "/home/brother-newbie/SUSE/cloud/foo.json"
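
For context, a minimal sketch of how such a file ends up being consumed by the Google provider (the project and region values are placeholders):

variable "gcp_credentials_file" {
  type = string
}

provider "google" {
  credentials = file(var.gcp_credentials_file)
  project     = "my-sap-project"   # placeholder
  region      = "europe-west1"     # placeholder
}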
