
aks-azcli-terraform's Introduction

Getting started with Terraform and Kubernetes on Azure AKS

We will learn how to create Kubernetes clusters on Azure Kubernetes Service (AKS) with the Azure CLI and Terraform.

Automate creating two clusters (dev and prod) complete with an Ingress controller.

Lessons

Create and update an AKS cluster through the Azure CLI

Ways to provision a cluster

There are three ways:

  • (1) Using the Azure web portal ("ClickOps") -> avoid, because it does not scale well and is error-prone
  • (2) Using the Azure CLI
  • (3) Defining the cluster with an infrastructure-as-code tool such as Terraform

You should aim to use options 2 and 3 together.

In this lesson we will do the following through the Azure CLI:

  • Learn basic commands
  • Create a new Resource Group
  • Create a cluster
  • Update cluster configuration such as node count and autoscaling
  • Delete everything previously created

Main concepts

  • AKS is Azure's equivalent of Amazon's EKS: a managed Kubernetes cloud service.
  • kubectl is the Kubernetes CLI; AKS itself is managed with the az aks commands.
  • AKS manages the cluster control plane, including the Kubernetes API server and the etcd database.
  • We use az, the Azure CLI.

How to

First, list existing clusters to see if any have already been created:

az aks list

To create a new cluster, we first need a resource group to assign it to.

Use an existing one

az group list

Or, create a new one.

az group create --name ResourceGroupName --location brazilsouth

Register the required resource provider:

az provider register --namespace Microsoft.ContainerService

Create the cluster, assigning it to a resource group. If you do not provide SSH credentials, you can have them generated for you. Optionally, you can choose how many nodes it should have.

az aks create -g ResourceGroupName -n ClusterName --generate-ssh-keys --node-count 2

Update it by passing the parameters you wish to change. You should always provide the cluster name and its resource group:

az aks update --resource-group ResourceGroupName --name ClusterName --enable-cluster-autoscaler --min-count 1 --max-count 2

DONE! Cluster created

Deleting resources

First we shall delete the cluster

az aks delete --name ClusterName --resource-group ResourceGroupName

Then finally the group

az group delete --name ResourceGroupName

DONE! All resources previously created have been destroyed. Verify by running the list commands for both the group and the cluster.

AKS cluster with Terraform


  • Terraform is an open-source infrastructure-as-code (IaC) tool
  • You describe the desired infrastructure in HCL, and Terraform takes care of the rest
  • Make sure you have the Terraform binary installed

How to

    1. Get your subscription ID and take note of it
az account list
    2. Create a Contributor service principal so Terraform can act on our behalf
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/YOUR_SUB_ID"

This will return appId, password, tenant, displayName and name.

    3. Export environment variables so Terraform can access them
export ARM_CLIENT_ID=appId
export ARM_SUBSCRIPTION_ID=subscriptionId
export ARM_TENANT_ID=tenant
export ARM_CLIENT_SECRET=password
    4. Create the most basic .tf file, called main.tf, and write the following code. As we can see, it correlates with lesson number 1.

Here, instead of running every single command through the Azure CLI, we describe the desired setup once and Terraform automates its creation.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.48.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup"
  location = "northeurope"
}
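
Note: the later lessons (the extra node pool, the Ingress add-on) reference an azurerm_kubernetes_cluster resource named cluster that these notes never show. A minimal sketch of what it could look like — the name, dns_prefix, node size and identity block are illustrative assumptions, not taken from the original notes:

```hcl
# Sketch only: later steps reference azurerm_kubernetes_cluster.cluster,
# which is not shown in these notes. Names and sizes are assumptions.
resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "learnk8scluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "learnk8s"

  default_node_pool {
    name       = "default"
    node_count = "2"
    vm_size    = "standard_d2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```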
    5. Run init so Terraform can download the provider it needs and initialize the working directory; Terraform will also keep a state file to track what it has already created
terraform init
    6. Plan and review before creating. The following command allows us to do just that
terraform plan
    7. Apply the plan
terraform apply

And DONE! We have successfully used Terraform to create a resource group.

    8. Destroy everything created
terraform destroy
Create a production-ready cluster by adding a second node pool
  1. Reduce the node count from 2 to 1, under default_node_pool in the main.tf file


  2. Add the following resource to main.tf

resource "azurerm_kubernetes_cluster_node_pool" "mem" {
  kubernetes_cluster_id = azurerm_kubernetes_cluster.cluster.id
  name                  = "mem"
  node_count            = "1"
  vm_size               = "standard_d11_v2"
}
  3. Plan, check and apply
terraform plan
terraform apply
  4. Verify the new pool has been created
kubectl get nodes --kubeconfig kubeconfig
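The kubeconfig file used here is produced by Terraform. A sketch of what an output exposing it might look like — the output name is an assumption:

```hcl
# Sketch: expose the cluster's kubeconfig as a (sensitive) Terraform output.
output "kube_config" {
  value     = azurerm_kubernetes_cluster.cluster.kube_config_raw
  sensitive = true
}
```

With a recent Terraform version it can then be written to a local file with terraform output -raw kube_config > kubeconfig.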
Deploy a simple app
  1. Create a YAML deployment file (deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  selector:
    matchLabels:
      name: hello-kubernetes
  template:
    metadata:
      labels:
        name: hello-kubernetes
    spec:
      containers:
        - name: app
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
  2. Submit the definition to the cluster
kubectl apply -f deployment.yaml

*Export the kubeconfig file to ~/.kube/config so you don't have to pass --kubeconfig kubeconfig all the time
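
As a convenience, that export can be done like this (a sketch; it assumes the kubeconfig file sits in the current directory):

```shell
# Option 1: point kubectl at the file for the current shell session only
export KUBECONFIG="$PWD/kubeconfig"

# Option 2: copy it to the default path kubectl reads automatically
mkdir -p ~/.kube
if [ -f kubeconfig ]; then
  cp kubeconfig ~/.kube/config
fi
```

With either option, a plain kubectl get nodes will reach the cluster without the --kubeconfig flag.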

  3. Get the name of the pod
kubectl get pods
  4. Connect to the pod
kubectl port-forward <NAME-OF-THE-POD> 8080:8080

The application is now exposed, but that is not a great way to do it. Use a Service of type LoadBalancer to expose it instead.

  5. Create a service-loadbalancer.yaml file
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    name: hello-kubernetes
  6. Submit the YAML
kubectl apply -f service-loadbalancer.yaml
  7. DONE! The application is exposed through its public IP
kubectl get svc

However, each load balancer serves only one Service at a time. Load balancers are expensive, and there's a way to get around this.

Routing traffic with ingress
  1. Use the Azure add-on for an Nginx Ingress controller. Append the following to main.tf, under the cluster resource config
  addon_profile {
    http_application_routing {
      enabled = true
    }
  }
  2. Plan
terraform plan
  3. Apply
terraform apply
  4. Check the created pods
kubectl get pods -n kube-system | grep addon
  5. Create an ingress.yaml file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-kubernetes
                port:
                  number: 80
  6. Deploy the ingress file
kubectl apply -f ingress.yaml

DONE! A fully working cluster that can route live traffic!

To serve HTTPS traffic, you need to use Helm

Fully automated Dev & Production environments with Terraform modules

Creating separate environments when provisioning infrastructure is one of the most common tasks.

We shall use Terraform modules to encapsulate resources using variables and expressions.

  1. From task 5, move the files main.tf, variables.tf and outputs.tf to a new folder named main, and create a new main.tf where the old one used to be

  2. In the subfolder, append the env_name variable to the resource group name

# output truncated for clarity
resource "azurerm_resource_group" "rg" {
  name     = "learnk8sResourceGroup-${var.env_name}"
}
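
For reference, the module's variables.tf is assumed to declare the variables used here — a minimal sketch (descriptions are illustrative):

```hcl
# Sketch: variable declarations assumed by the module (names match usage above).
variable "env_name" {
  description = "Environment name appended to resource names (e.g. dev, prod)"
  type        = string
}

variable "cluster_name" {
  description = "Base name for the AKS cluster"
  type        = string
}
```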
  3. In the main folder, edit the main.tf file and set it to create two clusters: one for dev and another for prod.
module "dev_cluster" {
    source       = "./main"
    env_name     = "dev"
    cluster_name = "learnk8scluster"
}

module "prod_cluster" {
    source       = "./main"
    env_name     = "prod"
    cluster_name = "learnk8scluster"
}
  4. Set the default node pool count to 1, to avoid going beyond what Azure's free tier allows
  default_node_pool {
    name       = "default"
    node_count = "1"
    vm_size    = "standard_d2_v2"
  }
  5. Preview the changes and apply: terraform plan, then terraform apply

  6. If you wish to have different instance types for each environment, create a new variable, change the vm_size parameter (under default_node_pool) to a dynamic one, and add a new parameter to the top-level main.tf file.

variable "instance_type" {
  default = "standard_d2_v2"
}
default_node_pool {
  name       = "default"
  node_count = "1"
  vm_size    = var.instance_type
}
module "dev_cluster" {
    source        = "./main"
    env_name      = "dev"
    cluster_name  = "learnk8scluster"
    instance_type = "standard_d2_v2"
}

module "prod_cluster" {
    source        = "./main"
    env_name      = "prod"
    cluster_name  = "learnk8scluster"
    instance_type = "standard_d11_v2"
}

Done!

Thank you!

@Kristijan Mitevski
