This is an example of an atypical Cartographer supply chain.
A typical supply chain deploys an application to Kubernetes, whereas this supply chain deploys a VM on vSphere.
This example illustrates how an App Operator group could set up a software
supply chain in which source code is continuously built into a VM template
using [Packer] and deployed to vSphere as a VM using [Terraform].
Git Source --> VM Template --> VM
- Kubernetes v1.19+
- vSphere 6.7+
- Carvel tools for templating and submitting groups of Kubernetes objects to the API server
- ytt: templating the credentials
- kapp: submitting objects to Kubernetes
  - kapp-controller: optional, but very helpful for installing all prerequisites in the cluster
- Cartographer
- Flux Source Controller
- Tekton
- Terraform Controller
- Tanzu CLI: not required, but used below to install the prerequisites and for ease of management
- Helm v3 CLI
The easiest way to install all the needed prerequisites is to use the TAP OSS project.
To install just the needed parts, run the following commands:
# Preparation
kubectl create ns tap-oss
tanzu package repository add tap-oss -n tap-oss --url ghcr.io/vrabbi/tap-oss-repo:0.2.4
# Installation
tanzu package install cert-manager -p cert-manager.tap.oss -v 1.6.1 -n tap-oss
tanzu package install cartographer -p cartographer.tap.oss -v 0.2.1 -n tap-oss
tanzu package install flux-source-controller -p flux-source-controller.tap.oss -v 0.21.1 -n tap-oss
tanzu package install tekton-pipelines -p tekton.tap.oss -v 0.32.1 -n tap-oss
helm repo add kubevela-addons https://charts.kubevela.net/addons
helm upgrade --install terraform-controller -n terraform --create-namespace kubevela-addons/terraform-controller --set backend.namespace=terraform
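After the installs complete, it can help to confirm that every package reconciled before moving on. Below is a small sketch of a check, assuming the status column of `tanzu package installed list` reads `Reconcile succeeded` once a package is healthy:

```shell
#!/bin/sh
# all_reconciled: reads `tanzu package installed list` output on stdin and
# exits non-zero if any package row lacks the "Reconcile succeeded" status.
# Skips the header line (NR > 1) and blank lines (NF > 0).
all_reconciled() {
  awk 'NR > 1 && NF > 0 && $0 !~ /Reconcile succeeded/ { bad = 1 } END { exit bad }'
}

# Usage:
#   tanzu package installed list -n tap-oss | all_reconciled && echo "all packages ready"
```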
- Update the values file in this repo to include your environment specific values for example:
#@data/values
---
#! VM Folder in vSphere inventory to create the Template and VM in
vm_folder: demo-folder
#! The vSphere Portgroup name to use for the template creation and VM deployment (Must have DHCP enabled on the network)
vm_network: demo-tf-vms
#! The vSphere Cluster to deploy to
cluster_name: Demo-Cluster
#! The VM name to deploy - (VM name will be suffixed with the deployment time stamp automatically)
vm_name: web-01
#! Datastore name in vSphere to deploy to
datastore_name: vsanDatastore
#! The vSphere Datacenter to deploy to
datacenter_name: Demo-Datacenter
#! vSphere user name to use for packer and Terraform operations
vcenter_username: [email protected]
#! Password for the vSphere user to use for packer and Terraform operations
vcenter_password: Bpovtmg1!
#! SSH username to create on the new VM
vm_user: demo
#! SSH Password for the new user being created on the VM
vm_password: Aa123456
#! FQDN or IP of the vCenter server this supply chain will target
vcenter_fqdn: demo-vc-01.terasky.demo
#! Boolean value to allow or deny insecure vSphere access aka no cert validation
vcenter_insecure: true
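To illustrate how these values are consumed, the templates in this repo can reference them through ytt's `data` module. A minimal sketch (the Secret name here is hypothetical, not necessarily what this repo's templates use) rendering the vCenter credentials as a Kubernetes Secret:

```yaml
#@ load("@ytt:data", "data")
---
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials #! hypothetical name, for illustration only
stringData:
  username: #@ data.values.vcenter_username
  password: #@ data.values.vcenter_password
  server: #@ data.values.vcenter_fqdn
```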
- Use ytt and kapp to deploy the supply chain objects to the cluster using the values file you just updated:
ytt -f ./templates/ -f values.yaml | kapp deploy -a vsphere-vm-carto-sc -f - -y
- Deploy an example workload
kubectl apply -f workload.yaml
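For reference, a minimal Cartographer Workload for this supply chain might look like the sketch below. The repo's `workload.yaml` is authoritative; the label and Git URL here are placeholders, since the supply chain's actual selector labels are defined in its templates:

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: demo-app
  labels:
    apps.tanzu.vmware.com/workload-type: vm #! hypothetical selector label
spec:
  source:
    git:
      url: https://github.com/example/demo-app #! placeholder repo URL
      ref:
        branch: main
```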
- Once all steps have been completed, you can view the object tree that was created for the workload using a CLI tool like kubectl lineage:
# Option 1 - Tree view
kubectl lineage workload/demo-app --exclude-types events
# Option 2 - Table view
kubectl lineage workload/demo-app --exclude-types events --output=split
- To get the IP of the deployed VM from Kubernetes, run the following:
kubectl get configurations.terraform.core.oam.dev demo-app -o jsonpath='{.status.apply.outputs.ip.value}'
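The Terraform apply can take a few minutes, so this output may be empty at first. A small sketch of a retry loop around the same `kubectl` query (the retry count and delay are arbitrary defaults):

```shell
#!/bin/sh
# wait_for_output: runs the given command repeatedly until it prints a
# non-empty value or the retry budget is exhausted.
#   $1 - command to run, $2 - max retries (default 30), $3 - delay seconds (default 10)
wait_for_output() {
  retries=${2:-30}
  delay=${3:-10}
  i=0
  while [ "$i" -lt "$retries" ]; do
    val=$(sh -c "$1")
    if [ -n "$val" ]; then
      echo "$val"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "timed out waiting for output" >&2
  return 1
}

# Usage:
#   wait_for_output "kubectl get configurations.terraform.core.oam.dev demo-app -o jsonpath='{.status.apply.outputs.ip.value}'"
```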