
azure-schema-operator's Introduction

ArtifactType: kubernetes-operator
Documentation: URL
Language: golang
Platform: AKS
Stackoverflow Tags: operator, schema, kusto, sqlserver

Azure-Schema-Operator


Note: The API is expected to change (while adhering to semantic versioning). Alpha and Beta resources are generally not recommended for production environments. Alpha, Beta, and Stable mean roughly the same for this project as they do for all Kubernetes features.

The Azure Schema Operator project aims to manage schema changes for various Azure resources (e.g. Azure SQL, Azure Data Explorer, Azure Event Hubs) using a declarative approach via Kubernetes resources.

A developer defines the schema in a source location (e.g. a configMap), and the operator makes sure it is applied to all the defined clusters.

Schema-Operator flow

The Operator offloads the heavy lifting to schema tools such as delta-kusto and instead focuses on ensuring the validity of the deployment process.
The operator validates that the schema was deployed on all databases on all clusters, or rolls back to a previous successful version.

Currently supports:

  • Azure Data Explorer (Kusto)
  • SQL Server
  • Event Hubs (Schema Registry)

Soon To Be supported:

  • Cosmos

Status

The project is currently in Alpha status; you can follow the status and project plans on the open tasks page.

Usage

Follow the installation guide to deploy the operator.
The schema operator expects a configMap with the KQL data (generated by delta-kusto on the dev environment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-template-kql
  namespace: default
data:
  kql: |
    .create-or-alter function  Add(a:real,b:real) {a+b}

In this simple example we have a configMap that defines a single function. Now we create the SchemaDeployment object (the schema object to apply to all clusters):

apiVersion: dbschema.microsoft.com/v1alpha1
kind: SchemaDeployment
metadata:
  name: master-test-template
spec:
  type: kusto
  applyTo:
    clusterUris: ['https://sampleadx.westeurope.kusto.windows.net']
    db: 'tenant_'
  failIfDataLoss: false
  failurePolicy: rollback
  source:
    name: dev-template-kql
    namespace: default

The template defines the cluster list (sampleadx) and a regular expression to filter databases by name (tenant_ is our sample filter, but it can be any regular expression).
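
For instance, to target only numbered tenant databases, the applyTo block might look like this (a sketch; the pattern is an assumption and any valid regular expression should work):

applyTo:
  clusterUris: ['https://sampleadx.westeurope.kusto.windows.net']
  db: 'tenant_[0-9]+'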

Authentication

The schema-operator needs access and permissions on the target databases. Authorization can be defined either by MSI (recommended) or by defining a secret.

To use MSI, the AZURE_USE_MSI environment variable needs to be defined on the manager pod.
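
A minimal sketch of the manager container's env for MSI (assuming the variable is simply set to 'true'):

env:
  - name: AZURE_USE_MSI
    value: 'true'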

To use a secret, we need to define a secret with the relevant credentials:

apiVersion: v1
kind: Secret
metadata:
  name: schemaoperator
  namespace: schema-operator-system
type: Opaque
data:
  AZURE_SUBSCRIPTION_ID: <base64 encoding of the subscription>
  AZURE_TENANT_ID: <base64 encoding of the tenant id>
  AZURE_CLIENT_ID: <base64 encoding of the client id>
  AZURE_CLIENT_SECRET: <base64 encoding of the client secret>
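
The same secret can also be created with kubectl, which handles the base64 encoding (the values shown are placeholders):

kubectl create secret generic schemaoperator \
  --namespace schema-operator-system \
  --from-literal=AZURE_SUBSCRIPTION_ID=<subscription-id> \
  --from-literal=AZURE_TENANT_ID=<tenant-id> \
  --from-literal=AZURE_CLIENT_ID=<client-id> \
  --from-literal=AZURE_CLIENT_SECRET=<client-secret>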

and then reference them as env entries on the manager pod:

env:
  - name: AZURE_USE_MSI
    value: 'false'
  - name: SCHEMAOP_CLIENT_ID
    valueFrom:
      secretKeyRef:
        key: AZURE_CLIENT_ID
        name: schemaoperator
        optional: true
  - name: SCHEMAOP_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        key: AZURE_CLIENT_SECRET
        name: schemaoperator
        optional: true
  - name: SCHEMAOP_TENANT_ID
    valueFrom:
      secretKeyRef:
        key: AZURE_TENANT_ID
        name: schemaoperator
        optional: true

Prerequisites

The schema operator is written in Go. To develop the project you need the following (a quick tool check is sketched after the list):

  • Go
  • operator-sdk
  • Docker
  • sqlpackage
  • delta-kusto
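
A quick sanity check that the toolchain is installed (a sketch; it assumes delta-kusto and sqlpackage are available on the PATH):

go version
operator-sdk version
docker --version
which delta-kusto sqlpackage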

Running the tests

The project uses Ginkgo with envtest.

To run the tests locally, simply run:

make test
โ— Mac M1 users (arm64) should run under rosetta or in the dev container, envtest does not support darwin/arm64.

Built With

The project is built using the Operator SDK.

Contributing

Please read our CONTRIBUTING.md, which outlines all of our policies, procedures, and requirements for contributing to this project, as well as the contribution docs on development and testing.

Versioning and changelog

We use SemVer for versioning. For the versions available, see the releases on this repository.

Authors

  • Jony Vesterman Cohen
  • Dmitry Meytin

License

This project is licensed under the MIT License - see the LICENSE file for details

Acknowledgments


azure-schema-operator's Issues

validate helm release

The "Create Release" pipeline doesn't package the helm chart.
If the chart is not pre-packaged it will fail.

We need to trigger the package-helm task and upload the generated package before releasing.
This is a simple fix as the task already exists.
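
A hedged sketch of the missing packaging step (the make target invocation and chart path are assumptions):

make package-helm
# or, packaging the chart directly:
helm package charts/azure-schema-operator -d ./dist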

failure handling tests

Test the Schema operator failure handling process.

We have 3 scenarios (a spec sketch follows the list):

  1. abort - stop reconciling on a broken state
  2. ignore - stop reconciling on a successful state (ignoring any errors)
  3. rollback - roll back to the previous version and re-attempt deployment.
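
The policy is selected on the SchemaDeployment spec; a minimal sketch (rollback appears in the usage example above, while abort and ignore are assumed spellings based on the scenarios listed):

spec:
  type: kusto
  failurePolicy: abort   # assumed values: abort | ignore | rollback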

Choose a local k8s dev env

The Azure Service Operator uses kind (http://kind.sigs.k8s.io/) for local deployments; it provides a simple way to install locally. A kind example is sketched after the list of options below.

4 options:

  • Use the same method they use (we can take the scripts/tasks as is)
  • Minikube
  • kube in Docker client
  • We can also use k3s, which has a lower resource footprint.
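
If kind is chosen, spinning up a local cluster is a one-liner (a sketch; the cluster name is arbitrary):

kind create cluster --name schema-operator-dev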

Bug: Running Dacpac on schema without templateName causes unhandled exception

Describe the bug
When running the Dacpac flow on a specific target schema without specifying a template schema name in the configMap
(cfgMap.Data["templateName"])

an unhandled exception is raised.
The manager will restart and the ClusterExecuter will remain stuck in the Running state.

To Reproduce
Steps to reproduce the behavior:
Run the SQLServer flow on a target without specifying a templateName in the dacpac configMap.

Expected behavior
There are multiple issues and behaviors that need fixing here:

  • Webhook validation is expected to reject this, as it is not valid input.
  • The error event should be caught and handled.
  • The ClusterExecuter should not remain in a stuck state.
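
For illustration, a hedged sketch of a dacpac configMap that includes the key in question (all keys and values here are assumptions, not the operator's documented schema):

apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-template-dacpac
  namespace: default
data:
  templateName: dbo   # read via cfgMap.Data["templateName"]; omitting this key triggers the crash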
