DataRoaster

DataRoaster is an open source data platform running on Kubernetes.

DataRoaster Architecture

DataRoaster has a simple architecture: several operators install and manage the data platform components.

Data Platform Components supported by DataRoaster

DataRoaster supports the following components:

  • Hive Metastore: the standard data catalog for data lakehouses.
  • Spark Thrift Server: a HiveServer2-compatible interface through which Hive queries are executed on Spark.
  • Trino: a popular query engine for data lakehouses.
  • Redash: a popular BI tool.
  • JupyterHub: the multi-user version of Jupyter Notebook.
  • Kafka: a popular event streaming platform.
  • Airflow: a popular workflow orchestrator.

Install DataRoaster

helm repo add dataroaster-operator https://cloudcheflabs.github.io/dataroaster-operator-helm-repo/
helm repo update;

helm install \
dataroaster-operator \
--create-namespace \
--namespace dataroaster-operator \
--version v3.0.8 \
--set image=cloudcheflabs/dataroaster-operator:4.3.0 \
--set dataroastermysql.storage.storageClass=oci \
dataroaster-operator/dataroaster-operator;

dataroastermysql.storage.storageClass is the storage class used for the MySQL volume; replace it with a storage class available in your Kubernetes cluster.
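
If you are not sure which storage classes are available in your cluster, you can list them with a standard kubectl command:

kubectl get storageclass;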

Copy the randomly generated admin password, which is used to access the DataRoaster API.

kubectl logs -f $(kubectl get pod -l app=dataroaster-operator -o jsonpath="{.items[0].metadata.name}" -n dataroaster-operator) -n dataroaster-operator | grep "randomly generated password for user";

The output looks like this:

...
randomly generated password for user 'admin': 9a87f65688a64e999e62c8c308509708
...

Here, 9a87f65688a64e999e62c8c308509708 is the temporary admin password; use it for the first login and change it afterwards.

To access DataRoaster locally, port-forward the dataroaster-operator-service service:

kubectl port-forward svc/dataroaster-operator-service 8089 -n dataroaster-operator;

DataRoaster REST API

To access DataRoaster via REST, see DataRoaster REST API for more details.
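
As a rough sketch of a request (the endpoint path and the basic-auth usage below are assumptions, loosely based on the component APIs proposed in the issues further down; the authoritative routes and authentication scheme are in the REST API documentation), with the port-forward from the installation section running:

curl -u admin:<admin-password> http://localhost:8089/v1/trino;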

DataRoaster Trino Controller

DataRoaster Trino Controller is used to manage all the trino ecosystem components, making it easy to build a trino gateway.

See DataRoaster Trino Controller for more details.

DataRoaster Trino Gateway

DataRoaster Trino Gateway is used to route trino queries dynamically to downstream trino clusters.
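
For example, assuming the gateway exposes a trino-protocol endpoint (the server address and user below are hypothetical), clients can submit queries through it with the standard trino CLI:

trino --server https://trino-gateway.example.com --user etl --execute "SELECT 1";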

See DataRoaster Trino Gateway for more details.

DataRoaster Trino Operator

DataRoaster Trino Operator is used to create/delete trino clusters easily.

See DataRoaster Trino Operator for more details.

DataRoaster Helm Operator

DataRoaster Helm Operator is used to install, upgrade, and uninstall applications packaged as helm charts easily.

See DataRoaster Helm Operator for more details.

DataRoaster Spark Operator

DataRoaster Spark Operator is used to submit and delete spark applications on kubernetes easily using custom resources. Not only spark batch jobs but also long-running applications such as spark streaming applications can be deployed using the dataroaster spark operator.
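
As a rough sketch of that workflow (the manifest file name is a placeholder, and the custom resource schema is defined by the operator; see its documentation for the actual CRD fields):

kubectl apply -f my-spark-application.yaml;
kubectl delete -f my-spark-application.yaml;

The first command submits the spark application described in the custom resource manifest; the second deletes it, and the operator tears the application down.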

See DataRoaster Spark Operator for more details.

License

The use and distribution terms for this software are covered by the Apache 2.0 license.


dataroaster's Issues

Add DataRoaster Operator

DataRoaster Operator is used to control all the data platform components running on kubernetes.
It consists of a MySQL server, the helm operator, and the dataroaster server, which provides a REST API to clients.

Update prometheus jobs when trino configuration is updated.

Every time the trino configuration is updated, the trino coordinator and workers in the dedicated trino cluster are rollout-restarted.
The coordinator and worker pods then get new addresses, so the changed endpoint addresses of the trino coordinator and workers have to be written to the prometheus configmap as prometheus jobs.
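
For instance, the restart and the resulting endpoint changes can be observed with standard kubectl commands (the resource names and namespace here are hypothetical):

kubectl rollout restart deployment/trino-coordinator -n trino;
kubectl get endpoints -n trino;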

Add Trino Gateway to proxy trino clusters.

Trino can be used for ETL, interactive (ad hoc), and scheduled workloads, among others.
We can create separate trino clusters for each of these usages. To load-balance the trino clusters and to support trino coordinator HA, a trino gateway should be used. We can extend this concept to the whole trino ecosystem.

The trino ecosystem consists of the trino gateway, authenticator, admin, trino clusters, and so on.
The following scenario can happen in the trino ecosystem:

  • clients (applications, redash, tableau, cli, etc.) send their queries to the trino gateway.
  • the trino gateway filters the request parameters to authenticate / authorize the user with the authenticator.
  • the requested queries are routed dynamically to the downstream trino cluster matching their usage: ETL, interactive, or scheduled.
  • the admin creates / updates / deletes trino clusters on kubernetes using the trino operator.
  • the admin registers / deregisters / activates / deactivates trino clusters with the trino gateway.

Add trino controller to control the whole trino ecosystem.

The trino controller needs to manage all the trino ecosystem components:

  • create trino operator.
  • create helm operator.
  • create nginx ingress controller.
  • create cert manager.
  • create trino gateway.
  • create trino cluster.
  • create prometheus.
  • create grafana.
  • scale out trino cluster.
  • register trino cluster to trino gateway.
  • deregister trino cluster from trino gateway.
  • detect exhausted trino cluster.

Add Trino Operator

The trino operator will be used to create / update / delete trino clusters.
In addition, it should monitor trino clusters and replace exhausted trino workers with new ones when they are detected.
We can also consider supporting graceful shutdown, so that queries still running on a trino cluster are allowed to finish before the cluster shuts down.

Add Airflow Workflow

Airflow is one of the most popular workflow orchestrators for data lakehouses.
The Airflow component should support the following:

  • dynamic DAG deployment using a git-sync sidecar container (see the sketch after this list).
  • remote logging to S3.
  • KEDA-based worker autoscaling.
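
As a sketch of the git-sync and KEDA items using the stock Apache Airflow helm chart's values (DataRoaster's packaging may expose different settings, and the repository URL is a placeholder):

helm upgrade airflow apache-airflow/airflow \
--set dags.gitSync.enabled=true \
--set dags.gitSync.repo=https://github.com/example/dags.git \
--set dags.gitSync.branch=main \
--set workers.keda.enabled=true;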

Add specific APIs for data platform components.

Specific APIs for the data platform components should be created so that these components can be provisioned automatically. For instance:

  • /v1/spark_thrift_server for spark thrift server
  • /v1/trino for trino
  • /v1/hive_metastore for hive metastore
  • etc.

Bug: Watcher close error in trino operator.

If no action is notified for a long time, the watcher is closed with a message like this:

com.cloudcheflabs.dataroaster.operators.trino.handler.TrinoClusterWatcher: close watcher

Add helm operator

Most of the components provided by dataroaster are based on helm charts. With the helm operator, these components can be installed on kubernetes easily.
