
k8s-rds's Introduction

k8s-rds


A Custom Resource Definition for provisioning AWS RDS databases.

State: BETA - use with caution

Assumptions

The node running the pod should have an instance profile that allows creation and deletion of RDS databases and Subnets.

The code searches for the first node and takes the subnets from that node. Depending on whether or not your DB should be public, it then filters the subnets accordingly. If any subnets remain, the DB is attached to those.
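A minimal sketch of that filtering step (the `subnet` type and `filterSubnets` helper are hypothetical simplifications; the real implementation queries the AWS EC2 API):

```go
package main

import "fmt"

// subnet is a simplified stand-in for the EC2 subnet description
// returned by the AWS API (hypothetical type for illustration).
type subnet struct {
	ID     string
	Public bool // true if the subnet routes to an internet gateway
}

// filterSubnets keeps only the subnets matching the desired
// visibility (public or private) of the database.
func filterSubnets(subnets []subnet, public bool) []string {
	var ids []string
	for _, s := range subnets {
		if s.Public == public {
			ids = append(ids, s.ID)
		}
	}
	return ids
}

func main() {
	// Subnets taken from the first node found in the cluster.
	nodeSubnets := []subnet{
		{ID: "subnet-aaa", Public: true},
		{ID: "subnet-bbb", Public: false},
	}
	// A private DB is attached only to the node's private subnets.
	fmt.Println(filterSubnets(nodeSubnets, false)) // [subnet-bbb]
}
```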

Building

go build

Installing

You can start the controller by applying kubectl apply -f deploy/deployment.yaml

RBAC deployment

To create ClusterRole and bindings, apply the following instead:

kubectl apply -f deploy/operator-cluster-role.yaml
kubectl apply -f deploy/operator-service-account.yaml
kubectl apply -f deploy/operator-cluster-role-binding.yaml
kubectl apply -f deploy/deployment-rbac.yaml
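To verify the binding took effect, you can ask the API server whether the operator's service account may now manage the CRD. The account name below is an assumption; check deploy/operator-service-account.yaml and adjust to match.

```shell
# Expect "yes" once the ClusterRole and binding are applied.
kubectl auth can-i create customresourcedefinitions.apiextensions.k8s.io \
  --as=system:serviceaccount:default:k8s-rds
```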

Running

Kubernetes database provisioner

Usage:
  k8s-rds [flags]

Flags:
      --exclude-namespaces strings   list of namespaces to exclude. Mutually exclusive with --include-namespaces.
  -h, --help                         help for k8s-rds
      --include-namespaces strings   list of namespaces to include. Mutually exclusive with --exclude-namespaces.
      --provider string              Type of provider (aws, local) (default "aws")
      --repository string            Docker image repository (default is hub.docker.com)

The provider can be started in two modes:

Local - this will provision a Docker-based database pod in the cluster, providing a database that way

AWS - this will use the AWS API to create an RDS database
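For example (the namespace names are illustrative):

```shell
# Use the AWS provider, watching only two namespaces:
k8s-rds --provider aws --include-namespaces team-a,team-b

# Or provision databases as local pods instead of RDS instances:
k8s-rds --provider local
```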

Deploying

When the controller is running in the cluster you can deploy/create a new database by running kubectl apply on the following file.

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  mykey: cGFzc3dvcmRvcnNvbWV0aGluZw==
---
apiVersion: k8s.io/v1
kind: Database
metadata:
  name: pgsql
  namespace: default
spec:
  class: db.t2.medium # type of the db instance
  engine: postgres # what engine to use postgres, mysql, aurora-postgresql etc.
  version: "9.6"
  dbname: pgsql # name of the initial created database
  name: pgsql # name of the database at the provider
  password: # link to database secret
    key: mykey # the key in the secret
    name: mysecret # the name of the secret
  username: postgres # Database username
  size: 20 # Initial allocated size in GB for the database to use
  MaxAllocatedSize: 50 # max_allocated_storage size in GB, the maximum allowed storage size for the database when using autoscaling. Has to be larger than size.
  backupretentionperiod: 10 # days to keep backups, 0 means disable
  deleteprotection: true # don't delete the database even though the object is deleted in k8s
  encrypted: true # should the database be encrypted
  iops: 1000 # number of iops
  multiaz: true # multi AZ support
  storagetype: gp2 # type of the underlying storage
  tags: "key=value,key1=value1"
  provider: aws # Optional, either aws or local; overrides the value the operator was started with
  skipfinalsnapshot: false # Indicates whether to skip the creation of a final DB snapshot before deleting the instance. By default, skipfinalsnapshot isn't enabled, and the DB snapshot is created.
  

After the deploy is done, you should be able to see your database via kubectl get databases

NAME         AGE
test-pgsql   11h
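Note that the mykey value in the Secret above is the base64 encoding of the plain-text password, as Kubernetes requires for Secret data fields; a value can be produced like this:

```shell
# Encode the database password for the Secret's data field.
echo -n 'passwordorsomething' | base64
# cGFzc3dvcmRvcnNvbWV0aGluZw==
```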

And on the AWS RDS page

(screenshots of the created subnet group and DB instance)

TODO

  • Basic RDS support

  • Local PostgreSQL support

  • Cluster support

  • Google Cloud SQL for PostgreSQL support

k8s-rds's People

Contributors

arminioa, blacksails, clook, dependabot[bot], jordan-huangwei, liskl, ofir-petrushka, sorenmat, tokyo2006, wgarunap


k8s-rds's Issues

feature-request: create configmap with rds url and credentials after rds instance has been created

Hello,

I wonder how hard it would be to create some kind of referenceable object which contains the URL to the RDS instance, and maybe also the credentials.

It would be nice to be able to use those in the deployment of e.g. a backend service which will consume the RDS instance; some kind of AWS RDS service broker.

This could be done via ConfigMaps, I assume?

Currently you are just creating the instances, but there is no way to figure out the RDS instance URL programmatically, right?

You might get the idea.
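A sketch of what such a generated object could look like (the name, keys, and endpoint are all hypothetical, not something the operator currently produces):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgsql-rds # hypothetical: named after the Database object
data:
  DB_HOST: pgsql.xxxxxxxx.eu-west-1.rds.amazonaws.com # illustrative endpoint
  DB_PORT: "5432"
```

The credentials themselves would arguably belong in a Secret rather than a ConfigMap.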

Missing required field CreateDBSubnetGroupInput.SubnetIds

I have created a database following the instructions in the README, and it looks like it has been successful:

# kubectl apply -f https://raw.githubusercontent.com/sorenmat/k8s-rds/master/db.yaml
secret "mysecret" created
database "mypgsql" created
# kubectl get database mypgsql -o yaml
apiVersion: k8s.io/v1
kind: Database
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8s.io/v1","kind":"Database","metadata":{"annotations":{},"name":"mypgsql","namespace":"default"},"spec":{"class":"db.t2.medium","dbname":"thepgsql","engine":"postgres","name":"thatpgsql","password":{"key":"mykey","name":"mysecret"},"size":10,"username":"postgres"}}
  clusterName: ""
  creationTimestamp: 2018-03-30T00:36:18Z
  generation: 0
  name: mypgsql
  namespace: default
  resourceVersion: "965728"
  selfLink: /apis/k8s.io/v1/namespaces/default/databases/mypgsql
  uid: 5c6d05ff-33b2-11e8-a45f-12df4130596a
spec:
  class: db.t2.medium
  dbname: thepgsql
  engine: postgres
  password:
    key: mykey
    name: mysecret
  size: 10
  username: postgres
status:
  message: Created
  state: Created

However, I see this in the logs:

# kubectl logs -lname=k8s-rds
2018/03/30 00:37:39 Starting k8s-rds
2018/03/30 00:37:39 Watching for database changes...
2018/03/30 00:37:39 Database CRD status updated
2018/03/30 00:37:39 Seems like we are running in a Kubernetes cluster!!
2018/03/30 00:37:39 Found node with ID:  i-04bed3bea340496f1
2018/03/30 00:37:39 Found the follwing subnets: 
2018/03/30 00:37:39 Seems like we are running in a Kubernetes cluster!!
2018/03/30 00:37:39 Trying to find the correct subnets
2018/03/30 00:37:40 CreateDBSubnetGroup: InvalidParameter: 1 validation error(s) found.
- missing required field, CreateDBSubnetGroupInput.SubnetIds.

2018/03/30 00:37:40 Creating service 'mypgsql' for 
2018/03/30 00:37:40 Creation of database mypgsql done

…and no indication as to how to fix it, or how to access that database from within one of my pods. I'm running the latest server version (1.9.3) available through kops, and I didn't change the default networking configuration in it, which reads:

  subnets:
  - cidr: 172.20.32.0/19
    name: us-east-1a
    type: Public
    zone: us-east-1a
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

Any thoughts would be greatly appreciated.

Tag instances

You should be able to add tags to the db instances.
Perhaps something like this:

spec:
  tags: env=testing, owner=smo

pod stuck in strange state

E0811 15:18:24.310903       1 runtime.go:69] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x16e0460), concrete:(*runtime._type)(0x179e6c0), asserted:(*runtime._type)(0x195c7a0), missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *crd.Database)
goroutine 52 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1757760, 0xc00025b380)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65 +0x7b
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:47 +0x82
panic(0x1757760, 0xc00025b380)
	/usr/local/go/src/runtime/panic.go:967 +0x15d
main.execute.func2(0x179e6c0, 0xc000116ea0)
	/app/main.go:131 +0x31c
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:212
k8s.io/client-go/tools/cache.newInformer.func1(0x1761620, 0xc0003cb6e0, 0x1, 0xc0003cb6e0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:376 +0x352
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000398fd0, 0xc0003ee390, 0x0, 0x0, 0x0, 0x0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/delta_fifo.go:436 +0x235
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000438100)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:153 +0x40
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00045cf80)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:152 +0x5f
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00015bf80, 0x3b9aca00, 0x0, 0xc000456001, 0xc0000a20c0)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
k8s.io/client-go/tools/cache.(*controller).Run(0xc000438100, 0xc0000a20c0)
	/go/pkg/mod/k8s.io/[email protected]/tools/cache/controller.go:125 +0x2c1
created by main.execute
	/app/main.go:160 +0x81e

Installation issues (RBAC?)

Hi!

I managed to work around #21 by running go build and creating a docker image locally and pushing it to ECR so that Kubernetes could see it.

I have RBAC enabled in my cluster, and it looks like the permissions system isn't letting the k8s-rds binary do its thing upon startup:

$ kubectl logs -lname=k8s-rds
2018/03/29 23:43:45 Starting k8s-rds
panic: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:default:default" cannot create customresourcedefinitions.apiextensions.k8s.io at the cluster scope

goroutine 1 [running]:
main.main()
	/Users/carlosvillela/go/src/github.com/sorenmat/k8s-rds/main.go:160 +0x651

What's the simplest way to solve this?

(I volunteer to document it accordingly in a follow-up PR once we get it working, by the way)

unable to create a client for EC2

Does anybody know the reason of this error?

Kubernetes cluster version: v1.10.6
Provider: AWS

Full log:

2018/11/06 13:54:05 Starting k8s-rds
2018/11/06 13:54:09 Seems like we are running in a Kubernetes cluster!!
2018/11/06 13:54:09 unable to create a client for EC2

Unable to create the databases...

I have ensured I have created AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. I am not seeing any errors in the setup. It shows the database as available, but no DB instance is actually created in AWS.

pgsql 5s

"Cannot create resource at cluster scope" error seen when starting the controller

This issue is related to: #22

What I am seeing is that this error is returned when the controller is started by applying kubectl apply -f deploy/deployment.yaml

The error returned is:

"system:serviceaccount:default:default" cannot create resource "customresourcedefinitions" 
in API group "apiextensions.k8s.io" at the cluster scope

I was able to work around the issue with the steps in the comment here: #22 (comment)

kubectl create clusterrolebinding rds-admin-binding \
   --clusterrole=cluster-admin \
   --user=system:serviceaccount:default:default

Based on the discussion/status in #22, I had expected that the issue had been resolved a while ago - am I missing something? Thanks!

Version given as a number should produce an earlier error

It would be nice if the YAML/JSON were rejected because it doesn't adhere to the schema:

E1015 11:01:22.509341       1 reflector.go:123] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:96: Failed to list *crd.Database: crd.DatabaseList.Items: []crd.Database: crd.Database.Spec: crd.DatabaseSpec.Version: ReadString: expects " or n, but found 1, error found in #10 byte of ...|version":12.3},"stat|..., bigger context ...|"gp2","username":"contentmanagementdb","version":12.3},"status":{"message":"Created","state":"Create|...
E1015 11:01:23.812325       1 reflector.go:123] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:96: Failed to list *crd.Database: crd.DatabaseList.Items: []crd.Database: crd.Database.Spec: crd.DatabaseSpec.Version: ReadString: expects " or n, but found 1, error found in #10 byte of ...|version":12.3},"stat|..., bigger context ...|"gp2","username":"contentmanagementdb","version":12.3},"status":{"message":"Created","state":"Create|...
E1015 11:01:25.210558       1 reflector.go:123] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:96: Failed to list *crd.Database: crd.DatabaseList.Items: []crd.Database: crd.Database.Spec: crd.DatabaseSpec.Version: ReadString: expects " or n, but found 1, error found in #10 byte of ...|version":12.3},"stat|..., bigger context ...|"gp2","username":"contentmanagementdb","version":12.3},"status":{"message":"Created","state":"Create|...

Is it maintained?

And is it a personal project, or used in a professional context?

It looks nice, and trying to understand the level of effort that will be put there later on.
If I use it, I'll obviously help you then. (But it is not clear yet the direction we are taking in our company).

Basically, we need to provide RDS to our multitenant openshift cluster, and provide day 2 operation as well.

Thanks for your insight!

Docker image not found

Hi!

I've been trying to follow the installation steps in the README and just bumped into this:

$ docker pull sorenmat/k8s-rds
Using default tag: latest
Error response from daemon: manifest for sorenmat/k8s-rds:latest not found

Looks like the image isn't being uploaded by Travis?

PS: Thank you so much for k8s-rds. It's exactly what we've been looking for!

Users should be instructed to create AWS-related secret before starting the controller

Users should be instructed to create an AWS-related secret, before performing this action:

kubectl apply -f deploy/deployment-rbac.yaml

The secret to create should take this form (users must supply their own AWS access key ID and secret access key):

apiVersion: v1
kind: Secret
metadata:
  name: k8s-rds
type: Opaque
data:
  AWS_ACCESS_KEY_ID: ################################## (must be base64-encoded)
  AWS_SECRET_ACCESS_KEY: ################################# (must be base64-encoded)
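Alternatively, kubectl can base64-encode the values for you if the secret is created from literals instead of a hand-written manifest (the placeholder values below must be replaced with real credentials):

```shell
# kubectl base64-encodes --from-literal values automatically.
kubectl create secret generic k8s-rds \
  --from-literal=AWS_ACCESS_KEY_ID='AKIA...' \
  --from-literal=AWS_SECRET_ACCESS_KEY='...'
```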
