amazon-archives / aws-service-operator

AWS Service Operator allows you to create AWS resources using kubectl.
License: Apache License 2.0
Allows you to create DynamoDB resources using an operator
apiVersion: operator.aws/v1alpha1
kind: DynamoDB
metadata:
  name: chrishein-dynamodb-table-2
spec:
  tableName: chrishein-dynamodb-table-2
  hashAttribute:
    name: user_id
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  writeCapacityUnits: 5
Just like the ExternalName Service integration, this allows ConfigMaps to be created and templated with the proper parameters from the CFTs. Use the Additional Resources mechanism for a Service.
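As a sketch of what the operator might emit for a DynamoDB table (the data keys here, tableName and tableArn, are illustrative assumptions, not the operator's confirmed output):

```yaml
# Hypothetical ConfigMap generated from the CFT outputs;
# key names below are assumptions for illustration only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: chrishein-dynamodb-table-2
data:
  tableName: chrishein-dynamodb-table-2
  tableArn: arn:aws:dynamodb:us-west-2:123456789012:table/chrishein-dynamodb-table-2
```

Pods could then consume the table name via envFrom or a volume, the same way the ExternalName Service gives them a stable DNS handle.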
Should manage DynamoDB resources and be formatted like:
apiVersion: operator.aws/v1alpha1
kind: DynamoDB
metadata:
  name: chrishein-dynamodb-table-2
spec:
  tableName: chrishein-dynamodb-table-2
  hashAttribute:
    name: user_id
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  writeCapacityUnits: 5
This allows you to create CFTs; when they are deployed, they are pushed to S3.
Write out the thoughts behind the project.
This should describe how to deploy each type of resource and document the available keys, including best practices. This issue should be broken down and linked to sub-issues for each resource type. Maybe these are added into the code-generation libraries.
This should create any engine of the AWS::RDS::DBInstance class, meaning it should support creating mysql, postgres, mariadb, etc. all via the same spec.
apiVersion: operator.aws/v1alpha1
kind: RDS
metadata:
  name: my-rds
spec:
  engine: postgres
  backupRetentionPeriod: 30
  instanceClass: db.m1.small
  version: 9.6.3
  storage:
    capacity: 100GB
    type: io1
    encrypted: true
    iops: 1000
  versioning:
    allowMajorUpgrade: true
    allowMinorUpgrade: true
  user: # optional
    username: chris # optional
    password: foobar # optional
Just like support for #43 but this one will create Kubernetes Secrets.
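A minimal sketch of the kind of Secret the operator could emit for the RDS user above (the Secret name and key names are assumptions for illustration, not the implemented output):

```yaml
# Hypothetical Secret generated alongside the RDS resource;
# name and keys are illustrative assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: my-rds
type: Opaque
stringData:
  username: chris
  password: foobar
```

A Deployment could then mount this via secretKeyRef instead of hard-coding credentials in the manifest.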
This should be considered a string but be a generated type; when this type is used it shouldn't be a part of the actual object other than being stored in the status of the object.
Use cases:
This should take an SNSTopic resource and bind it using a Subscription; this should be able to reference both local SQS resource types and non-SQS types.
apiVersion: operator.aws/v1alpha1
kind: SNSSubscription
metadata:
  name: sns-sqs-subscription
spec:
  snsTopicName: sns-topic
  subject:
    protocol: sqs
    name: sqs-queue
    # endpoint: optional
Right now cft.go and controller.go are packed into the same package; they should each be their own template and sub-package, allowing independent testing. Also, rename controller.go to operator.go.
Using some cluster in AWS, have test results published back to the repo so we can test PRs with better reliability, as well as using this to release new packages.
Making it so the project can be released with ease.
Need some more contributors!
With Kubernetes 1.11, CRDs support subresources like scale; use this to enable ElastiCache instances to scale when you call that command.
kubectl scale elasticache my-elasticache --cache-nodes 5
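To make that command work, the generated CRD would need the scale subresource enabled, roughly like this sketch against the Kubernetes 1.11 apiextensions API (mapping the replica paths onto spec.cacheNodes/status.cacheNodes is an assumption, not the project's confirmed schema):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: elasticaches.operator.aws
spec:
  group: operator.aws
  version: v1alpha1
  scope: Namespaced
  names:
    kind: ElastiCache
    plural: elasticaches
  subresources:
    # Wires `kubectl scale` to the CRD; the field paths below are
    # illustrative assumptions for how cacheNodes could map to replicas.
    scale:
      specReplicasPath: .spec.cacheNodes
      statusReplicasPath: .status.cacheNodes
```

With this in place, the operator's reconcile loop would observe the changed spec.cacheNodes and update the CFT accordingly.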
This resource will allow a developer to create an ElastiCache instance using
the CRD model, it should be generic enough to support both memcached and redis.
apiVersion: operator.aws/v1alpha1
kind: ElastiCache
metadata:
  name: elasticache
spec:
  autoMinorVersionUpgrade: true
  azMode: # (single-az|cross-az) only memcached
  nodeType: cache.m3.medium
  engine: # (memcached|redis)
  version: 4.0.10
  cacheNodes: 3
  tags:
    - name: Usage
      value: caching
After issue #34 is complete, update the S3Bucket, SQSQueue, and DynamoDBTable resources to remove the resource names which aren't changeable.
This should create s3buckets and be formatted like so:
apiVersion: operator.aws/v1alpha1
kind: S3Bucket
metadata:
  name: test-bucket
spec:
  bucketName: test-bucket-name
  versioning: false
  logging:
    enabled: false
    prefix: "archive"
This makes it possible to get an ECR repository for a project while using the operator.
apiVersion: operator.aws/v1alpha1
kind: ECR
metadata:
  name: test-app
spec:
  lifecyclePolicy: |
    # ...
  repositoryPolicy: |
    # ...
Should create an SNSTopic; this is meant to be used with an SNSTopicSubscription. TopicName for the CFT should be a UUID resource type after issue #34 is finished.
apiVersion: operator.aws/v1alpha1
kind: SNSTopic
metadata:
  name: sns-topic
spec:
  displayName: sns-topic
Right now the queuing internally uses an SQS queue and SNS topic per resource, meaning that when you add additional resources we have to create new watchers for each component; this turns into a lot of resources created for the full operator to run.
Instead, it would be better to use a single SQS queue and many SNS topics, then have the singular queue fan out to the necessary resources for processing.
This should go along with the refactoring out of OperatorKit (#23).
This feature will be mostly implemented in the aws-operator-codegen package, but this would set up a separate package with a function that the server calls, auto-generated from the model files, removing the last manual step for adding new code.
The --resources flag is meant to turn on and off operators for the selected resources, but it's not currently implemented.
This should remove any Services or ConfigMaps when you clean up the cluster resources; for example, when you make an S3 bucket and it creates a ConfigMap and Service, those should be cleaned up after the fact.
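One way to get this cleanup for free, assuming the operator sets owner references on everything it creates (a sketch, not the current behavior), is to point an ownerReference at the CRD instance so Kubernetes garbage-collects the dependents when the S3Bucket is deleted:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-bucket
  ownerReferences:
    # The uid must be copied from the live S3Bucket object at creation
    # time; with this set, deleting the S3Bucket garbage-collects this
    # ConfigMap automatically.
    - apiVersion: operator.aws/v1alpha1
      kind: S3Bucket
      name: test-bucket
      uid: <uid-of-the-s3bucket-object>
data:
  bucketName: test-bucket-name
```

The generated Service would carry the same ownerReference, so no explicit delete logic is needed in the operator.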
Should create SQS Queues and be formatted:
apiVersion: operator.aws/v1alpha1
kind: SQS
metadata:
  name: chrishein-test-sqs-1
spec:
  contentBasedDeduplication: false
  delaySeconds: 1
  usedeadletterQueue: false
This issue is to create an updated deployment manifest for the operator and push a built image (or multiple) to some registry for usage. The end result is a deployable manifest that someone could deploy into their clusters. The image shouldn't be named aws-service-operator until this is OSS'ed.
As of right now, you must supply an S3 bucket for the operator to use for storing the CFTs; it should be possible to add this via a CRD, meaning you deploy the operator with a reference to an S3Bucket that gets created when the operator comes alive.
Start with building the support for AWS Secrets Manager and Vault, then extend this to support other password managers.
The idea is to not require the use of Kubernetes Secrets to store sensitive information for the workloads. This should allow for a seamless configuration potentially referencing a secret manager at boot which will automatically be used OR each manifest could reference the secret manager of choice.
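One possible shape for the per-manifest option (the field names below are entirely hypothetical, not an implemented spec):

```yaml
apiVersion: operator.aws/v1alpha1
kind: RDS
metadata:
  name: my-rds
spec:
  engine: postgres
  user:
    # Hypothetical fields: pull credentials from a secret manager
    # instead of inlining them or creating a Kubernetes Secret.
    secretManager: aws-secrets-manager
    secretName: prod/my-rds/credentials
```

The boot-time variant would instead configure the manager once on the operator itself, and every resource would resolve credentials through it by default.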
Creates SQS queues using CRDs like this:
apiVersion: operator.aws/v1alpha1
kind: SQS
metadata:
  name: chrishein-test-sqs-1
spec:
  contentBasedDeduplication: false
  delaySeconds: 1
  usedeadletterQueue: false
This should show how to set up the operator locally and build using the codegen libraries to build and deploy new resources.
My .gitignore was too open and removed my cmd directory.
Always important to know what will happen to the project
Should upload the raw template to the S3 bucket set up when creating the operator, and be formatted like so:
apiVersion: operator.aws/v1alpha1
kind: CloudFormationTemplate
metadata:
  name: s3bucket
data:
  key: s3bucket.yaml
  template: |
    # ... Raw JSON or YAML template
Idea is to give more clarity into what this project solves upfront since it can
be fairly complex to understand all that it can do.
This doesn't technically do anything yet, but since the resources are code generated, this will make the history really simple.
Without model files, just the core code base.
After talking with @jaymccon about how the Service Broker works separating these roles will help with long-running CFTs.
Right now the ConfigMaps and Services are created only after the AWS resource is created; this makes it difficult to have a proper lifecycle. If they came up first, I could halt Pods and Deployments from being alive prior.
Right now, just to make this work, it has a very open policy; this should be pared down before we have production users.
This should live in configs/aws-operator.json, where the readme documents it, and should have a sample role that allows CFTs to create any resources we have a type for.
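A sketch of what a scoped policy for that role might look like (CloudFormation-style YAML; the service list is illustrative, and a real role should enumerate only the resource types the operator supports):

```yaml
# Illustrative role fragment, not the shipped configs/aws-operator.json.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - cloudformation:*
        - dynamodb:*
        - s3:*
        - sqs:*
        - sns:*
        - ecr:*
      Resource: "*"
```

Tightening Action and Resource per supported type is the follow-up work this issue describes.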
OperatorKit, although it was great to get started with and allowed the project to move really fast, has created complications because there is no way to create a shared context across both the queue and the informer library. Using a built-in package will help remove this issue. Use https://github.com/christopherhein/operator-kit as a reference for what needs to be done.
Updates that should make this better: use a set timeout on the informer so that the cache is resynced periodically (this will cause issues with API limits, but that will be easy to deal with), and define an interface that each operator can implement instead of what we do now.
I think this will be successful if the server.go package loads up one single package from the root of pkg/operator/base, which then loops through all the loaded operators and initializes them.
This resource type will be pulled into other resources just like the CloudFormationTemplates, and should be fetched by reference for services like rds, s3, etc.