
Open Service Broker™ for Azure


This project is no longer being maintained by Microsoft. Please refer to https://github.com/Azure/azure-service-operator for an alternate implementation.

Open Service Broker for Azure is the open source, Open Service Broker-compatible API server that provisions managed services in the Microsoft Azure public cloud.


CLOUD FOUNDRY and OPEN SERVICE BROKER are trademarks of the CloudFoundry.org Foundation in the United States and other countries.

Supported Services

Stable Services

Preview Services

Experimental Services

Note for AzureChinaCloud: OSBA currently supports managing resources in both AzurePublicCloud and AzureChinaCloud. However, the two cloud environments differ. Before you create a resource in AzureChinaCloud, please check the documentation on known differences and make sure your resource meets the requirements. There may also be undocumented differences that cause resource creation in AzureChinaCloud to fail; please raise an issue if you find that you can't create a resource there.

Quickstarts

Go from "I have an Azure account that I have never used" to "I just deployed WordPress and know what OSBA means!"

  • The Minikube Quickstart walks through using the Open Service Broker for Azure to deploy WordPress on a local Minikube cluster.
  • The AKS Quickstart walks through using the Open Service Broker for Azure to deploy WordPress on an Azure Managed Kubernetes Cluster (AKS).

Got questions? Ran into trouble? Check out our Frequently Asked Questions.

Getting Started on Kubernetes

Installing

Prerequisites

You'll need a few prerequisites before you run these examples on Kubernetes. Instructions on how to install each prerequisite are linked below:

Service Catalog CLI

Once you've installed the prerequisites, you'll need the Service Catalog CLI, svcat, to introspect the Kubernetes cluster. Please refer to the CLI installation instructions for details on how to install it on your machine.
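Once installed, a quick way to confirm that svcat can reach your cluster (and, later, that the broker has been registered) is to list brokers and service classes:

$ svcat get brokers
$ svcat get classes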

Helm Chart

Use Helm to install Open Service Broker for Azure onto your Kubernetes cluster. Refer to the OSBA Helm chart for details on how to complete the installation.
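For orientation, an install typically looks like the following. This is a sketch assuming Helm 2-era syntax and the chart's documented repository URL and value names; the OSBA Helm chart README is authoritative:

helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install azure/open-service-broker-azure --name osba --namespace osba \
  --set azure.subscriptionId=$AZURE_SUBSCRIPTION_ID \
  --set azure.tenantId=$AZURE_TENANT_ID \
  --set azure.clientId=$AZURE_CLIENT_ID \
  --set azure.clientSecret=$AZURE_CLIENT_SECRET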

OpenShift Project Template

Deploy OSBA using an OpenShift Project Template.

  • You must have Service Catalog already installed on OpenShift in order for this to work

Create a new OpenShift project

oc new-project osba

Process the OpenShift Template

oc process -f https://raw.githubusercontent.com/Azure/open-service-broker-azure/master/contrib/openshift/osba-os-template.yaml  \
   -p ENVIRONMENT=AzurePublicCloud \
   -p AZURE_SUBSCRIPTION_ID=$AZURE_SUBSCRIPTION_ID \
   -p AZURE_TENANT_ID=$AZURE_TENANT_ID \
   -p AZURE_CLIENT_ID=$AZURE_CLIENT_ID \
   -p AZURE_CLIENT_SECRET=$AZURE_CLIENT_SECRET \
   | oc create -f -
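Once the template has been processed, you can watch the broker's pods come up before moving on (a generic check; the exact pod names depend on the template):

oc get pods -n osba -w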

Provisioning

With the Kubernetes Service Catalog software and Open Service Broker for Azure both installed on your Kubernetes cluster, try creating a ServiceInstance resource to see service provisioning in action.

The following will provision PostgreSQL on Azure:

$ kubectl create -f contrib/k8s/examples/postgresql/postgresql-instance.yaml
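For reference, the manifest in that file follows the standard Service Catalog ServiceInstance shape. A trimmed sketch (the class, plan, and parameter values below are illustrative, not a copy of the file):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-postgresql-all-in-one-instance
  namespace: default
spec:
  clusterServiceClassExternalName: azure-postgresql-9-6
  clusterServicePlanExternalName: basic
  parameters:
    location: eastus
    resourceGroup: demo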

After the ServiceInstance resource is submitted, you can view its status:

$ svcat get instance example-postgresql-all-in-one-instance

You'll see output that includes a status indicating that asynchronous provisioning is ongoing. Eventually, that status will change to indicate that asynchronous provisioning is complete.
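Illustratively, the output resembles the following (columns and status text vary by svcat version):

  NAME                                     NAMESPACE   CLASS                  PLAN    STATUS
  example-postgresql-all-in-one-instance   default     azure-postgresql-9-6   basic   Provisioning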

Binding

Once provisioning succeeds, bind to the instance:

$ kubectl create -f contrib/k8s/examples/postgresql/postgresql-binding.yaml
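The binding manifest follows the standard Service Catalog ServiceBinding shape; a trimmed sketch (illustrative, not a copy of the file):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-postgresql-all-in-one-binding
  namespace: default
spec:
  instanceRef:
    name: example-postgresql-all-in-one-instance
  secretName: example-postgresql-all-in-one-secret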

To check the status of the binding:

$ svcat get binding example-postgresql-all-in-one-binding

You'll see output indicating whether the binding succeeded. Once it has, a secret named example-postgresql-all-in-one-secret will be written containing the database connection details. You can observe that this secret exists and has been populated:

$ kubectl get secret example-postgresql-all-in-one-secret -o yaml

This secret can be used just like any other Kubernetes secret.
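For example, a pod can pull individual connection details out of it as environment variables. A sketch; the key names such as host and password are assumptions about OSBA's PostgreSQL credential schema, so check the actual keys with the kubectl get secret command above:

apiVersion: v1
kind: Pod
metadata:
  name: postgresql-client
spec:
  containers:
  - name: app
    image: postgres:9.6
    command: ["sleep", "3600"]
    env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:
          name: example-postgresql-all-in-one-secret
          key: host       # assumed key name
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: example-postgresql-all-in-one-secret
          key: password   # assumed key name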

Unbinding

To unbind:

$ kubectl delete servicebinding example-postgresql-all-in-one-binding

Observe that the secret named example-postgresql-all-in-one-secret is also deleted:

$ kubectl get secret example-postgresql-all-in-one-secret
Error from server (NotFound): secrets "example-postgresql-all-in-one-secret" not found

Deprovisioning

To deprovision:

$ kubectl delete serviceinstance example-postgresql-all-in-one-instance

You can observe the status to see that asynchronous deprovisioning is ongoing:

$ svcat get instance example-postgresql-all-in-one-instance

Getting Started on Cloud Foundry

Installing

To deploy Open Service Broker for Azure to Cloud Foundry, please refer to the CloudFoundry installation documentation for instructions.

Provisioning

The following will create a Postgres service:

cf create-service azure-postgresql-9-6 basic mypostgresdb -c '{
  "location": "eastus",
  "resourceGroup": "test",
  "firewallRules" : [
      {
        "name": "AllowAll",
        "startIPAddress": "0.0.0.0",
        "endIPAddress" : "255.255.255.255"
      }
    ]
  }'

You can check the status of the service instance using the cf service command, which will show output similar to the following:

Service instance: mypostgresdb
Service: azure-postgresql-9-6
Bound apps:
Tags:
Plan: basic
Description: Azure Database for PostgreSQL Service
Documentation url:
Dashboard:

Last Operation
Status: create in progress
Message: Creating server uf666164eb31.
Started: 2017-10-17T23:30:07Z
Updated: 2017-10-17T23:30:12Z

Binding

Once the service has been successfully provisioned, you can bind to it by using cf bind-service or by including it in a Cloud Foundry manifest.

cf bind-service myapp mypostgresdb

Once bound, the connection details for the service (such as its endpoint and authentication credentials) are available from the VCAP_SERVICES environment variable within the application. You can view the environment variables for a given application using the cf env command:

cf env myapp
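Illustratively, the relevant portion of VCAP_SERVICES resembles the following (trimmed; the exact credential field names are an assumption and vary by service module):

{
  "VCAP_SERVICES": {
    "azure-postgresql-9-6": [
      {
        "name": "mypostgresdb",
        "credentials": {
          "host": "...",
          "port": 5432,
          "username": "...",
          "password": "...",
          "database": "..."
        }
      }
    ]
  }
}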

Unbinding

To unbind a service from an application, use the cf unbind-service command:

cf unbind-service myapp mypostgresdb

Deprovisioning

To deprovision the service, use the cf delete-service command.

cf delete-service mypostgresdb

Getting started on Service Fabric

Please refer to the example for how to use Service Catalog with Service Fabric.

Contributing

For details on how to contribute to this project, please see contributing.md.

This project welcomes contributions and suggestions. All contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.


open-service-broker-azure's Issues

Need to review keys used in all JSON objects

There are a lot of inconsistencies in conventions from one module to the next, and even inconsistencies within modules.

Examples:

One module might use a key like clientId while another might use clientid.

In one case I know of, one module uses objectid and clientId.

We should review the keys used in JSON throughout the entire broker and correct inconsistencies while we're still able / allowed to make breaking changes.

I'm classifying this as a bug, because even if it doesn't crash anything, I perceive it to be a serious flaw in the UX.

Test Key Vault credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.

Expose event hub options via provisioning parameters

From @krancour on September 28, 2017 17:45

Event Hubs are definitely not one-size-fits-all. Different use cases call for different numbers of partitions and data retention policies.

This issue is to expose those options (and any others deemed reasonable) via provisioning parameters.

Additional things we should expose:

  • number of "shards"
  • others?

ACI module needs more flexibility.

The available provisioning parameters for the ACI module should match the available flags on the az container create CLI command.

Currently, it lacks many vital options, including but not limited to --command-line and --environment-variables.

Need to document recipes

From @krancour on September 28, 2017 2:42

I've been trying to put myself back in an SRE headspace. I believe it's important that we document best practices and "recipes" for how to operationalize this broker, both in the context of k8s and CF.

I imagine these recipes covering a broad range of prototypical use cases.

Example recipe: Deploying message producers and message consumers dependent on a common service bus queue. This sort of use case is both common and non-trivial.

Add ACS Module

From @arschles on September 5, 2017 22:49

Provision/deprovision should launch and destroy clusters (respectively). Bind should return kube config. I'm not sure yet what unbind should do. Let's discuss!

Don't use ARM template parameters anymore

#69 added the ability to pre-process ARM templates as a Go template...

Now that we have that ability, I'm wondering if we can cut down on ARM template length and complexity by preferring to use Go template mechanisms (e.g. "mustaches", sprig functions, etc.) over ARM parameters.

The pros of ARM parameters, as I see them, are that they offer us a convenient way of declaratively validating inputs. However, this is not as useful to our broker as it may sound: since ARM deployments are done asynchronously, it's better to validate inputs synchronously, before accepting a provisioning request.

EDIT: ... which would mean it's better to rely on the broker's own validations mechanisms instead.

Broker not picking up parameters provided in cf create-service

While waiting for #24 to be resolved, I tried creating a service in CF by passing in parameters to the cf create-service command, but the broker still complains about not having a location, suggesting it's not getting the params provided.

Repro:

In a CF environment with the broker installed/registered, run:

cf create-service azure-rediscache basic mycache -c config.json

Where config.json looks like:

{
  "resourceGroup": "myresourcegroup",
  "cacheName": "mycache",
  "parameters": {
    "location": "westus",
    "enableNonSslPort": false,
    "sku": {
      "name": "Basic",
      "family": "C",
      "capacity": 0
    }
  }
}

Response to user:

Creating service instance mycache in org demo / space dev as [email protected]...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to https://asb.apps.smallpcf.seanmckdemo.com/v2/service_instances/81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4?accepts_incomplete=true.
Status Code: 400 Bad Request, Body: {}

Log output:

2017-11-22T07:20:02.81-0800 [APP/PROC/WEB/0] OUT time="2017-11-22T15:20:02Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4 issue="invalid location: \"\""

The broker API pod goes into CrashLoopBackoff

I've installed the helm chart according to the directions in the helm chart's README, and I am noticing that the broker API pod almost immediately goes into CrashLoopBackoff with the following log message:

time="2017-11-10T00:23:45Z" level=info msg="setting log level" logLevel=INFO
time="2017-11-10T00:23:45Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"
time="2017-11-10T00:24:15Z" level=fatal msg="async engine stopped: cleaner stopped: error cleaning up after dead workers: error retrieving workers: dial tcp: i/o timeout"```

I am operating on revision 01d1b210e3910bdd57cbad0d444094eb22374527

logrus outputs all the logs to stderr by default

From @bingosummer on September 8, 2017 7:20

For now, all the logs are output to stderr.

In CF app logs, it looks like:

2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="setting log level" logLevel=DEBUG
2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"

Not sure what the logs look like in k8s.

From my understanding:

  1. The debug, info, and warn levels should go to stdout, and the error, fatal, and panic levels to stderr. But logrus doesn't support this directly; it needs more code. FYI: sirupsen/logrus#403
  2. No need to add a timestamp because CF already does. Not sure whether k8s adds one; if not, we should keep it.

Unit tests panic when there is no redis running

From @arschles on October 13, 2017 18:56

If you run go test -tags unit $(glide nv) on the host, with no redis host running, many tests fail as expected, but one panics:

--- FAIL: TestGetExistingInstance (0.00s)
        Error Trace:    store_test.go:51
	Error:      	Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007fd80)}
        Error Trace:    store_test.go:55
	Error:      	Should be true
        Error Trace:    store_test.go:56
	Error:      	Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007ff00)}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1294357]

goroutine 33 [running]:
testing.tRunner.func1(0xc4200811e0)
	/usr/local/opt/go/libexec/src/testing/testing.go:622 +0x29d
panic(0x12e0200, 0x14d2410)
	/usr/local/opt/go/libexec/src/runtime/panic.go:489 +0x2cf
github.com/Azure/azure-service-broker/pkg/storage.TestGetExistingInstance(0xc4200811e0)
	/Users/aaschles/gocode/src/github.com/Azure/azure-service-broker/pkg/storage/store_test.go:57 +0x257
testing.tRunner(0xc4200811e0, 0x134b1a8)
	/usr/local/opt/go/libexec/src/testing/testing.go:657 +0x96
created by testing.(*T).Run
	/usr/local/opt/go/libexec/src/testing/testing.go:697 +0x2ca

assert that provisioning and deprovisioning work for each service

From @krancour on October 30, 2017 17:22

To avoid issues like #246 taking us by surprise in the future, it would be good to assert (in the module lifecycle tests) that after provisioning, the expected resources exist, and after deprovisioning, the expected resources have been deleted. Currently (save for ongoing work to test credentials), the lifecycle tests only assert that each module can complete the provision/bind/unbind/deprovision lifecycle without error. We can stand to make stronger assertions that the expected things actually happened.

Assert compliance with OSB spec

@Haishi2016 has written a nice suite for doing this. It should be integrated into our own test suite.

https://github.com/Haishi2016/osb-checker

Since I expect that many (slight) deviations exist, this additional assertion should not, initially, fail the CI pipeline if issues are found.

We can address each finding individually (as its own issue) before requiring this assertion to pass.

Meta: Modules need to cover wider variety of use cases

From @krancour on September 28, 2017 2:35

This issue is a placeholder for the moment.

To tease the discussion, see the below comments.

The sentiments expressed in those comments are likely applicable to a variety of existing modules, including, but probably not limited to:

  • PostgreSQL
  • MySQL
  • SQL Server

aci should support multiple ports

Docker and ACI allow containers to expose multiple ports, but provisioning parameters for the ACI module only allow specifying a single port.

Although we're not investing heavily in module development / enhancement in the v0.4.0/v0.5.0 timeframe, I'm adding this to v0.4.0 because s/port/ports is a breaking change and these changes are better made sooner while we are making no guarantees about the stability of each module.

Publish CLI Binaries to blob storage?

From @arschles on October 17, 2017 16:21

We have instructions that show how to run the CLI with go run .... Even though the CLI is not meant to be general-purpose, it would be nice to treat it as an artifact (as we do with our docker images, etc...) and write our documentation to instruct users to download the CLI instead.

Handling failures in the async engine?

From @arschles on October 16, 2017 18:55

I recall a discussion between myself and @krancour about handling failures in the async engine, in which we discussed rolling back changes that were made in prior steps. I believe the async engine doesn't have rollback support yet, but the best practice is to make steps idempotent which makes failure handling less important (because the steps can be re-run with minimal negative consequence). So, this issue exists to track discussion for whether we need to add support for:

  1. Adding a "rollback" function to each step
  2. Adding support in the async engine to rollback all successful steps before a failure

cc/ @krancour

Test MySQL credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.

Respond with error message if operation fails

For example: if I didn't specify a location in the provisioning parameters, it would return:

$ cf create-service azure-sqldb basic sqlsql
Creating service instance sqlsql in org IP.xip.io_ORGANIZATION / space azure as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://asb.IP.xip.io/v2/service_instances/ace140b2-79dc-4ed3-bacb-c4eba0eb67dc?accepts_incomplete=true.
Status Code: 400 Bad Request, Body: {}

Relevant line:
https://github.com/Azure/azure-service-broker/blob/master/pkg/api/provision.go#L225

As the response above shows, the user only knows that it was a bad request, but not the detailed reason. The user needs to run cf logs asb --recent to find that out. At worst, other polling operations may be running at the same time, making it hard to find the corresponding error message. (PS: such an error is printed to stdout; is that expected?)

2017-11-17T02:03:31.02+0000 [APP/PROC/WEB/0] OUT time="2017-11-17T02:03:31Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=ace140b2-79dc-4ed3-bacb-c4eba0eb67dc issue="invalid location: \"\""

Anyway, it's better to always respond with an error message when an operation fails.

Meta: Add documentation for each module

From @krancour on September 28, 2017 18:03

For each module, we should document service/plan offerings, provisioning parameters, and the credential schema.

Modules:

  • aci
  • cosmosdb
  • eventhub
  • keyvault
  • mssql
  • mysql
  • postgresql
  • rediscache
  • search
  • servicebus
  • storage

Let cluster operator define default location

From @krancour on September 12, 2017 14:19

In the interest of eliminating (or at least reducing) cloud/provider-specific parameters required for provisioning, it might be a good idea to allow (require?) cluster operators to define a default location (region) where services will be provisioned.

Similarly, it might be good to allow a default resource group to be defined. Right now, resource groups are auto-generated by default, and #84 will make the resource group an optional parameter for all modules, but there is probably a benefit to allowing cluster operators to set a default. This would allow them to neatly contain all Azure resources created for a given cluster in a single resource group if they so choose. (I think this should be optional, not required.)

REFERENCES command denied to user for MySQL

While working on the Ghost chart, noticed an error relating to the minimum privilege required by the created db user for MySQL.

MESSAGE: alter table `posts_tags` add constraint `posts_tags_post_id_foreign` foreign key (`post_id`) references `posts` (`id`) - ER_TABLEACCESS_DENIED_ERROR: REFERENCES command denied to user '<REDACTED>' for table 'posts'

We need to add the "REFERENCES" privilege for the created db user.

Test Event Hub credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.

cli needs way of sending multi-valued parameters

For provision, bind, and update operations, it's currently possible to specify params as key=value pairs, but there's no way to send an array as a parameter. This would be useful, for instance, in specifying multiple postgres extensions to install during provisioning or multiple container ports to expose (in ACI).

Update service descriptions with stability info

We already have a way of filtering what services are included in the catalog on the basis of the relative stability of each module, but we don't include stability information in the catalog.

Every module should update service descriptions to include this info. It could maybe also be added in a dedicated field, but there's no guarantee that clients like k8s service catalog will show that information, so the description fields are the best place for this.

Add ACI Module

From @arschles on September 5, 2017 22:45

Provisioning should launch a new container on ACI, de-provisioning should delete it, and bind/unbind should be no-ops.

Bound users in postgres should be in azure_pg_admin role

From @krancour on September 28, 2017 20:17

Without this, users cannot enable extensions, and the inability to do so rules out too many common use cases. (i.e. There's too much off the shelf software that requires various postgres extensions. GitLab CE is a great example.)

Unfortunately, this is not easily accomplished at this time because, while it is possible to assign new users to azure_pg_admin, one must be a superuser to revoke the role, and even the administrative user is not a superuser. The inability to revoke the role from a bound user means it is also impossible to drop the user upon unbind, since dropping a role (user) implicitly requires dropping all of that role's (user's) memberships in other roles (groups).

(PostgreSQL uses "role" too much. 😛 )

What do we do about deprovisioning an instance with existing bindings?

From @krancour on August 3, 2017 17:11

The k8s service catalog doesn't let this happen, but that is unlikely to be this broker's only client.

Since the OSB spec doesn't explicitly disallow this, I propose we accept it as a possibility and handle it by cascading the delete (deprovision) to any would-be orphan bindings.

I'm not sure what this, ideally, looks like. Do we queue up a whole bunch of unbinds and require those to complete before we queue up the deprovisioning steps? Or do we do the deprovisioning and, when we're done, declare the orphaned bindings meaningless and simply delete them from the broker's own datastore?

I'm going to consider this a post-alpha issue, so I'm not assigning it to any existing milestone.

cc @arschles
