
open-service-broker-azure's Issues

REFERENCES command denied to user for MySQL

While working on the Ghost chart, I noticed an error relating to the minimum privileges required by the database user that the broker creates for MySQL.

MESSAGE: alter table `posts_tags` add constraint `posts_tags_post_id_foreign` foreign key (`post_id`) references `posts` (`id`) - ER_TABLEACCESS_DENIED_ERROR: REFERENCES command denied to user '<REDACTED>' for table 'posts'

We need to add the "REFERENCES" privilege for the created db user.
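
For illustration, a sketch of the kind of grant the broker would need to issue when creating the user. The DSN, database name, and privilege list below are illustrative assumptions, not the broker's actual code; the point is that REFERENCES must be in the grant list for foreign-key constraints (like Ghost's posts_tags migration) to succeed.

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // MySQL driver
)

func main() {
    // Illustrative admin DSN; the broker would use its own server credentials.
    db, err := sql.Open("mysql", "admin:password@tcp(example.mysql.database.azure.com:3306)/")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    dbName := "ghost"
    user := "bounduser"

    // Include REFERENCES alongside the usual DML/DDL privileges.
    grant := fmt.Sprintf(
        "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, REFERENCES ON %s.* TO '%s'@'%%'",
        dbName, user,
    )
    if _, err := db.Exec(grant); err != nil {
        log.Fatal(err)
    }
}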

Broker not picking up parameters provided in cf create-service

While waiting for #24 to be resolved, I tried creating a service in CF by passing in parameters to the cf create-service command, but the broker still complains about not having a location, suggesting it's not getting the params provided.

Repro:

In a CF environment with the broker installed/registered, run:

cf create-service azure-rediscache basic mycache -c config.json

Where config.json looks like:

{
  "resourceGroup": "myresourcegroup",
  "cacheName": "mycache",
  "parameters": {
    "location": "westus",
    "enableNonSslPort": false,
    "sku": {
      "name": "Basic",
      "family": "C",
      "capacity": 0
    }
  }
}

Response to user:

Creating service instance mycache in org demo / space dev as [email protected]... FAILED Server error, status code: 502, error code: 10001, message: The service broker rejected the request to https://asb.apps.smallpcf.seanmckdemo.com/v2/service_instances/81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {}

Log output:

2017-11-22T07:20:02.81-0800 [APP/PROC/WEB/0] OUT time="2017-11-22T15:20:02Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4 issue="invalid location: \"\""
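
One possible explanation (an assumption, not confirmed): cf passes the entire -c file verbatim as the OSB parameters object, so a broker that reads a top-level location key would see nothing when location is nested under an inner "parameters" key. A minimal sketch of that effect:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // The -c file shown above, as the broker would receive it in the
    // provisioning request's "parameters" field.
    raw := []byte(`{
        "resourceGroup": "myresourcegroup",
        "cacheName": "mycache",
        "parameters": {"location": "westus"}
    }`)

    var params map[string]interface{}
    if err := json.Unmarshal(raw, &params); err != nil {
        panic(err)
    }

    // A broker reading the top-level "location" key finds nothing.
    loc, _ := params["location"].(string)
    fmt.Printf("location seen by broker: %q\n", loc) // prints ""
}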

Let cluster operator define default location

From @krancour on September 12, 2017 14:19

In the interest of eliminating (or at least reducing) cloud/provider-specific parameters required for provisioning, it might be a good idea to allow (require?) cluster operators to define a default location (region) where services will be provisioned.

Similarly, it might be good to allow a default resource group to be defined. Right now, these are auto-generated by default, and #84 will make resource group an optional parameter for all modules, but there is probably a benefit to allowing cluster operators to set a default. This would allow them to neatly contain all Azure resources created for a given cluster within a single resource group if they so choose. (I think this should be optional and not required.)
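
A minimal sketch of how an operator-defined default could be layered under request parameters. The environment variable name is hypothetical, not an existing broker setting.

package main

import (
    "fmt"
    "os"
)

// resolveLocation prefers the location supplied in the provisioning request
// and falls back to an operator-defined default.
func resolveLocation(requested string) (string, error) {
    if requested != "" {
        return requested, nil
    }
    if def := os.Getenv("AZURE_DEFAULT_LOCATION"); def != "" { // hypothetical setting
        return def, nil
    }
    return "", fmt.Errorf("no location provided and no default configured")
}

func main() {
    loc, err := resolveLocation("") // no location in the request
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("provisioning in", loc)
}

The same fallback pattern would apply to a default resource group.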

Meta: Modules need to cover wider variety of use cases

From @krancour on September 28, 2017 2:35

This issue is a placeholder for the moment.

To tease the discussion, see the comments below.

The sentiments expressed in those comments are likely applicable to a variety of existing modules, including, but probably not limited to:

  • PostgreSQL
  • MySQL
  • SQL Server

Publish CLI Binaries to blob storage?

From @arschles on October 17, 2017 16:21

We have instructions that show how to run the CLI with go run .... Even though the CLI is not meant to be general-purpose, it would be nice to treat it as an artifact (as we do with our docker images, etc...) and write our documentation to instruct users to download the CLI instead.

ACI module needs more flexibility.

The available provisioning parameters for the ACI module should match the available flags on the az container create CLI command.

Currently, it lacks many vital options, including but not limited to --command-line and --environment-variables.

Test MySQL credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.
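
A minimal sketch of what that assertion might look like, assuming the binding's credential map exposes username, password, host, and database fields (the field names are assumptions, not the module's actual credential schema):

package lifecycle

import (
    "database/sql"
    "fmt"
    "testing"

    _ "github.com/go-sql-driver/mysql"
)

// assertMySQLCredentialsWork opens a connection using bound credentials and
// pings the server.
func assertMySQLCredentialsWork(t *testing.T, creds map[string]string) {
    dsn := fmt.Sprintf(
        "%s:%s@tcp(%s:3306)/%s?tls=true",
        creds["username"], creds["password"], creds["host"], creds["database"],
    )
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        t.Fatalf("error opening connection: %s", err)
    }
    defer db.Close()
    if err := db.Ping(); err != nil {
        t.Fatalf("bound credentials do not work: %s", err)
    }
}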

logrus outputs all the logs to stderr by default

From @bingosummer on September 8, 2017 7:20

For now, all the logs are output to stderr.

In CF app logs, it looks like:

2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="setting log level" logLevel=DEBUG
2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"

Not sure what the logs look like in k8s.

From my understanding,

  1. Levels debug, info, and warn should go to stdout, and error, fatal, and panic to stderr. logrus doesn't support this directly, so it needs more code (see the sketch below). FYI: sirupsen/logrus#403
  2. No need to add a timestamp because CF already adds one. I'm not sure whether k8s adds one; if not, we should keep ours.
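
A minimal sketch of the stdout/stderr split using a logrus hook, which is one of the workarounds discussed in sirupsen/logrus#403:

package main

import (
    "io"
    "io/ioutil"
    "os"

    log "github.com/sirupsen/logrus"
)

// writerHook sends entries for the given levels to the given writer.
type writerHook struct {
    writer io.Writer
    levels []log.Level
}

func (h *writerHook) Levels() []log.Level { return h.levels }

func (h *writerHook) Fire(entry *log.Entry) error {
    line, err := entry.String()
    if err != nil {
        return err
    }
    _, err = h.writer.Write([]byte(line))
    return err
}

func main() {
    // Discard the default output and route levels explicitly via hooks.
    log.SetOutput(ioutil.Discard)
    log.AddHook(&writerHook{
        writer: os.Stdout,
        levels: []log.Level{log.DebugLevel, log.InfoLevel, log.WarnLevel},
    })
    log.AddHook(&writerHook{
        writer: os.Stderr,
        levels: []log.Level{log.ErrorLevel, log.FatalLevel, log.PanicLevel},
    })

    log.Info("goes to stdout")
    log.Error("goes to stderr")
}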

Update service descriptions with stability info

We already have a way of filtering what services are included in the catalog on the basis of the relative stability of each module, but we don't include stability information in the catalog.

Every module should update service descriptions to include this info. It could maybe also be added in a dedicated field, but there's no guarantee that clients like k8s service catalog will show that information, so the description fields are the best place for this.

Bound users in postgres should be in azure_pg_admin role

From @krancour on September 28, 2017 20:17

Without this, users cannot enable extensions, and the inability to do so rules out too many common use cases. (i.e. There's too much off the shelf software that requires various postgres extensions. GitLab CE is a great example.)

Unfortunately, this is not easily accomplished at this time because, while it is possible to assign new users to the azure_pg_admin role, one must be a superuser to revoke the role, and even the administrative user is not a superuser. The inability to revoke the role from a bound user means it is also impossible to drop a user upon unbind, since a drop role (user) command implicitly drops all of the role's (user's) roles (groups).

(PostgreSQL uses "role" too much. 😛)
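
For context, a minimal sketch of the grant the broker would issue at bind time and the revoke that, per the above, currently fails without superuser rights. The connection details and user names are illustrative, not the broker's actual code.

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
    // Illustrative administrative DSN; the broker would use its own credentials.
    db, err := sql.Open("postgres",
        "host=example.postgres.database.azure.com user=admin@example password=secret dbname=postgres sslmode=require")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    boundUser := "bounduser"

    // Granting membership works for the administrative user...
    if _, err := db.Exec(fmt.Sprintf("GRANT azure_pg_admin TO %s", boundUser)); err != nil {
        log.Fatal(err)
    }

    // ...but revoking it (which dropping the user at unbind implies) requires
    // superuser rights that the administrative user does not have.
    if _, err := db.Exec(fmt.Sprintf("REVOKE azure_pg_admin FROM %s", boundUser)); err != nil {
        log.Printf("revoke failed, as described in this issue: %s", err)
    }
}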

Test Event Hub credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.

Handling failures in the async engine?

From @arschles on October 16, 2017 18:55

I recall a discussion between myself and @krancour about handling failures in the async engine, in which we discussed rolling back changes that were made in prior steps. I believe the async engine doesn't have rollback support yet, but the best practice is to make steps idempotent, which makes failure handling less important (because the steps can be re-run with minimal negative consequence). So, this issue exists to track discussion of whether we need to add support for:

  1. Adding a "rollback" function to each step
  2. Adding support in the async engine to roll back all successful steps before a failure (a rough sketch follows below)
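
A rough sketch of what the two together could look like. Step is a hypothetical interface here, not the engine's actual type.

package async

import (
    "context"
    "fmt"
)

// Step is a hypothetical unit of async work with an optional rollback.
type Step interface {
    Name() string
    Execute(ctx context.Context) error
    Rollback(ctx context.Context) error
}

// runWithRollback executes steps in order; on failure it rolls back the
// steps that already succeeded, in reverse order.
func runWithRollback(ctx context.Context, steps []Step) error {
    var done []Step
    for _, s := range steps {
        if err := s.Execute(ctx); err != nil {
            for i := len(done) - 1; i >= 0; i-- {
                if rbErr := done[i].Rollback(ctx); rbErr != nil {
                    fmt.Printf("rollback of %s failed: %s\n", done[i].Name(), rbErr)
                }
            }
            return fmt.Errorf("step %s failed: %s", s.Name(), err)
        }
        done = append(done, s)
    }
    return nil
}

With idempotent steps, Rollback implementations could be no-ops and the engine would simply re-run the failed step.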

cc/ @krancour

Need to document recipes

From @krancour on September 28, 2017 2:42

I've been trying to put myself back in an SRE headspace. I believe it's important that we document best practices and "recipes" for how to operationalize this broker, both in the context of k8s and CF.

I imagine these recipes covering a broad range of prototypical use cases.

Example recipe: Deploying message producers and message consumers dependent on a common service bus queue. This sort of use case is both common and non-trivial.

cli needs way of sending multi-valued parameters

For provision, bind, and update operations, it's currently possible to specify params as key=value pairs, but there's no way to send an array as a parameter. This would be useful, for instance, in specifying multiple postgres extensions to install during provisioning or multiple container ports to expose (in ACI).
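
One possible convention (a sketch only, not the cli's current behavior): allow the parameter flag to be repeated and collect repeated keys into an array.

package main

import (
    "fmt"
    "strings"
)

// parseParams turns repeated key=value flags into a parameters map,
// collecting values for a repeated key into an array.
func parseParams(pairs []string) (map[string]interface{}, error) {
    params := map[string]interface{}{}
    for _, pair := range pairs {
        kv := strings.SplitN(pair, "=", 2)
        if len(kv) != 2 {
            return nil, fmt.Errorf("invalid parameter %q; expected key=value", pair)
        }
        key, value := kv[0], kv[1]
        switch existing := params[key].(type) {
        case nil:
            params[key] = value
        case string:
            params[key] = []string{existing, value}
        case []string:
            params[key] = append(existing, value)
        }
    }
    return params, nil
}

func main() {
    params, _ := parseParams([]string{
        "location=westus",
        "extensions=hstore",
        "extensions=uuid-ossp",
    })
    fmt.Println(params) // e.g. map[extensions:[hstore uuid-ossp] location:westus]
}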

What do we do about deprovisioning an instance with existing bindings?

From @krancour on August 3, 2017 17:11

The k8s service catalog doesn't let this happen, but that is unlikely to be this broker's only client.

Since the OSB spec doesn't explicitly disallow this, I propose we accept it as a possibility and handle it by cascading the delete (deprovision) to any would-be orphan bindings.

I'm not sure what this, ideally, looks like. Do we queue up a whole bunch of unbinds and require those to complete before we queue up the deprovisioning steps? Or do we do the deprovisioning and, when we're done, say the orphaned bindings are meaningless and simply delete them from the broker's own datastore?

I'm going to consider this a post-alpha issue, so I'm not assigning it to any existing milestone.

cc @arschles

Need to review keys used in all JSON objects

There are a lot of inconsistencies in conventions from one module to the next, and even inconsistencies within modules.

Examples:

One module might use a key like clientId while another might use clientid.

In one case I know of, a single module uses both objectid and clientId.

We should review the keys used in JSON throughout the entire broker and correct inconsistencies while we're still able / allowed to make breaking changes.

I'm classifying this as a bug, because even if it doesn't crash anything, I perceive it to be a serious flaw in the UX.

Meta: Add documentation for each module

From @krancour on September 28, 2017 18:3

For each module, we should document service/plan offerings, provisioning parameters, and the credential schema.

Modules:

  • aci
  • cosmosdb
  • eventhub
  • keyvault
  • mssql
  • mysql
  • postgresql
  • rediscache
  • search
  • servicebus
  • storage

aci should support multiple ports

Docker and ACI allow containers to expose multiple ports, but provisioning parameters for the ACI module only allow specifying a single port.

Although we're not investing heavily in module development/enhancement in the v0.4.0/v0.5.0 timeframe, I'm adding this to v0.4.0 because s/port/ports is a breaking change, and such changes are better made sooner, while we are making no guarantees about the stability of each module.
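
The change itself is small; a sketch of the before/after shape, with hypothetical type names rather than the module's actual ones:

package aci

// Before: only a single port can be exposed.
type provisioningParametersV1 struct {
    Port int `json:"port"`
}

// After: any number of ports can be exposed (hence s/port/ports).
type provisioningParametersV2 struct {
    Ports []int `json:"ports"`
}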

assert that provisioning and deprovisioning work for each service

From @krancour on October 30, 2017 17:22

To avoid issues like #246 taking us by surprise in the future, it would be good to assert (in the module lifecycle tests) that after provisioning, the expected resources exist, and after deprovisioning, the expected resources have been deleted. Currently (save for ongoing work to test credentials), the lifecycle tests only assert that each module can complete the provision/bind/unbind/deprovision lifecycle without error. We can stand to make stronger assertions that expected things happened.

Unit tests panic when there is no redis running

From @arschles on October 13, 2017 18:56

If you run go test -tags unit $(glide nv) on the host, with no redis host running, many tests fail as expected, but one panics:

--- FAIL: TestGetExistingInstance (0.00s)
        Error Trace:    store_test.go:51
	Error:      	Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007fd80)}
        Error Trace:    store_test.go:55
	Error:      	Should be true
        Error Trace:    store_test.go:56
	Error:      	Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007ff00)}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1294357]

goroutine 33 [running]:
testing.tRunner.func1(0xc4200811e0)
	/usr/local/opt/go/libexec/src/testing/testing.go:622 +0x29d
panic(0x12e0200, 0x14d2410)
	/usr/local/opt/go/libexec/src/runtime/panic.go:489 +0x2cf
github.com/Azure/azure-service-broker/pkg/storage.TestGetExistingInstance(0xc4200811e0)
	/Users/aaschles/gocode/src/github.com/Azure/azure-service-broker/pkg/storage/store_test.go:57 +0x257
testing.tRunner(0xc4200811e0, 0x134b1a8)
	/usr/local/opt/go/libexec/src/testing/testing.go:657 +0x96
created by testing.(*T).Run
	/usr/local/opt/go/libexec/src/testing/testing.go:697 +0x2ca
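
One way to avoid the panic (a sketch, with a stand-in for the actual store call) is to use testify's require package for the preconditions, so the test stops before it dereferences the nil instance instead of continuing past failed assertions:

package storage

import (
    "errors"
    "testing"

    "github.com/stretchr/testify/require"
)

// fetchInstance is a stand-in for the store call made by store_test.go; here
// it fails the way it would when no redis host is reachable.
func fetchInstance() (*struct{ ID string }, error) {
    return nil, errors.New("dial tcp: lookup redis: no such host")
}

func TestGetExistingInstanceSketch(t *testing.T) {
    instance, err := fetchInstance()
    // require (unlike assert) stops the test on failure, so the nil instance
    // below is never dereferenced and the test fails cleanly.
    require.NoError(t, err)
    require.Equal(t, "some-id", instance.ID)
}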

Test Key Vault credentials

In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.

Respond with error message if operation fails

For example: if I don't specify location in the provisioning parameters, it returns:

$ cf create-service azure-sqldb basic sqlsql
Creating service instance sqlsql in org IP.xip.io_ORGANIZATION / space azure as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://asb.IP.xip.io/v2/service_instances/ace140b2-79dc-4ed3-bacb-c4eba0eb67dc?accepts_incomplete=true.
Status Code: 400 Bad Request, Body: {}

Relevant line:
https://github.com/Azure/azure-service-broker/blob/master/pkg/api/provision.go#L225

See the bold text: the user only knows that it was a bad request, but what was the detailed reason? The user needs to run cf logs asb --recent to find that out.
At worst, some other polling operations are running at the same time, and it may be a little hard to find the corresponding error message. (PS: such an error is printed to stdout; is that expected?)

2017-11-17T02:03:31.02+0000 [APP/PROC/WEB/0] OUT time="2017-11-17T02:03:31Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=ace140b2-79dc-4ed3-bacb-c4eba0eb67dc issue="invalid location: \"\""

Anyway, it's better to always respond with an error message if an operation fails.
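
A minimal sketch of writing the validation error into the 400 response body. The handler shape is illustrative; the OSB spec allows error responses to carry a JSON body with a human-readable description field.

package main

import (
    "encoding/json"
    "net/http"
)

// writeValidationError returns 400 with a JSON body describing what was
// wrong, instead of an empty body.
func writeValidationError(w http.ResponseWriter, field, issue string) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusBadRequest)
    json.NewEncoder(w).Encode(map[string]string{
        "description": "bad provisioning request: field " + field + ": " + issue,
    })
}

func main() {
    http.HandleFunc("/v2/service_instances/", func(w http.ResponseWriter, r *http.Request) {
        // Illustrative: pretend validation found an empty location.
        writeValidationError(w, "location", `invalid location: ""`)
    })
    http.ListenAndServe(":8080", nil)
}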

Expose event hub options via provisioning parameters

From @krancour on September 28, 2017 17:45

Event Hubs are definitely not one-size-fits-all. Different use cases call for different numbers of partitions and data retention policies.

This issue is to expose those options (and any others deemed reasonable) via provisioning parameters; a sketch follows the list below.

Additional things we should expose:

  • number of "shards"
  • others?
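
A sketch of the kind of provisioning parameters this could mean (field and key names are assumptions, not the module's current schema):

package eventhub

// provisioningParametersSketch shows the kinds of options the module could
// accept; names and shapes are illustrative only.
type provisioningParametersSketch struct {
    // Number of partitions ("shards") for the hub.
    PartitionCount int `json:"partitionCount"`
    // How long, in days, events are retained.
    MessageRetentionInDays int `json:"messageRetentionInDays"`
}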

Add ACI Module

From @arschles on September 5, 2017 22:45

Provisioning should launch a new container on ACI, de-provisioning should delete it, and bind/unbind should be no-ops.

Assert compliance with OSB spec

@Haishi2016 has written a nice suite for doing this. It should be integrated into our own test suite.

https://github.com/Haishi2016/osb-checker

Since I expect that many (slight) deviations exist, this additional assertion should not, initially, fail the CI pipeline if issues are found.

We can address each finding individually (as its own issue) before requiring this assertion to pass.

The broker API pod goes into CrashLoopBackoff

I've installed the helm chart according to the directions in the helm chart's README, and I am noticing that the broker API pod almost immediately goes into CrashLoopBackoff with the following log message:

time="2017-11-10T00:23:45Z" level=info msg="setting log level" logLevel=INFO
time="2017-11-10T00:23:45Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"
time="2017-11-10T00:24:15Z" level=fatal msg="async engine stopped: cleaner stopped: error cleaning up after dead workers: error retrieving workers: dial tcp: i/o timeout"```

I am operating on revision 01d1b210e3910bdd57cbad0d444094eb22374527

Don't use ARM template parameters anymore

#69 added the ability to pre-process ARM templates as a Go template...

Now that we have that ability, I'm wondering if we can cut down on ARM template length and complexity by preferring to use Go template mechanisms (e.g. "mustaches", sprig functions, etc.) over ARM parameters.

The pros of ARM parameters, as I see them, are that they offer us a convenient way of declaratively validating inputs; however, this is not as useful to our broker as it may sound. Since ARM deployments are done asynchronously, it's better to validate inputs synchronously, before accepting a provisioning request.

EDIT: ... which would mean it's better to rely on the broker's own validations mechanisms instead.
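
For illustration, a minimal sketch of the Go-template approach using sprig functions; the template fragment and parameter names are made up, not taken from an existing module.

package main

import (
    "bytes"
    "fmt"
    "text/template"

    "github.com/Masterminds/sprig"
)

func main() {
    // An illustrative fragment of an ARM template, pre-processed as a Go
    // template instead of declaring ARM parameters.
    armTemplate := `{
  "resources": [{
    "type": "Microsoft.Cache/Redis",
    "name": "{{ .serverName }}",
    "location": "{{ .location | default "eastus" }}"
  }]
}`

    tmpl, err := template.New("arm").Funcs(sprig.TxtFuncMap()).Parse(armTemplate)
    if err != nil {
        panic(err)
    }

    // These values would be validated synchronously by the broker before the
    // (asynchronous) ARM deployment is submitted.
    var rendered bytes.Buffer
    err = tmpl.Execute(&rendered, map[string]interface{}{
        "serverName": "mycache",
        "location":   "westus",
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(rendered.String())
}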

Add ACS Module

From @arschles on September 5, 2017 22:49

Provision/deprovision should launch and destroy clusters (respectively). Bind should return kube config. I'm not sure yet what unbind should do. Let's discuss!
