azure / open-service-broker-azure
The Open Service Broker API Server for Azure Services
Home Page: https://osba.sh
License: MIT License
Need a way to provide integer and floating point parameters for provision, bind, and update operations.
From @zhongyi-zhang on September 26, 2017 3:47
https://github.com/openservicebrokerapi/servicebroker/blob/v2.12/spec.md#updating-a-service-instance
While working on the Ghost chart, I noticed an error relating to the minimum privileges required by the db user created for MySQL.
MESSAGE: alter table `posts_tags` add constraint `posts_tags_post_id_foreign` foreign key (`post_id`) references `posts` (`id`) - ER_TABLEACCESS_DENIED_ERROR: REFERENCES command denied to user '<REDACTED>' for table 'posts'
We need to add the "REFERENCES" privilege for the created db user.
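The grant the broker issues for a new user could be assembled along these lines. This is a hand-rolled sketch, not the broker's actual code; the privilege list and identifiers are illustrative.

```go
package main

import "fmt"

// buildGrantSQL assembles the privilege grant for a newly created
// database user. REFERENCES is included so the user can declare
// foreign keys (as the Ghost chart's migrations do).
func buildGrantSQL(dbName, userName string) string {
	privileges := "SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, REFERENCES"
	return fmt.Sprintf("GRANT %s ON `%s`.* TO '%s'@'%%'", privileges, dbName, userName)
}

func main() {
	fmt.Println(buildGrantSQL("ghost", "appuser"))
}
```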
While waiting for #24 to be resolved, I tried creating a service in CF by passing in parameters to the cf create-service command, but the broker still complains about not having a location, suggesting it's not getting the params provided.
Repro:
In a CF environment with the broker installed/registered, run:
cf create-service azure-rediscache basic mycache -c config.json
Where config.json looks like:
{
"resourceGroup": "myresourcegroup",
"cacheName": "mycache",
"parameters": {
"location": "westus",
"enableNonSslPort": false,
"sku": {
"name": "Basic",
"family": "C",
"capacity": 0
}
}
}
Response to user:
Creating service instance mycache in org demo / space dev as [email protected]... FAILED Server error, status code: 502, error code: 10001, message: The service broker rejected the request to https://asb.apps.smallpcf.seanmckdemo.com/v2/service_instances/81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {}
Log output:
2017-11-22T07:20:02.81-0800 [APP/PROC/WEB/0] OUT time="2017-11-22T15:20:02Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=81d0cdc2-e78a-4c75-8d30-0bdf5600d4c4 issue="invalid location: \"\""
From @krancour on September 12, 2017 14:19
In the interest of eliminating (or at least reducing) cloud/provider-specific parameters required for provisioning, it might be a good idea to allow (require?) cluster operators to define a default location (region) where services will be provisioned.
Similarly, it might be good to allow a default resource group to be defined. Right now, these are auto-generated by default and #84 will make resource group an optional parameter for all modules, but there is probably a benefit to allowing cluster operators to set a default. This would allow them to neatly contain all Azure resources created for a given cluster in a single resource group if they so choose. (I think this should be optional and not required.)
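The fallback behavior described above could look like the following sketch. The function name and the idea of an operator-configured default are assumptions, not the broker's actual API.

```go
package main

import "fmt"

// resolveLocation picks the location for a new service instance: an
// explicit provisioning parameter wins; otherwise we fall back to a
// cluster-operator default (e.g. from broker config).
func resolveLocation(param, operatorDefault string) (string, error) {
	if param != "" {
		return param, nil
	}
	if operatorDefault != "" {
		return operatorDefault, nil
	}
	return "", fmt.Errorf("no location provided and no default configured")
}

func main() {
	loc, _ := resolveLocation("", "eastus")
	fmt.Println(loc)
}
```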
From @krancour on September 28, 2017 2:35
This issue is a placeholder for the moment.
To tease the discussion, see the below comments.
The sentiments expressed in those comments are likely applicable to a variety of existing modules, including, but probably not limited to:
This will alleviate one of the situations that would commonly tempt us to give admin/superuser permissions upon bind (which is a terrible idea). This would be a step toward getting charts like GitLab CE to work.
From @krancour on October 13, 2017 3:37
From @arschles on October 17, 2017 16:21
We have instructions that show how to run the CLI with go run .... Even though the CLI is not meant to be general-purpose, it would be nice to treat it as an artifact (as we do with our docker images, etc...) and write our documentation to instruct users to download the CLI instead.
The available provisioning parameters for the ACI module should match the available flags on the az container create CLI command. Currently, it lacks many vital options, including but not limited to --commandLine and --environment-variables.
In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.
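One way to express that assertion: after bind, actually authenticate with the returned credentials rather than only checking that bind returned without error. The checker is injected here so the sketch stays self-contained; a real lifecycle test would dial the provisioned service.

```go
package main

import "fmt"

// authFunc stands in for a service-specific authentication attempt
// (e.g. opening a database connection with the bound credentials).
type authFunc func(user, pass string) bool

// credentialsWork fails the test if the credentials are rejected.
func credentialsWork(user, pass string, auth authFunc) error {
	if !auth(user, pass) {
		return fmt.Errorf("credentials for %q were rejected", user)
	}
	return nil
}

func main() {
	auth := func(user, pass string) bool { return user == "u" && pass == "p" }
	fmt.Println(credentialsWork("u", "p", auth))
}
```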
From @bingosummer on September 8, 2017 7:20
For now, all the logs are output to stderr.
In CF app logs, it looks like:
2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="setting log level" logLevel=DEBUG
2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"
Not sure how the logs look in k8s.
From my understanding, we already have a way of filtering what services are included in the catalog on the basis of the relative stability of each module, but we don't include stability information in the catalog.
Every module should update service descriptions to include this info. It could maybe also be added in a dedicated field, but there's no guarantee that clients like k8s service catalog will show that information, so the description fields are the best place for this.
From @krancour on September 28, 2017 20:17
Without this, users cannot enable extensions, and the inability to do so rules out too many common use cases. (i.e. There's too much off the shelf software that requires various postgres extensions. GitLab CE is a great example.)
Unfortunately, this is not easily accomplished at this time because, while it is possible to assign new users to the azure_pg_admin role, one must be a superuser to revoke the role. And even the administrative user is not a superuser. The inability to revoke the role from a bound user means it is also impossible to drop a user upon unbind, since a drop role (user) command implicitly drops all the role's (user's) role (group) memberships. (PostgreSQL uses "role" too much.)
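If extension support were wired up, the provision step could render the statements to run against the new database. This helper is hypothetical; a real implementation would also need to validate the requested names against a whitelist of allowed extensions.

```go
package main

import "fmt"

// extensionSQL renders the statements a broker could execute at
// provision time to enable user-requested postgres extensions.
func extensionSQL(extensions []string) []string {
	stmts := make([]string, 0, len(extensions))
	for _, ext := range extensions {
		stmts = append(stmts, fmt.Sprintf(`CREATE EXTENSION IF NOT EXISTS "%s"`, ext))
	}
	return stmts
}

func main() {
	for _, s := range extensionSQL([]string{"uuid-ossp", "pg_trgm"}) {
		fmt.Println(s)
	}
}
```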
From @krancour on October 13, 2017 3:37
In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.
From @arschles on October 16, 2017 18:55
I recall a discussion between myself and @krancour about handling failures in the async engine, in which we discussed rolling back changes that were made in prior steps. I believe the async engine doesn't have rollback support yet, but the best practice is to make steps idempotent which makes failure handling less important (because the steps can be re-run with minimal negative consequence). So, this issue exists to track discussion for whether we need to add support for:
cc/ @krancour
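The idempotency best practice mentioned above can be illustrated with a minimal sketch (not the broker's async engine API): a step that checks for existing state before acting is safe to re-run after a failure, which reduces the need for rollback support.

```go
package main

import "fmt"

// resourceStore stands in for whatever state a step mutates.
type resourceStore map[string]bool

// ensureResource is idempotent: re-running it after a failure is a
// no-op if the resource was already created.
func ensureResource(store resourceStore, name string) (created bool) {
	if store[name] {
		return false // already exists; safe re-run
	}
	store[name] = true
	return true
}

func main() {
	store := resourceStore{}
	fmt.Println(ensureResource(store, "db")) // first run creates
	fmt.Println(ensureResource(store, "db")) // re-run is a no-op
}
```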
In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.
From @krancour on September 28, 2017 2:42
I've been trying to put myself back in an SRE headspace. I believe it's important that we document best practices and "recipes" for how to operationalize this broker, both in the context of k8s and CF.
I imagine these recipes covering a broad range of prototypical use cases.
Example recipe: Deploying message producers and message consumers dependent on a common service bus queue. This sort of use case is both common and non-trivial.
For provision, bind, and update operations, it's currently possible to specify params as key=value pairs, but there's no way to send an array as a parameter. This would be useful, for instance, in specifying multiple postgres extensions to install during provisioning or multiple container ports to expose (in ACI).
From @arschles on October 4, 2017 21:33
Could be the same repo that we publish azure/charts (currently located at https://github.com/deis/service-catalog-charts) to
From @krancour on July 10, 2017 14:36
From @krancour on August 3, 2017 17:11
The k8s service catalog doesn't let this happen, but that is unlikely to be this broker's only client.
Since the OSB spec doesn't explicitly disallow this, I propose we accept it as a possibility and handle it by cascading the delete (deprovision) to any would-be orphan bindings.
I'm not sure what this, ideally, looks like. Do we queue up a whole bunch of unbinds and require those to complete before we queue up the deprovisioning steps? Or do we do the deprovisioning and, when we're done, say the orphaned bindings are meaningless and simply delete them from the broker's own datastore?
I'm going to consider this a post-alpha issue, so I'm not assigning it to any existing milestone.
cc @arschles
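The second option (deleting would-be orphans from the broker's own datastore) could be sketched like this. The types and store shape are hypothetical:

```go
package main

import "fmt"

// store maps instance IDs to the IDs of their bindings.
type store struct {
	bindings map[string][]string
}

// deprovisionCascade removes an instance's bindings from the broker's
// datastore so no orphan bindings survive deprovisioning; it returns
// the number of bindings removed.
func (s *store) deprovisionCascade(instanceID string) int {
	removed := len(s.bindings[instanceID])
	delete(s.bindings, instanceID)
	return removed
}

func main() {
	s := &store{bindings: map[string][]string{"inst-1": {"b1", "b2"}}}
	fmt.Println(s.deprovisionCascade("inst-1"))
}
```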
Currently, you can get away with initiating provisioning without providing several parameters that ought to be required. The result is an ARM deployment that is doomed to fail.
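Rejecting such requests up front is a simple synchronous check before the ARM deployment is started. A minimal sketch, with example parameter names:

```go
package main

import "fmt"

// validateRequired returns an error naming the first missing required
// parameter, so the broker can respond 400 instead of launching a
// deployment that is doomed to fail.
func validateRequired(params map[string]string, required []string) error {
	for _, key := range required {
		if params[key] == "" {
			return fmt.Errorf("required parameter %q is missing", key)
		}
	}
	return nil
}

func main() {
	err := validateRequired(map[string]string{"location": ""}, []string{"location"})
	fmt.Println(err)
}
```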
From @julienstroheker on October 16, 2017 19:55
To add on top of #192
From @zhongyi-zhang on September 13, 2017 2:35
There are a lot of inconsistencies in conventions from one module to the next, and even inconsistencies within modules.
Examples:
One module might use a key like clientId while another might use clientid. In one case I know of, a single module uses both objectid and clientId.
We should review the keys used in JSON throughout the entire broker and correct inconsistencies while we're still able / allowed to make breaking changes.
I'm classifying this as a bug, because even if it doesn't crash anything, I perceive it to be a serious flaw in the UX.
From @krancour on September 28, 2017 18:3
For each module, we should document service/plan offerings, provisioning parameters, and the credential schema.
Modules:
Docker and ACI allow containers to expose multiple ports, but provisioning parameters for the ACI module only allow specifying a single port.
Although we're not investing heavily in module development / enhancement in the v0.4.0/v0.5.0 timeframe, I'm adding this to v0.4.0 because s/port/ports is a breaking change and these changes are better made sooner while we are making no guarantees about the stability of each module.
From @krancour on October 30, 2017 17:22
To avoid issues like #246 taking us by surprise in the future, it would be good to assert (in the module lifecycle tests) that after provisioning, the expected resources exist, and after deprovisioning, the expected resources have been deleted. Currently (save for ongoing work to test credentials) the lifecycle tests only assert that each module can complete the provision/bind/unbind/deprovision lifecycle without error. We can stand to make stronger assertions that expected things happened.
From @torosent on September 12, 2017 14:30
I would like to contribute this module.
From @arschles on October 13, 2017 18:56
If you run go test -tags unit $(glide nv) on the host, with no redis host running, many tests fail as expected, but one panics:
--- FAIL: TestGetExistingInstance (0.00s)
Error Trace: store_test.go:51
Error: Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007fd80)}
Error Trace: store_test.go:55
Error: Should be true
Error Trace: store_test.go:56
Error: Expected nil, but got: &net.OpError{Op:"dial", Net:"tcp", Source:net.Addr(nil), Addr:net.Addr(nil), Err:(*net.DNSError)(0xc42007ff00)}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1294357]
goroutine 33 [running]:
testing.tRunner.func1(0xc4200811e0)
/usr/local/opt/go/libexec/src/testing/testing.go:622 +0x29d
panic(0x12e0200, 0x14d2410)
/usr/local/opt/go/libexec/src/runtime/panic.go:489 +0x2cf
github.com/Azure/azure-service-broker/pkg/storage.TestGetExistingInstance(0xc4200811e0)
/Users/aaschles/gocode/src/github.com/Azure/azure-service-broker/pkg/storage/store_test.go:57 +0x257
testing.tRunner(0xc4200811e0, 0x134b1a8)
/usr/local/opt/go/libexec/src/testing/testing.go:657 +0x96
created by testing.(*T).Run
/usr/local/opt/go/libexec/src/testing/testing.go:697 +0x2ca
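The panic happens because the test keeps running after a failed assertion and then dereferences a nil instance. Halting at the first failed precondition (t.Fatal, or require instead of assert) avoids the nil dereference. A minimal illustration of the pattern, with hypothetical names:

```go
package main

import "fmt"

type instance struct{ ID string }

// getInstance stands in for the store lookup that fails when no redis
// host is reachable.
func getInstance(connected bool) (*instance, error) {
	if !connected {
		return nil, fmt.Errorf("dial tcp: connection refused")
	}
	return &instance{ID: "abc"}, nil
}

func main() {
	inst, err := getInstance(false)
	if err != nil {
		// Stop here instead of continuing to use inst, which is nil.
		fmt.Println("fatal:", err)
		return
	}
	fmt.Println(inst.ID)
}
```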
From @zhongyi-zhang on October 16, 2017 2:11
The test for MongoDB was added. The other plan, CosmosDB, still needs such a test.
In the lifecycle tests, incorporate an assertion that credentials obtained via binding actually work.
For example: if I didn't specify location in provisioning parameters, it would return:
$ cf create-service azure-sqldb basic sqlsql Creating service instance sqlsql in org IP.xip.io_ORGANIZATION / space azure as admin... FAILED Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://asb.IP.xip.io/v2/service_instances/ace140b2-79dc-4ed3-bacb-c4eba0eb67dc?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {}
Relevant line:
https://github.com/Azure/azure-service-broker/blob/master/pkg/api/provision.go#L225
See bold: the user only knows that it is a bad request, but what's the detailed reason? The user needs to use cf logs asb --recent to find that out. At worst, if other polling operations are running at the same time, it may be hard to find the corresponding error message. (PS: such an error is printed to stdout; is that expected?)
2017-11-17T02:03:31.02+0000 [APP/PROC/WEB/0] OUT time="2017-11-17T02:03:31Z" level=debug msg="bad provisioning request: validation error" field=location instanceID=ace140b2-79dc-4ed3-bacb-c4eba0eb67dc issue="invalid location: \"\""
Anyway, it's better to always respond with an error message when an operation fails.
From @krancour on September 28, 2017 17:45
Event Hubs are definitely not one-size-fits-all. Different use cases call for different numbers of partitions and data retention policies.
This issue is to expose those options (and any others deemed reasonable) via provisioning parameters.
Additional things we should expose:
From @arschles on September 5, 2017 22:45
Provisioning should launch a new container on ACI, de-provisioning should delete it, and bind/unbind should be no-ops.
From @zhongyi-zhang on October 20, 2017 8:34
@Haishi2016 has written a nice suite for doing this. It should be integrated into our own test suite.
https://github.com/Haishi2016/osb-checker
Since I expect that many (slight) deviations exist, this additional assertion should not, initially, fail the CI pipeline if issues are found.
We can address each finding individually (as its own issue) before requiring this assertion to pass.
It should have guidance on how reviewers should leave comments and what the prerequisites are for a merge.
I've installed the helm chart according to the directions in the helm chart's README, and I am noticing that the broker API pod almost immediately goes into CrashLoopBackOff with the following log message:
time="2017-11-10T00:23:45Z" level=info msg="setting log level" logLevel=INFO
time="2017-11-10T00:23:45Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"
time="2017-11-10T00:24:15Z" level=fatal msg="async engine stopped: cleaner stopped: error cleaning up after dead workers: error retrieving workers: dial tcp: i/o timeout"```
I am operating on revision 01d1b210e3910bdd57cbad0d444094eb22374527
#69 added the ability to pre-process ARM templates as a Go template...
Now that we have that ability, I'm wondering if we can cut down on ARM template length and complexity by preferring to use Go template mechanisms (e.g. "mustaches", sprig functions, etc.) over ARM parameters.
The pros of ARM parameters, as I see them, are that they offer us a convenient way of declaratively validating inputs; however, this is not as useful to our broker as it may sound. Since ARM deployments are done asynchronously, it's better to validate inputs synchronously before accepting a provisioning request.
EDIT: ... which would mean it's better to rely on the broker's own validations mechanisms instead.
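Rendering an ARM template fragment through Go's text/template (the "mustaches" mentioned above) looks like the following. Sprig would add extra functions; plain text/template is enough for this sketch, and the template content is illustrative.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderARM pre-processes an ARM template string as a Go template,
// substituting values directly instead of routing them through ARM
// parameters.
func renderARM(tmpl string, data map[string]string) (string, error) {
	t, err := template.New("arm").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderARM(`{"location": "{{.location}}"}`, map[string]string{"location": "eastus"})
	fmt.Println(out) // {"location": "eastus"}
}
```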
From @arschles on September 5, 2017 22:49
Provision/deprovision should launch and destroy clusters (respectively). Bind should return kube config. I'm not sure yet what unbind should do. Let's discuss!