
sso's Introduction

sso

See our launch blog post for more information!


Please take the SSO Community Survey to let us know how we're doing, and to help us plan our roadmap!


sso — lovingly known as the S.S. Octopus or octoboi — is the authentication and authorization system BuzzFeed developed to provide a secure, single sign-on experience for access to the many internal web apps used by our employees.

It depends on Google as its authoritative OAuth2 provider, and authenticates users against a specific email domain. Further authorization based on Google Group membership can be required on a per-upstream basis.

The main idea behind sso is a "double OAuth2" flow, where sso-auth is the OAuth2 provider for sso-proxy and Google is the OAuth2 provider for sso-auth.

sso is built on top of Bitly's open source oauth2_proxy.

In a nutshell:

  • If a user visits an sso-proxy-protected service (foo.sso.example.com) and does not have a session cookie, they are redirected to sso-auth (sso-auth.example.com).
    • If the user does not have a session cookie for sso-auth, they are prompted to log in via the usual Google OAuth2 flow, and then redirected back to sso-proxy where they will now be logged in (to foo.sso.example.com)
    • If the user does have a session cookie for sso-auth (e.g. they have already logged into bar.sso.example.com), they are transparently redirected back to proxy where they will be logged in, without needing to go through the Google OAuth2 flow
  • sso-proxy transparently re-validates & refreshes the user's session with sso-auth
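
The flow above boils down to a small amount of per-request logic in the proxy. Here is a minimal Go sketch of that decision, using illustrative names (proxy, loadSession, refresh) rather than sso's actual types:

package sketch

import (
    "net/http"
    "net/url"
    "time"
)

// Hypothetical types for illustration only; not sso's actual API.
type session struct {
    Email           string
    RefreshDeadline time.Time
}

type proxy struct {
    authURL     string                                // e.g. "https://sso-auth.example.com"
    loadSession func(*http.Request) (*session, error) // reads the per-upstream session cookie
    refresh     func(*session) error                  // re-validates the session with sso-auth
    upstream    http.Handler                          // reverse proxy to the protected upstream
}

func (p *proxy) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    sess, err := p.loadSession(r)
    if err != nil {
        // No session cookie for this upstream: redirect to sso-auth, which either
        // reuses its own session cookie or starts the Google OAuth2 flow, then
        // sends the user back here.
        redirect := url.QueryEscape("https://" + r.Host + r.URL.RequestURI())
        http.Redirect(w, r, p.authURL+"/sign_in?redirect_uri="+redirect, http.StatusFound)
        return
    }
    // Session exists: transparently re-validate/refresh it with sso-auth.
    if time.Now().After(sess.RefreshDeadline) {
        if err := p.refresh(sess); err != nil {
            http.Redirect(w, r, p.authURL+"/sign_in", http.StatusFound)
            return
        }
    }
    p.upstream.ServeHTTP(w, r)
}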

Installation

Quickstart

Follow our Quickstart guide to spin up a local deployment of sso to get a feel for how it works!

Code of Conduct

Help us keep sso open and inclusive. Please read and follow our Code of Conduct.

Contributing

Contributions to sso are welcome! Please follow our contribution guideline.

Issues

Please file any issues you find in our issue tracker.

Security Vulns

If you come across any security vulnerabilities with the sso repo or software, please email [email protected]. In your email, please request access to our bug bounty program so we can compensate you for any valid issues reported.

Maintainers

sso is actively maintained by the BuzzFeed Infrastructure teams.

Notable forks

  • pomerium, an identity-aware access proxy inspired by BeyondCorp.


sso's Issues

sso-proxy: support dynamic config endpoint

Related to #68.

Config shouldn't require restarts, and should also support remote endpoints rather than just a file. My specific aim is essentially to be able to use Consul as a backend (outside the Kubernetes universe), but remote endpoints are a backend-agnostic feature that's obviously very useful in all kinds of scenarios.

Permit TLS 1.3

auth/http.go:ServeHTTPS() disallows TLS 1.3; it should be permitted and supported.
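
A minimal sketch of what the fix could look like in the server's tls.Config, assuming a Go toolchain that ships TLS 1.3 (1.12 behind a flag, 1.13+ by default); this is not sso's actual ServeHTTPS code:

package sketch

import "crypto/tls"

// Allow TLS 1.3 by not capping MaxVersion at 1.2.
func tlsConfig() *tls.Config {
    return &tls.Config{
        MinVersion: tls.VersionTLS12,
        // A zero MaxVersion would mean "highest supported"; setting it
        // explicitly to TLS 1.3 also works.
        MaxVersion: tls.VersionTLS13,
    }
}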

sso-auth: segfault if `OLD_COOKIE_SECRET` value is not present

Reproduce by running the quickstart docker-compose environment without OLD_COOKIE_SECRET set; this causes a segfault:

authenticator_1 | panic: runtime error: invalid memory address or nil pointer dereference
authenticator_1 | [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x6ba2d6]
authenticator_1 |
authenticator_1 | goroutine 5 [running]:
authenticator_1 | github.com/buzzfeed/cop/internal/pkg/sessions.(*CookieStore).AuthLoadSession(0x0, 0xc420127500, 0x1, 0x10, 0xc420074c30)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/pkg/sessions/cookie_store.go:196 +0x26
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).LoadSession(0xc4200b6500, 0xc420127500, 0xa, 0x7fdc319a7000, 0x0)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/authenticator.go:144 +0xa6
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).authenticate(0xc4200b6500, 0xadb360, 0xc420176000, 0xc420127500, 0x1, 0xc420018270, 0x22)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/authenticator.go:283 +0x80
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).SignIn(0xc4200b6500, 0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/authenticator.go:372 +0x14c
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).SignIn-fm(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/authenticator.go:207 +0x48
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).validateSignature.func1(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/middleware.go:154 +0x121
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).validateRedirectURI.func1(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/middleware.go:122 +0x6e0
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).validateClientID.func1(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/middleware.go:75 +0x703
authenticator_1 | github.com/buzzfeed/cop/internal/auth.(*Authenticator).withMethods.func1(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/middleware.go:47 +0x92
authenticator_1 | net/http.HandlerFunc.ServeHTTP(0xc4200e4b40, 0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /usr/local/go/src/net/http/server.go:1918 +0x44
authenticator_1 | net/http.(*ServeMux).ServeHTTP(0xc4201414d0, 0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /usr/local/go/src/net/http/server.go:2254 +0x130
authenticator_1 | net/http.(*ServeMux).ServeHTTP(0xc420141440, 0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /usr/local/go/src/net/http/server.go:2254 +0x130
authenticator_1 | github.com/buzzfeed/cop/internal/auth.setHeaders.func1(0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /go/src/github.com/buzzfeed/cop/internal/auth/middleware.go:32 +0x159
authenticator_1 | net/http.HandlerFunc.ServeHTTP(0xc4200e4da0, 0xadb360, 0xc420176000, 0xc420127500)
authenticator_1 | /usr/local/go/src/net/http/server.go:1918 +0x44
authenticator_1 | net/http.(*timeoutHandler).ServeHTTP.func1(0xc420141bc0, 0xc420176000, 0xc420127500, 0xc420014180)
authenticator_1 | /usr/local/go/src/net/http/server.go:3043 +0x53
authenticator_1 | created by net/http.(*timeoutHandler).ServeHTTP
authenticator_1 | /usr/local/go/src/net/http/server.go:3042 +0x158

sso-proxy: support Authorization: Bearer <token>

In Kubernetes, the preferred way of authenticating to the dashboard is via an authenticating proxy.
https://github.com/kubernetes/dashboard/wiki/Access-control#authentication

The authentication mechanism that Kubernetes expects is an Authorization: Bearer <token>, where the token is typically going to be the JWT ID token.
https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens

While headers like X-Forwarded-Email and X-Forwarded-Groups are certainly more approachable and accessible for upstream services to consume, they aren't currently supported by Kubernetes, and unless you enable Gap-Signature there are no guarantees beyond whatever firewalling you do between the proxy and the upstream service in terms of preventing impersonation. The Gap-Signature scheme doesn't appear to be standardized, so it makes more sense to me to put the feature request on the proxy side rather than to try to have the Kubernetes dashboard support X-Forwarded-Email and X-Forwarded-Groups.
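
A hedged sketch of the requested behavior (not an existing sso feature), where a hypothetical per-upstream setting tells the proxy to forward the user's OIDC ID token as a bearer header alongside the existing identity headers:

package sketch

import "net/http"

// setBearer adds Authorization: Bearer <token> to the outgoing upstream
// request when the hypothetical pass_bearer_token option is enabled.
func setBearer(r *http.Request, idToken string, passBearer bool) {
    if passBearer && idToken != "" {
        r.Header.Set("Authorization", "Bearer "+idToken)
    }
    // X-Forwarded-Email / X-Forwarded-Groups would still be set as today.
}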

sso-proxy: remove provider interface

What

This is a holdover from the original oauth2_proxy clone that can be removed.
Since sso-proxy has only one provider, the SSOProvider, it makes sense to get rid of the interface and have the OAuthProxy struct's provider field be an SSOProvider rather than a Provider interface; this simplifies testing and removes the default-provider logic.

docs: best practices for frontend heavy services

Many web applications eschew page refreshing and rely heavily on AJAX requests. This behavior circumvents SSO's ability to perform the proxy -> auth -> proxy redirect loop and it's easy for a frontend client to end up in a state where it thinks that its upstream is failing, but in reality the SSO proxy is trying to get it to reauthenticate.

It would be useful to have documentation outlining the "best case" relationship for SSO proxy and services which use AJAX. This would include:

  • How a frontend should pass X-Requested-With: XMLHttpRequest to get the proxy to respond with JSON. Does the browser do this automatically? Either way we should make it clear that it is the way the proxy knows to respond with JSON.
  • What happens when the proxy wants to go through the redirect flow, but can't because the request is an XHR? (Answer: a 401 response; see the sketch after this list.)
  • What (if any) unique strategies should a frontend client employ to correctly navigate every step in the big diagram? Can/should it prompt the user to open a new tab? Does it need to save state and perform a hard refresh? Which steps happen behind the scenes?
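
For the first two points, a minimal Go sketch of the proxy-side behavior the docs should spell out (illustrative, not sso's exact code): if the request looks like an XHR, respond 401 with JSON instead of starting the redirect flow, and let the frontend decide how to re-authenticate.

package sketch

import (
    "encoding/json"
    "net/http"
)

// unauthenticated handles a request with no valid session. Browsers do not
// set X-Requested-With automatically; the frontend's AJAX layer must add it.
func unauthenticated(w http.ResponseWriter, r *http.Request, signInURL string) {
    if r.Header.Get("X-Requested-With") == "XMLHttpRequest" {
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusUnauthorized)
        json.NewEncoder(w).Encode(map[string]string{
            "error":   "unauthorized",
            "sign_in": signInURL,
        })
        return
    }
    // Non-XHR requests go through the normal redirect flow.
    http.Redirect(w, r, signInURL, http.StatusFound)
}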

docs: reference sso helm chart in README

Is your feature request related to a problem? Please describe.

To make this easier to use, it would be great if there were a Helm chart or a guide for getting this up and running on Kubernetes.

Thanks,

sso-proxy: remove deprecated session unmarshal code and old cookie secrets

Why
We have two code paths for loading the session state in sso-proxy that we kept in until the cookie refresh period expired for sso in production. Both code paths were kept in place so that having an invalidly decrypted session cookie would not affect the UX for those logging into sso-proxy with the sessions encrypted the old way. We can now remove that code path.

This will involve removing:

sso-proxy: define upstreams by path

Can you please add the ability to define an upstream based on path? Or, if it's already possible, please provide an example or test?

For instance when trying to secure monocular with sso I found they have multiple containers for the app. The frontend container needs to be mapped to monocular.sso.mydomain.com and the api container to monocular.sso.mydomain.com/api/

sso-proxy: CLUSTER should not be a required config value

AFAIK, this should only affect metrics, where sso should be able to omit this information if it's not given or pick a reasonable default value.

While we're at it, let's consider changing the name of this to something a bit more deployment agnostic, like ENVIRONMENT?

Makefile for running local dependencies and tests

Is your feature request related to a problem? Please describe.

We should make the instructions for testing as easy as running make test. Why? This would provide a "standard" interface; consistency with a large number of existing open source projects. Furthermore, it would be self-documenting -- if running tests is just make test then I can look at the Makefile and figure out exactly what steps are taken to build and test the project.

Describe the solution you'd like

This blog post is a good source of inspiration for what's possible:
http://azer.bike/journal/a-good-makefile-for-go/

docs: non-docker based deployment guides

The Quickstart was useful to play around with but I think it might give the wrong idea: SSO is not actually coupled with docker or k8s.

It also works great with old fashioned unit files. If there's any interest I'd be happy to improve the docs in that regard.

build static assets into go binary

Is your feature request related to a problem? Please describe.
Currently, static assets are not built into the binary.

Describe the solution you'd like
We can build the static assets into the go binary to make it easier to pull and use the Docker image without needing to mount the static directory.
We can use go-bindata to compile static images into our go binary, similar to what is done in go-httpbin

*: auto config reloading

Is your feature request related to a problem? Please describe.
Currently if you change the yaml config for the sso-proxy, you have to reload the sso-proxy app itself.

Describe the solution you'd like

  • The sso-proxy should allow the config file to change and reload it automatically, so you don't have to restart sso-proxy every time. This is useful for dynamic environments and when running in Kubernetes (see the sketch after this list).

  • Provide a way when running in a kubernetes cluster to have it watch for annotations on a service to have it dynamically generate the config and reload the service when the config is updated.
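
A hedged sketch of the file-watching approach from the first bullet, using github.com/fsnotify/fsnotify (an assumption; sso does not currently do this). reload would re-parse upstream_configs.yml and swap the proxy's routing table atomically:

package sketch

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

// watchConfig calls reload whenever the config file is written or recreated.
func watchConfig(path string, reload func() error) error {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return err
    }
    go func() {
        defer watcher.Close()
        for {
            select {
            case ev, ok := <-watcher.Events:
                if !ok {
                    return
                }
                if ev.Op&(fsnotify.Write|fsnotify.Create) != 0 {
                    if err := reload(); err != nil {
                        log.Printf("config reload failed, keeping old config: %v", err)
                    }
                }
            case err, ok := <-watcher.Errors:
                if !ok {
                    return
                }
                log.Printf("config watch error: %v", err)
            }
        }
    }()
    return watcher.Add(path)
}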

sso-{auth,proxy}: simplify secret validation code

In #37, I covered some of the difficulties I encountered in generating a valid secret. A lot of the complexity arises from the fact that sso accepts either base64-encoded bytes or raw bytes, and attempts to guess which it has been given.

Unfortunately, that guess is extremely unreliable, because

  • It will reject legitimate raw secret bytes that happen to decode as base64
  • The base64 decoder used will not decode the output of the base64 CLI tool a user might naturally reach for to safely encode a set of randomly generated bytes

Here are some legitimate secret values that will be rejected:

# 32 ASCII bytes generated by 1Password
H3LWy8KF86hm2UcE3XAYvb2ub2G33YiV

$ openssl rand 32 -base64
AquwB7PgQRVaAFKwEXDefRhfegtqfLXVP4/LbDSHkpY=

$ openssl rand 32 | base64
4QkZXXwT27g4tYbdO50gA0UHo5S8OElg0Vs0m+E5w1c=

I can't remember why there's so much flexibility built into the secret validation code, but it feels too complex:

validCookieSecretSize := false
for _, i := range []int{32, 64} {
    if len(sessions.SecretBytes(o.CookieSecret)) == i {
        validCookieSecretSize = true
    }
}
var decoded bool
if string(sessions.SecretBytes(o.CookieSecret)) != o.CookieSecret {
    decoded = true
}
if validCookieSecretSize == false {
    var suffix string
    if decoded {
        suffix = fmt.Sprintf(" note: cookie secret was base64 decoded from %q", o.CookieSecret)
    }
    msgs = append(msgs, fmt.Sprintf(
        "cookie_secret must be 32 or 64 bytes "+
            "to create an AES cipher but is %d bytes.%s",
        len(sessions.SecretBytes(o.CookieSecret)), suffix))
}

And it seems like this code should be stricter about what it will accept, and rely on the application to provide secret bytes in some specific form:

// SecretBytes attempts to base64 decode the secret, if that fails it treats the secret as binary
func SecretBytes(secret string) []byte {
    b, err := base64.URLEncoding.DecodeString(addPadding(secret))
    if err == nil {
        return []byte(addPadding(string(b)))
    }
    return []byte(secret)
}
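
A sketch of the stricter behavior suggested above (an assumption, not the current implementation): require the secret to be base64 and to decode to exactly 32 or 64 bytes, and never fall back to treating the string as raw bytes:

package sketch

import (
    "encoding/base64"
    "fmt"
)

// decodeCookieSecret accepts URL-safe or standard base64, padded or not, and
// rejects everything else with an actionable error.
func decodeCookieSecret(secret string) ([]byte, error) {
    var b []byte
    var err error
    for _, enc := range []*base64.Encoding{
        base64.URLEncoding, base64.RawURLEncoding,
        base64.StdEncoding, base64.RawStdEncoding,
    } {
        if b, err = enc.DecodeString(secret); err == nil {
            break
        }
    }
    if err != nil {
        return nil, fmt.Errorf("cookie_secret must be base64-encoded: %v", err)
    }
    if n := len(b); n != 32 && n != 64 {
        return nil, fmt.Errorf("cookie_secret must decode to 32 or 64 bytes, got %d", n)
    }
    return b, nil
}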

sessions: document the 4 timestamps in SessionState

There are 4 timestamps in sessions.SessionState and their names are somewhat confusing, given that you're dealing w/ several things that may be on different expiration schedules, and indeed likely will be. It would be helpful to add additional comments discussing e.g. what RefreshDeadline vs LifetimeDeadline vs ValidDeadline are. I presume that RefreshDeadline is the timestamp at which access token is no longer valid, and that ValidDeadline is for checking whether token is still valid via token verification endpoint. Presumably LifetimeDeadline expiration sends the user back through the IdP auth flow again? But I'm guessing on all of these because the comments don't really say.
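
A hedged sketch of the kind of comments being asked for. The fourth timestamp isn't named in the issue, so only the three named ones appear here, and the semantics follow the issue's guesses rather than the actual sessions.SessionState implementation:

package sketch

import "time"

// SessionState deadlines, with presumed semantics to be verified against the
// real implementation before committing any documentation.
type SessionState struct {
    // RefreshDeadline: presumed to be the point at which the provider's
    // access token expires and must be refreshed.
    RefreshDeadline time.Time

    // ValidDeadline: presumed to be when the session should next be checked
    // against the provider's token-verification endpoint.
    ValidDeadline time.Time

    // LifetimeDeadline: presumed to be the hard end of the session, after
    // which the user is sent back through the IdP auth flow from scratch.
    LifetimeDeadline time.Time
}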

circle-ci: build static assets into go binary

Is your feature request related to a problem? Please describe.
Currently, static assets are not built into the binary.

Describe the solution you'd like
We can build the static assets into the go binary to make it easier to pull and use the Docker image without needing to mount the static directory.

sso-auth: decouple setting provider with options validation

What

Move provider initialization out of Options validation and into a separate function that can be passed in as an option function.

Why

Currently we set the provider as the google provider during validation of the Options struct which makes it difficult to add a new provider and to add new features to existing providers. Decoupling the logic from validating Options will make it simpler to support adding new providers.
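
A hedged sketch of that decoupling using the functional-options pattern; all names here are illustrative rather than sso's actual API:

package sketch

import "fmt"

// Provider stands in for the real Provider interface.
type Provider interface {
    Data() string
}

type Authenticator struct {
    provider Provider
}

// Option mutates an Authenticator during construction.
type Option func(*Authenticator) error

// SetProvider injects whichever provider the caller wants, decoupled from
// Options validation.
func SetProvider(p Provider) Option {
    return func(a *Authenticator) error {
        if p == nil {
            return fmt.Errorf("provider must not be nil")
        }
        a.provider = p
        return nil
    }
}

func NewAuthenticator(opts ...Option) (*Authenticator, error) {
    a := &Authenticator{}
    for _, opt := range opts {
        if err := opt(a); err != nil {
            return nil, err
        }
    }
    return a, nil
}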

*: better startup error reporting

We aim to use structured logging throughout sso, but its use for reporting configuration errors that prevent successful startup actually makes the errors much harder to read and diagnose. Here's an example:

$ docker run --rm buzzfeed/sso:latest sso-proxy
{"error":"Invalid configuration:\n  missing setting: cluster\n  missing setting: provider-url\n  missing setting: upstream-configs\n  missing setting: cookie-secret\n  missing setting: client-id\n  missing setting: client-secret\n  missing setting: email-domain\n  missing setting: statsd-host\n  missing setting: statsd-port","level":"error","msg":"error validing options","service":"sso-proxy","time":"2018-08-23 23:33:55.82311"}

In cases where invalid/incomplete configuration is provided, we should instead exit with a more readable error message.

Additionally, the error message above refers to settings as, e.g., provider-url, which is a holdover from the days of CLI options. These configuration error messages should be in terms of env vars (i.e. PROVIDER_URL in this example), given that env vars are the standard/preferred method of configuring sso.

This affects both sso-auth and sso-proxy.

sso-auth: fails to start with error "bad key size"

Describe the bug
Running "sso-auth" results in :

> {"error":"siv: bad key size","level":"error","msg":"","service":"sso-authenticator","time":"2018-09-21 11:56:10.92111"}
> {"error":"siv: bad key size","level":"error","msg":"error creating new Authenticator","service":"sso-authenticator","time":"2018-09-21 11:56:10.92111"}
> 

To Reproduce
Steps to reproduce the behavior:
Follow the steps provided at the URL:

https://github.com/buzzfeed/sso/blob/master/docs/google_provider_setup.md

and generate the Google credentials JSON.

Expected behavior
sso-auth service should launch without errors.

Additional context
This is running on Ubuntu 18.04. However "sso-proxy" starts without errors.

add a new parameter which overrides the `PROVIDER_URL` for the sso-proxy->sso-auth connection for split dns environments


Currently the PROVIDER_URL is used both to generate the sign_in URL for the client (http://sso-auth.localtest.me/sign_in?) and by sso-proxy for the redeem transaction (http://sso-auth.localtest.me/redeem). In a split DNS environment these point to different IPs, as is the case with the current quickstart.

Currently sso-auth.localtest.me resolves to different IPs for the client and for the sso-proxy process:

for PROVIDER_URL=sso-auth.localtest.me
Client Resolution: sso-auth.localtest.me=127.0.0.1
sso-proxy Resolution: sso-auth.localtest.me=172.20.0.1, which is forced by https://github.com/buzzfeed/sso/blob/master/quickstart/docker-compose.yml#L64-L65 and https://github.com/buzzfeed/sso/blob/master/quickstart/docker-compose.yml#L137-L143

I propose an optional parameter PROXY_PROVIDER_URL which, when present, is used by sso-proxy to generate the redeem transaction URL; when not present, the current behavior of using PROVIDER_URL applies. Ideally this would result in:

for PROVIDER_URL=sso-auth.localtest.me
for PROXY_PROVIDER_URL=host.docker.internal
Client Resolution: sso-auth.localtest.me=127.0.0.1
sso-proxy Resolution: host.docker.internal=172.20.0.1
without the need for any docker-compose extra_hosts or networks stanzas
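
A minimal sketch of the proposed lookup, assuming the hypothetical PROXY_PROVIDER_URL parameter described above:

package sketch

import "os"

// proxyProviderURL returns the URL sso-proxy should use for its
// server-to-server calls (e.g. /redeem): PROXY_PROVIDER_URL when set,
// otherwise the existing PROVIDER_URL. PROVIDER_URL alone still drives the
// client-facing sign_in redirects.
func proxyProviderURL() string {
    if u := os.Getenv("PROXY_PROVIDER_URL"); u != "" {
        return u
    }
    return os.Getenv("PROVIDER_URL")
}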

sso-proxy: sign all requests for upstreams with a private key

Why

Currently sso-proxy signs the Gap-Signature header using a shared secret stored in env vars with the prefix "SSO_CONFIG_". This can be tedious, as both the upstream and sso-proxy need to have the same secret.

What

Using a public/private key mechanism, SSO Proxy will sign requests with its private key and have an endpoint available for upstreams to retrieve the public key and validate the authenticity of the requests.
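
A hedged sketch of the idea; Ed25519 is just one reasonable signature choice, and the header name and canonicalization below are assumptions rather than a finished design or sso's current Gap-Signature scheme:

package sketch

import (
    "crypto/ed25519"
    "crypto/rand"
    "encoding/base64"
    "net/http"
)

type signer struct {
    priv ed25519.PrivateKey
    pub  ed25519.PublicKey
}

func newSigner() (*signer, error) {
    pub, priv, err := ed25519.GenerateKey(rand.Reader)
    if err != nil {
        return nil, err
    }
    return &signer{priv: priv, pub: pub}, nil
}

// signRequest signs a canonical representation of the request; a real design
// would need to pin down exactly which parts of the request are covered.
func (s *signer) signRequest(r *http.Request) {
    msg := []byte(r.Method + " " + r.URL.RequestURI() + "\n" + r.Header.Get("X-Forwarded-Email"))
    sig := ed25519.Sign(s.priv, msg)
    r.Header.Set("Gap-Signature", "ed25519 "+base64.StdEncoding.EncodeToString(sig))
}

// publicKeyHandler lets upstreams fetch the verification key and validate
// request authenticity without a shared secret.
func (s *signer) publicKeyHandler(w http.ResponseWriter, _ *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    w.Write([]byte(base64.StdEncoding.EncodeToString(s.pub)))
}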

*: user-defined templates and error pages

Currently the FOOTER env var does nothing; there's no way to remove "Secured by SSO" or customize the login page without a recompile.

I think a -ui flag that points to a folder with templates that override the built in ones would be a nice thing to have. Happy to contribute a PR.

TCP Dialer timeout when redeeming token

Describe the bug
The authorization flow fails consistently with an io timeout.

To Reproduce
Steps to reproduce the behavior:

  1. Set up the quickstart in a high-latency environment (e.g. Israel)
  2. Go through the auth flow
  3. See a 500 Internal Service Error as a response to the callback
  4. Observe from the logs that: Post https://www.googleapis.com/oauth2/v3/token: dial tcp 172.217.21.202:443: i/o timeout

Expected behavior
The OAuth flow should succeed.

Additional context
From what I can see, the dial timeout is hardcoded to 2 seconds. This is too short for some environments and should be configurable.
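
A minimal sketch of making the dial timeout configurable (the setting name and defaults here are hypothetical):

package sketch

import (
    "net"
    "net/http"
    "time"
)

// newHTTPClient builds the client used to redeem tokens, with the TCP dial
// timeout taken from config instead of being hardcoded.
func newHTTPClient(dialTimeout time.Duration) *http.Client {
    if dialTimeout == 0 {
        dialTimeout = 2 * time.Second // current hardcoded value as the default
    }
    return &http.Client{
        Transport: &http.Transport{
            DialContext: (&net.Dialer{
                Timeout:   dialTimeout,
                KeepAlive: 30 * time.Second,
            }).DialContext,
        },
    }
}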

Local make build failing due to file "statik/statik.go" already exists; use -f to overwrite

The issue
Running make build fails with Error 1

To Reproduce
Steps to reproduce the behavior:

  1. Install sso from scratch following the instructions
$ go get github.com/buzzfeed/sso/cmd/...
$ cd $GOPATH/src/github.com/buzzfeed/sso
$ gpm install
$ make build
mkdir -p dist
go generate ./...
file "statik/statik.go" already exists; use -f to overwrite
internal/auth/static_files.go:38: running "/home/snebel/gopath/bin/statik": exit status 1
Makefile:8: recipe for target 'dist/sso-auth' failed
make: *** [dist/sso-auth] Error 1

Expected behavior
I would expect the make command to just build it without errors

Desktop (please complete the following information):

  • OS:
Description:	Ubuntu 16.04.5 LTS
Release:	16.04
Codename:	xenial
  • Repository commit: 0519150fcf83dde5dcb077b4eff8689276b0150d

Additional context
Adding the -f flag to the go:generate directive at internal/auth/static_files.go:38 seems to fix the issue:

//go:generate $GOPATH/bin/statik -f -src=./static
$ make build
mkdir -p dist
go generate ./...
go build -o dist/sso-auth ./cmd/sso-auth
mkdir -p dist
go generate ./...
go build -o dist/sso-proxy ./cmd/sso-proxy

Although I don't have the full context on the statik tool and how the CI manages to build without hitting this error, I think it'd be good to fix this so that local builds work without errors.

Thanks

remove use of go-options

Is your feature request related to a problem? Please describe.
We currently use go-options so that sso-auth and sso-proxy can be started via either CLI flags or a config file. Rather than having two different ways of starting these binaries, it seems like a good idea to stick to one.

Describe the solution you'd like

  • Run binaries using a config file.
    • Pro: More secure
    • Con: Will require more work.
  • CLI Solution
    • Con: It is possible to leak sensitive secrets via CLI
    • Pro: Easier since we already use it.

*: provide better feedback for invalid secrets

TL;DR

If an invalid COOKIE_SECRET value is given, sso should provide guidance for how to generate a valid one rather than a cryptic error message. Bonus points for providing a separate sso-gen-secret binary that will Just Work!

Let's make it as easy as possible for users to generate good, secure secrets!

A bit more context

The error message sso gives for an invalid COOKIE_SECRET value takes this general form (for abcd as the secret value):

{"error":"Invalid configuration:\n  cookie_secret must be 32 or 64 bytes to create an AES cipher but is 4 bytes. note: cookie secret was base64 decoded from \"abcd\"","level":"error","msg":"error validating opts","service":"sso-authenticator","time":"2018-08-25 00:15:04.82512"}

But we still get an error message if we try a 32 byte secret (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx):

{"error":"Invalid configuration:\n  cookie_secret must be 32 or 64 bytes to create an AES cipher but is 24 bytes. note: cookie secret was base64 decoded from \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"","level":"error","msg":"error validating opts","service":"sso-authenticator","time":"2018-08-25 00:17:33.82512"}

Having dug into this a bit, I know that sso is trying to base64-decode the given secret value (that's why it reports a length of 24 bytes above), but, as I'll illustrate in a follow-up issue, it can be difficult to generate a valid secret even with this knowledge.

Working example

Here's one way that works, assuming python is available (note the use of urlsafe_b64encode, which seems to agree with the golang decoder used in sso):

python -c 'import base64, os, sys; sys.stdout.write(base64.urlsafe_b64encode(os.urandom(32)))'
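
For reference, a hedged sketch of the same thing in Go, which is essentially what the suggested sso-gen-secret helper (a hypothetical binary name) would do:

package sketch

import (
    "crypto/rand"
    "encoding/base64"
)

// genSecret emits 32 random bytes, URL-safe base64 encoded to match the
// decoder sso uses.
func genSecret() (string, error) {
    b := make([]byte, 32)
    if _, err := rand.Read(b); err != nil {
        return "", err
    }
    return base64.URLEncoding.EncodeToString(b), nil
}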

*: add useful --help output

We should add a --help CLI flag to both sso-auth and sso-proxy that outputs documentation about configuration, for anyone exploring or debugging the apps at the command line.

At the moment, the --help flag is ignored and instead the user is greeted with a big error message about a bunch of invalid settings (see also #33):

$ docker run --rm buzzfeed/sso:latest sso-proxy --help
{"error":"Invalid configuration:\n  missing setting: cluster\n  missing setting: provider-url\n  missing setting: upstream-configs\n  missing setting: cookie-secret\n  missing setting: client-id\n  missing setting: client-secret\n  missing setting: email-domain\n  missing setting: statsd-host\n  missing setting: statsd-port","level":"error","msg":"error validing options","service":"sso-proxy","time":"2018-08-24 23:49:58.82411"}

*: statsd should be optional

Not all environments need metrics collection; as such, the following options should be optional or have defaults:

  • STATSD_HOST
  • STATSD_PORT

*: combine sso-{auth,proxy} into a single binary

I'm still reading through but it seems like it wouldn't be too hard to roll sso-auth and sso-proxy into a single binary, busybox/minikube style.

It would simplify deployment; I think the benefits are worth the effort.
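
A hedged sketch of the busybox-style dispatch, with stubs standing in for the existing sso-auth and sso-proxy entry points:

package main

import (
    "fmt"
    "os"
    "strings"
)

// Stubs standing in for the existing entry points.
func runAuth()  { fmt.Println("would start sso-auth") }
func runProxy() { fmt.Println("would start sso-proxy") }

// main dispatches on the first argument, or on the name the binary was
// invoked as (e.g. via a sso-auth / sso-proxy symlink).
func main() {
    cmd := os.Args[0]
    if len(os.Args) > 1 {
        cmd = os.Args[1]
    }
    switch {
    case strings.HasSuffix(cmd, "sso-auth"), cmd == "auth":
        runAuth()
    case strings.HasSuffix(cmd, "sso-proxy"), cmd == "proxy":
        runProxy()
    default:
        fmt.Fprintln(os.Stderr, "usage: sso {auth|proxy}")
        os.Exit(2)
    }
}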

sso-proxy: refresh on error page leads to different error

Describe the bug
Use an upstream where you try to log in and get a "Group membership required" error. On refresh, this turns into "http: named cookie not present".

Expected behavior
The group membership required error should remain.

Desktop (please complete the following information):
Using docker image sso:latest (first - and so far only - release)
Applies to all OS/browsers

Additional context
Notice how the page url still has an oauth2/callback?code=<code> section.

Metrics configuration should be optional

Both sso-auth and sso-proxy refuse to start without STATSD_HOST and STATSD_PORT env vars, but — especially while we only support the DataDog statsd client — metrics reporting should be optional and disabled in the absence of those config values.

sso-proxy: logs alternate between JSON and plain text

Describe the bug

This popped up while writing the tests for TLS verification.

{"error":"x509: certificate signed by unknown authority","level":"error","msg":"error in upstreamTransport RoundTrip","service":"sso","time":"2018-09-18 16:13:01.9184"}
2018/09/18 16:13:01 server.go:2979: http: TLS handshake error from 127.0.0.1:55948: remote error: tls: bad certificate
2018/09/18 16:13:01 reverseproxy.go:395: http: proxy error: x509: certificate signed by unknown authority
{"error":"unsupported protocol scheme \"\"","level":"error","msg":"error in upstreamTransport RoundTrip","service":"sso","time":"2018-09-18 16:13:01.9184"}

To Reproduce
Steps to reproduce the behavior:

  1. TLS verification failures will trigger the bug

Expected behavior
If logging in JSON format, all logs should be in JSON format.

Caveats
I've run into this issue countless times in Go and it's honestly a pain to fix.
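
A hedged sketch of one way to fix it: both net/http (for TLS handshake errors) and httputil.ReverseProxy (for proxy errors) fall back to the standard plain-text logger unless their ErrorLog fields are set, so both can be pointed at a writer that feeds the structured JSON logger:

package sketch

import (
    "io"
    "log"
    "net/http"
    "net/http/httputil"
)

// withStructuredErrorLogs routes stdlib error output through the structured
// logger; jsonLogWriter is assumed to wrap whatever JSON logger sso uses.
func withStructuredErrorLogs(srv *http.Server, rp *httputil.ReverseProxy, jsonLogWriter io.Writer) {
    errLog := log.New(jsonLogWriter, "", 0)
    srv.ErrorLog = errLog
    rp.ErrorLog = errLog
}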

sso-auth: add support for individual e-mail address authentication

Is your feature request related to a problem? Please describe.
Currently, it is only possible to whitelist an entire domain of addresses, which is convenient for organizations; however, it is not so convenient for single users wishing to authenticate via OAuth on a public domain such as gmail.com.

Describe the solution you'd like
It would be nice if there was a configuration option to specify a list of individual e-mail addresses to whitelist, or a path to a file containing a list of emails to whitelist, in addition to optionally also specifying whitelisted domain(s).

Describe alternatives you've considered
I haven't considered any alternative whitelisting methods because Google is the only OAuth provider currently supported.

Additional context
This feature was already available in oauth2_proxy via the "authenticated-emails-file" command-line flag.

Add TLS verification configuration option to proxy

Currently, the proxy verifies that TLS certificates are valid before sending requests to upstream servers. We'd like to point directly at AWS ALB DNS records in some cases, which means those certs are essentially guaranteed not to be valid. The validity of the cert buys very little in our threat model: it's not mutual auth, it's trivial to get a valid cert, and it's the firewall/security group rules that really give you any kind of assurance.

I'd like to either have a TLS_VERIFY environment variable that controls whether the proxy will verify certificates or not, or add a config line to the upstream_configs.yml file that would control it. The latter being preferable since conceivably you might have some upstream services that do have valid certs and some that don't.
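
A hedged sketch of the per-upstream option; tls_skip_verify is a hypothetical upstream_configs.yml key, not an existing setting:

package sketch

import (
    "crypto/tls"
    "net/http"
)

// upstreamTransport builds the upstream's transport with certificate
// verification disabled only where the config asks for it.
func upstreamTransport(skipVerify bool) *http.Transport {
    return &http.Transport{
        TLSClientConfig: &tls.Config{
            InsecureSkipVerify: skipVerify, // true only for e.g. raw ALB DNS names
        },
    }
}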

sso-auth: refactor Provider-related tests

What

  • Remove ProviderData as an implementation of the Provider interface
  • Change unit tests in authenticator.go to use TestProvider as mock
  • Some tests use the google provider and test the Authenticator behavior with those. We should keep those, but possibly separate them out.

Why

Our tests inconsistently use TestProvider, a mock of the Provider interface, and ProviderData, a partial implementation of the Provider interface that returns ErrNotImplemented for some functions but not all.

sso-proxy: configuration option to preserve client Host header

Currently SS Octopus replaces the Host header provided by the client with a new Host header based on the upstream DNS name. It also isn't very careful in how it sets the X-Forwarded-Host header (adds a new one, ignoring any that may have been set by another proxy or networking component). In some configurations, particularly those involving other proxies or servers that have virtual hosts, it's desirable to preserve the client-provided host. This should be a per-upstream configuration option.
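
A hedged sketch of the per-upstream option (preserve_host is a hypothetical config key) built on httputil.ReverseProxy:

package sketch

import (
    "net/http"
    "net/http/httputil"
    "net/url"
)

// newProxy keeps the client's Host header when preserveHost is set and
// appends to any existing X-Forwarded-Host instead of overwriting it.
func newProxy(upstream *url.URL, preserveHost bool) *httputil.ReverseProxy {
    proxy := httputil.NewSingleHostReverseProxy(upstream)
    director := proxy.Director
    proxy.Director = func(r *http.Request) {
        clientHost := r.Host
        director(r)
        if prior := r.Header.Get("X-Forwarded-Host"); prior != "" {
            r.Header.Set("X-Forwarded-Host", prior+", "+clientHost)
        } else {
            r.Header.Set("X-Forwarded-Host", clientHost)
        }
        if preserveHost {
            r.Host = clientHost // pass the client's Host through to the upstream
        } else {
            r.Host = upstream.Host // current behavior: Host based on the upstream DNS name
        }
    }
    return proxy
}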

*: add a --version flag

Now that we're versioning, it's generally useful to be able to introspect what version of a binary you have. For Go specifically, I also find it useful to know what version of Go the binary was built with, e.g.:

$ nsqd --version
nsqd v1.0.0-compat (built w/go1.9.1)

sso-auth: create a default provider

Is your feature request related to a problem? Please describe.
Create a default provider for the authenticator that can be used for quick-start purposes rather than having to set up google provider credentials.

Describe the solution you'd like
This will make getting sso running locally without any other credentials easier for those who want to test it out. It can also enable us to create integration tests.

*: user-defined error pages

Is your feature request related to a problem? Please describe.
There is now a generic error page. Would be nice if we could provide our own template (style, logo, wording, ...)

Describe the solution you'd like
Option to define my own error html page template, including css and images

Describe alternatives you've considered
Only other option is to keep the baked in one (that should still be used as default)

Additional context
n/a
