bitly / oauth2_proxy
A reverse proxy that provides authentication with Google, GitHub, or other providers
License: MIT License
It would be great to leverage the Google Apps provisioning API to retrieve the logged-in user's group memberships, which would enable some useful features.
Currently, it looks like the email file is only read at startup. It would be useful to re-read it at login so users could be added or removed dynamically.
We are currently running this on 4 different hosts, all under the same domain via round robin DNS.
Something we run into is that a user might have to log in multiple times if they end up on a different host between sessions. It would be nice if there were a way to distribute cookies between all the proxy hosts. I'm curious how you think that could be built.
Another issue is that cookies don't seem to persist on disk at all, so everyone has to re-login if there is a host restart. Maybe a distributed datastore could be leveraged to solve both issues. Maybe etcd?
Thoughts? Ideas?
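One property worth noting: if the cookie is a self-contained, HMAC-signed value rather than a reference to server-side session state, any proxy host holding the same cookie secret can validate it, and no shared datastore is needed. A minimal sketch of that idea (illustrative only, not oauth2_proxy's actual cookie format):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signValue produces an HMAC-SHA256 signature for a cookie value using the
// shared secret. Any proxy host holding the same secret can recompute it,
// so the cookie survives round-robin DNS and host restarts.
func signValue(secret, value string) string {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(value))
	return base64.URLEncoding.EncodeToString(mac.Sum(nil))
}

// validate checks a value/signature pair against the shared secret using a
// constant-time comparison.
func validate(secret, value, sig string) bool {
	expected := signValue(secret, value)
	return hmac.Equal([]byte(expected), []byte(sig))
}

func main() {
	// The value would typically embed the email and an expiry timestamp.
	sig := signValue("shared-cookie-secret", "user@example.com|1436188800")
	fmt.Println(validate("shared-cookie-secret", "user@example.com|1436188800", sig))
}
```

With this scheme the only thing the hosts must share is the cookie secret itself, which sidesteps the need for etcd or another distributed store for the common case.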
For our deployments, we run into an issue where google_auth_proxy does not close upstream connections properly.
Proxy running like this: google_auth_proxy -http-address=127.0.0.1:8443 -upstream=http://127.0.0.1:8080/
root@ip-10-94-4-19:/home/ubuntu# lsof -p 2340
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
google_au 2340 root cwd DIR 202,1 4096 2 /
google_au 2340 root rtd DIR 202,1 4096 2 /
google_au 2340 root txt REG 202,1 5024460 34087 /usr/local/bin/google_auth_proxy
google_au 2340 root mem REG 202,1 105288 403122 /lib/x86_64-linux-gnu/libresolv-2.15.so
google_au 2340 root mem REG 202,1 31104 403123 /lib/x86_64-linux-gnu/libnss_dns-2.15.so
google_au 2340 root mem REG 202,1 52120 403121 /lib/x86_64-linux-gnu/libnss_files-2.15.so
google_au 2340 root mem REG 202,1 1811128 403117 /lib/x86_64-linux-gnu/libc-2.15.so
google_au 2340 root mem REG 202,1 135366 403126 /lib/x86_64-linux-gnu/libpthread-2.15.so
google_au 2340 root mem REG 202,1 149280 403129 /lib/x86_64-linux-gnu/ld-2.15.so
google_au 2340 root 0u CHR 1,3 0t0 278 /dev/null
google_au 2340 root 1u CHR 136,2 0t0 5 /dev/pts/2
google_au 2340 root 2u CHR 136,2 0t0 5 /dev/pts/2
google_au 2340 root 3u IPv4 98542 0t0 TCP localhost:8443 (LISTEN)
google_au 2340 root 4r FIFO 0,8 0t0 98543 pipe
google_au 2340 root 5w FIFO 0,8 0t0 98543 pipe
google_au 2340 root 6u 0000 0,9 0 5766 anon_inode
google_au 2340 root 7u IPv4 1780813 0t0 TCP localhost:35872->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 8u IPv4 187286 0t0 TCP localhost:44725->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 9r CHR 1,9 0t0 283 /dev/urandom
google_au 2340 root 10u IPv4 458663 0t0 TCP localhost:60616->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 11u IPv4 147198 0t0 TCP localhost:35808->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 12u IPv4 466452 0t0 TCP localhost:33689->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 13u IPv4 147623 0t0 TCP localhost:35813->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 14u IPv4 458314 0t0 TCP localhost:60459->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 15u IPv4 528298 0t0 TCP localhost:50205->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 16u IPv4 458336 0t0 TCP localhost:60468->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 17u IPv4 466467 0t0 TCP localhost:33698->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 18u IPv4 691295 0t0 TCP localhost:46675->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 19u IPv4 460164 0t0 TCP localhost:60836->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 20u IPv4 460058 0t0 TCP localhost:60784->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 21u IPv4 460508 0t0 TCP localhost:32792->localhost:http-alt (CLOSE_WAIT)
google_au 2340 root 22u IPv4 460173 0t0 TCP localhost:60842->localhost:http-alt (CLOSE_WAIT)
These pile up toward 1023 open FDs, at which point the kernel does not allow more.
Has anyone else run into this issue, and do you have any guidance?
I've experienced this error several times. It's always after the same POST request. Here's my log output:
2015/03/16 14:32:35 127.0.0.1:40592 ("x.x.x.x") POST /jenkins/updateCenter/byId/default/postBack
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x52bf60]
goroutine 6 [running]:
runtime.panic(0x6cbf60, 0xa794c8)
/usr/local/go/src/pkg/runtime/panic.c:266 +0xb6
bufio.(*Reader).Read(0xc210038300, 0xc2100ab000, 0x1000, 0x1000, 0x1000, ...)
/usr/local/go/src/pkg/bufio/bufio.go:152 +0x100
io.(*LimitedReader).Read(0xc2100bd400, 0xc2100ab000, 0x1000, 0x1000, 0x1000, ...)
/usr/local/go/src/pkg/io/io.go:398 +0xbb
net/http.(*body).Read(0xc2100a6300, 0xc2100ab000, 0x1000, 0x1000, 0xc2100ab000, ...)
/usr/local/go/src/pkg/net/http/transfer.go:534 +0x96
io.(*LimitedReader).Read(0xc210050660, 0xc2100ab000, 0x1000, 0x1000, 0x8, ...)
/usr/local/go/src/pkg/io/io.go:398 +0xbb
bufio.(*Writer).ReadFrom(0xc210098780, 0x7fb1aa8a98e0, 0xc210050660, 0x7ebb7, 0x0, ...)
/usr/local/go/src/pkg/bufio/bufio.go:622 +0x15a
io.Copy(0x7fb1aa8a9a50, 0xc210098780, 0x7fb1aa8a98e0, 0xc210050660, 0x0, ...)
/usr/local/go/src/pkg/io/io.go:348 +0x124
net/http.(*transferWriter).WriteBody(0xc21004f070, 0x7fb1aa8a9a50, 0xc210098780, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/transfer.go:196 +0x57c
net/http.(*Request).write(0xc2100a7a90, 0x7fb1aa8a9a50, 0xc210098780, 0x0, 0x0, ...)
/usr/local/go/src/pkg/net/http/request.go:400 +0x7e4
net/http.(*persistConn).writeLoop(0xc21009e380)
/usr/local/go/src/pkg/net/http/transport.go:797 +0x185
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:529 +0x61e
goroutine 1 [IO wait]:
net.runtime_pollWait(0x7fb1aa8a9760, 0x72, 0x0)
/usr/local/go/src/pkg/runtime/netpoll.goc:116 +0x6a
net.(*pollDesc).Wait(0xc21004f610, 0x72, 0x7fb1aa8a81b0, 0xb)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:81 +0x34
net.(*pollDesc).WaitRead(0xc21004f610, 0xb, 0x7fb1aa8a81b0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:86 +0x30
net.(*netFD).accept(0xc21004f5b0, 0x7cefa0, 0x0, 0x7fb1aa8a81b0, 0xb)
/usr/local/go/src/pkg/net/fd_unix.go:382 +0x2c2
net.(*TCPListener).AcceptTCP(0xc210000420, 0x45abdb, 0x7fb1aa708bc0, 0x45abdb)
/usr/local/go/src/pkg/net/tcpsock_posix.go:233 +0x47
net.(*TCPListener).Accept(0xc210000420, 0x7fb1aa8a9830, 0xc210000850, 0xc21009e700, 0x0)
/usr/local/go/src/pkg/net/tcpsock_posix.go:243 +0x27
net/http.(*Server).Serve(0xc21001f910, 0x7fb1aa8a8798, 0xc210000420, 0x0, 0x0)
/usr/local/go/src/pkg/net/http/server.go:1622 +0x91
main.main()
/mnt1/ops/goapps/src/github.com/bitly/google_auth_proxy/main.go:113 +0x1671
goroutine 5 [IO wait]:
net.runtime_pollWait(0x7fb1aa8a9610, 0x72, 0x0)
/usr/local/go/src/pkg/runtime/netpoll.goc:116 +0x6a
net.(*pollDesc).Wait(0xc21004fae0, 0x72, 0x7fb1aa8a81b0, 0xb)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:81 +0x34
net.(*pollDesc).WaitRead(0xc21004fae0, 0xb, 0x7fb1aa8a81b0)
/usr/local/go/src/pkg/net/fd_poll_runtime.go:86 +0x30
net.(*netFD).Read(0xc21004fa80, 0xc2100aa000, 0x1000, 0x1000, 0x0, ...)
/usr/local/go/src/pkg/net/fd_unix.go:204 +0x2a0
net.(*conn).Read(0xc2100004c0, 0xc2100aa000, 0x1000, 0x1000, 0x30, ...)
/usr/local/go/src/pkg/net/net.go:122 +0xc5
bufio.(*Reader).fill(0xc2100385a0)
/usr/local/go/src/pkg/bufio/bufio.go:91 +0x110
bufio.(*Reader).Peek(0xc2100385a0, 0x1, 0x0, 0x0, 0x0, ...)
/usr/local/go/src/pkg/bufio/bufio.go:119 +0xcb
net/http.(*persistConn).readLoop(0xc21009e380)
/usr/local/go/src/pkg/net/http/transport.go:687 +0xb7
created by net/http.(*Transport).dialConn
/usr/local/go/src/pkg/net/http/transport.go:528 +0x607
I've read through the README and looked at the code, although I'm still pretty new to Go. I don't understand how multiple upstreams are supposed to work. Can the upstreams be on multiple different domains, or only unique URIs?
"If multiple, routing is based on path" is a little vague.
Any help would be appreciated.
I've successfully set up Nginx to proxy Gate One, a web terminal emulator and SSH client. But when I try to use google_auth_proxy with it, Gate One loads a page saying it is trying to connect to the server, but it never does. I have no idea what the problem is, but maybe it is because Gate One uses WebSockets. Does google_auth_proxy have support for WebSockets?
Hi,
I'm running the latest version of oauth2_proxy with an email file and google-apps-domain. I have only 1 user in my emails file, who is not me. However, I'm still able to authenticate to the service as myself even though I'm not on the list of allowed emails.
./oauth2_proxy \
-cookie-secret="secret" \
-client-id="foo" \
-client-secret="secret" \
-upstream=http://127.0.0.1:3000/ \
-authenticated-emails-file=/etc/emails \
-google-apps-domain="domain.org"
Right now any user with an @domain.org address can authenticate, regardless of the file.
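If the intended behavior is that the emails file further restricts the domain check (rather than acting as an alternative to it), a validator would have to require both conditions. A sketch of that expectation (illustrative only, not oauth2_proxy's actual logic; the addresses are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// validator returns a check that only admits an email when it both belongs to
// the configured domain AND appears in the allow list; the domain check alone
// is not sufficient. This models the behavior the reporter expected.
func validator(domain string, allowed []string) func(string) bool {
	set := make(map[string]bool, len(allowed))
	for _, e := range allowed {
		set[strings.ToLower(e)] = true
	}
	return func(email string) bool {
		email = strings.ToLower(email)
		return strings.HasSuffix(email, "@"+domain) && set[email]
	}
}

func main() {
	valid := validator("domain.org", []string{"alice@domain.org"})
	fmt.Println(valid("alice@domain.org"), valid("bob@domain.org")) // true false
}
```

The bug report is consistent with the two checks being OR'ed somewhere, so any domain member passes even when the file is configured.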
My company uses Azure AD as main user account repository. oauth2_proxy sounds like exactly what we need to use nginx reverse proxy for protecting our internal applications. Any plans for supporting Azure AD?
I'm sure I'm missing something subtle here (especially since I'm sure I got this working last year some time) but I can't for the life of me get this to connect to my upstream endpoint. Authentication works happily but I always end up with "404 page not found" as the response in my browser and I see no requests against my upstream endpoint at all.
$ google_auth_proxy --redirect-url="http://my.web.server/oauth2/callback" --google-apps-domain="my_domain.com" --upstream=["http://localhost:8088"] --cookie-secret="nuts" --client-id="my_client_id.apps.googleusercontent.com" --client-secret="my_client_secret"
2014/02/21 14:21:24 mapping [http://localhost:8088] =>
2014/02/21 14:21:24 listening on 127.0.0.1:4180
2014/02/21 14:21:25 127.0.0.1:59878 GET /
2014/02/21 14:21:25 invalid cookie
2014/02/21 14:21:27 127.0.0.1:59880 GET /oauth2/start
2014/02/21 14:21:29 127.0.0.1:59883 GET /oauth2/callback
2014/02/21 14:21:29 body is client_id=my_client_id&client_secret=my_client_secret&code=the_code&grant_type=authorization_code&redirect_uri=http%3A%2F%2Fmy.web.server%2Foauth2%2Fcallback
2014/02/21 14:21:31 calling https://www.googleapis.com/oauth2/v2/userinfo?access_token=the_access_token
2014/02/21 14:21:31 validating: is davewongillies@my_domain.com valid? true
2014/02/21 14:21:31 authenticating davewongillies@my_domain.com completed
2014/02/21 14:21:31 127.0.0.1:59888 GET /
2014/02/21 14:21:34 127.0.0.1:59890 GET /
2014/02/21 14:21:34 invalid cookie
The doco mentions using --upstream=http://localhost:8088, but that results in the following error:
panic: http: invalid pattern
goroutine 1 [running]:
net/http.(*ServeMux).Handle(0xc2000bbb40, 0x698b40, 0x0, 0xc2000bbc60, 0xc2000bf5e0, ...)
/usr/lib/go/src/pkg/net/http/server.go:1426 +0xd8
main.NewOauthProxy(0xc2000001d8, 0x1, 0x1, 0x7ffff3636f94, 0x26, ...)
/home/davewongillies/.local/src/github.com/bitly/google_auth_proxy/oauthproxy.go:48 +0x27a
main.main()
/home/davewongillies/.local/src/github.com/bitly/google_auth_proxy/main.go:82 +0x8c6
goroutine 2 [syscall]:
We are using the proxy to authenticate our internal dashboards on different TVs, but every 2-3 days we have to log in again. Any hint on how to fix it?
Hello, I've never used nginx as a web server, so I may be getting something wrong. I'm executing the google_auth_proxy binary, version 1.1.1, and I have configured the following in oauth2_proxy2.cfg:
upstreams = [
"http://127.0.0.1:8080"
]
After a successful login into google, the service should redirect me (I assume) to 127.0.0.1:8080 but it doesn't do that... it redirects me to the authentication page again (redirect_url without params)
Here is the log of google_auth_proxy
2015/06/12 17:24:56 validating: is [email protected] valid? true
2015/06/12 17:24:56 10.1.3.9:44668 authenticating [email protected] completed
Is there something more that I have to configure or install? (I don't have nginx installed on the machine.)
It would be great if I could put this in front of my private S3 buckets, and use google_auth_proxy to authenticate to them.
Could you add the concept of a backend, and have an option to use s3?
CheckBasicAuth() seems to re-implement net/http's Request.BasicAuth() method. Any reason for this that I might be missing? The code is straightforward enough, but it seems reasonable to use the base net/http library to do this, as it would reduce the amount of coding and testing necessary here, and instinctively I'd prefer to use base libraries anywhere security is involved. Happy to send a pull request for the refactor if you guys decide it's worth it!
Because it contains sensitive info from the callback.
e.g. Referer: https://accounts.google.com/o/oauth2/auth?access_type=offline ......
No big problem, but unnecessary.
Hey guys,
Had an idea for a unique set up of oauth2-proxy, and it doesn't seem to work. Wanted to get your opinion on it.
I have a couple of different services that I am protecting with this proxy, but I would like to limit them to a different set of email domains/authorized emails depending on the service (so #12 wouldn't do the trick). It would also be nice if a user only had to log in once and would get access to only the tools they are configured for (I know, getting picky).
So, as you might be guessing, I set up multiple instances, with different upstreams and auth configs, but with the same cookie secret. In front of all of these, there is one instance that sits in front of a little homepage with links to all of the other tools. The homepage instance allows all for our domain; each of the other instances either allows all from our domain or uses an authorized_emails list.
The standard case is that someone authenticates with that homepage instance and then clicks on the link to whatever tool they want.
After trying this out, it seems that after authenticating with the first instance it doesn't matter what domain/email my user has, they are allowed through. So I am guessing (I don't know Go very well and haven't spent too much time looking) that the code only checks against the authorized list on initial auth, then as long as you have a valid cookie encrypted with the right secret you are allowed through?
I'm going to do some digging myself, but would it be plausible to have a flag to have the proxy check against the authorized list on every request?
Thanks! This proxy is really great, glad I stumbled on it.
Our LB periodically hits /status, and if a failure response is returned, it takes the host out of rotation until it gets a success response. The problem is that the auth proxy redirects all calls to the login page. Is there a way I can exclude certain paths from getting authenticated?
Consider renaming this project to google-auth-proxy?
If you'd be OK with that, I can supply debian packaging support for this project.
How are you guys running this script in the background? I have it working but would be nice if it was an init.d script that would start automatically and run in the background.
Do you have an example?
I had a setup where I had Kibana running behind oauth2_proxy v 1.1.1, which was running behind nginx, which terminated SSL. It all worked without a hitch.
[ nginx (ssl termination)] --> [ oauth2_proxy 1.1.1 ] ---> [ kibana ]
Now I'm trying to set up oauth2_proxy v2.0.1 for Kibana in an AWS EBS environment. This time around, the AWS ELB handles SSL termination and proxies to nginx on the instance, which in turn proxies to oauth2_proxy.
[ ELB (ssl termination) ] --> [ nginx ] --> [ oauth2_proxy 2.0.1 ] --> [ Kibana ]
This does not work. I can authenticate just fine, but after that oauth2_proxy keeps giving me 404s. I have manually checked that Kibana is running.
I have checked out #62 but it doesn't really help me.
Does anyone know what's going on?
A sample log written by oauth2_proxy
{{IP}} - {{email}} [18/Aug/2015:05:57:26 +0000] {{host}} GET - "/" HTTP/1.0 "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/37.0.2062.120 Chrome/37.0.2062.120 Safari/537.36" 404 19 0.000
And the command I use to start oauth2_proxy in my upstart file
exec ./oauth2_proxy --email-domain="{{domain}}" --upstream=http://{{IP}}:5601 --cookie-secret={{secret}} --cookie-secure=true --client-id={{client_id}} --client-secret={{client_secret}} --redirect-url="https://{{domain}}/oauth2/callback" --http-address="127.0.0.1:4180" >> /var/log/oauth2_proxy.log
Hi there,
I've looked through the README and have not seen this mentioned, so I assume it's not supported (though I possibly am missing the obvious). Does this offer any way of passing the username to the upstream?
The requirement being the upstream application authenticates with a username for saved settings etc.
One idea I had (though it sounds a bit dodgy) was for the proxy to inject a cookie into the request with the username set (and discard any cookies of that name which the client sent, so the client can't spoof the username). Not sure how feasible this is in practice. What kinds of alternatives are there?
I ran into this problem trying to proxy the RabbitMQ management interface. Rabbit has a concept of virtual nodes, and the API uses them in paths. The default node is /
and an example endpoint using that is /api/queues/%2F/amq.queue. The %2F before the queue name represents the virtual node and is properly URL-encoded.
When this request hits the proxy, it gets URL-decoded to /api/queues///amq.queue. A 301 redirect is returned from the proxy to the invalid location /api/queues/amq.queue, and the original request is never seen by the server.
It appears that the proxy does not support WebSockets. Any plans to add that support?
Thanks for a great product!
Hi,
I don't know why oauth2_proxy always displays "Invalid Account".
I used oauth2_proxy version 1.1.1.
Can you tell me more about the "Invalid Account" error?
Is there a problem with my Google auth configuration? Thank you in advance.
I configured as following:
docker run -d -p 4180:4180 --name googleauth ianneub/google-auth-proxy:latest \
--client-id="***" \
--client-secret="***" \
--upstream=http://0.0.0.0:5601/ \
--redirect-url="http://example/oauth2/callback" \
--cookie-secret="cookiesecret" \
--cookie-httponly=true \
--cookie-secure=false \
--http-address="0.0.0.0:4180"
The output log is:
2015/07/06 09:38:42 mapping path "/" => upstream "http://0.0.0.0:5601"
2015/07/06 09:38:42 OauthProxy configured for 412430130106-u187qepbqiros408jnk020p61r6719b5.apps.googleusercontent.com
2015/07/06 09:38:42 Cookie settings: secure (https):false httponly:true expiry:168h0m0s domain:<default>
2015/07/06 09:38:42 listening on 0.0.0.0:4180
115.78.161.231 - - [06/Jul/2015:09:39:00 +0000] example.com GET - "/" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 403 2259 0.001
115.78.161.231 - - [06/Jul/2015:09:39:01 +0000] example.com GET - "/" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 403 2259 0.000
115.78.161.231 - - [06/Jul/2015:09:39:01 +0000] example.com GET - "/favicon.ico" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 403 2270 0.000
115.78.161.231 - - [06/Jul/2015:09:39:02 +0000] example.com GET - "/oauth2/start?rd=%2F" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 302 307 0.000
2015/07/06 09:39:04 validating: is [email protected] valid? false
2015/07/06 09:39:04 ErrorPage 403 Permission Denied Invalid Account
115.78.161.231 - - [06/Jul/2015:09:39:04 +0000] example.com GET - "/oauth2/callback?state=/&code=4/jHpuVcg-Jw3q5bu_xhhaar-F7C9MzXsKtw6ul-Iam8w" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 403 338 0.350
115.78.161.231 - - [06/Jul/2015:09:39:05 +0000] example.com GET - "/favicon.ico" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.124 Safari/537.36" 403 2270 0.000
The HTML template is currently hard-coded in templates.go. It'd be useful if there were an application switch that allowed specifying a custom template file for both the login and error pages.
I see this message all the time while trying to log in using Google OAuth:
<Application> would like to:
* Have offline access
Is it possible to get rid of this message?
I am using the following settings:
-authenticated-emails-file=/emails.txt -upstream="http://web:3000/" -cookie-secret="asdf" -http-address=0.0.0.0:80 -cookie-expire=8h0m0s
Thanks!
I'm running google_auth_proxy with the --authenticated-emails-file flag. My email file has 1 email address per line so it looks something like:
[email protected]
[email protected]
Testing with someone not in the authenticated-emails-file but who is in the same google-apps-domain, they are able to gain access.
Here is how the service is started (with some bit redacted):
/usr/lib/go/bin/google_auth_proxy -http-address=127.0.0.1:4170 --redirect-url=http://xxxx.xxxx.xxxx/ --google-apps-domain=xxxx.com --authenticated-emails-file=/etc/nsqadmin_users.cfg --upstream=http://localhost:4171/ --client-id=xxxxxxxxx --client-secret=xxxxxx --cookie-secret=xxxxxxxx
Am I missing something obvious in my config?
I'm trying to wrap google_auth_proxy in a shell script so it can be called at boot. I'm having some problems with the command line args as the documentation seems to be slightly off, so I have a couple of questions:
Should the flags be prefixed with - or --? The errors return --, but the docs say it should be -, and the source is not clear which it should be.
Has anyone used start-stop-daemon (used on Ubuntu and, AFAIK, Debian) to run google_auth_proxy? I can't seem to pass any arguments using this call: start-stop-daemon --start --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_ARGS, where $DAEMON_ARGS is a string with all the setup arguments.
Thanks in advance.
Chris (attempting to create an Ubuntu boot script)
Trying to proxy to the Transmission web UI returns an error page because of a missing X-Transmission-Session-Id header. Normal nginx proxying uses the directive "proxy_pass_header X-Transmission-Session-Id;". Is it possible to do the same thing with google_auth_proxy?
I have oauth2_proxy authenticating and redirecting to two different locations based on the subdomain of the request; however, no matter what URL I access, the same content is always returned. It seems to me that the Host header is not being passed to the upstream correctly, so it always returns the first route.
This is my upstream:
upstream elk {
server someip;
}
server {
listen *:8081;
server_name elk.ops.*;
location / {
proxy_pass http://elk;
proxy_redirect off;
}
}
upstream bosun {
server anotherip;
}
server {
listen *:8081;
server_name bosun.ops.*;
location / {
proxy_pass http://bosun;
proxy_redirect off;
}
}
This is my auth routing:
upstream google-auth {
least_conn;
server localhost:4180;
}
server {
listen *:80;
server_name ~^auth.(?<domain>ops.*)$;
location = /oauth2/callback {
proxy_pass http://google-auth;
}
location ~/(?<sub>[^/]+)(?<remaining_uri>.*)$ {
rewrite ^ https://$sub.$domain$remaining_uri;
}
}
server {
listen *:80;
server_name ~^(.+).ops.*;
location = /oauth2/start {
proxy_pass http://google-auth/oauth2/start?rd=%2F$1;
}
location / {
proxy_pass http://google-auth;
}
}
You will notice this is similar to a previous issue regarding subdomains.
This is my oauth config:
## <addr>:<port> to listen on for HTTP/HTTPS clients
http_address = "http://localhost:4180"
# https_address = ":443"
## TLS Settings
# tls_cert_file = ""
# tls_key_file = ""
## the OAuth Redirect URL.
# defaults to the "https://" + requested host header + "/oauth2/callback"
redirect_url = "https://auth.ops.xxxx.com/oauth2/callback"
## the http url(s) of the upstream endpoint. If multiple, routing is based on path
upstreams = [
"http://localhost:8081"
]
## Log requests to stdout
request_logging = true
## pass HTTP Basic Auth, X-Forwarded-User and X-Forwarded-Email information to upstream
# pass_basic_auth = true
## pass the request Host Header to upstream
## when disabled the upstream Host is used as the Host Header
pass_host_header = true
## Email Domains to allow authentication for (this authorizes any email on this domain)
## for more granular authorization use `authenticated_emails_file`
## To authorize any email addresses use "*"
email_domains = [
"domain.com"
]
## The OAuth Client ID, Secret
client_id = "sssh my id"
client_secret = "shhhh secret"
## Pass OAuth Access token to upstream via "X-Forwarded-Access-Token"
pass_access_token = true
## Authenticated Email Addresses File (one email per line)
# authenticated_emails_file = ""
## Htpasswd File (optional)
## Additionally authenticate against a htpasswd file. Entries must be created with "htpasswd -s" for SHA encryption
## enabling exposes a username/login signin form
# htpasswd_file = ""
## Templates
## optional directory with custom sign_in.html and error.html
# custom_templates_dir = ""
## Cookie Settings
## Name - the cookie name
## Secret - the seed string for secure cookies; should be 16, 24, or 32 bytes
## for use with an AES cipher when cookie_refresh or pass_access_token
## is set
## Domain - (optional) cookie domain to force cookies to (ie: .yourcompany.com)
## Expire - (duration) expire timeframe for cookie
## Refresh - (duration) refresh the cookie when duration has elapsed after cookie was initially set.
## Should be less than cookie_expire; set to 0 to disable.
## On refresh, OAuth token is re-validated.
cookie_name = "_oauth2_proxy"
cookie_secret = "secret"
cookie_domain = ".ops.xxxx.xxx"
cookie_expire = "168h"
cookie_refresh = "1h"
cookie_secure = true
cookie_httponly = true
The authentication works fine and it redirects me; however, the following things currently happen:
request | upstream displayed
---|---
https://bosun.ops.xxx.xxx | bosun upstream displayed (correct)
https://elk.ops.xxx.xxx | bosun upstream displayed (incorrect)
https://notevenaroute.ops.xxx.xxx | bosun upstream displayed (incorrect)
I am not an expert by any means, but I have a feeling the Host header isn't being passed correctly. Does anyone have ideas on how to track down and resolve the error?
Edit: the Host header does in fact seem to be correct when I check the nginx logs.
Is it possible to somehow get the logged-in user's email? If so, how is this done?
We need to be able to identify the logged-in user somehow.
I don't know what I'm doing wrong, but I always get a 403 "cookie is invalid" error...
First off, thanks for this great piece of software; it fit a need we had very closely and is working great for us with a few modifications.
I'm documenting the changes we made here in case they are helpful for anyone else.
The upstream web services we use google_auth_proxy to protect are available over the public internet at unique hostnames. Each web service is configured to return an Unauthorized response for any HTTP request that does not contain a shared secret in a special header.
We've found this model to be easily enforceable in our apps and compatible with PaaS providers, which often provide a unique publicly addressable hostname as the only means of addressing a service (e.g. Heroku).
We wanted to configure a single instance of google_auth_proxy to provide authentication for a number of internal services hosted on Heroku, a PaaS service:
internal-app-1.company.com => internal-app-1.paas.com
internal-app-2.company.com => internal-app-2.paas.com
internal-app-3.company.com => internal-app-3.paas.com
We had difficulty doing this with the current implementation of google_auth_proxy because it preserves the original Host header on proxied requests. Following the above example, when we configured internal-app-1.company.com as a google_auth_proxy instance with internal-app-1.paas.com as an upstream, a request to internal-app-1.company.com made a request to the internal-app-1.paas.com service, but with the Host header set to internal-app-1.company.com.
We changed google_auth_proxy to rewrite the hostnames of requests that it proxies to the upstreams and add the shared secret header to bypass our simple authentication protecting our publicly accessible services.
We've found this to be a useful pattern!
To implement it, we used a custom version of httputil.NewSingleHostReverseProxy, whose default behaviour is not to rewrite the Host header of proxied requests.
If this pattern is something that you find useful, we'd be happy to work to merge it upstream!
I've managed to successfully fork this repo and adapt it to a new OAuth2 provider. I know others have done the same. However, given how little work it was, I'm wondering if you'd be open to generalizing the server to support multiple providers out-of-the-box, defaulting to Google but allowing for other providers.
I'm happy to take this on, and add tests as I go, as you can see from the latter link. (I'm about to file a PR for a bug I found in Options.Validate
as part of this process.) That said, I understand if you'd rather keep this instance "pure" and let forks for other providers proliferate as they will.
Apologies if you've addressed this before; I didn't see it come up in any earlier issue. Also, thanks for writing this; we've been using it successfully for months now on our Google-managed domain.
Hi,
Thank you very much for providing the Google Auth Proxy code here on github. I have things working pretty well for simple services, but it looks like things get a lot more complicated when you are trying to place a Google Auth Proxy in front of a service that already functions as a Reverse Proxy.
In my example, I created a new Elasticsearch, Fluentd and Kibana stack (EFK). This is very similar to ELK for those that do not know. Well, EFK actually uses Nginx as a reverse proxy so you can visit myexample.com via port 80 and get content back from Elasticsearch which runs on 9200.
Does anyone have experience adding a Google Auth Proxy in front of another service like ELK/EFK that already uses nginx as a reverse proxy? I'm not sure of the exact wording, but this sounds like "double reverse" proxying to me? What is the recommended way ahead for this? I can get the Auth Proxy to function (with SSL), however when the Kibana dashboard loads after successful oAuth you are required to click "load unsafe scripts" in order to get real EFK content....
I'd like to have the Google Auth Proxy configured with HTTPS and my EFK stack configured with HTTP if that makes sense.
Any help would be appreciated, thanks!
Matthew
Hi...
Thanks for sharing this valuable documentation in git. It helped me a lot.
But I did not understand what the cookie-secret key is here, or what its purpose is.
I tried to run google_auth_proxy without the --cookie-secret option, but it won't work.
I also tried a random string, and it shows an "Invalid cookie" error.
I have searched for a cookie secret key in my Google web application project as well, but I can't find it.
Please help me find where to get it, and please don't mind if this is a minor issue.
Thanks in advance.
Hi,
I apologise for this as I'm sure I must be doing something wrong, but here we go.
I downloaded and setup the oauth proxy to use with a Google account. After following the various instructions, I ran a command roughly like:
./oauth2_proxy -upstream="http://localhost:9000" --client-id="my-client-id.apps.googleusercontent.com" --client-secret="my-secret" --cookie-secret="this is a cookie secret for this server"
I then executed curl localhost:4180/ping and got a 200 response as expected.
I ensured that I opened up the security groups on my instance and tried to access it from my local machine, but I only got connection refused.
Skipping a long story, I ended up compiling the simple Go server which is currently here: https://golang.org/doc/articles/wiki/#tmp_3, with a minor modification to use port 4180 instead of 8080.
And everything worked. I could access my Simple Go Server from both localhost (same machine) and from my own dev machine. Which, I think, rules out any firewalls, security groups etc. But possibly doesn't rule out issues with the Amazon AMI I'm using.
I began digging around in the code for the server, and found that the primary difference is this: in http.go, the current code executes
listener, err := net.Listen(networkType, listenAddr)
which translated to listener, err := net.Listen("tcp", "127.0.0.1:4180"), whereas the Simple Go Server executes something equivalent to net.Listen("tcp", ":4180").
Changing the code in http.go to
listener, err := net.Listen(networkType, ":4180")
fixed the problem. And equally, changing my Simple Go Server to net.Listen("tcp", "127.0.0.1:4180") breaks it. I'm running Go v1.4.2 if that's relevant.
So it's something about how that listener gets set up.
So I think this is either a bug in how the listener address is chosen, or a networking configuration issue on my side. I'm assuming it's some networking configuration I'm not familiar with, and therefore the latter. If someone can point me in the right direction, that'd be fantastic!
Thanks!
Ed
Hello, sorry for the simple question, but I didn't find anything related to this.
I know that you can define upstreams with different contexts (/App1, /App2) in order to create some routing logic, but I have some problems with this configuration.
When I use the following upstreams configuration:
upstreams = [
    "http://server1/App1",
    "http://server2/App2"
]
Routing works, but the context /Appx is also sent to the server. Can I avoid this? I mean, using the configuration I posted, when a user calls oauth2_proxy with the App1 context, the proxied request should go to http://server1/ instead of http://server1/App1.
Thanks for the help, I like this project a lot!
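I don't know whether oauth2_proxy itself can strip the prefix, but one workaround would be an intermediate nginx hop that removes it. A sketch under assumed ports and names, not a tested config:

```nginx
# oauth2_proxy routes /App1 here; the trailing slash on proxy_pass makes
# nginx replace the matched /App1/ prefix before forwarding to server1.
server {
    listen 8081;
    location /App1/ {
        proxy_pass http://server1/;
    }
}
```

The trailing slash on proxy_pass is what triggers nginx's prefix substitution; without it the full original URI is forwarded unchanged.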
Any chance to have HTTP streaming support?
I'm trying to run Cassandra OpsCenter behind google_oauth_proxy, and it establishes long-running connections between the browser and the server which don't work (the rest of the page is displayed fine).
When I run it behind plain nginx with the same settings it appears to work OK, so this shouldn't be an issue with my nginx config.
Hi,
I am looking to use oauth2_proxy to secure services that are accessed from our App Engine application. Each application is provided a service account that is used for any OAuth calls made from the application. There are also APIs to get the service account name and a token for a specified scope.
Here is a Python script that, if run on the App Engine application, would pass the assert call:
import json
from google.appengine.api import app_identity, urlfetch

access_token, _ = app_identity.get_access_token(
    ['https://www.googleapis.com/auth/userinfo.email'])
json_data = json.loads(urlfetch.fetch(
    'https://www.googleapis.com/oauth2/v1/userinfo?alt=json&access_token={}'.format(access_token)
).content)
assert app_identity.get_service_account_name() == json_data['email']
I am wondering if there is a way today for us to pass the access token and the service account name, and have oauth2_proxy validate with Google that the supplied service account is in the authenticated-emails-file?
Thanks!
Hi
This might be more of an "FYI" than a bug - but I thought it was worth noting.
I've just had an issue pointing the proxy at nginx (running MediaWiki and php5-fpm).
On a small percentage of requests, I'd receive this error from oauth2_proxy. It mostly occurred on reloads (which probably meant an If-Modified-Since request was being sent, which might return a blank body):
http: proxy error: malformed HTTP response ""
After some debugging, I found that turning off keepalives on the "upstream" nginx solved the problem:
/etc/nginx/sites-available/whatever-upstream:
keepalive_timeout 0;
I've got a network dump that appears to show nginx responding with an extraneous blank line in the 304 Not Modified response. I'm not sure if there's anything oauth2_proxy can do to ignore this, or if I should report it to the nginx folks... :)
GET ... HTTP/1.1
Host: ...
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_4) AppleWebKit/600.7.12 (KHTML, like Gecko) Version/8.0.7 Safari/600.7.12
Accept: */*
Accept-Encoding: gzip, deflate
Accept-Language: en-us
Authorization: Basic b3NrYXIucGVhcnNvbjo=
Cache-Control: max-age=0
Cookie: ...
Dnt: 1
If-Modified-Since: Sun, 19 Jul 2015 18:42:34 GMT
Referer: https://...
X-Forwarded-Email: ...
X-Forwarded-For: 127.0.0.1
X-Forwarded-User: ...
X-Real-Ip: ...
X-Scheme: https
HTTP/1.1 304 Not Modified
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 19 Jul 2015 18:59:29 GMT
Connection: keep-alive
X-Powered-By: PHP/5.5.9-1ubuntu4.11
X-Content-Type-Options: nosniff
GET /skins/common/images/Unboxed_logo_wiki.gif HTTP/1.1
Version info:
Hi,
first of all, let me thank you for the awesome piece of software that we use everywhere in our company!
We've been having problems when using Jenkins with the following plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Build+Monitor+Plugin
It fetches job statuses and displays them, and from time to time we get a 500 response from the web server, after which it stops working silently.
We've checked everything, and after enabling logging with -request-logging=true I saw the following in the log:
2015/07/01 15:44:55 reverseproxy.go:141: http: proxy error: EOF
some.ip.add.ress - [email protected] [01/Jul/2015:15:44:55 +0000] some.host.name POST 127.0.0.1:8080 "/$stapler/bound/a8679061-c79c-4afa-a8ae-e259247b2532/fetchJobViews" HTTP/1.0 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0" 500 0 0.000
Can you please investigate this?
Thanks!
I'm trying to use Google Auth Proxy with a Django project, but am having trouble understanding how to configure it.
I have Google Auth Proxy running, and have configured it to pass upstream to Nginx, which then passes to a uWSGI socket serving my Django application. All of that works, but I don't know how to tell Django to accept the authenticated user that Google Auth Proxy passes. (Currently when I go to my site I am greeted by Google Auth Proxy, go through the Google authentication process, and am then forwarded to my Django project which presents the default Django admin login page.)
Google Auth Proxy runs on 127.0.0.1:4180 and forwards to 127.0.0.1:9090. My Nginx config looks like this:
server {
listen 9090;
location / {
uwsgi_pass 127.0.0.1:9991;
include /usr/local/etc/nginx/uwsgi_params;
}
}
server {
listen 80;
server_name mysite.com;
location / {
proxy_pass http://127.0.0.1:4180;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 1;
proxy_send_timeout 30;
proxy_read_timeout 30;
}
}
According to bitly's introductory blog post, Google Auth Proxy will pass the authenticated user to the upstream application "as HTTP Basic Auth (with an empty value for the password), and in a HTTP Header as X-Forwarded-User for applications that need that context."
My understanding is that I cannot set an environment variable in Nginx, so Django's Authentication using REMOTE_USER seems like it wouldn't work.
What's the best way to handle this?
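One approach worth sketching: nginx can map the X-Forwarded-User header onto the REMOTE_USER uwsgi variable, which Django's REMOTE_USER authentication can then consume. This is an untested assumption about the setup above, with the original ports kept:

```nginx
server {
    listen 9090;
    location / {
        uwsgi_pass 127.0.0.1:9991;
        include /usr/local/etc/nginx/uwsgi_params;
        # Expose the proxy-supplied user to the WSGI app as REMOTE_USER.
        uwsgi_param REMOTE_USER $http_x_forwarded_user;
    }
}
```

On the Django side you would then enable django.contrib.auth.middleware.RemoteUserMiddleware and django.contrib.auth.backends.RemoteUserBackend in settings.py, per the "Authentication using REMOTE_USER" docs. Note this only makes sense if port 9090 is unreachable except via the auth proxy, since anyone who can hit it directly could forge the header.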
I've created the configuration file with all the necessary properties (configured with Google), and the service runs properly. It asks me for my Google account, I enter it (I have two-step auth enabled), and when the consent screen appears and I click Accept, the following error appears:
2015/05/31 08:25:08 ErrorPage 500 Internal Error Post https://accounts.google.com/o/oauth2/token: x509: certificate has expired or is not yet valid
Maybe something is missing or wrong in my configuration file. I need some help at this point.
Without an explicit license, we have to assume copyright Jehiah Czebotar, all rights reserved. So the source is technically open, but anybody who uses it without permission might be vulnerable to legal action.
Hi
We're trying to use the proxy in front of services not on port 80. When the authentication flow succeeds, the redirect is sent to the correct destination but without the port number.
Here's the redirect:
Remote Address: XX.XX.XX.XX:5000
Request URL: http://XXX-dev.wondermall.com:4000/dashboards
Request Method: GET
Status Code: 301 Moved Permanently
Request Headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8,he;q=0.6
Cache-Control: no-cache
Connection: keep-alive
Cookie: _oauthproxy=XXXXX9uZGVybWFsbC5jb20=|1408535213|_MvTE1KeXKyhtebp5KrzfAeWEn4=
Host: XXXX-dev.wondermall.com:4000
Pragma: no-cache
Referer: http://XXX-dev.wondermall.com:4000/
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36
Response Headers
Connection: keep-alive
Content-Length: 299
Content-Type: text/html; charset=utf-8
Date: Wed, 20 Aug 2014 11:54:46 GMT
Location: http://XXX-dev.wondermall.com/dashboards/ # Port number missing
Server: nginx/1.2.1
I'd like to share a cookie between multiple services, some of which make cross domain requests using JS. Currently, HttpOnly is set to true when the cookie is set and cleared. It'd be great to provide an option to disable this.
Instinctively, I'd set this using a -cookie-httponly flag, but there's already a -cookie-https-only flag. I'd be inclined to rename that to -cookie-secure, which I think would be closer to the expected naming, but it's a breaking change.
I'm happy to submit a PR to do this, but I wanted to check your thoughts on renaming the https flag.
Hi,
We're looking into deploying this, but it's unclear whether it is intended to run as a single reverse proxy with many apps behind it (not sure how that could work from a request-routing point of view) or as a reverse proxy on every box (which raises the question of properly configuring Google Apps for each callback URI).
How does Bitly deploy this? How do you manage configuring lots of separate apps to be fronted by it?
Apologies if this question is obtuse!
It would be great to see Docker support for this project, as deployment would be quick and easy. Has someone done this already? If not, I might take a crack at it soon.
Hi, I am new to Go. I want to set up OAuth in Nginx, but when I follow the instructions I get the following error:
go get github.com/bitly/oauth2_proxy
../bitly/oauth2_proxy/validator.go:24: undefined: WatchForUpdates
Please help me.