caddy-docker-proxy's Issues

How to inspect the generated caddy file inside a docker container

Hi,

I have successfully set up the proxy towards the whoami services. However, when I add my own service running HTTP on port 5000, everything fails; the proxy isn't even responding.

How can I inspect the generated configuration inside the container? I have tried docker logs without seeing anything useful. Thanks in advance.
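For reference, other reports in this list show that the plugin prints every generated config to the container log under the marker "[INFO] New CaddyFile:". A minimal sketch for pulling it out of the logs (the container name caddy is a placeholder):

docker logs caddy 2>&1 | grep -A 30 'New CaddyFile:'

Increase the -A line count if the generated config is longer.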

Caddy version ends up in generated Caddyfile

I am having a pretty weird issue: The caddy version ends up in the generated config and breaks it.

This only happens when reloaded, not with the Caddyfile generated on startup.
Also, the version ends up either at the beginning or between the container and service sections (one line before service-1.example.com).
This also happens when only the services are loaded.

I have been using the docker container provided by this repo in the past but have switched to building the image with this command: docker build --build-arg plugins=docker,cloudflare github.com/abiosoft/caddy-docker.git.

The issue arose with the newly built container, but since this plugin prints the version within the config file, I would guess that something is wrong here. I checked the source code but could not find any clues as to why this is happening.

It did work one time in between, but after lots of restarting containers and services, I was unable to reach that state again. This is really weird.

Here is the log:

2018/05/07 10:42:40 [INFO] New CaddyFile:
container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
  }
}
*.container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/sub_container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
    dns cloudflare
  }
}
service-1.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-1.example.com {combined}
  proxy / service-1:8080
}
service-2.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-2.example.com {combined}
  proxy / service-2:8080
}
service-3.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-3.example.com {combined}
  proxy / service-3:8080
}
Activating privacy features... done.
https://container.example.com
2018/05/07 10:42:40 https://container.example.com
https://*.container.example.com
2018/05/07 10:42:40 https://*.container.example.com
https://service-1.example.com
2018/05/07 10:42:40 https://service-1.example.com
https://service-2.example.com
2018/05/07 10:42:40 https://service-2.example.com
https://service-3.example.com
2018/05/07 10:42:40 https://service-3.example.com
http://container.example.com
2018/05/07 10:42:40 http://container.example.com
http://*.container.example.com
2018/05/07 10:42:40 http://*.container.example.com
http://service-1.example.com
2018/05/07 10:42:40 http://service-1.example.com
http://service-2.example.com
2018/05/07 10:42:40 http://service-2.example.com
http://service-3.example.com
2018/05/07 10:42:40 http://service-3.example.com
2018/05/07 10:42:40 [INFO] New CaddyFile:
0.10.14
container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
  }
}
*.container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/sub_container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
    dns cloudflare
  }
}
service-1.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-1.example.com {combined}
  proxy / service-1:8080
}
service-2.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-2.example.com {combined}
  proxy / service-2:8080
}
service-3.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-3.example.com {combined}
  proxy / service-3:8080
}
2018/05/07 10:42:40 [INFO] SIGUSR1: Reloading
2018/05/07 10:42:40 [INFO] Reloading
2018/05/07 10:42:40 [ERROR] SIGUSR1: :2 - Error during parsing: Unknown directive 'container.example.com'

invalid registration

When I try to create my docker swarm service:
2018/07/21 08:57:41 registration error: acme: Error 400 - urn:ietf:params:acme:error:invalidEmail - Error creating new account :: not a valid e-mail address

Here is the compose file:

version: '3.3'
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    command: -agree -email c*@*.com
    ports:
     - 80:80
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
     - /var/volumes/caddy:/root/.caddy
    networks:
     - caddy
    logging:
      driver: json-file
networks:
  caddy:
    external: true

Default port mapping

One good thing about the nginx-proxy image is that it minimizes the configuration with good defaults. For example, if the mapping port number is not explicitly declared, it checks whether there is only one exposed port in the running container and maps to that. If there are multiple ports open, it checks whether one of those ports is 80 or 443 and maps to that. In every other case, explicitly providing the port number is mandatory. As a best practice, however, explicit declaration is encouraged to avoid surprises.

If such behavior is part of this plugin, it should be documented. If it is not, we should think about implementing it.
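For illustration, a hedged sketch of the two styles using this plugin's labels (names are placeholders). With such defaulting, a service with a single exposed port would only need:

labels:
  caddy.address: app.example.com

while the explicit, recommended form also spells out the port:

labels:
  caddy.address: app.example.com
  caddy.targetport: 8080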

Lost port 80 listener after some days

Hi,

At first everything works fine, but after some days the port 80 listener is lost! Because of -port 80 it shouldn't bind to 2015, should it?

/ # netstat -punta | grep LISTEN
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy
tcp        0      0 :::2015                 :::*                    LISTEN      1/caddy

Looks like a reload problem?

2018/04/23 15:04:40 [ERROR] SIGUSR1: listen tcp :80: bind: address already in use
2018/04/23 15:05:09 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:09:28 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:14:00 [ERROR] SIGUSR1: listen tcp :80: bind: address already in use
2018/04/23 15:36:56 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:36:56 [INFO] New CaddyFile:
example.com www.example.com {
  errors stdout
  log stdout
  proxy / 172.17.0.3:2015 {
    transparent
  }
}
2018/04/23 15:36:56 [INFO] SIGUSR1: Reloading
2018/04/23 15:36:56 [INFO] Reloading
2018/04/23 15:36:56 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use

I tried to solve it with SIGUSR1, but the reload fails.

docker exec -ti revproxy kill -sigusr1 1

docker exec -ti revproxy netstat -punta | grep LISTEN
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy
tcp        0      0 :::2015                 :::*                    LISTEN      1/caddy

But reloading fails

2018/04/26 09:11:50 [INFO] SIGUSR1: Reloading
2018/04/26 09:11:50 [INFO] Reloading
2018/04/26 09:11:50 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use

Solved by a revproxy container restart

docker exec -ti revproxy netstat -punta | grep LISTEN
tcp        0      0 :::80                   :::*                    LISTEN      1/caddy
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy

Any idea? It looks like it lost the `-port 80` and binds to 2015 instead?

revproxy docker run command
```
docker run -dti --restart always --name revproxy --read-only -v /var/run/docker.sock:/var/run/docker.sock:ro -p 80:80 -p 443:443 -v revproxy_certs:/root/.caddy lucaslorentz/caddy-docker-proxy:0.1.2-alpine -email <EMAIL_ADDRESS> -agree=true -log stdout -port 80
```

Sometimes the website isn't available over http or https, but I haven't debugged that problem myself, so I don't know if it is related to the HTTP / port 80 issue.

Multiple containers per domain possible?

How would I set up multiple containers making up a single domain?

For example, all containers would use example.com but with a different path (/, /admin, /dashboard) for each. Currently, if I add multiple containers with the same domain, the generator just creates multiple entries for the domain.
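For clarity, a sketch of the desired merged output for that example (service names are placeholders) would be a single site block with one proxy per path:

example.com {
  proxy / root_app:80
  proxy /admin admin_app:80
  proxy /dashboard dashboard_app:80
}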

Constant errors when not using swarm

dockerd[674]: time="2018-08-16T11:37:52.592586915+02:00" level=error msg="Error getting services: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."
dockerd[674]: time="2018-08-16T11:37:52.593329658+02:00" level=error msg="Handler for GET /v1.35/services returned error: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."

Maybe something could be done to limit the requests and therefore the error messages? For example, only check for swarm once when caddy is starting and then remember that it's not available?

[Help] Caddy uses ingress network instead of custom network

I've got some services which are set up with docker stack deploy.
All of the necessary services are connected to a network called "proxy_network".

Caddy-docker recognizes these services correctly but uses the IP address of the ingress network instead of the proxy_network. Is there any way to block caddy-docker from using the IP addresses of the ingress network?

Thank you in advance

Allow top-level directives

It would be nice to be able to insert top-level caddy directives through labels.

I'm trying to insert an import directive to import additional caddy configuration files. I tried caddy.import=/etc/caddy/Caddyfile*, but this results in
{ import /etc/caddy/Caddyfile* }

The import directive needs to be at the top level. Any thoughts on that?
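For contrast, a sketch of the intended result, with the import hoisted out of any site block (the site below is only an illustration):

import /etc/caddy/Caddyfile*

example.com {
  proxy / some_app:80
}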

setup troubles [support]

After failing to set up nginx-proxy with VIRTUAL_PATH, I tried deploying your docker image, but I can't make it work.
There's a mattermost install on the server instantiated from https://github.com/mattermost/mattermost-docker.git with a docker-compose.override.yml:

version: "2"

services:
  web:
    build: web
    ports:
      - "8580:80"
      - "8543:443"
    labels:
      - caddy.address=tii.tu-dresden.de:80/mm
      - caddy.targetport=8580
      - caddy.targetpath=/
      - caddy.caddy.targetprotocol=http

but if I go to that address, nothing. How can I even debug the problem? docker exec -it {/caddy} bash doesn't work, and the container log just tells me

Activating privacy features... done.
http://:2015

The strange thing is, lsof -i on the docker host doesn't even show that caddy has any port open. 🤔

Publish armhf compatible image

I'm a big fan of the project and use it on my x86_64 server; however, I've recently invested in an armhf VPS which I'd like to use this software on. Could you publish an armhf-compatible image alongside the existing x86_64 one on the docker registry?

If not, would it be possible to provide instructions for compiling under armhf?

Shorthand for transparent proxy

Currently, to do a transparent proxy, I have to do this (based on the test file):

caddy: host.com
caddy.proxy: "/ service_name:80"
caddy.proxy.transparent: ""

I had to type the service name again and use 3 labels, instead of 2 or even a single label if #12 was solved somehow.
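For comparison, the caddy.address / caddy.targetport label combination used in later issues here at least avoids retyping the service name, although it is still three labels (untested sketch):

caddy.address: host.com
caddy.targetport: 80
caddy.proxy.transparent: ""

Based on the PR #63 examples further down, this should produce proxy / <service>:80 { transparent }.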

[Q] Redir path + External IP support?

Hi Guys,

I am having a bit of a problem trying to convert the following:

  # Backend (ghost)
  proxy /blog example_blog:2368 {
    transparent
    websocket
  }

  redir 302 {
    if {path} is /
    / /blog
  }

I have tried multiple variations of the below, but I still can't seem to crack it. Does anyone know what I am doing wrong?

    deploy:
      labels:
        caddy.address: example.me.uk/blog
        caddy.targetport: 2368
        caddy.proxy.websocket:
        caddy.proxy.transparent:
        caddy.redir: 302
        caddy.redir: if {path} is /
        caddy.redir: / /blog
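A hedged note: in YAML the three caddy.redir keys above collide, so only one of them survives. Another issue further down expresses the same kind of redirect with nested redir keys, so a sketch along those lines would be:

        caddy.address: example.me.uk/blog
        caddy.targetport: 2368
        caddy.proxy.websocket:
        caddy.proxy.transparent:
        caddy.redir: 302
        caddy.redir.if: "{path} is /"
        caddy.redir./: /blog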

The other one I am still struggling with is when you have an external (non-docker) reverse proxy config like:

  # Backend (vmware)
  proxy / https://172.19.255.110 {
    insecure_skip_verify
    transparent
    websocket
  }

How can this also be converted? What am I missing from the examples?

Kind Regards

add option to disable plugin through command line parameter

As I've reported here:
caddyserver/caddy#2326 (comment)

My caddy server is currently not able to start up when the docker service is running, because I have a caddy executable which has this plugin compiled into it.

Is there a way to disable the plugin through a parameter?
I think it would be even better to disable this by default and require the user to enable it through a parameter, since probably no one expects caddy to interact with docker when a Caddyfile was provided.

Recompiling caddy without this plugin is not a convenient option for me. I'd rather keep using the version of my distro, which contains this plugin. This way, I can update caddy with the package manager.

[FR] Merge the same caddy.address from different containers

For example, there is an application based on PHP and MySQL, like WordPress. It creates two containers, one for static files and another for the PHP backend, and both of them use the same domain. If we use the caddy docker plugin, it reports a duplicate error, because the two containers have the same "caddy.address" label.

Second vhost not bound to port 80

Hello @lucaslorentz ,

I try to add backends to the revproxy, but it doesn't work... I don't know why.

Generated caddyfile looks good to me (domain replaced!)

2018/04/05 14:58:38 [INFO] New CaddyFile:
2018/04/05 14:58:38 sub1.example.com {
  proxy / 172.17.0.7:2015
  tls off
}
sub2.example.com {
  proxy / 172.17.0.4:2015
}

Activating privacy features... done.
http://sub2.example.com
2018/04/05 14:58:38 http://sub2.example.com
http://sub1.example.com:2015
2018/04/05 14:58:38 http://sub1.example.com:2015
https://sub2.example.com
2018/04/05 14:58:38 https://sub2.example.com
2018/04/05 14:59:11 [INFO] sub1.example.com - No such site at :80 (Remote: <IP>, Referer: )

sub1 works fine, also with tls. sub2 doesn't work. The backend IP, port and content are OK, but the caddy proxy log says "No such site at :80".

It looks like sub2 isn't bound to port 80?! The only diff is "tls off". So it should bind to port 80 and 2015, right? Is it a plugin or caddy issue?

Reverse proxy for non-swarm containers

While the Swarm mode is the way to deploy services and it might become the default at some point, many people are still deploying individual containers directly. Theoretically, the plugin should work in those situations as well, but we need to test and document it.

Issues related to PR #63

After the recent changes, we observe the following problems with configuring the caddy server using the caddy-docker-proxy plugin:

  1. The proxy directive is always added to the resulting config.
  2. Only the first caddy.address directive argument is honoured, while the remaining ones are ignored.
  3. The caddy.proxy directive value is prepended to the resulting proxy directive, which breaks proxying.

1. The proxy directive is always added to the resulting config

@jtyr already opened an issue related to this problem.

2. Only the first caddy.address directive argument is honoured while the remaining ones are ignored

Given compose file:

labels:
  caddy.address:    http://domain.com https://domain.com
  caddy.targetport: 80

Expected generated config file:

http://domain.com https://domain.com {
  proxy / app1:80
}

Actual generated config file:

http://domain.com {
  proxy / app1:80
}

Issue can be reproduced with lucaslorentz/caddy-docker-proxy:test image.

3. The caddy.proxy directive value is prepended to the resulting proxy directive, which breaks proxying

Given compose file:

labels:
  caddy.address: http://domain.com
  caddy.proxy:   / tasks.app:80

Expected generated config file:

http://domain.com {
  proxy / tasks.app:80 
}

Actual generated config file:

http://domain.com {
  proxy / tasks.app:80 / app1
}

Issue can be reproduced with lucaslorentz/caddy-docker-proxy:test image.


All listed issues have workarounds and are not critical, but together they contribute to a much bigger problem: huge and repetitive configurations, which may lead to mistakes when adding or removing directives, or simply to missing configuration directives, like https redirects.
Here is what I mean. Before the changes, our docker-compose file looked similar to this:

services:
  proxy:
    command: ['-log', 'stdout']
    deploy:
      labels:
        caddy.address: (default)
        caddy.errors:
        caddy.header:   / -Server
        caddy.log:      / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy.redir.if: "{scheme} is http"
        caddy.redir./:  https://{host}{uri}
        caddy.redir:    301
        caddy.tls:      cert.pem key.pem
    image: lucaslorentz/caddy-docker-proxy:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    deploy:
      labels:
        caddy.address:           http://app1.domain.com https://app1.domain.com
        caddy.import:            default
        caddy.proxy.transparent:
        caddy.targetport:        80
    image: brndnmtthws/nginx-echo-headers

  app2:
    deploy:
      labels:
        caddy.address:           http://app2.domain.com https://app2.domain.com
        caddy.import:            default
        caddy.proxy.transparent:
        caddy.targetport:        80
    image: brndnmtthws/nginx-echo-headers

  app3:
  ...

Which resulted in the following config:

(default) {
  errors
  header / -Server
  log / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
  redir 301 {
    / https://{host}{uri}
    if {scheme} is http
  }
  tls cert.pem key.pem
}
http://app1.domain.com https://app1.domain.com {
  import default
  proxy / caddy_app1:80 {
    transparent
  }
}
http://app2.domain.com https://app2.domain.com {
  import default
  proxy / caddy_app2:80 {
    transparent
  }
}
...

But now we had to rewrite it to the following:

services:
  proxy:
    command: ['-log', 'stdout']
    image: lucaslorentz/caddy-docker-proxy:test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    deploy:
      labels:
        caddy_1.address:           http://app1.domain.com
        caddy_1.redir:             https://{host}{uri}
        caddy_1.header:            / -Server
        caddy_1.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.address:           https://app1.domain.com
        caddy_2.errors:
        caddy_2.header:            / -Server
        caddy_2.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.proxy.transparent:
        caddy_2.targetport:        80
        caddy_2.tls:               cert.pem key.pem
    image: brndnmtthws/nginx-echo-headers

  app2:
    deploy:
      labels:
        caddy_1.address:           http://app2.domain.com
        caddy_1.redir:             https://{host}{uri}
        caddy_1.header:            / -Server
        caddy_1.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.address:           https://app2.domain.com
        caddy_2.errors:
        caddy_2.header:            / -Server
        caddy_2.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.proxy.transparent:
        caddy_2.targetport:        80
        caddy_2.tls:               cert.pem key.pem
    image: brndnmtthws/nginx-echo-headers

  app3:
  ...

Wrong swarm network ip address

Hi @lucaslorentz,

I have a problem with the plugin.

It uses the wrong ip address and can't reach the backend container.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.4.7/32 brd 10.0.4.7 scope global lo
       valid_lft forever preferred_lft forever
    inet 10.0.3.11/32 brd 10.0.3.11 scope global lo
       valid_lft forever preferred_lft forever
259: eth0@if260: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 02:42:0a:00:04:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.4.10/24 brd 10.0.4.255 scope global eth0
       valid_lft forever preferred_lft forever

caddyfile

example.com {
  errors stdout
  log stdout
  proxy / 10.0.4.10:2015
}

Ping test from web proxy container.

docker exec -ti proxy_webproxy.1.zr2bmrav82ok717yu7j51jthp ping 10.0.4.10
PING 10.0.4.10 (10.0.4.10): 56 data bytes
^C
--- 10.0.4.10 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

I added an alias to the networks section to get a hostname. It resolves to 10.0.3.11:

docker exec -ti proxy_webproxy.1.zr2bmrav82ok717yu7j51jthp ping webspace1_httpd
PING webspace1_httpd (10.0.3.11): 56 data bytes
64 bytes from 10.0.3.11: seq=0 ttl=64 time=0.060 ms
64 bytes from 10.0.3.11: seq=1 ttl=64 time=0.091 ms
64 bytes from 10.0.3.11: seq=2 ttl=64 time=0.083 ms
^C
--- webspace1_httpd ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.078/0.091 ms

So the caddy plugin uses the wrong IP address?
The backend has two networks attached (proxy, mysql).

Any idea how to solve my issue?

Caddyfile loaded multiple times; first by docker, then by short

version: '3.7'

configs:
  caddy-basic-content:
    file: ./caddy/Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:alpine
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy
    command: -email ${EMAIL} -log stdout
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 1
      restart_policy:
        condition: any
      resources:
        reservations:
          cpus: '0.1'
          memory: 200M


  postgres:
    image: postgres
    networks:
      - backend
    env_file:
      - .env
    volumes:
      - postgres:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 5000:80
    env_file:
      - .env
    networks:
      - caddy
      - backend

  rabbitmq:
    image: rabbitmq
    networks:
      - backend
    environment:
      RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 1024MiB

  redis:
    image: redis
    networks:
      - backend
    deploy:
      resources:
        limits:
          memory: 1024M

  celery:
    image: yswtrue/careerke
    env_file:
      - .env
    networks:
      - backend
    volumes:
      - ../:/var/www/careerke_backend
    working_dir: /var/www/careerke_backend
    command:
      - celery
      - -A
      - careerke_backend
      - worker
      - -l
      - info

  careerke_backend:
    image: yswtrue/careerke
    env_file:
      - .env
    volumes:
      - ../:/var/www/careerke_backend
    command:
      - uwsgi
      - --http
      - :8080
      - --module
      - careerke_backend.wsgi
    networks:
      - caddy
      - backend
    working_dir: /var/www/careerke_backend
    deploy:
      labels:
        caddy.address: ${HOST}
        caddy.targetport: 8080

volumes:
  postgres:
  caddy:

networks:
  caddy:
    driver: overlay
  backend:
    driver: overlay

This is my docker-compose.yml. Caddy starts the first time I deploy, but when I redeploy the docker-compose.yml, the caddy container won't start. These are the logs:

careerke_caddy.1.9og35psybrg9@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:51 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.lx84chwl25k2@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:19:04 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.jhh5o7ffv78z@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:45 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.id8qwx4fjk77@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:58 Caddyfile loaded multiple times; first by docker, then by short

Thanks.

Preparing Release v0.2.0

Breaking changes:

  • Add default port behavior. When caddy.targetport is not specified, the proxy directive is generated without a port and caddy defaults it based on the protocol (http: 80, https: 443). That means proxy directives are now generated just by specifying the caddy.address label. If you don't want a proxy directive, for example when creating snippets, use the caddy label instead of caddy.address.

New features:

  • ARM arm32v6 images
  • Add label caddy.targetprotocol to specify the protocol that should be added to the proxy directive
  • Avoid using ingress network IPs in proxies
  • Validate the new caddyfile before replacing the current one, which improves stability when configs are wrong
  • Add configuration to change the label prefix (caddy by default). That allows you to split your cluster into multiple caddy groups. Contribution from @jtyr
  • Avoid errors when swarm is not available by skipping swarm objects inspection
  • Sort websites by their names. That makes sure caddyfile snippets are always inserted before normal websites. Contribution from @jtyr
  • Add swarm configs content to the beginning of caddyfiles. You have to add a label with the label prefix to your swarm config.
  • Increase the polling interval to 30 seconds and make it configurable through the CLI option docker-polling-interval or the env variable CADDY_DOCKER_POLLING_INTERVAL
  • Windows images with support for automatic caddy reload

multiple caddy environment

Hi

I have a situation where we want to use a 2-tier caddy setup in a frontend / backend configuration.

Is it possible to introduce some sort of namespaces so we can direct config to different caddies in the network?

ie

---

version: '3.2'
services:
  caddy-front:
    image: jcowey/caddy
    volumes:
      - $(pwd)/caddyfile:/etc/caddy/Caddyfile
    ports:
      - 80:80
      - 443:443
    networks:
      - webnet1

  caddy-backend1:
    image: jcowey/caddy
    networks:
      - webnet2
      - webnet1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  caddy-backend2:
    image: jcowey/caddy
    networks:
      - webnet2
      - webnet1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    image: some/image
    networks:
      - webnet2
    labels:
      caddy1.address: apptest1.lvh.me:80
      caddy1.targetport: 80

  app2:
    image: some/image
    networks:
      - webnet2
    labels:
      caddy2.address: apptest2.lvh.me:80
      caddy2.targetport: 80

networks:
  webnet1:
    external: true
  webnet2:
    external: true

This will probably involve starting caddy with a flag to define which id it is.

hope this makes sense

Regards

how to add a nested directive like header?

For example, here is what we want:

header /api {
	Access-Control-Allow-Origin  *
	Access-Control-Allow-Methods "GET, POST, OPTIONS"
	-Server
}

How do we add labels for caddy-docker-proxy?
Currently it only supports

      caddy.header_1: "/ Access-Control-Allow-Origin  *"
      caddy.header_2: '/ Access-Control-Allow-Methods "GET, POST, OPTIONS"'
      caddy.header_3: "/ -Server"

and generate:

http://test.example.com {
  header / Access-Control-Allow-Origin  *
  header / Access-Control-Allow-Methods "GET, POST, OPTIONS"
  header / -Server
  proxy / 172.18.0.8:80
  tls off
}

It's not a nested directive.
The same problem occurs when we want to define something for the backend:

proxy / web1.local:80 web2.local:90 web3.local:100 {
	policy round_robin
	health_check /health
	transparent
        header_downstream Server  {<Header}
}

How could we define them in label format?
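One hedged guess, following the same key-nesting pattern that caddy.proxy.transparent and caddy.proxy.keepalive use elsewhere in these issues (untested sketch; whether header accepts nested keys this way is an assumption):

      caddy.address: test.example.com
      caddy.header: /api
      caddy.header.Access-Control-Allow-Origin: "*"
      caddy.header.Access-Control-Allow-Methods: '"GET, POST, OPTIONS"'
      caddy.header.-Server: ""

The proxy sub-directives in the second example (policy, health_check, transparent, header_downstream) would follow the same pattern as nested keys under caddy.proxy.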

Tricky to add more directives to the proxy block

...
-l caddy.address=example.com \
-l caddy.targetport=8080 \
-l 'caddy.proxy=/socket backend:8080/socket' \
-l 'caddy.proxy./socket backend:8080/socket=websocket' \
...

results in:

example.com {
  proxy / 172.17.0.1:8080 {
    /socket backend:8080/socket websocket
  }
}

while I was going for:

example.com {
  proxy / 172.17.0.1:8080
  proxy /socket backend:8080/socket {
    websocket
  }
}
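By analogy with the numbered caddy.header_1 / caddy.header_2 labels shown in the previous issue, a second, independently keyed proxy directive might be expressible like this (untested sketch; whether the numeric suffix applies to proxy is an assumption):

-l caddy.address=example.com \
-l caddy.targetport=8080 \
-l 'caddy.proxy_1=/socket backend:8080/socket' \
-l 'caddy.proxy_1.websocket='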

Plugin is always adding proxy directive

After the recent changes, the plugin is always adding the proxy directive, regardless of whether the site has any proxy-related data defined. Before, we could use:

    labels:
      caddy_1.address: http://example.lan:8080
      caddy_1.redir: https://example.lan:8443{uri}

      caddy_2.address: https://example.lan:8443
      caddy_2.tls: self_signed
      caddy_2.targetport: 80
      caddy_2.proxy.keepalive: 0

which generated:

http://example.lan:8080 {
  redir https://example.lan:8443{uri}
}
https://example.lan:8443 {
  tls self_signed
  proxy / myservice:80 {
    keepalive 0
  }
}

but now it generates:

http://example.lan:8080 {
  ### SEE THE LINE BELOW ###
  proxy / myservice:80
  redir https://example.lan:8443{uri}
}
https://example.lan:8443 {
  tls self_signed
  proxy / myservice:80  {
    keepalive 0
  }
}

Compatibility with rancher server networking

I found your reverse proxy plugin and could move from my own custom solution to yours.
But to be compatible with rancher server, the IP address has to be read from a custom label.

            "IpcMode": "",
                "io.rancher.cni.network": "ipsec",
                "io.rancher.container.ip": "10.42.155.12/16",

Would it be possible to add this custom value, or optionally override the IP variable with it?
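A rough Go sketch of what such an override could look like on the plugin side (the helper and its wiring are hypothetical; only the io.rancher.container.ip label and its CIDR-suffixed value come from the report above):

package plugin

import "strings"

// rancherContainerIP returns the IP from the rancher label, if present,
// stripping the CIDR suffix (e.g. "10.42.155.12/16" -> "10.42.155.12").
func rancherContainerIP(labels map[string]string) (string, bool) {
    raw, ok := labels["io.rancher.container.ip"]
    if !ok || raw == "" {
        return "", false
    }
    if i := strings.Index(raw, "/"); i >= 0 {
        raw = raw[:i]
    }
    return raw, true
}

The generator would then prefer this value over the network-derived IP only when the label exists.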

Proxy won't forward domain.com/<subdir>

Hello there. First of all, great proxy, but I can't get it to work for me. I'm not the best with docker and I'm still learning, so please forgive me if I ask stupid questions.

So this is my scenario: I want to proxy domain.com/wp1 to a WordPress container. The WordPress container has two networks attached, one for the backend and one for the frontend. The backend one is used for communication between WordPress and the MySQL server. The frontend one is used to proxy the request domain.com/wp1 to the container.

I do this because I don't want other containers to be able to access my WordPress database.

And this approach worked somewhat: the proxy detects the container and generates a proxy directive

domain.com/wp1 {
  proxy / 172.19.0.3
}

But in the end it doesn't work; I cannot access it.

The labels I use are

labels:
  - caddy.address=domain.com/wp1
  - caddy.proxy.transparent

Am I doing something wrong here?

Problem with port number

I modified the example yml for use with docker-compose without swarm and got the following error message.

ERROR: for whoami0  Cannot create container for service whoami0: json: cannot unmarshal number into Go value of type string

ERROR: for whoami1  Cannot create container for service whoami1: json: cannot unmarshal number into Go value of type string

yml

version: '2'

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:alpine
    ports:
      - 2015:2015
    command: -email [email protected] -agree=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  whoami0:
    image: jwilder/whoami
    labels:
      caddy.address: whoami0.example.com
      caddy.targetport: 8000
      caddy.tls: "off"

  whoami1:
    image: jwilder/whoami
    labels:
      caddy.address: whoami1.example.com
      caddy.targetport: 8000
      caddy.tls: "off"

To fix it, change the port number to a string: caddy.targetport: "8000" instead of caddy.targetport: 8000.
Is there a way to convert or work around this issue on the plugin / Go side?

Should we change docker/docker to moby/moby

"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/client"

A while ago, docker decided to rename their repositories and rebrand a few things under moby. While GitHub has redirects in place to maintain backward compatibility, I think we might want to update the code to reference the new location by replacing docker/docker with moby/moby.

Connect to a Docker daemon on TCP

Currently, only the /var/run/docker.sock socket connection seems to be possible. This works fine if the Docker Swarm manager is on the same machine where Caddy is running or the file system containing the socket file is mounted accordingly. It might be useful for some people if instead a TCP address is configurable, while keeping the socket as the default.
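For what it's worth, the Docker Go client used by the plugin can also be constructed from the environment, which honours DOCKER_HOST (e.g. tcp://10.0.0.5:2375) and falls back to the unix socket when it is unset. A sketch, not the plugin's current wiring (depending on the vendored client version this may be client.NewEnvClient() instead):

import "github.com/docker/docker/client"

// DOCKER_HOST selects the endpoint; DOCKER_API_VERSION, DOCKER_CERT_PATH and
// DOCKER_TLS_VERIFY are picked up from the environment as well.
dockerClient, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
    return err
}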

Add loadbalancing to multiple containers/replicas

Hi @lucaslorentz ,

traffic isn't load balanced between backends / replicas at the moment.
Instead of adding the backend to an existing caddyfile entry, it generates a second one.

http://192.168.199.22/httpd1 {
  proxy / 10.0.2.5:80
}
http://192.168.199.22/httpd1 {
  proxy / 10.0.2.6:80
}

It should be balanced across the backend containers:

http://192.168.199.22/httpd1 {
  proxy / 10.0.2.5:80 10.0.2.6:80
}

No 443 port listener

I tried a new test deploy with tls, but caddy has no 443 listener?

docker exec -ti proxy_webproxy.1.xdspyx2d6lduvfyle7tkymx57 netstat -punta |grep LISTEN
tcp        0      0 127.0.0.11:45748        0.0.0.0:*               LISTEN      -
tcp        0      0 :::80                   :::*                    LISTEN      1/caddy

But it should have tls vhosts?

2018/04/30 15:38:27 [INFO] New CaddyFile:
example.com www.example.com {
  errors stdout
  log stdout
  proxy / 10.0.2.104:2015
}
sub3.example.com {
  basicauth / <USER> <PW>
  proxy / 10.0.2.98:8080
}
2018/04/30 15:38:27 [INFO] SIGUSR1: Reloading
2018/04/30 15:38:27 [INFO] Reloading
2018/04/30 15:38:27 [INFO] Reloading complete
2018/04/30 15:38:27 http: Server closed

http works fine, but without the 443 port listener all the redirected http to https traffic will fail...
Tested with 0.1.0-alpine and 0.1.2-alpine.

It works fine with manual deployment (docker run...), but the first docker stack deploy failed today.

Custom config without docker swarm

What do you think about adding a way to merge a default Caddyfile that we can mount in the container, so we can use this without swarm?

This would allow adding default config like redirecting http to https.

Docker Swarm and service Tags

It looks like caddy-docker-proxy doesn't recognize service labels correctly.
If I deploy caddy-docker-proxy on each node, then I end up having one load balancer for the container.
But it should be able to run on only one node (a manager).

PS:
Isn't reading information from both services and containers redundant?
It's not a big deal, as they should result in the same directives, but maybe allow choosing a mode:

  • docker to look only at containers
  • swarm to look only at services
    Or retain the container ids that were already parsed through services

Security problem with mounted docker.sock?

Because docker.sock is mounted into the caddy reverse proxy, couldn't this be a security issue? What do you think? Would it be possible to split it up into a caddy reverse proxy and a second container with a shared caddyfile / caddy config?

Dot in attributes

Simple question: how do I do this?

# .htaccess / data / config / ... shouldn't be accessible from outside
status 403 {
	/.htaccess
}

caddy.status./.htaccess= became / { .htaccess }

Negotiate API version instead of relying on DOCKER_API_VERSION alone

Using DOCKER_API_VERSION works; however, the Docker client API is also able to negotiate an API version itself:

dockerPing, err := dockerClient.Ping(ctx)
// deal with err
dockerClient.NegotiateAPIVersionPing(dockerPing)

// or

dockerClient.NegotiateAPIVersion(ctx)

The ping response includes the Docker daemon's max supported API version, and the client will set the right options to make everything work. You can also call NegotiateAPIVersion, which does the same thing, but discards the errors and uses defaults if something goes wrong. The former's probably better, since you can bail out early if Ping fails (i.e. you can report a problem with the connection to Docker before trying to start using it).

It'd be a good idea to stick this into GenerateCaddyFile after you create the client.
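Concretely, the shape being proposed is something like the following (a sketch; the error handling and the surrounding function signature are assumptions):

dockerPing, err := dockerClient.Ping(ctx)
if err != nil {
    // bail out early: the connection to Docker itself is broken
    return err
}
dockerClient.NegotiateAPIVersionPing(dockerPing)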

EDIT: I should also mention that this is 100% backwards compatible, as setting DOCKER_API_VERSION causes the client to ignore negotiation and take what was specified. I'm happy to send a PR if that's easier. 😄

Error while building

This line in build.sh keeps throwing this error.
Is there something missing from the docs on how to build?

➜  caddy-docker-proxy git:(master) ✗ go test -race -v $(glide novendor)

plugin/generator.go:13:2: cannot find package "github.com/docker/docker/api/types" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types (from $GOPATH)
plugin/loader.go:10:2: cannot find package "github.com/docker/docker/api/types/filters" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types/filters (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types/filters (from $GOPATH)
plugin/generator.go:14:2: cannot find package "github.com/docker/docker/api/types/swarm" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types/swarm (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types/swarm (from $GOPATH)
plugin/generator.go:15:2: cannot find package "github.com/docker/docker/client" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/client (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/client (from $GOPATH)
main.go:7:2: cannot find package "github.com/caddyserver/dnsproviders/route53" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/caddyserver/dnsproviders/route53 (from $GOROOT)
        /Users/xo/code/go/src/github.com/caddyserver/dnsproviders/route53 (from $GOPATH)
main.go:5:2: cannot find package "github.com/lucaslorentz/caddy-docker-proxy/plugin" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/lucaslorentz/caddy-docker-proxy/plugin (from $GOROOT)
        /Users/xo/code/go/src/github.com/lucaslorentz/caddy-docker-proxy/plugin (from $GOPATH)
main.go:9:9: cannot find package "github.com/miekg/caddy-prometheus" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/miekg/caddy-prometheus (from $GOROOT)
        /Users/xo/code/go/src/github.com/miekg/caddy-prometheus (from $GOPATH)

This is the only change I've made.

diff --git a/main.go b/main.go
index e3e1f66..466db20 100644
--- a/main.go
+++ b/main.go
@@ -6,6 +6,8 @@ import (

        _ "github.com/caddyserver/dnsproviders/route53"

+        _ "github.com/miekg/caddy-prometheus"
+
        // Caddy
        "github.com/mholt/caddy/caddy/caddymain"
 )

Missing proxy directive if errors / log directive added

I have a working proxy caddyfile with caddy.address=domain, but if additional directives are added, the proxy section is missing.

base works fine

-l "caddy.address=<DOMAIN1> <DOMAIN2>"

added

-l caddy.log=stdout -l caddy.error=stdout

result

<DOMAIN1> <DOMAIN2> {
  errors stdout
  log stdout
}

Maybe a bug? Why is the proxy section missing?

[Help] How to convert complex nginx config

Hi @lucaslorentz,
I have a complex nginx reverse proxy example which should be converted to caddy-docker-proxy.

I don't work with nginx, so it's not totally clear to me how it needs to be set up to work as intended.

https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Production-guide.md

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name example.com;

  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

I would ask the caddy community, but it is a caddy-docker-proxy specific question.

Some parts should be easy (tls, websocket / headers), but I don't know how the location / @proxy part works and how it should be ported.

The backends are ports 3000 and 4000 of different containers, selected by path / websocket connection, I think?

Generator is using IP for only some configs?

Is there a reason the Caddyfile is now being updated with IP addresses, whereas it used to have the container DNS name?

All my other containers are like the portainer one.

homeassistant.domain.tld {
  proxy / 10.0.10.11:8123 {
    transparent
    websocket
  }
}
portainer.domain.tld {
  proxy / portainer:9000
}

[Help] Is there a way to configure using staging environment of Let's Encrypt?

I'm using the docker image of this project as the entry point of my services.
However, I forgot to expose port 80 (I exposed port 443 only), so the HTTP challenges failed, and due to the restart policy the container restarted again and again, and thus the rate limit of Let's Encrypt was hit.

I am wondering if there is a way to configure caddy-docker-proxy to use the staging environment of Let's Encrypt until I get all my services working.

Thank you!
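One possible approach, hedged (it assumes the bundled caddy build exposes the standard -ca flag): point ACME at the Let's Encrypt staging directory through the container command, for example

command: -agree -email you@example.com -ca https://acme-staging-v02.api.letsencrypt.org/directory

and remove the -ca flag again once all services work, so real certificates are issued.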

proxying to https backend

Hi,

I need to proxy to an https backend on 443.

Declaring the port as 443 causes a 400 error due to caddy trying to communicate over http.

How would I proxy to an https backend?
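A hedged sketch using the caddy.targetprotocol label mentioned in the v0.2.0 release notes above, plus insecure_skip_verify if the backend certificate isn't trusted (names are placeholders):

labels:
  caddy.address: service.example.com
  caddy.targetport: 443
  caddy.targetprotocol: https
  caddy.proxy.insecure_skip_verify: ""

This should make the generated proxy directive target https on port 443 instead of plain http.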
