
caddy-docker-proxy's Introduction

Caddy-Docker-Proxy


NEW MODULE NAME!

We've renamed our Go module. From version 2.7.0 forward you should import caddy-docker-proxy using github.com/lucaslorentz/caddy-docker-proxy/v2, or pin a specific version with github.com/lucaslorentz/caddy-docker-proxy/v2@v2.7.0.

The old name github.com/lucaslorentz/caddy-docker-proxy/plugin will remain available for backwards compatibility, but it will not receive the latest versions.

Introduction

This plugin enables Caddy to be used as a reverse proxy for Docker containers via labels.

How does it work?

The plugin scans Docker metadata, looking for labels indicating that the service or container should be served by Caddy.

Then, it generates an in-memory Caddyfile with site entries and proxies pointing to each Docker service by their DNS name or container IP.

Every time a Docker object changes, the plugin updates the Caddyfile and triggers Caddy to gracefully reload, with zero downtime.


Basic usage example, using docker-compose

$ docker network create caddy

caddy/docker-compose.yml

version: "3.7"
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
      - 443:443
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
    networks:
      - caddy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data
    restart: unless-stopped

networks:
  caddy:
    external: true

volumes:
  caddy_data: {}
$ docker-compose up -d

whoami/docker-compose.yml

version: '3.7'
services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy
    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    external: true
$ docker-compose up -d

Now, visit https://whoami.example.com. The site will be served automatically over HTTPS with a certificate issued by Let's Encrypt or ZeroSSL.

Labels to Caddyfile conversion

Please first read the Caddyfile Concepts documentation to understand the structure of a Caddyfile.

Any label prefixed with caddy will be converted into a Caddyfile config, following these rules:

Tokens and arguments

Keys are the directive name, and values are whitespace-separated arguments:

caddy.directive: arg1 arg2
↓
{
	directive arg1 arg2
}

If you need whitespace or line-breaks inside one of the arguments, use double-quotes or backticks around it:

caddy.respond: / "Hello World" 200
↓
{
	respond / "Hello World" 200
}
caddy.respond: / `Hello\nWorld` 200
↓
{
	respond / `Hello
World` 200
}
caddy.respond: |
	/ `Hello
	World` 200
↓
{
	respond / `Hello
World` 200
}

Dots represent nesting, and grouping is done automatically:

caddy.directive: argA  
caddy.directive.subdirA: valueA  
caddy.directive.subdirB: valueB1 valueB2
↓
{
	directive argA {  
		subdirA valueA  
		subdirB valueB1 valueB2  
	}
}

Arguments for the parent directive are optional (e.g. no arguments to directive, setting subdirective subdirA directly):

caddy.directive.subdirA: valueA
↓
{
	directive {
		subdirA valueA
	}
}

Labels with empty values generate a directive without any arguments:

caddy.directive:
↓
{
	directive
}

Ordering and isolation

Be aware that, when the Caddyfile is parsed (after it has been generated from labels), directives are sorted according to the default directive order defined by Caddy.

Directives from labels are ordered alphabetically by default:

caddy.bbb: value
caddy.aaa: value
↓
{
	aaa value 
	bbb value
}

The suffix _<number> isolates directives that would otherwise be grouped:

caddy.route_0.a: value
caddy.route_1.b: value
↓
{
	route {
		a value
	}
	route {
		b value
	}
}

The prefix <number>_ also isolates directives, but additionally defines a custom ordering for them (mainly relevant within route blocks); directives without an order prefix go last:

caddy.1_bbb: value
caddy.2_aaa: value
caddy.3_aaa: value
↓
{
	bbb value
	aaa value
	aaa value
}

Sites, snippets and global options

A label caddy creates a site block:

caddy: example.com
caddy.respond: "Hello World" 200
↓
example.com {
	respond "Hello World" 200
}

Or a snippet:

caddy: (encode)
caddy.encode: zstd gzip
↓
(encode) {
	encode zstd gzip
}

It's also possible to isolate Caddy configurations using the suffix _<number>:

caddy_0: (snippet)
caddy_0.tls: internal
caddy_1: site-a.com
caddy_1.import: snippet
caddy_2: site-b.com
caddy_2.import: snippet
↓
(snippet) {
	tls internal
}
site-a.com {
	import snippet
}
site-b.com {
	import snippet
}

Global options can be defined by not setting any value for caddy. They can be set on any container/service, including caddy-docker-proxy itself. Here is an example:

caddy.email: you@example.com
↓
{
	email you@example.com
}

Named matchers can be created using @ inside labels:

caddy: localhost
caddy.@match.path: /sourcepath /sourcepath/*
caddy.reverse_proxy: @match localhost:6001
↓
localhost {
	@match {
		path /sourcepath /sourcepath/*
	}
	reverse_proxy @match localhost:6001
}

Go templates

Go templates can be used inside label values to increase flexibility. From templates, you have access to the current Docker resource's metadata. Keep in mind that the structure describing a Docker container is different from the one describing a service.

While you can access a service name like this:

caddy.respond: /info "{{.Spec.Name}}"
↓
respond /info "myservice"

The equivalent for accessing a container name would be:

caddy.respond: /info "{{index .Names 0}}"
↓
respond /info "mycontainer"

Sometimes it's not possible to create labels with empty values, for example when using a UI to manage Docker. In that case, you can use Go templates to generate an empty label value.

caddy.directive: {{""}}
↓
directive

Template functions

The following functions are available for use inside templates:

upstreams

Returns all addresses for the current Docker resource, separated by whitespace.

For services, that would be the service DNS name when proxy-service-tasks is false, or all running task IPs when proxy-service-tasks is true.

For containers, that would be the container IPs.

Only containers/services that are connected to Caddy ingress networks are used.

⚠️ Caddy-Docker-Proxy makes a best effort to automatically detect the ingress networks, but that logic fails in some scenarios (#207). For a more resilient setup, you can manually configure the Caddy ingress networks using the CLI option ingress-networks or the environment variable CADDY_INGRESS_NETWORKS. You can also specify the ingress network per container/service by adding a caddy_ingress_network label with the network name.
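For example, a minimal sketch of pinning the ingress network per service with the caddy_ingress_network label (the network name caddy is taken from the basic usage example above and is an assumption here):

whoami:
  image: traefik/whoami
  networks:
    - caddy
  labels:
    caddy_ingress_network: caddy
    caddy: whoami.example.com
    caddy.reverse_proxy: "{{upstreams 80}}"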

Usage: upstreams [http|https] [port]

Examples:

caddy.reverse_proxy: {{upstreams}}
↓
reverse_proxy 192.168.0.1 192.168.0.2
caddy.reverse_proxy: {{upstreams https}}
↓
reverse_proxy https://192.168.0.1 https://192.168.0.2
caddy.reverse_proxy: {{upstreams 8080}}
↓
reverse_proxy 192.168.0.1:8080 192.168.0.2:8080
caddy.reverse_proxy: {{upstreams http 8080}}
↓
reverse_proxy http://192.168.0.1:8080 http://192.168.0.2:8080

⚠️ Be careful with quotes around upstreams. Quotes should only be added when using YAML.

caddy.reverse_proxy: "{{upstreams}}"
↓
reverse_proxy "192.168.0.1 192.168.0.2"

Examples

Proxying all requests to a domain to the container

caddy: example.com
caddy.reverse_proxy: {{upstreams}}

Proxying all requests to a domain to a subpath in the container

caddy: example.com
caddy.rewrite: * /target{path}
caddy.reverse_proxy: {{upstreams}}

Proxying requests matching a path, while stripping that path prefix

caddy: example.com
caddy.handle_path: /source/*
caddy.handle_path.0_reverse_proxy: {{upstreams}}

Proxying requests matching a path, rewriting to different path prefix

caddy: example.com
caddy.handle_path: /source/*
caddy.handle_path.0_rewrite: * /target{uri}
caddy.handle_path.1_reverse_proxy: {{upstreams}}

Proxying all websocket requests, and all requests to /api*, to the container

caddy: example.com
caddy.@ws.0_header: Connection *Upgrade*
caddy.@ws.1_header: Upgrade websocket
caddy.0_reverse_proxy: @ws {{upstreams}}
caddy.1_reverse_proxy: /api* {{upstreams}}

Proxying multiple domains, with certificates for each

caddy: example.com, example.org, www.example.com, www.example.org
caddy.reverse_proxy: {{upstreams}}

More community-maintained examples are available in the Wiki.

Docker configs

Note: this is for Docker Swarm only. Alternatively, use CADDY_DOCKER_CADDYFILE_PATH or --caddyfile-path.

You can also add raw text to your Caddyfile using Docker configs. Just add a label with the Caddy prefix to your configs, and the whole config content will be inserted at the beginning of the generated Caddyfile, outside any site blocks.

Here is an example:
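A minimal sketch for a Swarm stack, assuming the raw text lives in ./Caddyfile next to the compose file (the config name caddy-global-options is a placeholder); the label value is left empty so the config only carries the caddy prefix:

configs:
  caddy-global-options:
    file: ./Caddyfile
    labels:
      caddy:

The content of ./Caddyfile is then prepended to the generated Caddyfile.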

Proxying services vs containers

Caddy-Docker-Proxy can proxy to Swarm services or to plain containers. Both features are always enabled; what determines the proxy target is where you define your labels.

Services

To proxy Swarm services, labels should be defined at the service level. In a docker-compose file, that means inside deploy, like:

services:
  foo:
    deploy:
      labels:
        caddy: service.example.com
        caddy.reverse_proxy: {{upstreams}}

Caddy will use the service DNS name or all service task IPs as targets, depending on the proxy-service-tasks configuration.

Containers

To proxy containers, labels should be defined at the container level. In a docker-compose file, that means outside deploy, like:

services:
  foo:
    labels:
      caddy: service.example.com
      caddy.reverse_proxy: {{upstreams}}

Execution modes

Each Caddy-Docker-Proxy instance can be executed in one of the following modes.

Server

Acts as a proxy to your Docker resources. The server starts without any configuration, and will not serve anything until it is configured by a "controller".

To make a server discoverable and configurable by controllers, you need to mark it with the label caddy_controlled_server and define the controller network via the CLI option controller-network or the environment variable CADDY_CONTROLLER_NETWORK.

Server instances don't need access to the Docker host socket, and you can run them on manager or worker nodes.

Configuration example

Controller

The controller monitors your Docker cluster, generates the Caddy configuration, and pushes it to all servers it finds in the cluster.

When controller instances are connected to more than one network, it is also necessary to define the controller network via the CLI option controller-network or the environment variable CADDY_CONTROLLER_NETWORK.

Controller instances require access to the Docker host socket.

A single controller instance can configure all server instances in your cluster.

Configuration example
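A rough sketch of a distributed Swarm stack combining both modes (network names, subnet, and placement constraints are assumptions; see the examples folder in the repository for a maintained version):

services:
  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy_controller
      - caddy
    environment:
      - CADDY_DOCKER_MODE=server
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    deploy:
      labels:
        caddy_controlled_server: ""

  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    networks:
      - caddy_controller
      - caddy
    environment:
      - CADDY_DOCKER_MODE=controller
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager

networks:
  caddy:
    external: true
  caddy_controller:
    driver: overlay
    ipam:
      config:
        - subnet: 10.200.200.0/24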

Standalone (default)

This mode executes a controller and a server in the same instance and doesn't require additional configuration.

Configuration example

Caddy CLI

This plugin extends Caddy's CLI with the command caddy docker-proxy.

Run caddy help docker-proxy to see all available flags.

Usage of docker-proxy:
  --caddyfile-path string
        Path to a base Caddyfile that will be extended with Docker sites
  --envfile
        Path to an environment file with environment variables in the KEY=VALUE format to load into the Caddy process
  --controller-network string
        Network allowed to configure Caddy server in CIDR notation. Ex: 10.200.200.0/24
  --ingress-networks string
        Comma separated name of ingress networks connecting Caddy servers to containers.
        When not defined, networks attached to controller container are considered ingress networks
  --docker-sockets
        Comma separated docker sockets
        When not defined, DOCKER_HOST (or default docker socket if DOCKER_HOST not defined)
  --docker-certs-path
        Comma separated certificate paths; leave an entry empty when the docker socket at the same index needs no certificates, e.g. cert_path0,,cert_path2
  --docker-apis-version
        Comma separated API versions; leave an entry empty when the docker socket at the same index needs no explicit API version, following the same positional convention as --docker-certs-path
  --label-prefix string
        Prefix for Docker labels (default "caddy")
  --mode
        Which mode this instance should run: standalone | controller | server
  --polling-interval duration
        Interval Caddy should manually check Docker for a new Caddyfile (default 30s)
  --event-throttle-interval duration
        Interval to throttle caddyfile updates triggered by docker events (default 100ms)
  --process-caddyfile
        Process Caddyfile before loading it, removing invalid servers (default true)
  --proxy-service-tasks
        Proxy to service tasks instead of service load balancer (default true)
  --scan-stopped-containers
        Scan stopped containers and use their labels for Caddyfile generation (default false)

Those flags can also be set via environment variables:

CADDY_DOCKER_CADDYFILE_PATH=<string>
CADDY_DOCKER_ENVFILE=<string>
CADDY_CONTROLLER_NETWORK=<string>
CADDY_INGRESS_NETWORKS=<string>
CADDY_DOCKER_SOCKETS=<string>
CADDY_DOCKER_CERTS_PATH=<string>
CADDY_DOCKER_APIS_VERSION=<string>
CADDY_DOCKER_LABEL_PREFIX=<string>
CADDY_DOCKER_MODE=<string>
CADDY_DOCKER_POLLING_INTERVAL=<duration>
CADDY_DOCKER_PROCESS_CADDYFILE=<bool>
CADDY_DOCKER_PROXY_SERVICE_TASKS=<bool>
CADDY_DOCKER_SCAN_STOPPED_CONTAINERS=<bool>
CADDY_DOCKER_NO_SCOPE=<bool, default scope used>

Check the examples folder to see how to set them in a Docker Compose file.
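For instance, a short sketch of setting a few of them on the caddy service (the values here are illustrative, not recommendations):

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_INGRESS_NETWORKS=caddy
      - CADDY_DOCKER_POLLING_INTERVAL=60s
      - CADDY_DOCKER_PROXY_SERVICE_TASKS=true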

Docker images

Docker images are available on Docker Hub: https://hub.docker.com/r/lucaslorentz/caddy-docker-proxy/

Choosing the version numbers

The safest approach is to use a full version number like 0.1.3. That way you lock to a specific build that works well for you.

But you can also use partial version numbers like 0.1. That means you will receive the most recent 0.1.x image. You will automatically receive updates without breaking changes.

Choosing between default and alpine images

Our default images are very small and safe because they only contain the Caddy executable. But they're also quite hard to troubleshoot because they don't have a shell or any other Linux utilities like curl or dig.

The alpine image variants are based on Alpine Linux, a very small distribution with a shell and basic utilities. Use the -alpine images if you want to trade some security and size for a better troubleshooting experience.

CI images

Images with ci in their tag were generated by automated builds. CI images reflect the current state of the master branch and their stability is not guaranteed. You may use CI images if you want to help test the latest features before they're officially released.

ARM architecture images

Currently we provide Linux x86_64 images by default.

You can also find images for other architectures like arm32v6 images that can be used on Raspberry Pi.

Windows images

We recently introduced experimental Windows container images with the tag suffix nanoserver-ltsc2022.

Be aware that this needs to be tested further.

This is an example of how to mount the Windows Docker pipe using the CLI:

$ docker run --rm -it -v //./pipe/docker_engine://./pipe/docker_engine lucaslorentz/caddy-docker-proxy:ci-nanoserver-ltsc2022

Custom images

If you need additional Caddy plugins, or need to use a specific version of Caddy, then you may use the builder variant of the official Caddy Docker image to make your own Dockerfile.

The main difference from the instructions on the official image is that you must override CMD to have the container run using the caddy docker-proxy command provided by this plugin.

ARG CADDY_VERSION=2.6.1
FROM caddy:${CADDY_VERSION}-builder AS builder

RUN xcaddy build \
    --with github.com/lucaslorentz/caddy-docker-proxy/v2 \
    --with <additional-plugins>

FROM caddy:${CADDY_VERSION}-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

CMD ["caddy", "docker-proxy"]

Connecting to Docker Host

The default connection to the Docker host varies per platform:

  • On Unix: unix:///var/run/docker.sock
  • On Windows: npipe:////./pipe/docker_engine

You can modify Docker connection using the following environment variables:

  • DOCKER_HOST: to set the URL to the Docker server.
  • DOCKER_API_VERSION: to set the version of the API to reach, leave empty for latest.
  • DOCKER_CERT_PATH: to load the TLS certificates from.
  • DOCKER_TLS_VERIFY: to enable or disable TLS verification; off by default.
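For example, a sketch of pointing the plugin at a remote Docker daemon over TCP with TLS (the host name and certificate path are placeholders):

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - DOCKER_HOST=tcp://docker-host.example.com:2376
      - DOCKER_CERT_PATH=/certs
      - DOCKER_TLS_VERIFY=1
    volumes:
      - ./certs:/certs:ro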

Volumes

On a production Docker Swarm cluster, it's very important to store Caddy's data folder on persistent storage. Otherwise Caddy will re-issue certificates every time it restarts, quickly exceeding Let's Encrypt rate limits.

To do that, map a persistent Docker volume to the /data folder.

For resilient production deployments, use multiple Caddy replicas and map the /data folder to a volume that supports multiple mounts, such as a network file sharing Docker volume plugin.

Multiple Caddy instances automatically orchestrate certificate issuance between themselves when they share the /data folder.
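As a sketch, a two-replica deployment sharing /data through an NFS-backed volume (the NFS address and export path are placeholders for whatever shared storage you use):

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    deploy:
      replicas: 2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - caddy_data:/data

volumes:
  caddy_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=nfs.example.com,rw
      device: ":/exports/caddy"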

Trying it

With docker-compose file

Clone this repository.

Deploy the compose file to a swarm cluster:

$ docker stack deploy -c examples/standalone.yaml caddy-docker-demo

Wait a bit for the services to start up...

Now you can access each service/container using different URLs:

$ curl -k --resolve whoami0.example.com:443:127.0.0.1 https://whoami0.example.com
$ curl -k --resolve whoami1.example.com:443:127.0.0.1 https://whoami1.example.com
$ curl -k --resolve whoami2.example.com:443:127.0.0.1 https://whoami2.example.com
$ curl -k --resolve whoami3.example.com:443:127.0.0.1 https://whoami3.example.com
$ curl -k --resolve config.example.com:443:127.0.0.1 https://config.example.com
$ curl -k --resolve echo0.example.com:443:127.0.0.1 https://echo0.example.com/sourcepath/something

After testing, delete the demo stack:

$ docker stack rm caddy-docker-demo

With run commands

$ docker run --name caddy -d -p 443:443 -v /var/run/docker.sock:/var/run/docker.sock lucaslorentz/caddy-docker-proxy:ci-alpine

$ docker run --name whoami0 -d -l caddy=whoami0.example.com -l "caddy.reverse_proxy={{upstreams 80}}" -l caddy.tls=internal traefik/whoami

$ docker run --name whoami1 -d -l caddy=whoami1.example.com -l "caddy.reverse_proxy={{upstreams 80}}" -l caddy.tls=internal traefik/whoami

$ curl -k --resolve whoami0.example.com:443:127.0.0.1 https://whoami0.example.com
$ curl -k --resolve whoami1.example.com:443:127.0.0.1 https://whoami1.example.com

$ docker rm -f caddy whoami0 whoami1

Building it

You can build Caddy using xcaddy or the caddy Docker builder image.

Use module name github.com/lucaslorentz/caddy-docker-proxy/v2 to add this plugin to your build.
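For example, a standalone build with xcaddy (extra plugins can be appended with additional --with flags):

$ xcaddy build --with github.com/lucaslorentz/caddy-docker-proxy/v2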

caddy-docker-proxy's People

Contributors

accesstechnology-mike, am97, amaelfr, avirut, bytemain, dependabot[bot], dervomsee, durandj, farzeni, francislavoie, hakvroot, heavenvolkoff, its-alex, jauderho, jonatasbaldin, jtyr, kiwiz, lucaslorentz, majr25, maxiride, ptman, rafaelgieschke, solidaxel, svendowideit, testwill, thejaymann, thekondor, unixfox


caddy-docker-proxy's Issues

Connect to a Docker daemon on TCP

Currently, only the /var/run/docker.sock socket connection seems to be possible. This works fine if the Docker Swarm manager is on the same machine where Caddy is running, or the file system containing the socket file is mounted accordingly. It might be useful for some people if instead a TCP address is configurable while keeping the socket as the default.

add option to disable plugin through command line parameter

As I've reported here:
caddyserver/caddy#2326 (comment)

My caddy server is currently not able to start up when the docker service is running, because I have a caddy executable which has this plugin compiled into it.

Is there a way to disable the plugin through a parameter?
I think it would be even better to disable this by default and then require the user to enable it through a parameter, since probably no one expects caddy to interact with docker when a Caddyfile was provided.

Recompiling caddy without this plugin is not a convenient option for me. I'd rather keep using the version of my distro, which contains this plugin. This way, I can update caddy with the package manager.

Dot in attributes

Simple question: how do I do this?

# .htaccess / data / config / ... shouldn't be accessible from outside
status 403 {
	/.htaccess
}

caddy.status./.htaccess= became / { .htaccess }

Shorthand for transparent proxy

Currently, to do a transparent proxy, I have to do this (based on the test file):

caddy: host.com
caddy.proxy: "/ service_name:80"
caddy.proxy.transparent: ""

I had to type the service name again and use 3 labels, instead of 2 or even a single label if #12 was solved somehow.

[FR]merge the same caddy.address from different containers

For example, take an application based on PHP + MySQL, like WordPress. It will create two containers, one for static files and another for the PHP backend, and both of them use the same domain. If we use the caddy docker plugin, it will report a duplicate error, because the two containers have the same label "caddy.address".

Wrong swarm network ip address

Hi @lucaslorentz,

I have a problem with the plugin.

It uses the wrong ip address and can't reach the backend container.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 10.0.4.7/32 brd 10.0.4.7 scope global lo
       valid_lft forever preferred_lft forever
    inet 10.0.3.11/32 brd 10.0.3.11 scope global lo
       valid_lft forever preferred_lft forever
259: eth0@if260: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 02:42:0a:00:04:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.4.10/24 brd 10.0.4.255 scope global eth0
       valid_lft forever preferred_lft forever

caddyfile

example.com {
  errors stdout
  log stdout
  proxy / 10.0.4.10:2015
}

Ping test from web proxy container.

docker exec -ti proxy_webproxy.1.zr2bmrav82ok717yu7j51jthp ping 10.0.4.10
PING 10.0.4.10 (10.0.4.10): 56 data bytes
^C
--- 10.0.4.10 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

Added an alias to networks section to get a hostname. It's resolved to 10.0.3.11

docker exec -ti proxy_webproxy.1.zr2bmrav82ok717yu7j51jthp ping webspace1_httpd
PING webspace1_httpd (10.0.3.11): 56 data bytes
64 bytes from 10.0.3.11: seq=0 ttl=64 time=0.060 ms
64 bytes from 10.0.3.11: seq=1 ttl=64 time=0.091 ms
64 bytes from 10.0.3.11: seq=2 ttl=64 time=0.083 ms
^C
--- webspace1_httpd ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.078/0.091 ms

So caddy plugin uses the wrong ip address?
The backend has two networks attached (proxy, mysql).

Any idea how to solve my issue?

Add loadbalancing to multiple containers/replicas

Hi @lucaslorentz ,

traffic isn't load balanced between backends / replicas at the moment.
Instead of adding the backend to an existing Caddyfile entry, it generates a second one.

http://192.168.199.22/httpd1 {
  proxy / 10.0.2.5:80
}
http://192.168.199.22/httpd1 {
  proxy / 10.0.2.6:80
}

It should be load balanced across the backend containers:

http://192.168.199.22/httpd1 {
  proxy / 10.0.2.5:80 10.0.2.6:80
}

Constant errors when not using swarm

dockerd[674]: time="2018-08-16T11:37:52.592586915+02:00" level=error msg="Error getting services: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."
dockerd[674]: time="2018-08-16T11:37:52.593329658+02:00" level=error msg="Handler for GET /v1.35/services returned error: This node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again."

Maybe something could be done to limit the requests and therefore error messages? Like only check for swarm once when caddy is starting and then remember it's not available?

No 443 port listener

Tried a new test deploy with TLS, but caddy has no 443 listener?

docker exec -ti proxy_webproxy.1.xdspyx2d6lduvfyle7tkymx57 netstat -punta |grep LISTEN
tcp        0      0 127.0.0.11:45748        0.0.0.0:*               LISTEN      -
tcp        0      0 :::80                   :::*                    LISTEN      1/caddy

But it should have TLS vhosts?

2018/04/30 15:38:27 [INFO] New CaddyFile:
example.com www.example.com {
  errors stdout
  log stdout
  proxy / 10.0.2.104:2015
}
sub3.example.com {
  basicauth / <USER> <PW>
  proxy / 10.0.2.98:8080
}
2018/04/30 15:38:27 [INFO] SIGUSR1: Reloading
2018/04/30 15:38:27 [INFO] Reloading
2018/04/30 15:38:27 [INFO] Reloading complete
2018/04/30 15:38:27 http: Server closed

HTTP works fine, but without the 443 port listener all the HTTP-to-HTTPS redirected traffic will fail...
Tested with 0.1.0-alpine and 0.1.2-alpine.

Works fine with manual deployment (docker run...), but first docker stack deploy failed today.

Problem with port number

I modified the example yml for usage with docker-compose without swarm and got the following error message.

ERROR: for whoami0  Cannot create container for service whoami0: json: cannot unmarshal number into Go value of type string

ERROR: for whoami1  Cannot create container for service whoami1: json: cannot unmarshal number into Go value of type string

yml

version: '2'

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:alpine
    ports:
      - 2015:2015
    command: -email [email protected] -agree=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  whoami0:
    image: jwilder/whoami
    labels:
      caddy.address: whoami0.example.com
      caddy.targetport: 8000
      caddy.tls: "off"

  whoami1:
    image: jwilder/whoami
    labels:
      caddy.address: whoami1.example.com
      caddy.targetport: 8000
      caddy.tls: "off"

To fix it, change the port number to a string, caddy.targetport: "8000" instead of caddy.targetport: 8000.
Is there a way to convert / work around the issue on the plugin / Go side?

multiple caddy environment

Hi

I have a situation where we want to use a 2-tier caddy setup in a frontend / backend configuration.

Is it possible to introduce some sort of namespaces so we can send config to different Caddy instances in the network?

ie

---

version: '3.2'
services:
  caddy-front:
   image: jcowey/caddy
   volumes:
     - $(pwd)/caddyfile:/etc/caddy/Caddyfile
    ports:
      - 80:80
      - 443:443
    networks:
      - webnet1

  caddy-backend1:
    image: jcowey/caddy
    networks:
      - webnet2
      - webnet1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  caddy-backend2:
    image: jcowey/caddy
    networks:
      - webnet2
      - webnet1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    image: some/image
    networks:
      - webnet2
    labels:
      caddy1.address: apptest1.lvh.me:80
      caddy1.targetport: 80

  app2:
    image: some/image
    networks:
      - webnet2
    labels:
      caddy2.address: apptest2.lvh.me:80
      caddy2.targetport: 80

networks:
  webnet1:
    external: true
  webnet2:
    external: true

This will probably involve starting caddy with a flag to define which ID it is.

hope this makes sense

Regards

Default port mapping

One good thing about the nginx-proxy image is that it minimizes the configuration with good defaults. For example, if the mapping port number is not explicitly declared, it checks if there is only one exposed port in the running container and maps to that. If there are multiple ports open, it tries to see if one of those ports is 80 or 443 and maps to that. In every other condition, explicitly providing the port number is mandatory. However, explicit declaration is encouraged to avoid surprises as a best practice.

If such behavior is part of this plugin then it should be documented. If it is not so then we should think about implementing it.

Issues related to PR #63

After recent changes we observe the following problems with configuring the caddy server using the caddy-docker-proxy plugin:

  1. The proxy directive is always added to the resulting config.
  2. Only the first caddy.address directive argument is honoured, while the remaining ones are ignored.
  3. The caddy.proxy directive value is prepended to the resulting proxy directive, which breaks proxying.

1. The proxy directive is always added to the resulting config

@jtyr already opened an issue related to this problem

2. Only the first caddy.address directive argument is honoured while the remaining ones are ignored

Given compose file:

labels:
  caddy.address:    http://domain.com https://domain.com
  caddy.targetport: 80

Expected generated config file:

http://domain.com https://domain.com {
  proxy / app1:80
}

Actual generated config file:

http://domain.com {
  proxy / app1:80
}

Issue can be reproduced with lucaslorentz/caddy-docker-proxy:test image.

3. The caddy.proxy directive value is prepended to the resulting proxy directive, which breaks proxying

Given compose file:

labels:
  caddy.address: http://domain.com
  caddy.proxy:   / tasks.app:80

Expected generated config file:

http://domain.com {
  proxy / tasks.app:80 
}

Actual generated config file:

http://domain.com {
  proxy / tasks.app:80 / app1
}

Issue can be reproduced with lucaslorentz/caddy-docker-proxy:test image.


All listed issues have workarounds and are not critical, but together they contribute to a much bigger problem: huge and repetitive configurations, which may result in mistakes when adding/removing directives, or simply in missing configuration directives like HTTPS redirects.
Here is what I mean. Before the changes, our docker-compose file looked similar to this:

services:
  proxy:
    command: ['-log', 'stdout']
    deploy:
      labels:
        caddy.address: (default)
        caddy.errors:
        caddy.header:   / -Server
        caddy.log:      / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy.redir.if: "{scheme} is http"
        caddy.redir./:  https://{host}{uri}
        caddy.redir:    301
        caddy.tls:      cert.pem key.pem
    image: lucaslorentz/caddy-docker-proxy:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    deploy:
      labels:
        caddy.address:           http://app1.domain.com https://app1.domain.com
        caddy.import:            default
        caddy.proxy.transparent:
        caddy.targetport:        80
    image: brndnmtthws/nginx-echo-headers

  app2:
    deploy:
      labels:
        caddy.address:           http://app2.domain.com https://app2.domain.com
        caddy.import:            default
        caddy.proxy.transparent:
        caddy.targetport:        80
    image: brndnmtthws/nginx-echo-headers

  app3:
  ...

Which resulted in the following config:

(default) {
  errors
  header / -Server
  log / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
  redir 301 {
    / https://{host}{uri}
    if {scheme} is http
  }
  tls cert.pem key.pem
}
http://app1.domain.com https://app1.domain.com {
  import default
  proxy / caddy_app1:80 {
    transparent
  }
}
http://app2.domain.com https://app2.domain.com {
  import default
  proxy / caddy_app2:80 {
    transparent
  }
}
...

But now, we had to rewrite it to the following:

services:
  proxy:
    command: ['-log', 'stdout']
    image: lucaslorentz/caddy-docker-proxy:test
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  app1:
    deploy:
      labels:
        caddy_1.address:           http://app1.domain.com
        caddy_1.redir:             https://{host}{uri}
        caddy_1.header:            / -Server
        caddy_1.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.address:           https://app1.domain.com
        caddy_2.errors:
        caddy_2.header:            / -Server
        caddy_2.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.proxy.transparent:
        caddy_2.targetport:        80
        caddy_2.tls:               cert.pem key.pem
    image: brndnmtthws/nginx-echo-headers

  app2:
    deploy:
      labels:
        caddy_1.address:           http://app2.domain.com
        caddy_1.redir:             https://{host}{uri}
        caddy_1.header:            / -Server
        caddy_1.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.address:           https://app2.domain.com
        caddy_2.errors:
        caddy_2.header:            / -Server
        caddy_2.log:               / stdout "\{\"hostname\":\"{host}\",\"remote\":\"{remote}\",\"user\":\"{user}\",\"when_iso\":\"{when_iso}\",\"method\":\"{method}\",\"host\":\"{host}\",\"uri\":\"{uri}\",\"proto\":\"{proto}\",\"status\":{status},\"size\":{size},\"query\":\"{query}\",\"mitm\":\"{mitm}\",\"latency_ms\":{latency_ms},\"tls_cipher\":\"{tls_cipher}\",\"tls_version\":\"{tls_version}\",\"scheme\":\"{scheme}\",\"referer\":\"{>Referer}\",\"user_agent\":\"{>User-Agent}\",\"fragment\":\"{fragment}\"\}"
        caddy_2.proxy.transparent:
        caddy_2.targetport:        80
        caddy_2.tls:               cert.pem key.pem
    image: brndnmtthws/nginx-echo-headers

  app3:
  ...

setup troubles [support]

After failing to set up nginx-proxy with VIRTUAL_PATH, I tried deploying your docker image, but can't make it work.
There's a mattermost install on the server instantiated from https://github.com/mattermost/mattermost-docker.git with a docker-compose.override.yml:

version: "2"

services:
  web:
    build: web
    ports:
      - "8580:80"
      - "8543:443"
    labels:
      - caddy.address=tii.tu-dresden.de:80/mm
      - caddy.targetport=8580
      - caddy.targetpath=/
      - caddy.caddy.targetprotocol=http

But if I go to that address — nothing. How can I even debug the problem? docker exec -it {/caddy} bash doesn't work, the container log just tells me

Activating privacy features... done.
http://:2015

strange thing is, lsof -i on the docker host doesn't even show that caddy has any port opened? .. 🤔

Docker Swarm and service Tags

Looks like caddy-docker-proxy doesn't recognize service labels correctly.
If I deploy caddy-docker-proxy on each node, then I end up having one load balancer for the container.
But it should be able to run on only one node (a manager).

PS:
Isn't reading information from both services and containers redundant?
It's not much, as they should result in the same directives, but maybe allow choosing a mode:

  • docker to look only at containers
  • swarm to look only at services
    Or retain the container IDs that were already parsed through services

Allow top-level directives

It would be nice to be able to insert top-level Caddy directives through labels.

I'm trying to insert an import directive to import additional Caddy configuration files. I tried caddy.import=/etc/caddy/Caddyfile*, but this results in
{ import /etc/caddy/Caddyfile* }

The import directive needs to be at the top level. Any thoughts on that?

How to inspect the generated caddy file inside a docker container

Hi,

I have successfully set up the proxy towards the whoami services; however, when I add my own service running HTTP on port 5000 everything fails, and the proxy isn't even responding.

How can I inspect the generated configuration inside the container? I have tried docker logs without seeing anything useful. Thanks in advance.

Security problem with mounted docker.sock?

Because docker.sock is mounted into the caddy reverse proxy, could it be a security issue? What do you think about it? Would it be possible to split it up into a caddy reverse proxy and a second container with a shared Caddyfile / caddy config?

Reverse proxy for non-swarm containers

While the Swarm mode is the way to deploy services and it might become the default at some point, many people are still deploying individual containers directly. Theoretically, the plugin should work in those situations as well, but we need to test and document it.

Tricky to add more directives to the proxy block

...
-l caddy.address=example.com \
-l caddy.targetport=8080 \
-l 'caddy.proxy=/socket backend:8080/socket' \
-l 'caddy.proxy./socket backend:8080/socket=websocket' \
...

results in:

example.com {
  proxy / 172.17.0.1:8080 {
    /socket backend:8080/socket websocket
  }
}

while I was going for:

example.com {
  proxy / 172.17.0.1:8080
  proxy /socket backend:8080/socket {
    websocket
  }
}

Preparing Release v0.2.0

Breaking changes:

  • Add default port behavior. When caddy.targetport is not specified the proxy directive is generated without port and caddy defaults it based on protocol (http: 80, https: 443). That means proxy directives are now generated just by specifying caddy.address label. If you don't want a proxy directive, like when creating snippets, use caddy label instead of caddy.address.

New features:

  • ARM arm32v6 images
  • Add label caddy.targetprotocol to specify the protocol that should be added to proxy directive
  • Avoid using ingress network IPs on proxies
  • Validate new caddyfile before replacing the current one, that improves stability when configs are wrong
  • Add configuration to change the label prefix (caddy by default). That allows you to split your cluster into multiple caddy groups. Contribution from @jtyr
  • Avoid errors when swarm is not available by skipping swarm objects inspection
  • Sort websites by their names. That makes sure caddyfile snippets are always inserted before normal websites. Contribution from @jtyr
  • Add swarm configs content to the beginning of caddyfiles. You have to add a label with the label prefix to your swarm config.
  • Increase polling interval to 30 seconds and make it configurable through CLI option docker-polling-interval or env variable CADDY_DOCKER_POLLING_INTERVAL
  • Windows images with support for automatic caddy reload

Second vhost not bound to port 80

Hello @lucaslorentz ,

I try to add backends to the revproxy, but it doesn't work... I don't know why.

Generated caddyfile looks good to me (domain replaced!)

2018/04/05 14:58:38 [INFO] New CaddyFile:
2018/04/05 14:58:38 sub1.example.com {
  proxy / 172.17.0.7:2015
  tls off
}
sub2.example.com {
  proxy / 172.17.0.4:2015
}

Activating privacy features... done.
http://sub2.example.com
2018/04/05 14:58:38 http://sub2.example.com
http://sub1.example.com:2015
2018/04/05 14:58:38 http://sub1.example.com:2015
https://sub2.example.com
2018/04/05 14:58:38 https://sub2.example.com
2018/04/05 14:59:11 [INFO] sub1.example.com - No such site at :80 (Remote: <IP>, Referer: )

sub1 works fine, also with TLS. sub2 doesn't work. Backend IP, port and content are OK, but the caddy proxy log says "No such site at :80"

Looks like sub2 isn't bound to port 80?! The diff is just "tls off". So it should bind to port 80 and 2015, right? Is it a plugin / caddy issue?

Publish armhf compatible image

I'm a big fan of the project and use it on my x86_64 server; however, I've recently invested in an armhf VPS which I'd like to use this software on. Is it possible you can publish an armhf compatible image alongside the existing x86_64 one on the docker registry?

If not, would it be possible to provide instructions for compiling under armhf?

invalid registration

When I try to create my docker swarm service:
2018/07/21 08:57:41 registration error: acme: Error 400 - urn:ietf:params:acme:error:invalidEmail - Error creating new account :: not a valid e-mail address

Here is the composer file:

version: '3.3'
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:latest
    command: -agree -email c*@*.com
    ports:
     - 80:80
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
     - /var/volumes/caddy:/root/.caddy
    networks:
     - caddy
    logging:
      driver: json-file
networks:
  caddy:
    external: true

Compatibility to rancher server networking

I found your reverse proxy plugin and could move from my own custom solution to yours.
But to be compatible with Rancher server, the IP address has to be read from a custom label.

            "IpcMode": "",
                "io.rancher.cni.network": "ipsec",
                "io.rancher.container.ip": "10.42.155.12/16",

Would it be possible to add this custom value, or optionally overwrite the IP variable with this one?

Caddy version ends up in generated Caddyfile

I am having a pretty weird issue: The caddy version ends up in the generated config and breaks it.

This only happens when reloaded, not with the Caddyfile generated on startup.
Also, the version ends up either in the beginning or between the container and service section (one line before service-1.example.com).
This also happens when only the services are loaded.

I have been using the docker container provided by this repo in the past but have switched to building the image with this command: docker build --build-arg plugins=docker,cloudflare github.com/abiosoft/caddy-docker.git.

The issue arose with the newly built container, but as this plugin prints the version within the config file, I would guess that something is wrong here. I checked the source code but could not find any clues as to why this is happening.

It did work one time in between, but after lots of restarting containers and services, I was unable to reach that state again. This is really weird.

Here is the log:

2018/05/07 10:42:40 [INFO] New CaddyFile:
container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
  }
}
*.container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/sub_container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
    dns cloudflare
  }
}
service-1.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-1.example.com {combined}
  proxy / service-1:8080
}
service-2.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-2.example.com {combined}
  proxy / service-2:8080
}
service-3.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-3.example.com {combined}
  proxy / service-3:8080
}
Activating privacy features... done.
https://container.example.com
2018/05/07 10:42:40 https://container.example.com
https://*.container.example.com
2018/05/07 10:42:40 https://*.container.example.com
https://service-1.example.com
2018/05/07 10:42:40 https://service-1.example.com
https://service-2.example.com
2018/05/07 10:42:40 https://service-2.example.com
https://service-3.example.com
2018/05/07 10:42:40 https://service-3.example.com
http://container.example.com
2018/05/07 10:42:40 http://container.example.com
http://*.container.example.com
2018/05/07 10:42:40 http://*.container.example.com
http://service-1.example.com
2018/05/07 10:42:40 http://service-1.example.com
http://service-2.example.com
2018/05/07 10:42:40 http://service-2.example.com
http://service-3.example.com
2018/05/07 10:42:40 http://service-3.example.com
2018/05/07 10:42:40 [INFO] New CaddyFile:
0.10.14
container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
  }
}
*.container.example.com {
  import /caddy/Caddy_common_headers
  log / /caddylogs/sub_container.example.com {combined}
  proxy / 172.17.0.1:8080
  tls {
    clients /caddy/clientCA.crt
    dns cloudflare
  }
}
service-1.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-1.example.com {combined}
  proxy / service-1:8080
}
service-2.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-2.example.com {combined}
  proxy / service-2:8080
}
service-3.example.com {
  import /caddy/Caddy_common_tlsclientauth
  log / /caddylogs/service-3.example.com {combined}
  proxy / service-3:8080
}
2018/05/07 10:42:40 [INFO] SIGUSR1: Reloading
2018/05/07 10:42:40 [INFO] Reloading
2018/05/07 10:42:40 [ERROR] SIGUSR1: :2 - Error during parsing: Unknown directive 'container.example.com'

Should we change docker/docker to moby/moby

"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/client"

A while ago, Docker decided to rename their repositories and rebrand a few things under Moby. While GitHub has redirects in place to maintain backward compatibility, I think we might want to update the code to reference the new location by replacing docker/docker with moby/moby.

Proxy won't forward domain.com/<subdir>

Hello there, first of all, great proxy, but I can't get it to work for me. I'm not the best with docker and I'm still learning, so please forgive me when I ask stupid questions.

So this is my scenario: I want to proxy domain.com/wp1 to a WordPress container. The WordPress container has two networks attached, one for the backend and one for the frontend. The backend one is used for communication between WordPress and the MySQL server. The frontend one is used to proxy the request domain.com/wp1 to the container.

I do this because I don't want other containers to be able to access my WordPress database.

And this approach worked somewhat: the proxy detects the container and generates a proxy directive

domain.com/wp1 {
  proxy / 172.19.0.3
}

But in the end it won't work; I cannot access it.

The labels I use are

labels:
  - caddy.address=domain.com/wp1
  - caddy.proxy.transparent

Am I doing something wrong here?

[Help] Caddy uses ingress network instead of custom network

I've got some services which are set up with docker stack deploy.
All of the necessary services are connected to a network called "proxy_network".

Caddy-docker recognizes these services correctly but uses the IP address of the ingress network instead of the proxy_network. Is there any way to block caddy-docker from using the IP addresses of the ingress network?

Thank you in advance

How to add a nested directive like header?

For example, here is what we want:

header /api {
	Access-Control-Allow-Origin  *
	Access-Control-Allow-Methods "GET, POST, OPTIONS"
	-Server
}

How do we add labels for caddy-docker-proxy?
Currently it only supports

      caddy.header_1: "/ Access-Control-Allow-Origin  *"
      caddy.header_2: '/ Access-Control-Allow-Methods "GET, POST, OPTIONS"'
      caddy.header_3: "/ -Server"

and generates:

http://test.example.com {
  header / Access-Control-Allow-Origin  *
  heaer / Access-Control-Allow-Methods "GET, POST, OPTIONS"
  header / -Server
  proxy / 172.18.0.8:80
  tls off
}

It's not a nested directive.
The same problem occurs when we want to define something for the backend:

proxy / web1.local:80 web2.local:90 web3.local:100 {
	policy round_robin
	health_check /health
	transparent
        header_downstream Server  {<Header}
}

How could we define them in label format?

Lost port 80 listener after some days

Hi,

First everything works fine, but after some days the port 80 listener is lost! Because of -port 80 it shouldn't bind to 2015...?

/ # netstat -punta | grep LISTEN
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy
tcp        0      0 :::2015                 :::*                    LISTEN      1/caddy

Looks like a reload problem?

2018/04/23 15:04:40 [ERROR] SIGUSR1: listen tcp :80: bind: address already in use
2018/04/23 15:05:09 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:09:28 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:14:00 [ERROR] SIGUSR1: listen tcp :80: bind: address already in use
2018/04/23 15:36:56 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use
2018/04/23 15:36:56 [INFO] New CaddyFile:
example.com www.example.com {
  errors stdout
  log stdout
  proxy / 172.17.0.3:2015 {
    transparent
  }
}
2018/04/23 15:36:56 [INFO] SIGUSR1: Reloading
2018/04/23 15:36:56 [INFO] Reloading
2018/04/23 15:36:56 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use

Tried to solve with "sigusr1", but reload fails.

docker exec -ti revproxy kill -sigusr1 1

docker exec -ti revproxy netstat -punta | grep LISTEN
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy
tcp        0      0 :::2015                 :::*                    LISTEN      1/caddy

But reloading fails

2018/04/26 09:11:50 [INFO] SIGUSR1: Reloading
2018/04/26 09:11:50 [INFO] Reloading
2018/04/26 09:11:50 [ERROR] SIGUSR1: listen tcp :443: bind: address already in use

Solved by a revproxy container restart

docker exec -ti revproxy netstat -punta | grep LISTEN
tcp        0      0 :::80                   :::*                    LISTEN      1/caddy
tcp        0      0 :::443                  :::*                    LISTEN      1/caddy

Any idea? Looks like it lost the `-port 80` and binds to 2015 instead?

revproxy docker run command:

docker run -dti --restart always --name revproxy --read-only -v /var/run/docker.sock:/var/run/docker.sock:ro -p 80:80 -p 443:443 -v revproxy_certs:/root/.caddy lucaslorentz/caddy-docker-proxy:0.1.2-alpine -email <EMAIL_ADDRESS> -agree=true -log stdout -port 80

Sometimes the website isn't available over HTTP and HTTPS, but I haven't debugged that problem myself. So I don't know if it is related to the HTTP / port 80 issue.

Caddyfile loaded multiple times; first by docker, then by short

version: '3.7'

configs:
  caddy-basic-content:
    file: ./caddy/Caddyfile
    labels:
      caddy:

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:alpine
    ports:
      - 80:80
      - 443:443
    networks:
      - caddy
    command: -email ${EMAIL} -log stdout
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 1
      restart_policy:
        condition: any
      resources:
        reservations:
          cpus: '0.1'
          memory: 200M


  postgres:
    image: postgres
    networks:
      - backend
    env_file:
      - .env
    volumes:
      - postgres:/var/lib/postgresql/data

  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 5000:80
    env_file:
      - .env
    networks:
      - caddy
      - backend

  rabbitmq:
    image: rabbitmq
    networks:
      - backend
    environment:
      RABBITMQ_VM_MEMORY_HIGH_WATERMARK: 1024MiB

  redis:
    image: redis
    networks:
      - backend
    deploy:
      resources:
        limits:
          memory: 1024M

  celery:
    image: yswtrue/careerke
    env_file:
      - .env
    networks:
      - backend
    volumes:
      - ../:/var/www/careerke_backend
    working_dir: /var/www/careerke_backend
    command:
      - celery
      - -A
      - careerke_backend
      - worker
      - -l
      - info

  careerke_backend:
    image: yswtrue/careerke
    env_file:
      - .env
    volumes:
      - ../:/var/www/careerke_backend
    command:
      - uwsgi
      - --http
      - :8080
      - --module
      - careerke_backend.wsgi
    networks:
      - caddy
      - backend
    working_dir: /var/www/careerke_backend
    deploy:
      labels:
        caddy.address: ${HOST}
        caddy.targetport: 8080

volumes:
  postgres:
  caddy:

networks:
  caddy:
    driver: overlay
  backend:
    driver: overlay

This is my docker-compose.yml. Caddy starts the first time I deploy, but when I redeploy the docker-compose.yml, the caddy container won't start. These are the logs.

careerke_caddy.1.9og35psybrg9@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:51 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.lx84chwl25k2@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:19:04 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.jhh5o7ffv78z@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:45 Caddyfile loaded multiple times; first by docker, then by short
careerke_caddy.1.id8qwx4fjk77@izuf669jm0y04a7im10qcqz    | 2018/12/06 11:18:58 Caddyfile loaded multiple times; first by docker, then by short

Thanks.

proxying to https backend

Hi,

I need to proxy to an HTTPS backend on 443.

Declaring the port as 443 causes a 400 error due to caddy trying to communicate over HTTP.

How would I proxy to an HTTPS backend?

Error while building

This line in build.sh keeps throwing this error.
Is there something missing from the docs on how to build?

➜  caddy-docker-proxy git:(master) ✗ go test -race -v $(glide novendor)

plugin/generator.go:13:2: cannot find package "github.com/docker/docker/api/types" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types (from $GOPATH)
plugin/loader.go:10:2: cannot find package "github.com/docker/docker/api/types/filters" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types/filters (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types/filters (from $GOPATH)
plugin/generator.go:14:2: cannot find package "github.com/docker/docker/api/types/swarm" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/api/types/swarm (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/api/types/swarm (from $GOPATH)
plugin/generator.go:15:2: cannot find package "github.com/docker/docker/client" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/docker/docker/client (from $GOROOT)
        /Users/xo/code/go/src/github.com/docker/docker/client (from $GOPATH)
main.go:7:2: cannot find package "github.com/caddyserver/dnsproviders/route53" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/caddyserver/dnsproviders/route53 (from $GOROOT)
        /Users/xo/code/go/src/github.com/caddyserver/dnsproviders/route53 (from $GOPATH)
main.go:5:2: cannot find package "github.com/lucaslorentz/caddy-docker-proxy/plugin" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/lucaslorentz/caddy-docker-proxy/plugin (from $GOROOT)
        /Users/xo/code/go/src/github.com/lucaslorentz/caddy-docker-proxy/plugin (from $GOPATH)
main.go:9:9: cannot find package "github.com/miekg/caddy-prometheus" in any of:
        /usr/local/Cellar/go/1.9.4/libexec/src/github.com/miekg/caddy-prometheus (from $GOROOT)
        /Users/xo/code/go/src/github.com/miekg/caddy-prometheus (from $GOPATH)

This is the only change I've made.

diff --git a/main.go b/main.go
index e3e1f66..466db20 100644
--- a/main.go
+++ b/main.go
@@ -6,6 +6,8 @@ import (

        _ "github.com/caddyserver/dnsproviders/route53"

+        _ "github.com/miekg/caddy-prometheus"
+
        // Caddy
        "github.com/mholt/caddy/caddy/caddymain"
 )

[Help] Is there a way to configure the Let's Encrypt staging environment?

I'm using the Docker image of this project as the entry point for my services.
However, I forgot to expose port 80 (only port 443 was exposed), so the HTTP challenges failed; because of the restart policy the container kept restarting, and the Let's Encrypt rate limit was hit.

I am wondering if there is a way to configure caddy-docker-proxy to use the Let's Encrypt staging environment until I get all my services working.

Thank you!
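
One hedged option with the current (v2) label syntax is to point the site's tls directive at the Let's Encrypt staging CA until everything works; the domain and port below are illustrative:

    labels:
      caddy: whoami.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.tls.ca: https://acme-staging-v02.api.letsencrypt.org/directory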

[Q] Redir path + External IP support?

Hi Guys,

I am having a bit of a problem trying to convert the following:

  # Backend (ghost)
  proxy /blog example_blog:2368 {
    transparent
    websocket
  }

  redir 302 {
    if {path} is /
    / /blog
  }

I have tried multiple variations of the labels below, but I still can't seem to crack it. Does anyone know what I am doing wrong?

    deploy:
      labels:
        caddy.address: example.me.uk/blog
        caddy.targetport: 2368
        caddy.proxy.websocket:
        caddy.proxy.transparent:
        caddy.redir: 302
        caddy.redir: if {path} is /
        caddy.redir: / /blog

The other one I am still struggling with is when you have an external (non-Docker) reverse proxy config like:

  # Backend (vmware)
  proxy / https://172.19.255.110 {
    insecure_skip_verify
    transparent
    websocket
  }

How can this also be converted? What am I missing from the examples?

Kind Regards
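
A rough sketch of both cases using the current (v2) label syntax rather than the old address/targetport labels; the hostname for the external backend is an illustrative assumption:

    deploy:
      labels:
        # ghost: redirect / to /blog and proxy /blog to the container
        caddy: example.me.uk
        caddy.redir: / /blog 302
        caddy.reverse_proxy: "/blog* {{upstreams 2368}}"

        # external (non-Docker) HTTPS backend as a second site block
        caddy_1: vmware.example.me.uk
        caddy_1.reverse_proxy: https://172.19.255.110
        caddy_1.reverse_proxy.transport: http
        caddy_1.reverse_proxy.transport.tls:
        caddy_1.reverse_proxy.transport.tls_insecure_skip_verify: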

[Help] How to convert complex nginx config

Hi @lucaslorentz,
I have a complex nginx reverse proxy example which should be converted to caddy-docker-proxy.

I don't normally work with nginx, so it's not totally clear to me how this config should be translated so that it works the same way.

https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Production-guide.md

map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  root /home/mastodon/live/public;
  # Useful for Let's Encrypt
  location /.well-known/acme-challenge/ { allow all; }
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name example.com;

  ssl_protocols TLSv1.2;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  keepalive_timeout    70;
  sendfile             on;
  client_max_body_size 80m;

  root /home/mastodon/live/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  add_header Strict-Transport-Security "max-age=31536000";

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    try_files $uri @proxy;
  }
  
  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header Proxy "";

    proxy_pass http://127.0.0.1:4000;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}

I would ask the Caddy community, but it is a caddy-docker-proxy-specific question.

Some parts should be easy (tls, websockets, headers), but I don't know how the location / and @proxy parts work and how they should be ported.

The backends are ports 3000 and 4000 of different containers, selected by path / websocket connection, I think?
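
A hedged sketch of how the two upstreams could be expressed with the current (v2) labels, assuming the web process listens on 3000 and the streaming process on 4000; in recent versions, sites that declare the same address are merged into one site block, and Caddy matches the more specific path first:

  web:
    networks:
      - caddy
    labels:
      caddy: example.com
      caddy.encode: gzip
      caddy.header: Strict-Transport-Security max-age=31536000
      caddy.reverse_proxy: "{{upstreams 3000}}"

  streaming:
    networks:
      - caddy
    labels:
      caddy: example.com
      caddy.reverse_proxy: "/api/v1/streaming* {{upstreams 4000}}"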

Multiple containers per domain possible?

How would I set up multiple containers making up a single domain?

For example, all containers would use example.com but with a different path for each: /, /admin, /dashboard. Currently, if I add multiple containers with the same domain, the generator just creates multiple entries for that domain.
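
In recent versions, sites that declare the same caddy address are merged into a single site block, so each container can contribute its own path-matched reverse_proxy; a minimal sketch (service names and ports are illustrative):

  web:
    labels:
      caddy: example.com
      caddy.reverse_proxy: "{{upstreams 80}}"

  admin:
    labels:
      caddy: example.com
      caddy.reverse_proxy: "/admin* {{upstreams 80}}"

  dashboard:
    labels:
      caddy: example.com
      caddy.reverse_proxy: "/dashboard* {{upstreams 80}}"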

Negotiate API version instead of relying on DOCKER_API_VERSION alone

Using DOCKER_API_VERSION works; however, the Docker client is also able to negotiate an API version itself:

dockerPing, err := dockerClient.Ping(ctx)
// deal with err
dockerClient.NegotiateAPIVersionPing(dockerPing)

// or

dockerClient.NegotiateAPIVersion(ctx)

The ping response includes the Docker daemon's max supported API version, and the client will set the right options to make everything work. You can also call NegotiateAPIVersion, which does the same thing, but discards the errors and uses defaults if something goes wrong. The former's probably better, since you can bail out early if Ping fails (i.e. you can report a problem with the connection to Docker before trying to start using it).

It'd be a good idea to stick this into GenerateCaddyFile after you create the client.

EDIT: I should also mention that this is 100% backwards compatible, as setting DOCKER_API_VERSION causes the client to ignore negotiation and take what was specified. I'm happy to send a PR if that's easier. 😄
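
For reference, a minimal sketch of what negotiation could look like when the client is created; the surrounding helper function is illustrative, not code from this repo, but the calls are the standard github.com/docker/docker/client API:

package docker

import (
	"context"

	"github.com/docker/docker/client"
)

// newNegotiatedClient creates a Docker client and pins its API version
// to one the daemon actually supports.
func newNegotiatedClient(ctx context.Context) (*client.Client, error) {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return nil, err
	}

	// Ping reports the daemon's maximum supported API version and lets us
	// surface connection problems before doing any real work.
	ping, err := cli.Ping(ctx)
	if err != nil {
		return nil, err
	}

	// If DOCKER_API_VERSION is set, negotiation is skipped and the pinned
	// version wins, so this stays backwards compatible.
	cli.NegotiateAPIVersionPing(ping)
	return cli, nil
}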

Plugin is always adding proxy directive

After the recent changes, the plugin always adds the proxy directive, regardless of whether the site has any proxy-related labels defined. Before, we could use:

    labels:
      caddy_1.address: http://example.lan:8080
      caddy_1.redir: https://example.lan:8443{uri}

      caddy_2.address: https://example.lan:8443
      caddy_2.tls: self_signed
      caddy_2.targetport: 80
      caddy_2.proxy.keepalive: 0

which generated:

http://example.lan:8080 {
  redir https://example.lan:8443{uri}
}
https://example.lan:8443 {
  tls self_signed
  proxy / myservice:80 {
    keepalive 0
  }
}

but now it generates:

http://example.lan:8080 {
  ### SEE THE LINE BELOW ###
  proxy / myservice:80
  redir https://example.lan:8443{uri}
}
https://example.lan:8443 {
  tls self_signed
  proxy / myservice:80  {
    keepalive 0
  }
}

Custom config without docker swarm

What do you think about adding a way to merge a default Caddyfile that we can mount in the container, so we can use this without Swarm?

This would allow adding default config such as redirecting HTTP to HTTPS.
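
In recent versions this is supported: a base Caddyfile can be mounted and merged with the generated configuration via the CADDY_DOCKER_CADDYFILE_PATH option; a minimal sketch (paths are illustrative):

services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    environment:
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - /var/run/docker.sock:/var/run/docker.sock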

Missing proxy directive if errors / log directive added

I have a working proxy Caddyfile with caddy.address=domain, but as soon as additional directives are added, the proxy section goes missing.

The base config works fine:

-l "caddy.address=<DOMAIN1> <DOMAIN2>"

After adding:

-l caddy.log=stdout -l caddy.error=stdout

the result is:

<DOMAIN1> <DOMAIN2> {
  errors stdout
  log stdout
}

Maybe a bug? Why is the proxy section missing?

Generator is using IP for only some configs?

Is there a reason the Caddyfile is now being updated with IP addresses, whereas it used to have the container DNS name?

All my other containers are like the portainer one.

homeassistant.domain.tld {
  proxy / 10.0.10.11:8123 {
    transparent
    websocket
  }
}
portainer.domain.tld {
  proxy / portainer:9000
}
