caddyserver / caddy-docker
Source for the official Caddy v2 Docker Image
Home Page: https://hub.docker.com/_/caddy
License: Apache License 2.0
Add GitHub Actions to build and push to Docker Hub automatically:
- latest tags for any changes in the master branch
- a v2.0.0 tag will build and tag an image caddy:v2.0.0-alpine
Question: Should we use the v prefix for versions?
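A workflow along these lines could work (a sketch only — the file path, secret names, and tag logic here are assumptions, not this repo's actual setup):

# .github/workflows/docker.yml (sketch)
name: docker
on:
  push:
    branches: [master]
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: Build and push
        run: |
          # master pushes become "latest"; tag pushes keep their tag name
          if [ "$GITHUB_REF" = "refs/heads/master" ]; then TAG=latest; else TAG=${GITHUB_REF#refs/tags/}; fi
          docker build -t caddy/caddy:$TAG .
          docker push caddy/caddy:$TAG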
I think the Caddy Docker Hub image should apply for Docker Certified qualification, just like most other popular Docker Hub images.
According to the official Docker documentation:
The Docker Certification program for Containers and Plugins is designed for both technology partners and enterprise customers to recognize high-quality Containers and Plugins, provide collaborative support, and ensure compatibility with the Docker Enterprise platform. Docker Certified products give enterprises a trusted way to run more technology in containers with support from both Docker and the publisher. The Docker Technology Partner guide explains the Technology Partner program, inclusive of process and requirements to Certify Containers and Plugins.
More information:
https://docs.docker.com/docker-hub/publish/publisher_faq/#what-is-the-certification-program-for-containers-and-plugins-and-what-are-some-benefits
https://docs.docker.com/docker-hub/publish/certify-images/
In lucaslorentz/caddy-docker-proxy#176, @lucaslorentz has suggested an optimization to provide an alternate way to build a custom image, where the only changes are plugin installations.
Effectively, this approach uses a build arg to provide a list of plugins to be installed, and uses a remote Git context to reference the Dockerfile from GitHub.
It could look something like:
$ docker build --build-arg "CADDY_PLUGINS=github.com/caddyserver/nginx-adapter github.com/hairyhenderson/caddyprom" -t mycustomcaddy:latest https://github.com/caddy/caddy-docker.git#:custom-build
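The referenced Dockerfile could be shaped roughly like this (a sketch; it assumes xcaddy does the actual build, and CADDY_PLUGINS is the build arg from the command above):

FROM caddy:2-builder AS builder
ARG CADDY_PLUGINS=""
# expand each listed module into an xcaddy --with flag
RUN xcaddy build $(for p in $CADDY_PLUGINS; do printf ' --with %s' "$p"; done)
FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy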
Personally I'm ambivalent, since I generally prefer that people write their own Dockerfiles so that the build is more predictable and reproducible (better to put a Dockerfile in git than a shell script with a complex docker build command). But in some ways this may make it slightly easier for some users to consume Caddy in Docker.
One big hitch with this approach is that this repo is currently focused on only the official caddy images, and official images are not permitted to contain ARG instructions (due to reproducibility requirements). So if a Dockerfile were added to support this use-case, it would need to be made abundantly clear that there are no published images based on it.
It looks like this doesn't support JSON config, but only supports a Caddyfile?
Run caddy with the caddy user instead of root. Also check if we can run it with all privileges dropped (see https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities).
Note: binding to a port lower than 1024 as a non-root user requires the CAP_NET_BIND_SERVICE capability. You can circumvent this by using a port higher than 1024.
Giving the binary CAP_NET_BIND_SERVICE on Alpine:
RUN apk add --no-cache --no-progress libcap \
&& setcap 'CAP_NET_BIND_SERVICE=+pie' /usr/bin/caddy
Reading materials:
Originally, when we created this image, the intent was to keep things "secure by default" by running as a non-root user.
This had its challenges: we had to modify the default Caddyfile so that it doesn't try to listen on :80.
However, there has been a certain amount of friction with this approach, since it breaks automatic TLS, and makes usage inside and outside a container considerably different.
For a point of comparison, most other web servers with official images on Docker Hub run as root by default.
See also some discussion in Slack for background.
If we decide to run as root, we should also at least document how to run as non-root!
Hello,
Is something wrong with the documentation, or with the image?
The doc says we should use two volumes mounted on /data and /config, but when I do that, the only thing that gets created inside by the container is an empty caddy directory.
After some investigation, it appears the interesting files are instead created in other volumes, mounted on /data/caddy and /config/caddy.
Executing docker exec caddy mount returns the following (output truncated for legibility):
/dev/mapper/docker-8:2-1365-c242539dd94a31a5f3a459ac0e60335ce9f5ebca2e8681424865c1e1be300904 on / type xfs
[...]
/dev/sda2 on /data type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /config type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /etc/caddy type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /config/caddy type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /data/caddy type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro)
[...]
That seems to confirm this.
I tried mounting /data/caddy and /config/caddy instead of /data and /config, and it seems to be working...
If it's important, I'm trying to write a docker-compose.yml file to start Caddy.
(I'm still new to docker, so I'm not entirely sure I'm doing everything properly)
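For reference, the named-volume setup the docs suggest looks roughly like this in compose form (a sketch; Caddy itself creates the caddy subdirectory inside each mount):

version: "3"
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
  caddy_config: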
Unfortunately the config and data folders stay empty when mounting as volumes. I realized this when hitting rate limits after restarting my docker compose a couple of times. Is there anything I can do now to obtain the certs? I tried adding quotes around the volumes which I did not have before. What am I doing wrong?
Dockerfile
FROM caddy/caddy:alpine
COPY Caddyfile /etc/caddy/Caddyfile
my docker-compose
# run this with: CURRENT_UID=$(id -u):$(id -g) docker-compose up
version: "3"
services:
  percy:
    # docker run -it -p 80:80 -p 443:443 -p 2019:2019 --rm --name perception_caddy perception_caddy
    image: perception_caddy
    container_name: percy
    hostname: percy
    user: root
    # user: ${CURRENT_UID}
    ports:
      - 443:443
      - 80:80
      # - 2020:2020
    volumes:
      # Just a note - as of the latest caddy/caddy images, these locations are now /config/caddy and /data/caddy. See the (new!) docs for some details: https://github.com/caddyserver/caddy-docker#a-note-about-persisted-data
      - "./caddy_secrets/caddy_lets_encrypt_storage:/data"
      - "./caddy_secrets/caddy_config_storage:/config"
    # sysctls:
    #   - net.ipv4.ip_unprivileged_port_start=0
    # cap_add:
    #   - CAP_NET_BIND_SERVICE
Is this in such early development that we shouldn't use it anyway?
docker pull caddy/caddy
Using default tag: latest
Error response from daemon: manifest for caddy/caddy:latest not found: manifest unknown: manifest unknown
.github/workflows/docker.yml is not updated; the version there is 2.2.
When trying to pull the Docker image with sudo docker pull caddy/caddy, I am told the following:
Using default tag: latest
Error response from daemon: manifest for caddy/caddy:latest not found
The current Dockerfiles specify
caddy-docker/2.1/alpine/Dockerfile
Lines 44 to 45 in edea053
The problem with this is it ends up creating anonymous volumes that aren't cleaned up when the Caddy container exits, even when the container is deleted. You can see this by running docker run caddy true followed by docker volume ls; repeat this a few times and note that new volumes keep building up.
From #104 (comment), the stated rationale for having these VOLUME instructions is "to prevent [config and data] files from being written to the container, and hence lost during a container restart". However, even without VOLUME, these files actually aren't lost during a container restart; they're just stored in the container's writable layer, which will exist as long as the container isn't actually deleted. Given that these anonymous volumes wouldn't be reused when creating a new Caddy container, this doesn't seem to have much advantage over just using the writable layer for storage.
The docs already suggest creating named volumes caddy_data and caddy_config anyway, though this might be simpler as just a single caddy volume mounted at /caddy (or /srv/caddy), with XDG_CONFIG_HOME=/caddy/config and XDG_DATA_HOME=/caddy/data.
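In Dockerfile terms, that single-volume idea would amount to something like this (a sketch of the proposal, not what the image currently ships):

ENV XDG_CONFIG_HOME=/caddy/config \
    XDG_DATA_HOME=/caddy/data
VOLUME /caddy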
It looks like Docker Hub is only showing 2.1.1, 2.2.1, and 2.3.0-rc1. Can it be updated with the 2.3.0 images?
Could you please provide an example of how to use the Docker image, and especially how I can reverse proxy requests to other Docker containers?
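As a starting point, reverse proxying to another container on the same Docker network only needs the service name and port (a minimal Caddyfile sketch; app and 8080 are placeholders):

example.com {
    reverse_proxy app:8080
}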
This is the officially-supported list which we really should target:
These are not officially supported, but worth building if we can:
I'm new to Caddy, so apologies if this question is obvious... I'm trying to add a plugin to my Caddy setup running on Docker. As I understand it, this is done by simply adding a new import to run.go.
However, to make this change I require my own (almost identical) Dockerfile which compiles a modified version of run.go. Is there a better way to go about this, or is it possible to somehow edit run.go whilst using this Dockerfile?
It seems that other Dockerfiles for Caddy allow the addition of some plugins using build arguments; is there something like this that will be implemented in this repository? Thanks!
xcaddy didn't exist when we first started working on the caddy:builder image, but now that it does, it doesn't really make a lot of sense to have two separate approaches to building Caddy with modules.
I propose bundling xcaddy in the caddy:builder image, and removing the caddy-builder.sh script.
For backwards compatibility it'd be nice to at least wrap xcaddy so existing users aren't broken, but it would be good to log a warning to stderr to let users know they should switch to calling xcaddy directly, as sketched below.
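A minimal compatibility shim could look like this (a sketch, assuming caddy-builder currently takes a plain list of module paths as arguments):

#!/bin/sh
# caddy-builder shim: warn on stderr, then delegate to xcaddy
echo "caddy-builder is deprecated; use 'xcaddy build --with <module>' instead" >&2
# rewrite each module argument into an xcaddy --with flag
for pkg in "$@"; do
    set -- "$@" --with "$pkg"
    shift
done
exec xcaddy build "$@"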
Also the docs (for this image, for xcaddy, and for Caddy itself) should be updated to help users of the Docker image discover this better.
See also caddyserver/xcaddy#29 for some related conversation (and a bit of confusion due to caddy:builder
not being obvious).
Hi, I am struggling with getting error and access logs for Caddy.
What I would like is simply all access logs written to /var/log/caddy/access.log, and the same for error.log.
Can you please provide some help in your example? An example with common options for a standard server would be great.
I was also searching for a while for how to use my email to register an account at Let's Encrypt so I get the expiry emails.
It is very nice that you have all these minimal examples which show how easy it is, but once you try to use it, the struggling starts.
Thanks for your help, and maybe consider an update to the Docker Hub "readme" to show real-world examples.
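For reference, a minimal sketch of both pieces (domain, email, and log path are placeholders):

{
    # ACME account email, so expiry notices can reach you
    email admin@example.com
}
example.com {
    log {
        output file /var/log/caddy/access.log
    }
    file_server
}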
Would be nice to have caddy run as a non-root user inside the container. This is especially useful for rootful containers (e.g. Docker), where uid=0 inside the container is the same as uid=0 outside. Additionally, since the /data and /config dirs have been specified as a VOLUME, others are unable to extend the container and change the ownership of those directories. I'm currently using the below Dockerfile to run caddy as a non-root user. I've changed it slightly to suit my needs, but much of it should be re-usable for this repo.
FROM caddy:2-builder AS builder
RUN caddy-builder github.com/caddy-dns/cloudflare
FROM caddy:2-alpine AS deps
# We cannot use FROM scratch because, despite adding cap_net_bind_service to the binary
# it still won't run. Presuming because libcap isn't available? Not sure.
FROM alpine:3.12
COPY --from=deps /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=deps /etc/mime.types /etc/nsswitch.conf /etc/
COPY --from=builder /usr/bin/caddy /caddy
RUN set -eux; \
apk add --no-cache libcap; \
setcap cap_net_bind_service=ep /caddy; \
mkdir -p /config/caddy /data/caddy; \
addgroup -g 101 -S www-data; \
adduser -u 101 -D -S -G www-data www-data; \
chown -R www-data:www-data /config /data
USER www-data
ENV XDG_CONFIG_HOME=/config XDG_DATA_HOME=/data
VOLUME /config /data
EXPOSE 80
EXPOSE 443
ENTRYPOINT ["/caddy"]
CMD ["run", "--config", "/Caddyfile", "--adapter", "caddyfile"]
One thing worth considering is that this might not be an easy upgrade for many; indeed, it may be that we'd need a temporary stopgap that runs the container as root, changes ownership of files/folders, then drops privileges. Then, after perhaps a couple of versions, this stopgap can be replaced fully with a non-root user, without going through the trouble of dropping privileges.
I want to run server as non-root user. Unfortunately, running as non-root causes permission errors. So I extended your image to chown data and config folders:
FROM caddy:2-alpine
RUN chown -R nobody:nobody /data /config
USER nobody
ENTRYPOINT ["caddy", "run", "--config", "/Caddyfile", "--adapter", "caddyfile"]
BUT the Docker documentation states:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
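One possible workaround (an assumption on my part, not project guidance) is to point Caddy at fresh directories the non-root user owns, sidestepping the declared VOLUME paths entirely:

FROM caddy:2-alpine
# use directories outside the baked-in /data and /config volumes
RUN mkdir -p /home/caddy/data /home/caddy/config && \
    chown -R nobody:nobody /home/caddy
USER nobody
ENV XDG_DATA_HOME=/home/caddy/data \
    XDG_CONFIG_HOME=/home/caddy/config
ENTRYPOINT ["caddy", "run", "--config", "/Caddyfile", "--adapter", "caddyfile"]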
Not sure if this is specific to the docker image. My docker containers automatically update (using ouroboros), and since the last caddy update, I now get an error for every request to caddy. I'm mainly using it as a reverse proxy to a bunch of other sites.
2020/04/11 12:05:42 http2: panic serving x.x.x.x:33884: runtime error: invalid memory address or nil pointer dereference
goroutine 117 [running]:
net/http.(*http2serverConn).runHandler.func1(0xc0001281b0, 0xc0008e3f8e, 0xc000256d80)
net/http/h2_bundle.go:5713 +0x16b
panic(0x144d380, 0x2470800)
runtime/panic.go:969 +0x166
github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).ServeHTTP(0xc0007d0900, 0x192e1e0, 0xc0001281b0, 0xc00070f800)
github.com/caddyserver/caddy/[email protected]/modules/caddyhttp/server.go:203 +0x932
net/http.serverHandler.ServeHTTP(0xc0005e0380, 0x192e1e0, 0xc0001281b0, 0xc000244400)
net/http/server.go:2807 +0xa3
net/http.initALPNRequest.ServeHTTP(0x19335a0, 0xc00065ac90, 0xc000115c00, 0xc0005e0380, 0x192e1e0, 0xc0001281b0, 0xc000244400)
net/http/server.go:3381 +0x8d
net/http.(*http2serverConn).runHandler(0xc000256d80, 0xc0001281b0, 0xc000244400, 0xc000aa7360)
net/http/h2_bundle.go:5720 +0x8b
created by net/http.(*http2serverConn).processHeaders
net/http/h2_bundle.go:5454 +0x4e1
I tried deleting the config/data directories and forcing a reset, but the error remains.
It would be nice if there was an option (maybe an ENV variable) to enable QUIC.
Note that in this case, we'll also need to EXPOSE 443/udp.
Is it planned and/or worth considering that Caddy offers an official Debian-based image?
Thanks!
It would be helpful if the README at https://hub.docker.com/r/caddy/caddy listed the tags one should use, e.g. :2, :2.0, :2.0.20, etc.
I have the following Dockerfile for building a React app:
Dockerfile.local
# First stage (build the base image)
FROM node:15.14.0-alpine3.10 AS builder
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ARG BASE_URL
ENV REACT_APP_BASE_URL=${BASE_URL}
RUN npm run build
# Second stage (build the web server)
FROM caddy:2.3.0-alpine
ARG CADDYFILE
COPY ${CADDYFILE} /etc/caddy/Caddyfile
COPY --from=builder /usr/src/app/build/ /srv
EXPOSE 3000
Caddyfile.local
http://localhost:3000 {
    root * /srv
    route {
        reverse_proxy /api* api-server:9000
        try_files {path} {path}/ /index.html
        file_server
    }
}
Here's my command when building the image:
docker build \
-t react-app-production:local \
--build-arg CADDYFILE=Caddyfile.local \
--build-arg BASE_URL=http://localhost:9000/api \
-f Dockerfile.local .
When opening localhost:3000 on the computer the container is running on, I get my built React app just fine.
However, when I try to open http://192.168.1.5:3000 on another computer in the network (the IP of the machine the container is running on), I don't get any output. The page is blank and there is nothing in the console (no errors, no 404); only the favicon.ico is showing in the title bar.
Note that I have an Express API running on http://192.168.1.5:9000, and it returns a JSON response as intended when I open it.
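If I had to guess, the site address http://localhost:3000 only matches requests whose Host header is localhost, so requests addressed to 192.168.1.5 match no site and get an empty response; a host-agnostic address may fix it (untested sketch, same directives as above):

:3000 {
    root * /srv
    route {
        reverse_proxy /api* api-server:9000
        try_files {path} {path}/ /index.html
        file_server
    }
}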
I'm trying to follow the https://hub.docker.com/_/caddy/ docs "Adding custom Caddy modules" section. I have two plugins (www.github.com/lucaslorentz/caddy-docker-proxy and www.github.com/caddy-dns/lego-deprecated) that I'm trying to use. caddy-docker-proxy says:
You can build caddy using xcaddy or caddy docker builder.
Use module name github.com/lucaslorentz/caddy-docker-proxy/plugin/v2 to add this plugin to your build.
I'm having trouble doing this. I started with:
docker-compose.yaml:
version: "3.7"
services:
caddy:
image: lucaslorentz/caddy-docker-proxy:ci-alpine
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels: # Global options
...
# Proxy to container
whoami0:
image: jwilder/whoami
labels:
caddy: whoami-internal.mydomain.com
caddy.reverse_proxy: "{{upstreams 8000}}"
caddy.tls: "internal"
Works great. If I try to add in the lego plugin, though, I just get default caddy and this plugin stops working:
./docker-compose.yaml:
version: "3.7"
services:
caddy:
# use a custom build instead
# image: lucaslorentz/caddy-docker-proxy:ci-alpine
build: ./custom-caddy-build
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels: # Global options
...
# Proxy to container
whoami0:
image: jwilder/whoami
networks:
- docker_web
labels:
caddy: whoami-internal.mydomain.com
caddy.reverse_proxy: "{{upstreams 8000}}"
caddy.tls: "internal"
./custom-caddy-build/Dockerfile:
FROM caddy:2-builder-alpine AS builder
RUN xcaddy build \
--with github.com/lucaslorentz/caddy-docker-proxy/plugin/v2 \
--with github.com/caddy-dns/lego-deprecated
FROM caddy:2-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
But this just gives me a stock Caddyfile. Could we get a fleshed-out example of how to use xcaddy with plugins? @hairyhenderson? I'm posting here since this isn't caddy-docker-proxy-specific and is just a generic "how do I build plugins" question.
Expected:
I set enforce_origin to false, and I can reach the admin API at any origin. (https://caddyserver.com/docs/json/admin/#origins isn't explicit on the matter, but that's what I expect)
Actual:
I set enforce_origin to false, and I can't reach the admin API at any address other than the listening address.
Temporary solution:
Rolled back to caddy/caddy:2.0.0-rc.2-alpine and confirmed I could again reach the admin API remotely.
Where does Caddy store all the cert info, so that I can copy and paste it outside the container?
Now that https://github.com/caddyserver/caddy/releases/tag/v2.3.0-beta.1 has been published, would it be possible to publish these as images to the hub as well?
I'm testing this image at the moment and while I think I've got it working correctly it would be nice to have a little bit of guidance on the best way to set it up. Maybe add a couple of examples to the readme? :)
I have been trying to debug Caddy for some time now without any luck. My requests are coming from Cloudflare and I see the requests coming into my container, but I have not been able to successfully connect to any of my services through Caddy. Any help would be greatly appreciated.
# /srv/docker-compose/services.yml
---
version: "3.4"
services:
  caddy:
    build:
      context: .
      dockerfile: /srv/docker-compose/Dockerfile.caddy
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
      - 2019:2019
    volumes:
      - /srv/caddy/config:/config
      - /srv/caddy/data:/data
      - /srv/caddy/Caddyfile:/etc/caddy/Caddyfile
    environment:
      - CLOUDFLARE_API_TOKEN=token
      - [email protected]
      - MY_DOMAIN=example.xyz
      - CADDY_BASIC_AUTH_USERNAME=username
      - CADDY_BASIC_AUTH_PASSWORD=password
  karaoke-forever:
    image: david510c/karaoke-forever
    container_name: karaoke-forever
    volumes:
      - /mnt/data/karaoke:/cdgfiles
    ports:
      - 56701:80
networks:
  default:
    external:
      name: caddy_net
FROM caddy:2-builder AS builder
RUN caddy-builder \
github.com/caddy-dns/cloudflare
FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
sing.example.xyz {
    reverse_proxy karaoke-forever:56701
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}
caddy | {"level":"info","ts":1601677707.2191145,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy | {"level":"info","ts":1601677707.2207422,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
caddy | 2020/10/02 22:28:27 [INFO][cache:0xc0006a27e0] Started certificate maintenance routine
caddy | {"level":"info","ts":1601677707.2209167,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy | {"level":"info","ts":1601677707.2209299,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy | {"level":"info","ts":1601677707.222789,"logger":"tls","msg":"cleaned up storage units"}
caddy | {"level":"debug","ts":1601677707.2228253,"logger":"http","msg":"starting server loop","address":"[::]:443","http3":false,"tls":true}
caddy | {"level":"debug","ts":1601677707.2228403,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
caddy | {"level":"info","ts":1601677707.2228448,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["tunes.example.xyz","vault.example.xyz","flix.example.xyz","sing.example.xyz","git.example.xyz","recipes.example.xyz","cloud.example.xyz"]}
caddy | {"level":"info","ts":1601677707.2303352,"msg":"autosaved config","file":"/config/caddy/autosave.json"}
caddy | {"level":"info","ts":1601677707.2303436,"msg":"serving initial configuration"}
caddy | {"level":"debug","ts":1601677732.3537588,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"karaoke-forever:56701","request":{"method":"GET","uri":"/","proto":"HTTP/2.0","remote_addr":"162.245.206.242:59094","host":"sing.example.xyz","headers":{"User-Agent":["Mozilla/5.0 (iPhone; CPU iPhone OS 13_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Mobile/15E148 Safari/604.1"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Language":["en-us"],"Accept-Encoding":["gzip, deflate, br"],"X-Forwarded-For":["162.245.206.242"],"X-Forwarded-Proto":["https"]},"tls":{"resumed":false,"version":772,"ciphersuite":4865,"proto":"h2","proto_mutual":true,"server_name":"sing.example.xyz"}},"duration":0.00144153,"error":"dial tcp 172.18.0.10:56701: connect: connection refused"}
caddy | {"level":"error","ts":1601677732.353876,"logger":"http.log.error","msg":"dial tcp 172.18.0.10:56701: connect: connection refused","request":{"method":"GET","uri":"/","proto":"HTTP/2.0","remote_addr":"162.245.206.242:59094","host":"sing.example.xyz","headers":{"Accept-Encoding":["gzip, deflate, br"],"User-Agent":["Mozilla/5.0 (iPhone; CPU iPhone OS 13_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.2 Mobile/15E148 Safari/604.1"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Accept-Language":["en-us"]},"tls":{"resumed":false,"version":772,"ciphersuite":4865,"proto":"h2","proto_mutual":true,"server_name":"sing.example.xyz"}},"duration":0.001703033,"status":502,"err_id":"7wf365cn9","err_trace":"reverseproxy.(*Handler).ServeHTTP (reverseproxy.go:411)"}
$ docker exec -it caddy /bin/sh
/srv # ping 172.18.0.10:56701
PING 172.18.0.10:56701 (172.18.0.10): 56 data bytes
64 bytes from 172.18.0.10: seq=0 ttl=64 time=0.290 ms
64 bytes from 172.18.0.10: seq=1 ttl=64 time=0.133 ms
64 bytes from 172.18.0.10: seq=2 ttl=64 time=0.122 ms
64 bytes from 172.18.0.10: seq=3 ttl=64 time=0.131 ms
64 bytes from 172.18.0.10: seq=4 ttl=64 time=0.130 ms
64 bytes from 172.18.0.10: seq=5 ttl=64 time=0.130 ms
64 bytes from 172.18.0.10: seq=6 ttl=64 time=0.121 ms
^C
--- 172.18.0.10:56701 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 0.121/0.151/0.290 ms
I had a docker-compose file as below:
caddy:
  image: caddy/caddy:alpine
  #restart: always
  ports:
    - '80:80'
    - '443:443'
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile
  environment:
    - ACME_AGREE=true
but got the error log:
loading initial config: loading new config: http app module: start: tcp: listening on :443: listen tcp :443: bind: permission denied
I'm trying to follow the getting started guide, but using Docker. I can't access localhost:2019:
$ sudo docker run -p 2019:2019 -p 8080:8080 --name caddy --rm caddy/caddy:alpine
2020/01/27 19:55:03.774 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2020/01/27 19:55:03.788 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["localhost:2019"]}
2020/01/27 19:55:03.788 INFO tls cleaned up storage units
2020/01/27 19:55:03 [INFO][cache:0xc000375ae0] Started certificate maintenance routine
2020/01/27 19:55:03.788 INFO autosaved config {"file": "/var/lib/caddy/.config/caddy/autosave.json"}
2020/01/27 19:55:03.788 INFO serving initial configuration
And in another terminal:
$ curl localhost:2019/config/
curl: (56) Recv failure: Connection reset by peer
$ curl localhost:2019/load \
    -X POST \
    -H "Content-Type: application/json" \
    -d @caddy.json
curl: (56) Recv failure: Connection reset by peer
And I can access the page at localhost:8080:
$ curl localhost:8080 -D - | head
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 12256
Content-Type: text/html; charset=utf-8
Etag: "q4ofap9gg"
Last-Modified: Sat, 25 Jan 2020 18:56:49 GMT
Server: Caddy
Date: Mon, 27 Jan 2020 20:18:43 GMT
<!DOCTYPE html>
I find it confusing that the Docker Hub project name is not consistent with the GitHub project:
caddyserver vs. caddy.
Do you plan to streamline this? Or maybe add a note in the README about this fact.
Cheers!
Starting from v0.11, Caddy had telemetry, and there were commands to opt out/in. There's no mention of this for Caddy v2; could there be an explicit statement, e.g. in the README?
Add LABEL instructions. Check out https://github.com/opencontainers/image-spec
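Using the annotation keys from that spec, it could look like this (values here are illustrative, not final):

LABEL org.opencontainers.image.title="Caddy" \
      org.opencontainers.image.url="https://caddyserver.com" \
      org.opencontainers.image.source="https://github.com/caddyserver/caddy-docker" \
      org.opencontainers.image.licenses="Apache-2.0"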
I propose adding an example of how to build additional modules into the Docker container, similar to how v1 had plugin args.
Hi,
I've been running Caddy v1.0.3 in prod as a basic web server for a while now, but I wasn't able to run v2. Any example to share?
Here is my present setup:
version: "3.3"
services:
caddy:
image: abiosoft/caddy:1.0.3-no-stats
container_name: caddy
hostname: caddy
restart: unless-stopped
volumes:
- /mnt/webapps/blue:/srv
labels:
#### core configs
- "traefik.enable=true"
# - "traefik.http.routers.caddy.service=caddy" # swarm
- "traefik.http.routers.caddy.rule=Host(`devkiwi.club`) && PathPrefix(`/caddy`)"
- "traefik.http.services.caddy.loadbalancer.server.port=2015"
#### set TLS (https)
- "traefik.http.routers.caddy.entrypoints=websecure"
- "traefik.http.routers.caddy.tls=true"
- "traefik.http.routers.caddy.tls.certresolver=leresolver"
#### Apply rules (middlewares)
- "traefik.http.routers.caddy.middlewares=RuleGrpMain"
#### https://twitter.com/askpascalandy
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.
Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.
For the sake of playing with the tools and learning how to bundle an application (frontend and backend together), I am building a SPA which I would like to serve with Caddy in an easy-to-deploy image.
As of now I am only trying to deploy the frontend.
Here is the Dockerfile which builds the image:
# Nuxt frontend
FROM node:11.13.0-alpine as nuxt-builder
WORKDIR /app/frontend
COPY /frontend .
RUN npm install && npm run build
# Build result is placed in /dist
# Caddy webserver
FROM caddy:2.0.0
COPY --from=nuxt-builder /app/frontend/dist /usr/share/caddy/
COPY Caddyfile /etc/caddy/Caddyfile
EXPOSE 80
Caddyfile
root /usr/share/caddy/
tls off
/usr/share/caddy does contain the generated index.html and additional files.
However when running the image there is no response:
C:\Users\maxir> docker run -p 8080:80 -d test-caddy
C:\Users\maxir> curl localhost:8080
curl: (52) Empty reply from server
Am I doing it wrong?
Wondering how to use caddy-docker with Nextcloud in Docker (and OnlyOffice document server, Firefox Sync, and FileRun, also in Docker, plus Syncthing running bare metal).
I am following the example posted here:
nextcloud.example.com {
    root * /var/www/nextcloud
    file_server
    log {
        output file /var/log/caddy/nextcloud.log
        format single_field common_log
    }
    php_fastcgi 127.0.0.1:9000
    header {
        # enable HSTS
        Strict-Transport-Security max-age=31536000;
    }
    redir /.well-known/carddav /remote.php/dav 301
    redir /.well-known/caldav /remote.php/dav 301
    # .htaccess / data / config / ... shouldn't be accessible from outside
    @forbidden {
        path /.htaccess
        path /data/*
        path /config/*
        path /db_structure
        path /.xml
        path /README
        path /3rdparty/*
        path /lib/*
        path /templates/*
        path /occ
        path /console.php
    }
    respond @forbidden 404
}
But with caddy-docker, do I still need to define each Docker container that I want to expose to a customname.mydomain.com in the Caddyfile? Or can I purely rely on labels in docker-compose?
If I have nextcloud working on files.mydomain.com, I'm happy and I'm sure I can figure out the other services with Nextcloud as example.
You could use the caddy command as ENTRYPOINT and give default options in CMD. That would simplify running with other options, and also remove the duplicate caddy (one for the image name, one for the executable name) in examples.
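Concretely, the suggestion amounts to something like this (a sketch; the config path is the image's documented default):

ENTRYPOINT ["caddy"]
CMD ["run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]

Then docker run caddy version or docker run caddy file-server just work, without repeating the binary name.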
Original ticket by @Conan-Kudo:
There's a great, high-quality set of base images with good commercial support that we should leverage for the official Caddy container images: The Red Hat Universal Base Image. Using ubi8 as the base gives us an excellent, well-supported stack that will work well for community and commercial users alike.
@fatherlinux has been working on this for a while now, and I think he'd be pleased as punch if we used it for caddy. And if there's anything missing we need, we can ask him about fixing it. It would also let us leverage the official Caddy RPMs that @carlwgeorge has made...
1. Caddy version (caddy version):
Caddy v2.2.0-builder
2. How I run Caddy:
Docker on Raspberry OS
a. System environment:
Raspberry Pi 3b+
3. The problem I'm having:
Built Caddy using xcaddy to add the lego-deprecated plugin:
FROM arm32v6/caddy:2.2.0-builder AS builder
RUN xcaddy build \
--with github.com/caddy-dns/lego-deprecated
FROM arm32v6/caddy:2.2.0
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
Running docker-compose like the following example:
https://github.com/sosandroid/docker-bitwarden_rs-caddy-synology/blob/master/docker-compose_bitwarden-caddy.yml
Updated ports and environment variables for the Duck DNS provider.
I am getting the following log messages and am unsure what they mean:
{"level":"info","ts":1601355235.9482868,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"},
{"level":"info","ts":1601355235.9691277,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]},
{"level":"info","ts":1601355235.970322,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x3038690"},
{"level":"info","ts":1601355235.975181,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"},
{"level":"info","ts":1601355235.9815211,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["*.duckdns.org"]},
{"level":"info","ts":1601355235.9998298,"msg":"autosaved config","file":"/config/caddy/autosave.json"},
{"level":"info","ts":1601355236.00009,"msg":"serving initial configuration"},
{"level":"info","ts":1601355236.0032415,"logger":"tls","msg":"cleaned up storage units"},
My terminal then seems to just get stuck at this point and nothing else happens. I had all this working in Caddy V1, but I'm trying to update it with Caddy v2. Any insight into what the issue could be or any suggestions would be greatly appreciated. Thanks!
Add some badges to the readme that point to Docker Hub and MicroBadger.
CentOS 7, docker command:
docker run --rm -v /root/caddy:/config caddy/caddy
printed these logs:
2020/03/10 17:21:28.594 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
run: loading initial config: loading new config: starting caddy administration endpoint: listen tcp 37.152.88.55:2019: bind: cannot assign requested address
Feature Request.
Is it on the roadmap to add the DNS provider plugin modules to the official Caddy Docker images? If I'm not mistaken, each DNS provider plugin is only maybe 100KB, so hopefully people wouldn't think they add too much bloat?
It would be great to be able to use the latest official images, but if that's not possible for this use case, is anyone aware of a Caddy Docker image that does have DNS provider plugin modules, other than abiosoft/caddy on Docker Hub? That image has the Cloudflare module (fortunately the one I need), but it's Caddy 1.0.3.
I'm trying to build Caddy with custom modules in Docker, but I get an error like:
exec: "gcc": executable file not found in $PATH
% gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1
Apple clang version 11.0.3 (clang-1103.0.32.29)
Target: x86_64-apple-darwin19.5.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
Here is my Dockerfile, following https://hub.docker.com/_/caddy:
FROM caddy:2.1.1-builder AS builder
RUN caddy-builder \
github.com/caddyserver/nginx-adapter \
github.com/abiosoft/caddy-json-parse
FROM caddy:2.1.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
When I build this Dockerfile, it throws an error like:
% docker build -t cairolee:test .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM caddy:2.1.1-builder AS builder
---> fae42fca94e0
Step 2/4 : RUN caddy-builder github.com/caddyserver/nginx-adapter github.com/abiosoft/caddy-json-parse
---> Running in 718da7e6f21a
+ go get github.com/caddyserver/nginx-adapter github.com/abiosoft/caddy-json-parse
go: downloading github.com/abiosoft/caddy-json-parse v0.0.0-20200527143927-f6a50fd2b066
go: downloading github.com/caddyserver/nginx-adapter v0.0.2
go: github.com/abiosoft/caddy-json-parse upgrade => v0.0.0-20200527143927-f6a50fd2b066
go: github.com/caddyserver/nginx-adapter upgrade => v0.0.2
go: downloading github.com/caddyserver/ntlm-transport v0.1.1-0.20200409193839-5d99ab17e974
# github.com/DataDog/zstd
exec: "gcc": executable file not found in $PATH
The command '/bin/sh -c caddy-builder github.com/caddyserver/nginx-adapter github.com/abiosoft/caddy-json-parse' returned a non-zero code: 2
How do I fix this? Is it a mistake on my part? Please help!
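A likely fix (my assumption: one of the module's dependencies — the log mentions github.com/DataDog/zstd — needs cgo, hence gcc) is to install a C toolchain in the builder stage:

FROM caddy:2.1.1-builder AS builder
# build-base provides gcc and the headers cgo needs
RUN apk add --no-cache build-base
RUN caddy-builder \
    github.com/caddyserver/nginx-adapter \
    github.com/abiosoft/caddy-json-parse
FROM caddy:2.1.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy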