docker-ipv6nat's People

Contributors

bephinix, chrislevi, geektoor, j7an, robbertkl, sangwa, zhangyoufu

docker-ipv6nat's Issues

Docker 1.13

Hi,

I had this setup running for some time now.
Recently I set this container up again with Docker 1.13.

The container exits pretty quickly after start, and the log is full of this message:

2017/01/20 14:19:49 exit status 2: iptables v1.6.0: Couldn't load target `DOCKER':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.

Do you have any idea where this might be coming from? I'm pretty much out of ideas.
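
For anyone debugging this, a quick check of whether Docker created its NAT chain at all; ipv6nat probes for the DOCKER target on startup, which is what produces the error above (a diagnostic sketch, not a fix):

iptables -t nat -L DOCKER -n
# The chain only exists when the daemon's iptables integration is enabled,
# i.e. dockerd is not running with --iptables=false.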

IPv6 stops working on host

Information:

  • Debian 9.4 x64
  • Docker version 18.03.0-ce, build 0520e24
  • docker-compose version 1.20.1, build 5d8c71b
  • docker-ipv6nat.amd64: v0.3.2
  • robbertkl/ipv6nat:latest

How to test:

Configure ipv6 on the default bridge network adapter:
/etc/docker/daemon.json

{
  "ipv6": true,
  "fixed-cidr-v6": "fd02:97f2:b360:c5a6::/64"
}
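
A note, assuming a systemd host (not stated in the issue): the daemon must be restarted for daemon.json changes to take effect.

sudo systemctl restart docker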

Execute on the host:
ping6 google.com

You will see the ping working properly.

Execute the ipv6nat container:
docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro --privileged --net=host robbertkl/ipv6nat

Ping still works.

Now execute a container that makes use of ipv6nat:
docker container run -d --rm -it -p 80:80 navossoc/ipaddress

Browse to: http://[your_ipv6]/

It works and you will see the ipv6nat working properly.

Note that ping stopped working as soon as the container started using ipv6nat.

I tried a few other scenarios with docker-compose, custom networks, container or binary, etc. All have the same result.

Do you have any idea what that might be?
Am I missing something?

Besides that, all containers with IPv6 work properly.

PS: I need to reboot the host to get ipv6 working on the host again.
PS2: I did a quick test on Ubuntu 16.04/17.10 and it seems to work properly.

Regards,

Support for incoming requests

Your solution seems to work fine for enabling outgoing IPv6 requests using NAT, but I encounter some issues with incoming requests. When connecting from another server to a Docker-exposed port, the container sees the request as coming from the NATed IPv4 address:

Requests from another server

ipv4: nc -4 example.com 25
ipv6: nc -6 example.com 25

Incoming requests

ipv4: connect from otherserver.net[xx.xx.xx.xx]
ipv6: connect from unknown[172.18.0.1]

Is this a known problem? It prevents the service in the Docker container from actually knowing the connecting machines, which is essential for some services (e.g. verifying RDNS records for mail services).

I kind of assumed it's possible to implement this with ip6tables as well, e.g. by mirroring the FORWARD rules that Docker places in iptables.
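
To illustrate the idea, a minimal sketch of what such mirrored rules could look like, assuming the default docker0 bridge; the addresses and port are placeholders, not rules this tool actually generates:

# Forward incoming IPv6 traffic on port 25 straight to the container,
# preserving the original source address instead of masquerading it:
ip6tables -t nat -A DOCKER ! -i docker0 -p tcp --dport 25 -j DNAT --to-destination [fd00:dead:beef::2]:25
ip6tables -A FORWARD -d fd00:dead:beef::2 ! -i docker0 -o docker0 -p tcp --dport 25 -j ACCEPT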

Can you comment on that? If you think this is doable (and makes sense), I'm willing to invest some time in a pull request.

Dealing with containers that map a specific IP address

Hi there,

I recently added an extra IP address to some of my servers so I can have multiple containers listening on the same port. Is there a way to expose one of them via IPv6? Previously I used -p 443:443, which worked fine; now I have -p 1.2.3.4:443:443, which no longer gets exposed via IPv6.

Any hints are much appreciated!

Allow mapping IPv4 listen addresses to IPv6

When a port mapping is defined in Docker, it may specify a HostIp in addition to the HostPort. If the HostIp is defined, it's generally an IPv4 address. It would be great to have docker-ipv6nat automatically convert the IPv4 listen addresses to IPv6 based on some mapping.

The use case is working with the Nomad scheduler, which automatically detects the HostIp based on network detection on the host. It does not support IPv6 there, and it might not in the future, as Docker itself does not support IPv6 port mapping.

Suggested CLI flag: -map-ipv4=192.168.3.0/24=2001:912:1480:11::226,... (for example)

I can provide a PR for that

Raspbian: Couldn't load target `DOCKER':No such file or directory

Hi there,

I have been successfully running ipv6nat on Ubuntu 18.04 for a while now and wanted to also enable it on my Raspberry Pis running Debian/Raspbian 10. I am running the ipv6nat Docker container with the following settings (which work on Ubuntu):

      image: robbertkl/ipv6nat
      name: ipv6nat
      cap_drop:
        - ALL
      capabilities:
        - NET_RAW
        - NET_ADMIN
        - SYS_MODULE
      memory: 64MB
      network_mode: host
      read_only: yes
      tmpfs:
        - /run
      volumes:
        - /lib/modules:/lib/modules:ro
        - /var/run/docker.sock:/var/run/docker.sock:ro

Unfortunately, on Raspbian the container terminates with the following error:

ln: /sbin/iptables: File exists
ln: /sbin/iptables-save: File exists
ln: /sbin/iptables-restore: File exists
ln: /sbin/ip6tables: File exists
ln: /sbin/ip6tables-save: File exists
ln: /sbin/ip6tables-restore: File exists
2020/03/25 21:56:44 running [/sbin/iptables -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `DOCKER':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.
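
One thing worth checking is whether the legacy and nft iptables backends disagree; Raspbian 10 defaults to the nft backend. A diagnostic sketch, assuming Debian's alternatives system:

update-alternatives --display iptables   # which backend is the host using?
iptables-legacy -t nat -L DOCKER -n      # does the DOCKER chain live in the legacy tables...
iptables-nft -t nat -L DOCKER -n         # ...or in the nft-backed tables?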

Any ideas?

Thanks,
Thilo

# docker image ls
REPOSITORY                                              TAG                 IMAGE ID            CREATED             SIZE
[...]
robbertkl/ipv6nat                                       latest              c096474d0f3c        3 months ago        16.9MB
# cat /etc/docker/daemon.json 
{
  "dns": ["x.x.x.x"],
  "experimental": true,
  "fixed-cidr-v6": "fd00:dead:beef::/48",
  "ipv6": true,
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-file": "2",
    "max-size": "256m"
  },
  "metrics-addr": "x.x.x.x:9323",
  "storage-driver": "overlay2",
  "userland-proxy": false
}
# docker --version
Docker version 19.03.8, build afacb8b
# ip6tables-save
# Generated by xtables-save v1.8.2 on Wed Mar 25 22:05:59 2020
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [795:49300]
:OUTPUT ACCEPT [61:5636]
:ufw6-before-logging-input - [0:0]
:ufw6-before-logging-output - [0:0]
:ufw6-before-logging-forward - [0:0]
:ufw6-before-input - [0:0]
:ufw6-before-output - [0:0]
:ufw6-before-forward - [0:0]
:ufw6-after-input - [0:0]
:ufw6-after-output - [0:0]
:ufw6-after-forward - [0:0]
:ufw6-after-logging-input - [0:0]
:ufw6-after-logging-output - [0:0]
:ufw6-after-logging-forward - [0:0]
:ufw6-reject-input - [0:0]
:ufw6-reject-output - [0:0]
:ufw6-reject-forward - [0:0]
:ufw6-track-input - [0:0]
:ufw6-track-output - [0:0]
:ufw6-track-forward - [0:0]
:ufw6-logging-deny - [0:0]
:ufw6-logging-allow - [0:0]
:ufw6-skip-to-policy-input - [0:0]
:ufw6-skip-to-policy-output - [0:0]
:ufw6-skip-to-policy-forward - [0:0]
:ufw6-user-input - [0:0]
:ufw6-user-output - [0:0]
:ufw6-user-forward - [0:0]
:ufw6-user-logging-input - [0:0]
:ufw6-user-logging-output - [0:0]
:ufw6-user-logging-forward - [0:0]
:ufw6-user-limit - [0:0]
:ufw6-user-limit-accept - [0:0]
-A INPUT -j ufw6-before-logging-input
-A INPUT -j ufw6-before-input
-A INPUT -j ufw6-after-input
-A INPUT -j ufw6-after-logging-input
-A INPUT -j ufw6-reject-input
-A INPUT -j ufw6-track-input
-A FORWARD -j ufw6-before-logging-forward
-A FORWARD -j ufw6-before-forward
-A FORWARD -j ufw6-after-forward
-A FORWARD -j ufw6-after-logging-forward
-A FORWARD -j ufw6-reject-forward
-A FORWARD -j ufw6-track-forward
-A OUTPUT -j ufw6-before-logging-output
-A OUTPUT -j ufw6-before-output
-A OUTPUT -j ufw6-after-output
-A OUTPUT -j ufw6-after-logging-output
-A OUTPUT -j ufw6-reject-output
-A OUTPUT -j ufw6-track-output
-A ufw6-before-input -i lo -j ACCEPT
-A ufw6-before-input -m rt --rt-type 0 -j DROP
-A ufw6-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 129 -j ACCEPT
-A ufw6-before-input -m conntrack --ctstate INVALID -j ufw6-logging-deny
-A ufw6-before-input -m conntrack --ctstate INVALID -j DROP
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 1 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 2 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 3 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 4 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 128 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 133 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 134 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 135 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 136 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 141 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 142 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 130 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 131 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 132 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 143 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 148 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 149 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 151 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 152 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 153 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 144 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 145 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 146 -j ACCEPT
-A ufw6-before-input -p ipv6-icmp -m icmp6 --icmpv6-type 147 -j ACCEPT
-A ufw6-before-input -s fe80::/10 -d fe80::/10 -p udp -m udp --sport 547 --dport 546 -j ACCEPT
-A ufw6-before-input -d ff02::fb/128 -p udp -m udp --dport 5353 -j ACCEPT
-A ufw6-before-input -d ff02::f/128 -p udp -m udp --dport 1900 -j ACCEPT
-A ufw6-before-input -j ufw6-user-input
-A ufw6-before-output -o lo -j ACCEPT
-A ufw6-before-output -m rt --rt-type 0 -j DROP
-A ufw6-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 1 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 2 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 3 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 4 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 128 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 129 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 133 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 136 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 135 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 134 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 141 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 142 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 130 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 131 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 132 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 143 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 148 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -p ipv6-icmp -m icmp6 --icmpv6-type 149 -m hl --hl-eq 255 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 151 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 152 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-output -s fe80::/10 -p ipv6-icmp -m icmp6 --icmpv6-type 153 -m hl --hl-eq 1 -j ACCEPT
-A ufw6-before-output -j ufw6-user-output
-A ufw6-before-forward -m rt --rt-type 0 -j DROP
-A ufw6-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 1 -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 2 -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 3 -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 4 -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 128 -j ACCEPT
-A ufw6-before-forward -p ipv6-icmp -m icmp6 --icmpv6-type 129 -j ACCEPT
-A ufw6-before-forward -j ufw6-user-forward
-A ufw6-after-input -p udp -m udp --dport 137 -j ufw6-skip-to-policy-input
-A ufw6-after-input -p udp -m udp --dport 138 -j ufw6-skip-to-policy-input
-A ufw6-after-input -p tcp -m tcp --dport 139 -j ufw6-skip-to-policy-input
-A ufw6-after-input -p tcp -m tcp --dport 445 -j ufw6-skip-to-policy-input
-A ufw6-after-input -p udp -m udp --dport 546 -j ufw6-skip-to-policy-input
-A ufw6-after-input -p udp -m udp --dport 547 -j ufw6-skip-to-policy-input
-A ufw6-track-output -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw6-track-output -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw6-track-forward -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw6-track-forward -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw6-skip-to-policy-input -j DROP
-A ufw6-skip-to-policy-output -j ACCEPT
-A ufw6-skip-to-policy-forward -j ACCEPT
-A ufw6-user-input -p tcp -m tcp --dport 22 -j ACCEPT
-A ufw6-user-input -p udp -m multiport --dports 60000:61000 -j ACCEPT
-A ufw6-user-input -i eth0 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.2 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.3 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.4 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.5 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.6 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.7 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.8 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.10 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.180 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i docker0 -p tcp -m tcp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.2 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.3 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.4 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.5 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.6 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.7 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.8 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.10 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0.180 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i docker0 -p udp -m udp --dport 53 -j ACCEPT
-A ufw6-user-input -i eth0 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.2 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.3 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.4 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.5 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.6 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.7 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.8 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.10 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0.180 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i docker0 -p udp -m udp --dport 67 -j ACCEPT
-A ufw6-user-input -i eth0 -p udp -m udp --dport 69 -j ACCEPT
-A ufw6-user-input -i eth0.7 -p udp -m udp --dport 69 -j ACCEPT
-A ufw6-user-input -i eth0.8 -p udp -m udp --dport 69 -j ACCEPT
-A ufw6-user-input -i eth0.10 -p udp -m udp --dport 69 -j ACCEPT
-A ufw6-user-input -i eth0 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.2 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.3 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.4 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.5 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.6 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.7 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.8 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.10 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0.180 -p udp -m udp --dport 123 -j ACCEPT
-A ufw6-user-input -i eth0 -p tcp -j ACCEPT
-A ufw6-user-input -i docker0 -p tcp -j ACCEPT
-A ufw6-user-logging-input -j RETURN
-A ufw6-user-logging-output -j RETURN
-A ufw6-user-logging-forward -j RETURN
-A ufw6-user-limit -j REJECT --reject-with icmp6-port-unreachable
-A ufw6-user-limit-accept -j ACCEPT
COMMIT
# Completed on Wed Mar 25 22:05:59 2020

com.docker.network.bridge.host_binding_ipv6 without effect

I'm trying to set up ipv6nat on my machine.

I need a dedicated IPv6 address for the NAT as I want my main machine to be reachable on port 22 at its own IP.

If I'm reading the documentation correctly, I can only define this on a user-defined network. Using the default bridge would probably be easier, but I don't see how I could set com.docker.network.bridge.host_binding_ipv6 there.
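
For reference, a sketch of how the option can be set on a user-defined network; the subnet and binding address here are placeholders:

docker network create --ipv6 \
  --subnet fd00:dead:babe::/48 \
  -o com.docker.network.bridge.host_binding_ipv6=2001:db8::1 \
  mynetwork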

Now I created a user-defined network as documented in Option B. But when I run ip6tables -L -t nat -v I see the following result:

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all      br-d152d505a932 any     anywhere             anywhere            
    0     0 DNAT       tcp      !br-d152d505a932 any     anywhere             anywhere             tcp dpt:ssh to:[fd00:dead:babe::2]:22

If I compare it to the IPv4 entry, the destination column should show the defined host binding IP instead of anywhere.

If I inspect the created network I see that "com.docker.network.bridge.host_binding_ipv6" is set to the correct IP.

Any idea what could be wrong?

Removal of ip6tables Rules Upon Stopping Container

Hi there! Thanks for this really awesome utility! I've been using it for a few days now and it seems to work really well!

One feature enhancement to consider: removing all the ip6tables rules when the container is stopped. I tried to delete the rules manually after stopping the container, but this seems a bit complex.

Since you know the different chains and rules that are created when the container starts, perhaps you could provide a script that cleans up / removes these rules?
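
A minimal sketch of such a manual cleanup, assuming docker-ipv6nat is the only thing populating these chains (flushing removes all rules in them, so use with care; the chain names are taken from the dumps elsewhere in this thread, and the jump rules in FORWARD and POSTROUTING would still need to be deleted individually):

ip6tables -t nat -F DOCKER
ip6tables -t filter -F DOCKER
ip6tables -t filter -F DOCKER-ISOLATION-STAGE-1
ip6tables -t filter -F DOCKER-ISOLATION-STAGE-2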

DNAT unsupported revision

This is a new install of Ubuntu 18.04 with the kernel updated to 5.3 and Docker 19.03.06.

The revision error appears to be due to the host system running iptables 1.6.1 while ipv6nat uses 1.8.3, as mentioned in moby/moby#40428.

0 0 DNAT tcp !bridge-tn30 * ::/0 2001:570:1a18:202::30 tcp dpt:19030 DNAT [unsupported revision]
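
A quick way to confirm the version mismatch, assuming the container is named ipv6nat:

iptables --version                       # on the host
docker exec ipv6nat ip6tables --version  # inside the ipv6nat container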

If this is the cause, how should it be fixed? Can the version in ipv6nat be downgraded, or do I need to upgrade Ubuntu to 19.10?
Thanks!

Cannot resolve host error inside container

Hi,

I am not able to get IPv6 to work inside a container with docker-ipv6nat. The host machine has IPv6 support, and even the containers seem to support IPv6 when run with --net=host. This is what I did:

I created a user-defined network according to the README file as below:

sudo docker network create --ipv6 --subnet=fd00:dead:beef::/48 ipv6net

Then I started the docker-ipv6nat container:

sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro --privileged --net=host robbertkl/ipv6nat

Then I ran my own container:

sudo docker run --net=ipv6net -it ipv6 bash

Within this container, when I ping6 or curl -6 an IPv6-supporting site, I get the following errors:

root@e81edd040b76:/# curl -6 google.com
curl: (6) Could not resolve host: google.com
root@e81edd040b76:/# ping6 google.com
unknown host

Apparently, ping and curl are failing to resolve hostnames. It looks like something is going wrong with the container's DNS when using this particular combination of a user-defined network and docker-ipv6nat. Please help if you have any insight into this.
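
Two checks that help separate a DNS problem from a connectivity problem; the address below is Google's public DNS server, used here only as a well-known IPv6 target:

cat /etc/resolv.conf            # which DNS servers did Docker configure in the container?
ping6 2001:4860:4860::8888      # does raw IPv6 connectivity work, bypassing DNS?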

ip6tables rules (silently) not added due to invalid index

If firewall.EnsureRules(...) detects an existing rule, the corresponding ruleCounters entry is not incremented. When a docker container disconnects from a network later on, firewall.RemoveRules decrements the counter, making it negative.

go-iptables reports an error, which unfortunately gets ignored there:

w.handleEvent(event)

Because errors are ignored, some rules are not added correctly, which led to seemingly random connectivity issues when I recreated some docker containers.

When propagating the error on that line, ipv6nat terminates with messages like these:

2019/04/11 01:34:54 running [/sbin/ip6tables -t nat -I POSTROUTING -1 -s fd00:ca7e:d09e::4 -d fd00:ca7e:d09e::4 -p tcp -m tcp --dport 143 -j MASQUERADE --wait]: exit status 2: ip6tables v1.6.0: unknown option "-1"

There also seems to be some scenario where the index becomes too large:

2019/04/11 01:37:47 running [/sbin/ip6tables -t filter -I DOCKER 7 -d fd00:ca7e:d09e::2 ! -i br-bc678bb0c0ae -o br-bc678bb0c0ae -p tcp -m tcp --dport 443 -j ACCEPT --wait]: exit status 1: ip6tables: Index of insertion too big.

Maybe fw.ipt.Append(...) could be used to append the non-prepend rules. This would eliminate the need for ruleCounters. Keeping the counters in sync if there are pre-existing rules seems pretty complicated.

Internal network rules for IPv6 not working?

Hi there,
I just moved a few containers into a new internal network (created with --internal) and would have expected the isolation rules to be created analogously to IPv4:
IPv4

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DROP       all  -- !192.168.5.0/24       anywhere            
DROP       all  --  anywhere            !192.168.5.0/24      
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere 

However, ip6tables shows no such rules:
IPv6

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DROP       all      anywhere             anywhere            
DROP       all      anywhere             anywhere            
DOCKER-ISOLATION-STAGE-2  all      anywhere             anywhere            
RETURN     all      anywhere             anywhere   

README Question: Using image/image inside

Hello,

Thanks for putting so much effort into the README. It explains both the problem and solution, and quite a bit about how all this actually works.

I've been trying to set up pihole with v6 support for a few weeks now (I tinker with it at night), and have mostly been stunned at how underdeveloped docker's v6 support is--even the documentation is incomplete.

I was definitely very happy to find your software and instructions, and I'd like to try to roll it out with pihole.

I do have a couple questions about the documentation.

  • I'm on Manjaro ARM. I'm planning to use ipv6nat in conjunction with docker-compose (I'm a docker-noob, so I'm very much into compose files right now). Could you please add an example compose file equivalent to the example docker run command? (See the sketch after this list.) At the moment, I'm trying to figure out how to roll this into my compose file for pihole. I'm studying this example: https://palant.info/2018/01/05/getting-published-docker-container-ports-to-work-with-ipv6/ .
  • When would I need to install and enable the system service? My understanding is the preferred way to do it is to use docker run or load the image into a multi-image container in docker-compose. I'm less clear on the advantage of installing the system service--I'm guessing it obviates the need to manually set up the v6 routing in the run command or the docker-compose file? Somehow? (I've also never used the AUR, so there's that.)
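
Regarding the compose question in the first bullet, a minimal sketch of running ipv6nat via docker-compose, assuming an IPv6-enabled bridge network with a ULA subnet; the service/network names and the subnet are placeholders, and a fuller real-world compose file appears in a later issue in this thread:

version: '2.1'
services:
  ipv6nat:
    image: robbertkl/ipv6nat
    restart: always
    network_mode: host
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /lib/modules:/lib/modules:ro
networks:
  v6net:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:dead:beef::/48

Other application services (e.g. pihole) would then attach to v6net, while ipv6nat itself stays on the host network.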

Thanks!

ip6tables error

I tried to use ipv6nat with docker-ce 18.09.6~3-0~debian-buster on a current Debian Testing.
A sample nginx container is running on an IPv6-enabled network configured as fd01::/64.

When I start ipv6nat (using docker-compose up) I get the following error:

ipv6nat_1 | 2019/05/29 19:52:44 running [/usr/local/bin/ip6tables -t filter -I DOCKER 1 -d fd01::2 ! -i br-4895d4b90f94 -o br-4895d4b90f94 -p tcp -m tcp --dport 80 -j ACCEPT --wait]: exit status 1: iptables: Invalid argument. Run `dmesg' for more information.

dmesg shows:

[613582.357457] x_tables: ip6_tables: tcp match: only valid for protocol 6

I am running nftables on the host machine.
Stock docker IPv4 NAT works fine.

Ports bound to localhost are not ignored

Adding a Container with

docker run -p ":::5000:5000" -it alpine /bin/ash

leads to the following ip6tables rules:

Chain DOCKER (5 references)
target     prot opt source               destination         
ACCEPT     tcp      ::/0                 fd00:1::2            tcp dpt:5000

However, being bound to localhost, the port should not be exposed like that.

EDIT: Problem is probably at state.go#L240

Setting IP Address to docker0 interface

Hi,

thanks for fixing #1 so fast.

I got another related one.

When I use the com.docker.network.bridge.host_binding_ipv6 option, the ip6tables rules are now created fine. But my IP is not reachable from the outside.

I took a look at how this is handled in the IPv4 case.
When I use "com.docker.network.bridge.host_binding_ipv4"="1.2.3.4", the address 1.2.3.4 is assigned to the docker0 interface. I guess something equivalent should be done when using the com.docker.network.bridge.host_binding_ipv6 option.

Of course I can add the IP address manually to the docker0 interface. But if I restart Docker, the IP is gone; I guess it flushes the assigned IPs on start (after service docker stop it's still there).
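
For the record, the manual workaround looks like this; the address is a placeholder and, as noted, it does not survive a Docker restart:

ip -6 addr add 2001:db8::1/64 dev docker0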

Do you think this is in the scope of this project? Or do you have any other idea how to achieve this?

IPv6 gateway

I'm currently testing out ipv6nat and it seems to work well. I noticed my docker network create command does not specify a gateway (docker network create --ipv6 --subnet=fd00:dead:beef::/48 mynetwork), so only the IPv4 subnet has a gateway when I run docker network inspect mynetwork.

Should a gateway be specified for IPv6? Would an Nginx container see a visitor's real IPv6 address without an IPv6 gateway defined?

And how do I route host's IPv6 to Docker container's IPv6?
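
For reference, docker network create does accept an explicit --gateway per subnet; a sketch with placeholder values (the swarm issue further down in this thread uses the same pattern for docker_gwbridge):

docker network create --ipv6 \
  --subnet fd00:dead:beef::/48 \
  --gateway fd00:dead:beef::1 \
  mynetwork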

No ICC for IPv6 network if "--internal" is used

Problem/Bug

Running a fresh Docker CE system without the default bridge and with only a custom network created with the following command:

sudo docker network create --ipv6 --subnet 172.22.99.0/24 --subnet fdef:0:0:99::/64 --internal my99

iptables (IPv4) FORWARD (and referenced) chains:

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-d403f56263c1 br-d403f56263c1  0.0.0.0/0            0.0.0.0/0

Chain DOCKER (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      br-d403f56263c1 !172.30.99.0/24       0.0.0.0/0           
    0     0 DROP       all  --  br-d403f56263c1 *       0.0.0.0/0           !172.30.99.0/24      
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0    

ip6tables (IPv6) FORWARD (and referenced) chains:

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER-ISOLATION  all      *      *       ::/0                 ::/0

Chain DOCKER-ISOLATION (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all      *      br-d403f56263c1 !fddd:0:0:99::/64     ::/0                
    0     0 DROP       all      br-d403f56263c1 *       ::/0                !fddd:0:0:99::/64    
    0     0 RETURN     all      *      *       ::/0                 ::/0

As you can see, ip6tables is missing a rule to allow traffic between containers on such an "--internal" network.
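
The missing rule would be the IPv6 counterpart of the bridge-to-bridge ACCEPT rule visible in the IPv4 FORWARD chain above; roughly, with the bridge name taken from these dumps:

ip6tables -A FORWARD -i br-d403f56263c1 -o br-d403f56263c1 -j ACCEPT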

Solution/Fix

We need to change this part of manager.go:

	if network.internal {
		return &Ruleset{
			NewPrependRule(TableFilter, ChainDockerIsolation,
				"!", "-s", network.subnet.String(),
				"-o", network.bridge,
				"-j", "DROP"),
			NewPrependRule(TableFilter, ChainDockerIsolation,
				"!", "-d", network.subnet.String(),
				"-i", network.bridge,
				"-j", "DROP"),
		}
	}

	iccAction := "ACCEPT"
	if !network.icc {
		iccAction = "DROP"
	}

We have to check for the icc flag before creating the ruleset for an internal network.

If you set icc to false for this internal network, FORWARD will contain the following rule:

    0     0 DROP     all  --  br-d403f56263c1 br-d403f56263c1  0.0.0.0/0            0.0.0.0/0

So we only need to move the icc check before the internal ruleset generation and always create a rule which uses iccAction as its action.

Is there any plan to support podman/crio?

When using podman/crio instead of Docker, this approach can't work for pure IPv6.

There is no docker0 interface and no DOCKER-related ip6tables chains.

When running this container, I get the following issue:
2020/01/14 06:21:10 running [/sbin/iptables -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER --wait]: exit status 2: iptables v1.8.3 (legacy): Couldn't load target `DOCKER':No such file or directory

Platform: CentOS8

[root@henry-1921-cs-01 ~]# ip a s cni0
4: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 96:1c:1d:bc:a7:60 brd ff:ff:ff:ff:ff:ff
inet6 fd00:4::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::941c:1dff:febc:a760/64 scope link
valid_lft forever preferred_lft forever

[root@henry-1921-cs-01 ~]# podman info
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: Unknown
    path: /usr/libexec/crio/conmon
    version: 'conmon version 2.0.1, commit: HEAD'
  Distribution:
    distribution: '"centos"'
    version: "8"
  MemFree: 6477934592
  MemTotal: 8191897600
  OCIRuntime:
    package: containerd.io-1.2.10-3.2.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: henry-1921-cs-01
  kernel: 4.18.0-80.11.2.el8_0.x86_64
  os: linux
  rootless: false
  uptime: 7h 7m 26.76s (Approximately 0.29 days)
insecure registries:
  registries: []
registries:
  registries:
  - registry.redhat.io
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /data0/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage

[root@henry-1921-cs-01 ~]# ip6tables -nvL
.....
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 CNI-FORWARD all * * ::/0 ::/0 /* CNI firewall plugin rules */

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain CNI-FORWARD (1 references)
pkts bytes target prot opt in out source destination
0 0 CNI-ADMIN all * * ::/0 ::/0 /* CNI firewall plugin rules */

Chain CNI-ADMIN (1 references)
pkts bytes target prot opt in out source destination

Possible deprecation of docker-ipv6nat

With the merge of moby/libnetwork#2572 we're finally 1 step closer to having IPv6 NAT built into Docker!

I'm creating this issue to track the release of this feature, and to figure out if there are any remaining use cases for this tool. If not, we can deprecate this tool in favor of the built-in functionality.

No NAT rules are created for container(s)

Probably I'm making some huge mistake somewhere, but I can't figure out how to get this working.

/etc/docker/daemon.json:

{
  "userland-proxy": false,
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/48"
}

docker0:

docker0   Link encap:Ethernet  HWaddr 02:42:cf:6e:ee:b7  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:cfff:fe6e:eeb7/64 Scope:Link
          inet6 addr: fd00:dead:beef::1/48 Scope:Global
          inet6 addr: fe80::1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12693 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13603 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2862361 (2.7 MiB)  TX bytes:13289824 (12.6 MiB)

I started ipv6nat:

docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro --name ipv6nat --privileged --net=host robbertkl/ipv6nat

After that I started another container attached to docker0 (provisioned via Ansible):

52b3441c6883        nginx:1.9-alpine                         "nginx -g 'daemon off"   11 minutes ago      Up 10 minutes       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   reverse-proxy

I would have expected that a new forwarding rule shows up in ip6tables -L. But the DOCKER chain remains empty:

Chain DOCKER (1 references)
target     prot opt source               destination     
docker inspect
[...]

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f076c0d0a1379b4d37430c73a465821f938ebe81e9641f8afb5e38b7d34534d7",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "443"
                    }
                ],
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/f076c0d0a137",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "7aece246a32dbf00056381e8c881dcd9dfc26f6367ff4b252fa3d5f1c3442add",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "fd00:dead:beef::242:ac11:2",
            "GlobalIPv6PrefixLen": 48,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "fd00:dead:beef::1",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "33790e2ed9a403b63f10ebe9954f3183bbac83292228d51f3481f2ee04a30ca9",
                    "EndpointID": "7aece246a32dbf00056381e8c881dcd9dfc26f6367ff4b252fa3d5f1c3442add",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "fd00:dead:beef::1",
                    "GlobalIPv6Address": "fd00:dead:beef::242:ac11:2",
                    "GlobalIPv6PrefixLen": 48,
                    "MacAddress": "02:42:ac:11:00:02"
                }
            }
        }

I also tried sending the container a SIGHUP to trigger a rule regeneration, to no avail.

docker --version
Docker version 1.11.2, build b9f10c9

Edit: When I start a new container via docker run, an entry mysteriously appears:

docker run --rm -p 5000:80 nginx

ip6tables -L

[...]

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp      anywhere             fd00:dead:beef::242:ac11:5  tcp dpt:http
docker inspect
[...]

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "be37b889423c15d3b2c07e22ee324b3a604f3ceea4096d33393405d48a4529ac",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "443/tcp": null,
                "80/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "5000"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/be37b889423c",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "8a41f7682e77e4119b957234b14f489c9b48e9dc3ca66ef86f5c2c01fa30a5be",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "fd00:dead:beef::242:ac11:5",
            "GlobalIPv6PrefixLen": 48,
            "IPAddress": "172.17.0.5",
            "IPPrefixLen": 16,
            "IPv6Gateway": "fd00:dead:beef::1",
            "MacAddress": "02:42:ac:11:00:05",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "33790e2ed9a403b63f10ebe9954f3183bbac83292228d51f3481f2ee04a30ca9",
                    "EndpointID": "8a41f7682e77e4119b957234b14f489c9b48e9dc3ca66ef86f5c2c01fa30a5be",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.5",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "fd00:dead:beef::1",
                    "GlobalIPv6Address": "fd00:dead:beef::242:ac11:5",
                    "GlobalIPv6PrefixLen": 48,
                    "MacAddress": "02:42:ac:11:00:05"
                }
            }
        }

Any ideas? What could cause some containers to trigger rule generation while others seem to be ignored?
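
One blunt diagnostic, assuming restarting ipv6nat makes it re-sync its rules against the current Docker state (the container was named ipv6nat in the run command above):

docker restart ipv6nat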

Synology NAS: unable to detect hairpin mode (is the docker daemon running?)

I'm trying to run docker-ipv6nat as part of Mailcow dockerized on my Synology NAS DS918+. This particular container keeps restarting and reporting:

unable to detect hairpin mode (is the docker daemon running?)

All the other Mailcow containers and several others run fine. I am sure this is something Synology-related. I have attached system-related information below. I'm not so familiar with Docker or with the purpose of this package in relation to Mailcow dockerized, but I want to figure out what's going wrong here.

Please let me know what kind of tests I can perform to gather more information. Thanks!
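
One test worth trying, assuming the error means ipv6nat could not query the daemon through the mounted socket (requires curl 7.40+ for --unix-socket):

# Does the daemon answer on the socket that gets mounted into the container?
curl --unix-socket /var/run/docker.sock http://localhost/info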

$ docker -v
Docker version 17.05.0-ce, build 9f07f0e-synology
$ docker-compose -v
docker-compose version 1.14.0, build c7bdf9e
$ iptables -V
iptables v1.6.0
$ uname -a
Linux shardik 4.4.59+ #23824 SMP PREEMPT Tue Dec 25 18:27:56 CST 2018 x86_64 GNU/Linux synology_apollolake_918+

Add go.mod file

To allow development outside a GOPATH, a go.mod file could be added.
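
A sketch of what that could look like, assuming the module path follows the GitHub repository name; go mod tidy then records the dependencies (such as go-iptables) in go.mod/go.sum:

go mod init github.com/robbertkl/docker-ipv6nat
go mod tidy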

Update docker-ipv6nat dockerhub Image

I wanted to ask when the docker-ipv6nat image will be refreshed.
The "latest" image has not been built for over 4 months (4 Dec 2019); it is still based on Alpine 3.10.3 (current: 3.10.4) and golang 1.13.4. Unfortunately, Docker images only pick up current security updates if you rebuild the image.

Images:

current: golang:1.13.4-alpine3.10
  - golang: 1.13.4 -> outdated
  - alpine: 3.10.3 (current docker-ipv6nat image) -> outdated

option 1: update to golang:1.13.10-alpine3.10
  - golang: current dot release
  - alpine: current dot release

option 2: update to golang:1.13.10-alpine3.11
  - golang: current dot release
  - alpine: current version

option 3: update to golang:1.14.2-alpine3.11
  - golang: current version
  - alpine: current version

Issue with swarm mode using docker_gwbridge bridge

Hi! Thanks a lot for your work. It's very surprising that at the end of 2020 we still have to fight to get consistent behaviour between IPv4 and IPv6 within Docker.

So, I followed your documentation, and it works well for regular (non-swarm) containers; I can see the rules added in debug mode. Now I am trying to use it with swarm mode, so I enabled IPv6 on docker_gwbridge:

docker network create \
 --ipv6 \
 --subnet 172.25.0.0/16 \
 --gateway 172.25.0.1 \
 --gateway fdd0:4cab:5070:357f::1 \
 --subnet fdd0:4cab:5070:357f::/64 \
 --opt com.docker.network.bridge.name=docker_gwbridge \
 --opt com.docker.network.bridge.enable_icc=true \
 --opt com.docker.network.bridge.enable_ip_forwarding=true \
 --opt com.docker.network.bridge.enable_ip_masquerade=true \
 docker_gwbridge

And then I launched the container:

docker run -d --name Ipv6nat --privileged --network host --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro robbertkl/ipv6nat -cleanup -debug -retry

Now I can see that the container picks up the docker_gwbridge network, because I see this in the container logs:

2020/11/24 16:07:12 rule added: -t filter -A FORWARD 11 -o docker_gwbridge -j DOCKER
2020/11/24 16:07:12 rule added: -t filter -A FORWARD 12 -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
2020/11/24 16:07:12 rule added: -t filter -A FORWARD 13 -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
2020/11/24 16:07:13 rule added: -t filter -A FORWARD 14 -i docker_gwbridge -o docker_gwbridge -j ACCEPT
2020/11/24 16:07:13 rule added: -t nat -A DOCKER 1 -i docker_gwbridge -j RETURN
2020/11/24 16:07:13 rule added: -t nat -A POSTROUTING 1 -s fdd0:4cab:5070:357f::/64 ! -o docker_gwbridge -j MASQUERADE
2020/11/24 16:07:13 rule added: -t nat -A POSTROUTING 1 -o docker_gwbridge -m addrtype --dst-type LOCAL -j MASQUERADE
2020/11/24 16:07:13 rule added: -t filter -A DOCKER-ISOLATION-STAGE-2 1 -o docker_gwbridge -j DROP
2020/11/24 16:07:13 rule added: -t filter -A DOCKER-ISOLATION-STAGE-1 1 -i docker_gwbridge ! -o docker_gwbridge -j DOCKER-ISOLATION-STAGE-2

But I cannot see any automatic rules like I see for simple containers. If I manually add this (fdd0:4cab:5070:357f::5 is the IP of a container within a swarm stack):

ip6tables -t filter -A DOCKER -d fdd0:4cab:5070:357f::5 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
ip6tables -t nat -A DOCKER -d 0/0 -p tcp -m tcp --dport 443 -j DNAT --to-destination [fdd0:4cab:5070:357f::5]:443 ! -i docker0

then it works... so it seems containers are not detected when they are part of a swarm.

Does someone know if I'm missing something? IPv6 seems to be working fine; I am able to ping6 external IPs from my containers, including those within the swarm.

Thanks a lot again!!

Container hangs on startup on iptables-legacy system

Hi,

I have been fiddling around with this for a while now and finally found out that the command iptables-nft -L in docker-ipv6nat-compat hangs indefinitely on our Ubuntu system. This is what I see when I log on to the container:

PID   USER     TIME  COMMAND
    1 root      0:00 {docker-ipv6nat-} /bin/sh /docker-ipv6nat-compat -cleanup -debug
    7 root      0:04 iptables-nft -L
    8 root      0:00 grep -q Chain DOCKER
   14 root      0:00 /bin/sh

When I kill PID 7, ipv6nat starts properly and everything seems to work. The container then looks like this:

PID   USER     TIME  COMMAND
    1 root      0:00 /docker-ipv6nat -cleanup -debug
   14 root      0:00 /bin/sh

We run Ubuntu 18.04.3 LTS here, which has Docker version 19.03.4, build 9013bf583a installed.

Cheers,
T.

NAT stops working for incoming connections after restart

I put Caddy and ipv6nat in a docker-compose file and it works very well. But after I executed the docker-compose restart command, I am still able to ping other v6 addresses from the Caddy container, but I can't receive incoming requests anymore. (It was able to receive them with userland-proxy enabled, but then Caddy couldn't get the real IP address.)

I think it's probably similar to #14; perhaps we need to remove the ip6tables rules when a container stops?

ip6tables -L:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-USER  all      anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (5 references)
target     prot opt source               destination         
ACCEPT     tcp      anywhere             fd00:beef::3         tcp dpt:https
ACCEPT     tcp      anywhere             fd00:beef::3         tcp dpt:http

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all      anywhere             anywhere            
RETURN     all      anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all      anywhere             anywhere            
RETURN     all      anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all      anywhere             anywhere 

docker-compose.yml:

version: '2.1'
services:
  caddy:
    image: abiosoft/caddy
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
    environment:
      - CADDYPATH=/caddy
      - ACME_AGREE=true
    volumes:
      - ./Caddyfile:/etc/Caddyfile
      - caddyacme:/caddy/acme
    restart: always
    networks:
      v6net:
    depends_on:
      - ipv6nat
  ipv6nat:
    image: robbertkl/ipv6nat
    restart: always
    network_mode: "host"
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
networks:
  v6net:
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
      - subnet: 172.20.0.0/16
        gateway: 172.20.0.1
      - subnet: fd00:beef::/80
volumes:
  caddyacme:

Docker Environment:

Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.40
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:29:11 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.2
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.8
  Git commit:       6a30dfc
  Built:            Thu Aug 29 05:27:45 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Docker Compose:

docker-compose version 1.24.1, build 4667896b
docker-py version: 3.7.3
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

ipv6nat stops working after docker update

Hi,

I have been getting this error since my docker upgrade:

ipv6nat_1              | Try `iptables -h' or 'iptables --help' for more information.
ipv6nat_1              | 2019/05/25 21:20:31 running [/sbin/iptables -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER --wait]: exit status 2: iptables v1.6.2: Couldn't load target `DOCKER':No such file or directory

My docker version:

docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:36:01 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false

My nat table:

sudo iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE *** (all my rules here)

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere
DNAT *** (all my ipv4 dnat here)
# Warning: iptables-legacy tables present, use iptables-legacy to see them

I have disabled ipv6 for now. Can I do something to fix this?

thanks

NAT does not work for incoming connections.

Scenario

Debian 8
Docker version 17.05.0-ce, build 89658be

docker.service:

ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay2 --experimental --live-restore

Steps

  1. deployed ipv6nat container:

Privileged, IPv6 enabled, host net, modules + docker socket mounted:

[
    {
        "Id": "854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676",
        "Created": "2017-07-21T10:19:17.394043216Z",
        "Path": "/docker-ipv6nat",
        "Args": [
            "--retry"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 13753,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-07-21T10:19:17.718707332Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:24c47013b0c763ab748c7e7fcdc0656ff8a603c8ae6d72183f1e17ae52deb0d8",
        "ResolvConfPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/resolv.conf",
        "HostnamePath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/hostname",
        "HostsPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/hosts",
        "LogPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676-json.log",
        "Name": "/ipv6nat",
        "RestartCount": 0,
        "Driver": "overlay2",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/run/docker.sock:/var/run/docker.sock:ro",
                "/lib/modules:/lib/modules:ro"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "host",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": null,
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": 0,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783-init/diff:/srv/docker/overlay2/f127866263e2029eaac0e9b355091084bd462be474b434ffbe681c153f7314e5/diff:/srv/docker/overlay2/eb38d2362b9668c267a56e9b66ed9926acd10196fa20c24892c1f9a9e730310a/diff:/srv/docker/overlay2/2a29b881a2dba9223e04f1293abe3013e4eb5ad6186471c5107ae864b9232191/diff:/srv/docker/overlay2/5aa2c96976c412b28ba46dbd24556899ffe9383c394f4940d2049df812560deb/diff",
                "MergedDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/merged",
                "UpperDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/diff",
                "WorkDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/run/docker.sock",
                "Destination": "/var/run/docker.sock",
                "Mode": "ro",
                "RW": false,
                "Propagation": ""
            },
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "chef01",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "DOCKER_IPV6NAT_VERSION=v0.2.4"
            ],
            "Cmd": [
                "--retry"
            ],
            "ArgsEscaped": true,
            "Image": "robbertkl/ipv6nat:latest",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/docker-ipv6nat"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "a7374238868989a41a72086b26aa3ef978fd7da1b25290707b421dbe9552846a",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/default",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "host": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [],
                    "NetworkID": "730ae4f6e4ec43bc1e6f39965deb7eabead6e6772b51c2ff625898b61b634cc4",
                    "EndpointID": "b3e421e9b395e48f5ec311a4b9ff20c9609d9481c4397f04dfc90d3267152222",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            }
        }
    }
]
  1. created an internal net with IPv6 and a ULA range

(container appears after step 3)

[
    {
        "Name": "corp-net",
        "Id": "b026b9fadf56848e67421503bdad88056acba1327ab4990a2129be52a69cdd75",
        "Created": "2017-07-21T10:19:18.284990364Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
                {
                    "Subnet": "fd00:dead:beef::/48",
                    "Gateway": "fd00:dead:beef::1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
              ...
  
            "933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725": {
                "Name": "corp-chef-nginx",
                "EndpointID": "41d772b5a903de156d334efa70b1d73918e832e56bcdd7961e0c83f8be71c756",
                "MacAddress": "02:42:ac:12:00:07",
                "IPv4Address": "172.18.0.7/16",
                "IPv6Address": "fd00:dead:beef::7/48"
            },
            ...
        },
        "Options": {},
        "Labels": {}
    }
]
  1. launch container
[
    {
        "Id": "933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725",
        "Created": "2017-07-21T11:14:45.774908667Z",
        "Path": "nginx",
        "Args": [
            "-g",
            "daemon off;"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 19472,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2017-07-21T11:14:46.831241413Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:c9deecae67990851544e03d1403649d123922b4a13c6380b08d6e189b18994d8",
        "ResolvConfPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/resolv.conf",
        "HostnamePath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/hostname",
        "HostsPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/hosts",
        "LogPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725-json.log",
        "Name": "/corp-chef-nginx",
        "RestartCount": 0,
        "Driver": "overlay2",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
              ...
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "chef-server",
            "PortBindings": {
                "8080/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "443"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "unless-stopped",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": null,
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 134217728,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 268435456,
            "MemorySwappiness": 0,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8-init/diff:/srv/docker/overlay2/e0080e5dfea5a3a8cdd18ac1123a690375d246f7a4e0a51b259cc1b076bedb7f/diff:/srv/docker/overlay2/5a470b81dfce3d10be43543f8dc2cbf25e878e1e2054cf7da8ca43c49e9359c0/diff:/srv/docker/overlay2/9676600d6022a3fdff09d47865bcc67e2ea6e867c4aac4624230dfd5ca995c29/diff:/srv/docker/overlay2/6dbeef38558bab5665a737469664ad3b6c3ca664a312de228ba7128b8e72cc9c/diff:/srv/docker/overlay2/e04103d14cf427b7e7cf247ca8a6527bb61d3786bfece1d5f83287c9a7060f70/diff:/srv/docker/overlay2/926612703de4a445fb7d5e10d58fecbafb685cb65a6a19cbd9b6d6dbaf23375a/diff:/srv/docker/overlay2/f51050f91076ea622a25da6eb9e5b68d243d5114851368812e92d6c4da633983/diff:/srv/docker/overlay2/f7ce377ed0931dbf790acc2fd547adc913c298504d41f13735b1bf139fa7fdf8/diff:/srv/docker/overlay2/fca69021fe3bf2cb1e1f8188ebe8515a3a73cf524384eb0281271724287ef41e/diff:/srv/docker/overlay2/dc280e9215f01253ccd7aa4f4082b1d6a87b6ca0acc0679ba4332a151a9fbd07/diff:/srv/docker/overlay2/37e6827a37c0909bffbc2c684e4b2e60601d851ec82e174b030bbdc13bf25be3/diff:/srv/docker/overlay2/257902a0f76eca3bf9a80141825d1947fb2223ba88d637c76c7be797d3b53a6b/diff:/srv/docker/overlay2/da6cd3ba41a2b0ae93622daa930ed3714dd656ed3fb71dc30eea34e427541fab/diff:/srv/docker/overlay2/5fa8b42cb1d3f60cf044b78bd0ac3ee22bb93b94b86ccc89c697e336e66760dd/diff",
                "MergedDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/merged",
                "UpperDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/diff",
                "WorkDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
          ...
        ],
        "Config": {
            "Hostname": "chef01",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "80/tcp": {},
                "8080/tcp": {},
                "8443/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NGINX_VERSION=1.12.1"
            ],
            "Cmd": [
                "nginx",
                "-g",
                "daemon off;"
            ],
            "ArgsEscaped": true,
            "Image": "corp-chef-nginx:latest",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {},
            "StopSignal": "SIGTERM"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "f95afbc662eaa24a0fabe4ceb7c28ea8401604916c6449b9f1fd088a09aae459",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "80/tcp": null,
                "8080/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "443"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/f95afbc662ea",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "corp-net": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "933ea8c487ca"
                    ],
                    "NetworkID": "b026b9fadf56848e67421503bdad88056acba1327ab4990a2129be52a69cdd75",
                    "EndpointID": "41d772b5a903de156d334efa70b1d73918e832e56bcdd7961e0c83f8be71c756",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.7",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "fd00:dead:beef::1",
                    "GlobalIPv6Address": "fd00:dead:beef::7",
                    "GlobalIPv6PrefixLen": 48,
                    "MacAddress": "02:42:ac:12:00:07"
                }
            }
        }
    }
]

As you can see, the container is in the IPv6-enabled network. However, the ports are not reachable over IPv6.

ip6tables -L on the host:

ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-ISOLATION  all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (2 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
DROP       all      anywhere             anywhere            
DROP       all      anywhere             anywhere            
RETURN     all      anywhere             anywhere         

curl -6 requests to the nginx container still come through docker's IPv4 NAT:

172.18.0.1 - - [21/Jul/2017:11:56:02 +0000] "GET / HTTP/1.1" 200 2490 "-" "curl/7.51.0" "-"
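For comparison, when ipv6nat has picked up a network, DNAT entries should appear in the nat table's DOCKER chain. A quick check (a sketch; the commented rule only illustrates the expected shape, with placeholder addresses):

ip6tables -t nat -S DOCKER
# expected shape (illustrative only):
# -A DOCKER -d <host binding address> -p tcp -m tcp --dport 80 -j DNAT --to-destination [fd00:dead:beef::7]:8080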

ambiguous debug log

excerpt from iptables man-page:

-A, --append chain rule-specification
-I, --insert chain [rulenum] rule-specification
-D, --delete chain rulenum

-A does not accept a rulenum, so debug log lines that combine -A with a rule number (e.g. rule added: -t filter -A DOCKER 5 ...) are ambiguous.

Enable support for link-local addresses

I saw, according to the docs, that docker-ipv6nat

defaults to ::, i.e. all IPv6 addresses

But I can't get it to work with link-local addresses. It binds and works fine with global unicast and unique local addresses, but not link-local ones. I tried setting com.docker.network.bridge.host_binding_ipv6 to a link-local address, and the logs even show it correctly, but it does not work.

This is what the logs showed (addresses and identifiers were changed on purpose for anonymity):

2020/11/10 01:47:16 rule added: -t filter -A DOCKER 5 -d fd00:dead:beef::100 ! -i br-49cdda3f1234 -o br-49cdda3f1234 -p tcp -m tcp --dport 80 -j ACCEPT
2020/11/10 01:47:16 rule added: -t nat -A POSTROUTING 9 -s fd00:dead:beef::100 -d fd00:dead:beef::100 -p tcp -m tcp --dport 80 -j MASQUERADE
2020/11/10 01:47:16 rule added: -t nat -A DOCKER 5 -d fe80::aaaa:aaaa:aaaa:dead -p tcp -m tcp --dport 80 -j DNAT --to-destination [fd00:dead:beef::100]:80

Do you think there could be a way to implement it?
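One complication I can think of (an assumption on my part, not something the logs confirm): a link-local address is only meaningful together with a zone/interface, and neither the -d match nor the DNAT target in the rules above carries a scope. Even a plain client request has to spell out the zone ID, e.g.:

curl -g "http://[fe80::aaaa:aaaa:aaaa:dead%25eth0]/"   # %25 is the URL-encoded zone delimiter; eth0 is a placeholder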

Possibility of getting this working with nftables

I've been using docker-ipv6nat for a while on my old server and am pretty happy with it. On a new server I intended to use nftables, which is basically working fine, using this guide.

The problem I have is that I can't use statically assigned IP addresses, since this isn't supported in docker-compose (at least in version 3 files; version 2 apparently works).
This kind of forces me to fall back to "the old way" and let Docker handle the IP assignment, but then I can't put in static nftables rules for the exposed containers.

I'm fine with running this workaround again, but it obviously only supports iptables. I don't really know Go and a quick check of the code made my head spin, so I'd like to use this issue as a discussion of whether nftables support is even possible for this project.

The only thing that needs to happen is to create a forward rule when a container comes up, nothing else. The rest of the setup is already handled in the configured nftables config (as far as I can see). This is pretty much the same thing docker-ipv6nat is already doing, just with other commands.

There are iptables-compat tools that use the old iptables syntax but create nftables rules, though I don't know if this would work with docker-ipv6nat.
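For a feel of how small the translation is, here is a rough sketch of the two relevant rules in native nft syntax, assuming an ip6 table with the usual filter/nat chains from such a guide (all names and addresses are placeholders, not something this project emits):

nft add rule ip6 filter forward ip6 daddr fd00:dead:beef::100 tcp dport 80 accept
nft add rule ip6 nat prerouting tcp dport 80 dnat to [fd00:dead:beef::100]:80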

DOCKER-USER chain is missing and pushed to end if created

Hi there,
just started using this, thanks for the great work.
I noticed that the DOCKER-USER chain for custom ip6tables rules is missing; it should be the first target in the FORWARD chain.
Normally I create the rules before starting Docker or this container. However, with this container the ipv6nat chains get inserted before the existing DOCKER-USER chain.
It would be nice if DOCKER-USER could stay first and untouched in the FORWARD chain, like Docker handles it for IPv4.
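As a workaround until this is handled like IPv4, the chain can be set up by hand; a minimal sketch:

ip6tables -N DOCKER-USER                 # create the user chain if it does not exist yet
ip6tables -A DOCKER-USER -j RETURN       # fall through to the rest of FORWARD by default
ip6tables -I FORWARD -j DOCKER-USER      # put the jump at position 1, ahead of the ipv6nat chains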

ADDRTYPE rules are missing in POSTROUTING NAT chain

Current IPv4 POSTROUTING chain:

Chain POSTROUTING (policy ACCEPT 463 packets, 28696 bytes)
 pkts bytes target      prot opt in     out         source              destination         
    2    88 MASQUERADE  all  --  *      docker0     0.0.0.0/0           0.0.0.0/0            ADDRTYPE match src-type LOCAL
    0     0 MASQUERADE  all  --  *      !docker0    172.25.1.0/24       0.0.0.0/0           
    3   128 MASQUERADE  all  --  *      dckrMyNet   0.0.0.0/0           0.0.0.0/0            ADDRTYPE match src-type LOCAL
    0     0 MASQUERADE  all  --  *      !dckrMyNet  172.25.2.0/24       0.0.0.0/0           

Current IPv6 POSTROUTING chain:

Chain POSTROUTING (policy ACCEPT 74 packets, 5932 bytes)
 pkts bytes target      prot opt in     out           source              destination         
    0     0 MASQUERADE  all      *      !dckrMyNet    fddd:0:0:2::/64     ::/0                
    0     0 MASQUERADE  all      *      !dckrDefault  fddd:0:0:1::/64     ::/0  

As you can see, for each masqueraded network the ADDRTYPE match src-type LOCAL rule is missing.

These missing rules masquerade packets originating from the host itself when they enter the network bridge. We should copy them, so Docker's IPv4 and IPv6 setups share the same behavior.
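Until this is mirrored, the rules can be added manually; a sketch using the bridge names from the listings above:

ip6tables -t nat -I POSTROUTING -o dckrDefault -m addrtype --src-type LOCAL -j MASQUERADE
ip6tables -t nat -I POSTROUTING -o dckrMyNet -m addrtype --src-type LOCAL -j MASQUERADE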

Not able to make it work

Sorry for bothering, but I'm just not able to get this work.
I'm on CentOS7 with Docker 1.13.1.

My /etc/docker/daemon.json looks like this:

{
"ipv6": true,
"fixed-cidr-v6": "fd00:172:17:1::/64"
}

I've created the the ipv6nat like this:

docker create \
	--name ipv6nat \
	-v /var/run/docker.sock:/var/run/docker.sock:ro \
	-v /lib/modules:/lib/modules:ro \
	--net=host \
	--cap-add=NET_ADMIN \
	--cap-add=SYS_MODULE \
	--restart unless-stopped \
	robbertkl/ipv6nat

Is this enough? Should this now work out of the box?
I don't see any differences. What am I supposed to do to make the natting work?

If I create a container like this, nothing changes compared to without ipv6nat:

docker create \
	--name=somedocker\
	-p 1199:8888 \
	--restart unless-stopped \
	something/fromdocker
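For anyone debugging the same setup: note that docker create only creates the container, so it still needs docker start ipv6nat. Beyond that, a few quick checks show whether the rules are actually in place (the expectation comments are my assumptions):

docker start ipv6nat                 # create does not start the container
docker logs ipv6nat                  # should show rules being added, not errors
ip6tables -t nat -L DOCKER -n        # the published port should show up as a DNAT rule
sysctl net.ipv6.conf.all.forwarding  # should be 1, otherwise NATed traffic is not forwarded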

Swarm mode support

This is working great so far for single-node docker services. Thanks!

Is there any chance to get this to work with Docker Swarm mode's ingress network?

Container always uses the latest added IPv6 address

Thanks Robbert for the tool, it's really great and it helps me a lot.
I run several containers; each gets its own network and should connect to the outside world with a dedicated IPv6 address from the host. When I run your container, all my other containers use the latest IPv6 address added to the host.
To fix that I had to run
sudo ip6tables -t nat -I POSTROUTING -s fd00:dead:ab::/48 -j SNAT --to-source [ipv6 address] and then it worked.
Maybe you can have a look at it.


here the command I used to create the network
docker network create --ipv6 --subnet fd00:dead:ab::/48 --gateway fd00:dead:ab::1 -o "com.docker.network.bridge.host_binding_ipv6"="[ipv6 address]" bridge_container1_ipv6
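For reference, rule order explains the fix: in the nat POSTROUTING chain the first matching rule wins, and -I inserts the SNAT above the per-network MASQUERADE rules. Listing the chain shows why it takes effect:

ip6tables -t nat -S POSTROUTING   # the inserted SNAT should appear before the MASQUERADE entries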

App not available when activating ipv6nat

I am trying to use ipv6nat, as it seems it would allow better IPv6 support with Docker.
Indeed, currently I get a generic IPv4 address in X-FORWARDED-FOR instead of the client's IPv6 address.

I have an issue when I launch several services; it happens only for PHP services behind NGINX (but not for all of them).

My configuration is the following:

Traefik <= "web" docker network => NGINX reverse proxy <= "default" docker network=> PHP-FPM app

Traefik is the only front end connected to the WWW on port 80 and 443.

ipv6nat is defined as below:

services:
  ipv6nat:
    image: robbertkl/ipv6nat
    privileged: true
    network_mode: "host"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /lib/modules:/lib/modules:ro

I can see the following in my logs:

web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::9]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::9]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::8]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::8]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::6]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://[fd00:dead:beef::6]:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.9:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.9:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.8:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.8:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.6:9000", host: "cloud.localhost"
web_1    | 2020/04/13 22:18:16 [warn] 7#7: *1 upstream server temporarily disabled while connecting to upstream, client: 172.18.0.5, server: , request: "GET /apps/files/?dir=/&fileid=2 HTTP/1.1", upstream: "fastcgi://172.18.0.6:9000", host: "cloud.localhost"
web_1    | 172.18.0.5 - - [13/Apr/2020:22:18:16 +0000] "GET /apps/files/?dir=/&fileid=2 HTTP/1.1" 502 157 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0" "172.18.0.1"

It does not happen on all the containers working on the same schema mentioned above.
I have never seen this issue when NGINX and PHP-FPM are in the same container (and therefore Traefik is directly connected to the NGINX/PHP-FPM container).

I can see that the web Docker network's bridge is referenced in the ip6tables FORWARD chain.

How can I debug this?
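Some starting points that may narrow this down (generic commands; the placeholders are mine, and ping6 is only available if the image ships it):

docker network inspect <compose-default-network>    # confirm IPv6 is enabled and both containers are attached
docker exec <nginx-container> ping6 <php-fpm-ipv6>  # reachability from the proxy to the upstream
ip6tables -S DOCKER-ISOLATION                       # cross-network IPv6 traffic may be dropped here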

Binding to wireguard interface not working

Hi,

I have a wireguard interface like this:

3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet6 fd60:1141:233e:2977:d0e:b410:d5b1:171b/64 scope global 
       valid_lft forever preferred_lft forever

And I am attempting to publish a container's port on this interface for the other wireguard peers.

So I start ipv6nat like this:
docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro --privileged --net=host robbertkl/ipv6nat

Then I create a network like this:
docker network create --ipv6 --subnet=fd00:dead:beef::/48 -o "com.docker.network.bridge.host_binding_ipv6"=fd60:1141:233e:2977:d0e:b410:d5b1:171b my-network
As you can see, I am binding the network to the address of the wireguard interface.

Next, I run an nginx just for testing:
docker run -d -p 8080:80 --network my-network nginx

I expect to be able to use curl [fd60:1141:233e:2977:d0e:b410:d5b1:171b]:8080 from a wireguard peer and see the nginx default page. Instead, curl throws this error:
curl: (7) Failed to connect to fd60:1141:233e:2977:d0e:b410:d5b1:171b port 8080: Connection refused

Pinging through wireguard works, so this does not seem to be the problem.

Also netstat -tulpn lists:

tcp6       0      0 :::8080          :::*           LISTEN      4116/docker-proxy

Seems like the port is not bound to the correct IP/interface.

I don't know if it helps: I am on Ubuntu (kernel 4.15.0-1021-aws) on an AWS Lightsail instance.

I have exhausted my knowledge here, so I am hoping for some help here.

Thanks!
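One detail from the netstat output above: the :::8080 listener is docker-proxy, the userland proxy handling the IPv4/wildcard binding; the per-address IPv6 path is supposed to come from ipv6nat's ip6tables DNAT rules rather than from a socket bind. So a useful check (a sketch; the commented rule is only the expected shape) is whether that rule exists:

ip6tables -t nat -S DOCKER | grep 8080
# expected shape (illustrative):
# -A DOCKER -d fd60:1141:233e:2977:d0e:b410:d5b1:171b -p tcp -m tcp --dport 8080 -j DNAT --to-destination [fd00:dead:beef::2]:80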

Running ipv6nat from Docker image removes IPv6 addresses from all interfaces on Debian 8

When I run docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro --privileged --net=host robbertkl/ipv6nat on Debian 8 (Kernel 4.9.0-0.bpo.2-amd64 from jessie-backports, Docker 17.04.0), all inet6 addresses, including the link-local ones, are removed from all interfaces and thus IPv6 communication ceases to work. To recover, the container needs to be stopped and removed and the host needs to be rebooted or have its inet6 addresses manually restored.

If instead I download the release version and run it directly on the host, the above does not happen and the address translation is performed correctly.

iptables v1.6.1: Couldn't load target `DOCKER'

Hello,

i have tried to get this working, but the ipv6nat container is in restarting state with the log message:
2017/06/19 07:21:03 exit status 2: iptables v1.6.1: Couldn't load target `DOCKER':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.

Help would be nice.
What am I doing wrong?

Kind Regards
Sebastian
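For triage: the message comes from IPv4 iptables and means the DOCKER target (chain) is missing from the filter table; one possible cause (an assumption, not confirmed here) is dockerd running with iptables management disabled. Quick checks:

iptables -L DOCKER -n    # the chain should exist when dockerd manages iptables
ps aux | grep dockerd    # look for --iptables=false in the daemon arguments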

not working on custom network

Hi,

To be honest, it's super sad that this even needs to exist, which I can only blame Docker for, as their IPv6 support is so horrendously crappy...

That makes me all the more happy that this project is there to help people like me :)

I followed your guide and a command like this works:
docker run --rm -t busybox ping6 -c 4 google.com

But if i create a custom network:
docker network create --ipv6 --subnet fd00:dead:beef::/48 test

And then run a container in that network:
docker run --network test --rm -t busybox ping6 -c 4 google.com

It doesn't ping.

I might very well be missing something, but I have no clue what that might be.
Or am i trying something that is unsupported?

Best regards,
Mark
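A quick way to see whether ipv6nat picked up the custom network (a sketch; the expected rule shape mirrors the MASQUERADE entries shown elsewhere in this tracker):

docker logs <ipv6nat-container>    # should log rules being added for the new network
ip6tables -t nat -S POSTROUTING    # a MASQUERADE rule for fd00:dead:beef::/48 should appear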

Fixed ipv6 address for outgoing connections

Thank you very much for your work!

Is it possible to define the outbound ipv6 address for a network / container?

The inbound definition works fine, but if my container connects to a remote host, it seems the first IPv6 address found is used to create the connection.
This is unfortunate because, in the case of a mail server, the sending IP might not be validated by the SPF record.

This is solvable by adjusting the SPF record (permitting all assigned IPv6 addresses or the whole subnet), but it would be nice to have control over which IP address is used for outgoing traffic.
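Until this becomes configurable, the usual workaround is an explicit SNAT rule that pins the source address, inserted above the per-network MASQUERADE rule (a sketch; subnet and address are placeholders):

ip6tables -t nat -I POSTROUTING -s fd00:dead:beef::/48 -j SNAT --to-source 2001:db8::25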

Firewall rule issue

First, big thanks for creating this project, it's crazy that it's not built into docker by default.

I'm running Fedora 25, fully patched and up to date. After starting your container, the forwarding rules worked on the local host, but not remotely (even from the same local network). On checking the firewall's FORWARD chain, I found the issue: your Docker rules were added after the default deny, so they were never actually reached:

dperson@tundro$ sudo ip6tables -S FORWARD                                                           
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
-A FORWARD -o docker1 -j DOCKER
-A FORWARD -o docker1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker1 ! -o docker1 -j ACCEPT
-A FORWARD -i docker1 -o docker1 -j ACCEPT

I ran the following 2 commands:

dperson@tundro$ sudo ip6tables -D FORWARD -j REJECT --reject-with icmp6-adm-prohibited              
dperson@tundro$ sudo ip6tables -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited

Then things worked with the following FORWARD chain:

dperson@tundro$ sudo ip6tables -S FORWARD                                                           
-P FORWARD ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -o docker1 -j DOCKER
-A FORWARD -o docker1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker1 ! -o docker1 -j ACCEPT
-A FORWARD -i docker1 -o docker1 -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp6-adm-prohibited

If there are REJECT or DROP rules in the chain before the rules you add, can you have your container automatically -D (delete) and -A (append) them, or insert your rules before them in the first place? Thanks.
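When verifying reorderings like this, listing with rule numbers makes the position of the final REJECT obvious:

sudo ip6tables -L FORWARD -n --line-numbers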
