
ufw-docker's People

Contributors

anuragpeshne, chaifeng, drallgood, erakli, ifurther, kronthto, rklos


ufw-docker's Issues

Can't connect from different docker network to published docker service

I have tested the following:
containera with networka has a webservice running at port 80, which is published to any with ufw-docker.
containerb with networkb should now be able to connect to http://containera:80, as it can without ufw, but unfortunately it doesn't work.
Connecting to the host with port 80 from outside works as expected.

Did I do something wrong or is there something missing?
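A workaround that may apply here (an assumption, not something the project confirms): Docker only provides inter-container name resolution and routing within a shared network, so attaching the client container to the server's network often restores access without publishing anything:

```shell
# Names taken from the report above; adjust to your setup.
docker network connect networka containerb
# From inside containerb the service should then be reachable directly:
#   curl http://containera:80/
```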

Teamspeak doesn't show correct client address.

Hello, I'm hosting a TeamSpeak server using this panel. When I connect to the TeamSpeak server, my client IP is 172.18.0.1 instead of my public IP. Is there a solution to this problem, or could anyone point me in the right direction? I'm by no means an expert, but willing to learn all about new topics.

Output of ifconfig:

pterodactyl0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fdba:17c8:6c94::1011  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::42:32ff:fe43:8a1  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::1  prefixlen 64  scopeid 0x20<link>
        ether 02:42:32:43:08:a1  txqueuelen 0  (Ethernet)
        RX packets 1653  bytes 167191 (163.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1658  bytes 133446 (130.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Added to after.rules

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j RETURN -s 10.0.0.0/8
-A DOCKER-USER -j RETURN -s 172.18.0.0/12
-A DOCKER-USER -j RETURN -s 192.168.0.0/16
-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.18.0.0/12
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.18.0.0/12
-A DOCKER-USER -j RETURN
COMMIT
# END UFW AND DOCKER

Image of teamspeak

Does this work with Docker swarm mode?

If so, how? I have a service with multiple published ports and I have no idea how to configure UFW at this point.

On a single Docker node this little helper works fine, btw. 🙂

after.rules not reloaded unless reboot

Happy new year!

First, thanks for the information. It works great in a swarm cluster.

But I have one small issue: how do I completely disable the firewall after these changes?

I tried ufw disable, but I was still not able to access containers from the public network.

Removing the new stuff added in /etc/ufw/after.rules, followed by ufw reload and ufw disable, did not work either.

The only thing that worked was to remove the new stuff in after.rules, run ufw disable, and then reboot.

Any quick way to turn off these rules without a reboot?

thanks
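One approach that may avoid the reboot (an untested sketch, assuming the rules live in the DOCKER-USER chain as in this project's after.rules block) is to flush that chain in the running kernel after disabling ufw:

```shell
sudo ufw disable                        # stop ufw from reloading the rules
sudo iptables -F DOCKER-USER            # flush the ufw-docker rules in place
sudo iptables -A DOCKER-USER -j RETURN  # restore Docker's default pass-through
# Alternatively, restarting the daemon recreates Docker's default chains:
#   sudo systemctl restart docker
```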

Deny outgoing connections from containers

Thank you for this great repo. I want to prevent containers from connecting to certain IP addresses, i.e. I want ufw deny out to <dest> to stop connections from containers to dest. I tried ufw route deny out on docker0 to <dest>, but it does not work. I also tried changing the interface to eth0 and every other interface I have, but connections are still allowed. These are all the related rules I currently have:

<dest>                DENY OUT    Anywhere
<dest> on docker0     DENY FWD    Anywhere
<dest> on eth0        DENY FWD    Anywhere
<dest> on lo          DENY FWD    Anywhere
<dest> on veth4ca42e2 DENY FWD    Anywhere
<dest> on vethe1dfed2 DENY FWD    Anywhere

This blocks pings from the host, but not from docker containers.
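For reference, a rule sketch that drops container traffic to a destination ahead of ufw's chains (assuming DOCKER-USER is consulted for all forwarded container traffic, as in this project's after.rules; 203.0.113.7 is a placeholder destination):

```shell
# Insert at position 1 so the rule runs before the RETURN rules that
# pass traffic from the private source ranges.
sudo iptables -I DOCKER-USER 1 -d 203.0.113.7 -j DROP
```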

Container with multiple IPs override their own rules instead of adding them

I have a Traefik container that is connected to 3 networks.
ufw-docker correctly detects all 3 IPs associated with the container, but when adding the ufw rules for the 2 ports, the previously set rule gets immediately deleted.
Check image below (note: the command also starts with removing "old" routes, those are just 2 from a prior run of the same command)

image

Publish multiple docker with same port

Hi,
I've followed your instructions and was able to block all the ports except the ones I needed, but now I need to expose a second webserver on a different port. How do I do this?

DNS packets are dropped 50% of the time

After adding those rules I had only a 50% chance of getting DNS answers from an upstream DNS server.

How to reproduce.

  • start a dnsmasq container that forwards all queries to 1.1.1.1 or 8.8.8.8
  • shell into dnsmasq container
  • dig www.google.com @127.0.0.1

You have a 50% chance of getting an answer and a 50% chance of a timeout.

tcpdump on the host shows that the answer from upstream (1.1.1.1 or 8.8.8.8) is correctly received.

Flushing the DOCKER-USER chain makes the problem go away.
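One mitigation sometimes suggested for reply traffic being caught by the UDP DROP rules (an assumption here, not part of this project's documented after.rules) is to pass packets of already-established flows early in the DOCKER-USER block via the conntrack match:

```
-A DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j RETURN
```

Placed near the top of the block, this lets DNS replies through based on connection tracking rather than the source-port-53 heuristic alone.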

rationale for choosing networks

Hi,

-A DOCKER-USER -j RETURN -s 10.0.0.0/8

what is the rationale behind the chosen networks?
This works in most cases, but might it need adaptation in certain setups?

For example I have changed the address-pool in my docker start file like so:

  --default-address-pool=base=10.120.0.0/16,size=24 

So it seems I was just lucky that ufw-docker worked, because I happened to pick a network that is part of the default setup.

It would be nice to mention this somewhere and to have some way to change the networks so that ufw-docker check and ufw-docker install fit an adapted Docker setup.
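For illustration, the whitelist lines in after.rules would need mirroring for a pool outside the RFC 1918 ranges (the pool above, 10.120.0.0/16, is already inside 10.0.0.0/8, so the stock rules happen to cover it). A sketch for a hypothetical 100.64.0.0/10 pool:

```
-A DOCKER-USER -j RETURN -s 100.64.0.0/10
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 100.64.0.0/10
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 100.64.0.0/10
```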

Cannot access service on host from container

Should the following setup work after installing ufw-docker?:

  1. a database service runs on the host
  2. a docker container on the same host needs access to the database (1)

The host is not on a private subnet, but on a public IP. That is why UFW is essential for this host.

After 'ufw disable', access from the container to the host is possible.

After 'ufw enable' I am getting the following lines in syslog when I try to connect from the container to the database on the host:

[UFW BLOCK] IN=docker0 OUT= PHYSIN=vethc149a32 MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=172.17.0.2 DST=yy.yy.yy.yy LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=58585 DF PROTO=TCP SPT=45360 DPT=5432 WINDOW=29200 RES=0x00 SYN URGP=0

yy.yy.yy.yy is the (public) IP of my host

I added the following ufw allow rules, but still cannot connect:

sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
Anywhere                   ALLOW       172.16.0.0/16
yy.yy.yy.yy                ALLOW       172.16.0.0/16
5432                       ALLOW       172.16.0.0/16

5432                       ALLOW FWD   172.16.0.0/16
yy.yy.yy.yy                ALLOW FWD   172.16.0.0/16

Is it possible to somehow prevent the [UFW BLOCK] (see log) from happening?

Thanks,

A question about a Clash container

This solution is very useful, but I also run a Clash container on my Ubuntu machine as a network proxy, and after applying it the proxy stopped working. What is the likely cause, and how can it be fixed? The Clash container uses host network mode, and http_proxy is set.

Add another solution to the Readme.md

I have an additional solution that easily solves the problem without workarounds.
In Docker and docker-compose you can map ports the following way:
127.0.0.1:80:80
That opens the port only to the local machine. It enables a reverse proxy to reach the service without opening the port to everyone.

Maybe you can add this to the Readme.
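The binding described above can be sketched as follows (port numbers are illustrative):

```shell
# Publish container port 80 on the host's loopback interface only; external
# hosts cannot reach it, but a reverse proxy on the same machine can.
docker run -d -p 127.0.0.1:8080:80 nginx
# docker-compose equivalent, under the service's "ports:" key:
#   - "127.0.0.1:8080:80"
```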

Not able to connect to container through WAN

I have installed this amazing project but I don't seem to be able to connect to my container.

The rule is added (port 9999 mapped):

10.0.1.2 80/tcp ALLOW FWD Anywhere # whoami:3f4ca202aaffe2ec4e8c151a4085346a9515e4f808921141f53de17e00d0136a

How can I debug this ?

Containers cannot reach the external IP of their own server

I have manually added the after.rules settings that are listed here, and all containers that have not been exposed via ufw route allow are no longer reachable from external networks. The only container that is routed and reachable from outside is the web proxy, which is what I expected.

One of the internal containers is providing an SSO API endpoint that is proxied and available on an external address https://openid.domain.com/auth/realms/domain.com/.well-known/openid-configuration. This endpoint must be called on the external address to return the proper results, which is also working with the above configuration.

But unfortunately, the other containers are not able to access the external API of the same server, while they can access e.g. google.com. Once they call the API on their own server and use its external address, the requests time out.

Do you have any idea about why this might not work?

service allow from ip

Hi,

Thanks for this amazing project. It works really well in swarm mode!
I was wondering: is it possible to restrict opening a service port to only a specific IP?

eg something like sudo ufw-docker service allow my-service from 123.45.67.89 1234/tcp ?

Moreover, is it possible to open multiple ports at the same time for a service?
Something like sudo ufw-docker service allow my-service 1234:1238/tcp ?
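Whether ufw-docker's service subcommand supports source restrictions is not confirmed here, but plain ufw route rules can express both ideas; a sketch (the source IP and ports are from the question, the container IP 172.17.0.2 is a placeholder for your service's address):

```shell
# Allow only 123.45.67.89 to reach the service's container port.
sudo ufw route allow proto tcp from 123.45.67.89 to 172.17.0.2 port 1234
# A port range uses ufw's colon syntax (a protocol must be given for ranges):
sudo ufw route allow proto tcp from any to 172.17.0.2 port 1234:1238
```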

Not working from container to host

I'm trying to access a service running on the host from a container.
I'm testing with nc -l 9900 on the host, and nc 172.18.0.1 9900 in the container.
It works with ufw disabled.
It doesn't work with ufw enabled. Here is the DOCKER-USER chain:

$ sudo iptables -L DOCKER-USER
Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  10.0.0.0/8           anywhere
RETURN     all  --  172.16.0.0/12        anywhere
RETURN     all  --  192.168.0.0/16       anywhere
RETURN     udp  --  anywhere             anywhere             udp spt:domain dpts:1024:65535
ufw-user-forward  all  --  anywhere             anywhere
DROP       tcp  --  anywhere             192.168.0.0/16       tcp flags:FIN,SYN,RST,ACK/SYN
DROP       tcp  --  anywhere             10.0.0.0/8           tcp flags:FIN,SYN,RST,ACK/SYN
DROP       tcp  --  anywhere             172.16.0.0/12        tcp flags:FIN,SYN,RST,ACK/SYN
DROP       udp  --  anywhere             192.168.0.0/16       udp dpts:0:32767
DROP       udp  --  anywhere             10.0.0.0/8           udp dpts:0:32767
DROP       udp  --  anywhere             172.16.0.0/12        udp dpts:0:32767
RETURN     all  --  anywhere             anywhere

That looks to me like it should work. Which means the RETURN lines aren't matching for some reason? If I look through all of iptables -L, I don't see 172.16 mentioned anywhere else.

Ubuntu 18.04.3.
Docker 18.09.7

Any ideas?

Constant literal 172.16.0.x / docker0 network --> auto-detect

On my various Ubuntu 20.04 systems, the default docker0 network seems to be 172.17.0.1/16, while this script is hard-coded to 172.16.0.0/12

I've seen other sources suggest finding this via:

ip addr show docker0

Or grabbing the host address via:

HOST_IP=$(ip addr show docker0 | grep "inet " | awk '{print $2}' | awk -F/ '{print $1}')
# 172.17.0.1

Doesn't work for me on 18.04

Ubuntu 18.04 stock docker.io package, LXD and libvirt installed and working.

I tried adding the rules manually AND with the ufw-docker script.

Doing iptables -I FORWARD -i br0 -o br0 -j ACCEPT is enough to get things working properly

I rebooted and tried to clear all iptables rules prior to doing it:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

/etc/netplan/01-netcfg.yaml

# This file describes the network interfaces available on your system
# For more information, see netplan(5).
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: yes
      dhcp6: no
  bridges:
    br0:
      dhcp4: no
      dhcp6: no
      addresses:
        - 10.0.14.2/24
      gateway4: 10.0.14.1
      nameservers:
        addresses: 
        - 10.0.14.6
      interfaces:
        - eno1

# LANG=C ufw-docker/ufw-docker check

########## iptables -n -L DOCKER-USER ##########
Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  10.0.0.0/8           0.0.0.0/0           
RETURN     all  --  172.16.0.0/12        0.0.0.0/0           
RETURN     all  --  192.168.0.0/16       0.0.0.0/0           
RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:53 dpts:1024:65535
ufw-user-forward  all  --  0.0.0.0/0            0.0.0.0/0           
DROP       tcp  --  0.0.0.0/0            192.168.0.0/16       tcp flags:0x17/0x02
DROP       tcp  --  0.0.0.0/0            10.0.0.0/8           tcp flags:0x17/0x02
DROP       tcp  --  0.0.0.0/0            172.16.0.0/12        tcp flags:0x17/0x02
DROP       udp  --  0.0.0.0/0            192.168.0.0/16       udp dpts:0:32767
DROP       udp  --  0.0.0.0/0            10.0.0.0/8           udp dpts:0:32767
DROP       udp  --  0.0.0.0/0            172.16.0.0/12        udp dpts:0:32767
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           


########## diff /etc/ufw/after.rules ##########

Check done.

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:00:19:7e:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe19:7e6c/64 scope link 
       valid_lft forever preferred_lft forever
3: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master br0 state DOWN group default qlen 1000
    link/ether 04:92:26:b7:b2:ef brd ff:ff:ff:ff:ff:ff
4: enp8s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 04:92:26:b7:b2:f0 brd ff:ff:ff:ff:ff:ff
5: enp9s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 04:92:26:b7:b2:f1 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:ce:8a:44:bf:9d brd ff:ff:ff:ff:ff:ff
    inet 10.0.14.2/24 brd 10.0.14.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cce:8aff:fe44:bf9d/64 scope link 
       valid_lft forever preferred_lft forever
7: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:01:d8:d0:9a:3d brd ff:ff:ff:ff:ff:ff
    inet 10.158.192.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::7ce8:48ff:fe91:c23d/64 scope link 
       valid_lft forever preferred_lft forever
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:e9:3f:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:e9:3f:be brd ff:ff:ff:ff:ff:ff
11: vethXTE297@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:01:d8:d0:9a:3d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc01:d8ff:fed0:9a3d/64 scope link 
       valid_lft forever preferred_lft forever
13: vethSNA55U@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:23:4e:45:ee:31 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::fc23:4eff:fe45:ee31/64 scope link 
       valid_lft forever preferred_lft forever
15: vethDL9SLE@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:7b:b8:c7:8e:5d brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fc7b:b8ff:fec7:8e5d/64 scope link 
       valid_lft forever preferred_lft forever
17: vethXWYAMQ@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether fe:e8:ca:b2:be:0a brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::fce8:caff:feb2:be0a/64 scope link 
       valid_lft forever preferred_lft forever
18: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:a0:8d:c4 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fea0:8dc4/64 scope link 
       valid_lft forever preferred_lft forever
20: veth487J41@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:c0:f1:e1:dc:35 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::fcc0:f1ff:fee1:dc35/64 scope link 
       valid_lft forever preferred_lft forever
22: vethXKT7BH@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether fe:d6:0c:c1:99:65 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::fcd6:cff:fec1:9965/64 scope link 
       valid_lft forever preferred_lft forever
24: vethVUQQ2D@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether fe:22:de:ca:42:39 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::fc22:deff:feca:4239/64 scope link 
       valid_lft forever preferred_lft forever
25: macvtap0@enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 500
    link/ether 52:54:00:19:7e:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe19:7e6c/64 scope link 
       valid_lft forever preferred_lft forever
26: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:db:12:2d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fedb:122d/64 scope link 
       valid_lft forever preferred_lft forever
27: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:40:61:be:de brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:40ff:fe61:bede/64 scope link 
       valid_lft forever preferred_lft forever
29: vethd6281d7@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 7e:ef:8b:f8:66:94 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::7cef:8bff:fef8:6694/64 scope link 
       valid_lft forever preferred_lft forever

Doesn't work with IPV6

So I took a shot at trying to get this to work with IPv6 addresses, with no luck. If I disable the ufw service I'm able to access my site using the IPv6 address, but when I enable ufw the connection times out. I found /etc/ufw/after6.rules and tried to modify it to work, but I must be doing something wrong. My guess is it's something with the IPv6 subnet.

# BEGIN UFW AND DOCKER
*filter
:ufw6-user-forward - [0:0]
:DOCKER-USER - [0:0]
-A DOCKER-USER -j RETURN -s fe80::/10
-A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN

-A DOCKER-USER -j ufw6-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d fe80::/10
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d fe80::/10

-A DOCKER-USER -j RETURN
COMMIT
# END UFW AND DOCKER

Here is my ifconfig:

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::1  prefixlen 64  scopeid 0x20<link>
        ether 02:42:0e:75:6d:9a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ufw status

Status: active
Logging: off
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)

80/tcp                     ALLOW FWD   Anywhere
443/tcp                    ALLOW FWD   Anywhere
80/tcp (v6)                ALLOW FWD   Anywhere (v6)
443/tcp (v6)               ALLOW FWD   Anywhere (v6)

Feature request: Automate ufw rules with docker engine?

Hi, I've been a huge fan of this script for a while now. One thing that bugs me is that it's pretty manual, and it could be improved.
For example, the nginx-proxy software runs as a regular Docker container and updates nginx rules as needed based on other containers' ENVs. It solves the manual rule update issue and is less error-prone.
I think it may be possible to follow a similar approach. For example, a ufw-docker-agent would run as a regular Docker container on the host, connected to the Docker socket. It would update ufw rules based on new containers' ENVs. Containers would run with special ENVs if we want to expose them to the public internet, like: docker run -d -e UFW_ALLOW_PORT=80 nginx

Another example is Traefik. Traefik runs as a regular container on the host, connected to the Docker socket. It scans new containers for special labels and updates its rules based on them.

Does anyone think such implementation is possible?

EDIT: Okay, I hacked together some rough code and got it working; here is the repo. All you have to do is run the container with the special label UFW_MANAGED=TRUE. Please let me know your feedback. The code is rough, but hopefully it works.
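The agent idea above can be sketched as a small shell loop around docker events; the JSON handling below is illustrative only, and the UFW_MANAGED / UFW_ALLOW_PORT names are the hypothetical convention proposed in this issue:

```shell
# The real loop would look like:
#   docker events --filter 'event=start' --format '{{json .}}' |
#   while read -r ev; do ... done
# Here we process one sample event line to show the label check:
ev='{"Actor":{"Attributes":{"name":"web","UFW_MANAGED":"TRUE","UFW_ALLOW_PORT":"80"}}}'
if echo "$ev" | grep -q '"UFW_MANAGED":"TRUE"'; then
  name=$(echo "$ev" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
  port=$(echo "$ev" | sed -n 's/.*"UFW_ALLOW_PORT":"\([^"]*\)".*/\1/p')
  echo "ufw-docker allow $name $port"   # the agent would execute this command
fi
```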

iptables migration to nftables on debian

Recently Debian switched to nftables by default (buster/unstable).

Debian uses the built-in alternatives system to provide the iptables command via either iptables-nft or iptables-legacy.

Upstream docker/libnetwork has incorporated this by updating libnetwork to use 'iptables-legacy' if available moby/libnetwork#2285

I updated ufw-docker to use iptables-legacy and it seems to work. Otherwise it would not detect the DOCKER related chains as they'd be hidden in iptables-nft.

More on the Docker story regarding iptables/nftables can be found in this issue: moby/moby#26824. It seems distros are slowly picking up nftables, causing Docker some trouble.
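Which backend the iptables command uses can be read from its version string; the sample string below stands in for real iptables -V output, and the switch commands are Debian's standard alternatives mechanism:

```shell
# iptables -V prints e.g. "iptables v1.8.2 (nf_tables)" or "... (legacy)".
version_line="iptables v1.8.2 (nf_tables)"   # sample; really: version_line="$(iptables -V)"
case "$version_line" in
  *nf_tables*) backend=nft ;;
  *legacy*)    backend=legacy ;;
  *)           backend=unknown ;;   # iptables < 1.8 prints no backend at all
esac
echo "$backend"
# To switch Debian to the legacy backend, so ufw-docker sees the DOCKER chains:
#   sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
#   sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```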

Docker can access internet after: ufw default deny outgoing

I am trying to create a jailed machine to prevent the Docker containers running there from accessing the internet, other than whitelisted hosts.

To that end I tried to use
ufw default deny outgoing

I found this bug report #12 and applied the first option described there, but it still does not prevent outgoing connections.

I tested this by running (with the image downloaded earlier):
docker run -it debian:buster bash
$ apt-get update

and the container could still access the apt servers.

Is there any solution to fix this using ufw-docker or do I have to bite the bullet and use raw iptables?

ufw-docker status error!

Hi friends:

I got the following error after installing ufw-docker and running ufw-docker status:

/usr/local/bin/ufw-docker: line 276: files_to_be_deleted[@]: unbound variable

My linux version is: Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-100-generic x86_64)
And the bash version: GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)

Any comment would be appreciated. Thanks!
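For context on this class of error (an assumption about the cause, not a confirmed fix for the script): on Bash 4.3 and older, expanding an empty array with [@] under set -u raises "unbound variable"; Bash 4.4 changed this. A defensive expansion avoids the abort on all versions:

```shell
set -u
files_to_be_deleted=()   # empty array, as when nothing needs deleting

# On bash <= 4.3, "${files_to_be_deleted[@]}" here would abort with
# "unbound variable". The :- default makes the expansion safe everywhere;
# it yields one empty word for an empty array, which the -n test skips.
count=0
for f in "${files_to_be_deleted[@]:-}"; do
  [ -n "$f" ] && count=$((count + 1))
done
echo "$count"
```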

How should nmap look?

If I default deny using ufw and then just allow ports 443 & 80, for example, shouldn't all other ports show as closed if I nmap externally? I've used your exact configuration with Docker & ufw, but I'm not sure how this is supposed to behave, or whether my services are really being protected by ufw. I just used the typical "ufw default deny"; perhaps that must be changed in this configuration?
Thanks.

Examples and documentation are not comprehensive

The examples and documentation are not comprehensive and rely on analogies to the main ufw command.
Also, the documentation completely ignores docker-compose and how to work with it.

Doesn't work with other locales

The script won't work with e.g. a German locale. This is because ufw status prints e.g. Status: Aktiv instead of Status: active, which is what the script expects. Not sure if this is the only place where the locale matters right now.
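A robust pattern (assuming the script greps ufw's status line, as described above) is to force the C locale when invoking ufw, e.g. LC_ALL=C ufw status, so the output is always English. The strings below simulate ufw's output in the two locales:

```shell
# Simulated first line of 'ufw status' under two locales (illustration only):
status_c="Status: active"    # output with: LC_ALL=C ufw status
status_de="Status: Aktiv"    # output under a German locale

matches() { echo "$1" | grep -q '^Status: active'; }
matches "$status_de" && echo de-match || echo de-miss   # locale-sensitive parse fails
matches "$status_c"  && echo c-match  || echo c-miss    # C-locale output parses fine
```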

Is there an elegant way to use this with docker-compose?

Hello, and thank you for the good library.
I solved the problem of a Docker container (run as a docker-compose service) accessing the local services exposed on the host server by using the following command:
sudo ufw allow in on br-122345655 from {docker_sub_net_cidr} to {host_server_ip} port {service_port}
But I wonder if there is a more elegant way to achieve the same result.
ufw was blocking the port before I added this rule.

Docker inside docker

Hello, I have installed Docker inside Docker.
Let's call the outer Docker a and the inner Docker b.
Ports published by Docker b are open despite the ufw and ufw-docker settings in a.
Docker a publishes ports 1031 and 1032 (in its container creation command), and Docker b publishes port 1032 (in its container creation command). Even though a has ufw incoming deny and ufw-docker installed, port 1031 can be accessed despite never being allowed.

containers are still accessible publicly

I have run ufw-docker install, but containers are still accessible publicly without opening any ports via ufw.
I guess the issue is that I have two NICs, one public and one local. I added the public IP to after.rules, but that did not fix anything.
I tried resetting ufw and rebooting after adding the rules, but the containers are still accessible.

Anyone got any idea on what the issue might be?

########## iptables -n -L DOCKER-USER ##########
Chain DOCKER-USER (0 references)
target     prot opt source               destination         
ufw-user-forward  all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  10.0.0.0/8           0.0.0.0/0           
RETURN     all  --  172.16.0.0/12        0.0.0.0/0           
RETURN     all  --  192.168.0.0/16       0.0.0.0/0           
RETURN     udp  --  0.0.0.0/0            0.0.0.0/0            udp spt:53 dpts:1024:65535
ufw-docker-logging-deny  tcp  --  0.0.0.0/0            192.168.0.0/16       tcp flags:0x17/0x02
ufw-docker-logging-deny  tcp  --  0.0.0.0/0            10.0.0.0/8           tcp flags:0x17/0x02
ufw-docker-logging-deny  tcp  --  0.0.0.0/0            172.16.0.0/12        tcp flags:0x17/0x02
ufw-docker-logging-deny  tcp  --  0.0.0.0/0            XX.XXX.XXX.X         tcp flags:0x17/0x02
ufw-docker-logging-deny  udp  --  0.0.0.0/0            192.168.0.0/16       udp dpts:0:32767
ufw-docker-logging-deny  udp  --  0.0.0.0/0            10.0.0.0/8           udp dpts:0:32767
ufw-docker-logging-deny  udp  --  0.0.0.0/0            172.16.0.0/12        udp dpts:0:32767
ufw-docker-logging-deny  udp  --  0.0.0.0/0            XX.XXX.XXX.X         udp dpts:0:32767
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           


########## diff /etc/ufw/after.rules ##########
--- /etc/ufw/after.rules	2021-01-22 22:03:42.366124108 +0100
+++ /tmp/tmp.VUESxreQu9	2021-01-23 23:44:16.423945619 +0100
@@ -44,11 +44,9 @@
 -A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
 -A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
 -A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
--A DOCKER-USER -j ufw-docker-logging-deny -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d XX.XXX.XXX.X
 -A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
 -A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
 -A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d 172.16.0.0/12
--A DOCKER-USER -j ufw-docker-logging-deny -p udp -m udp --dport 0:32767 -d XX.XXX.XXX.X
 
 -A DOCKER-USER -j RETURN
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 1001                       ALLOW IN    Anywhere                   # ssh
[ 2] 192.168.0.3 4695 on enp3s0 ALLOW FWD   Anywhere                   (out) # vnc
[ 3] Samba on enp3s0            ALLOW IN    Anywhere                   # smb
[ 4] 172.20.0.3 443/tcp         ALLOW FWD   Anywhere                   # allow traefik 443/tcp
[ 5] 172.20.0.3 80/tcp          ALLOW FWD   Anywhere                   # allow traefik 80/tcp
[ 6] 172.20.0.16 49161/tcp      ALLOW FWD   Anywhere                   # allow rtorrent 49161/tcp
[ 7] 172.20.0.16 49161/udp      ALLOW FWD   Anywhere                   # allow rtorrent 49161/udp
[ 8] 172.20.0.6 32400/tcp       ALLOW FWD   Anywhere                   # allow plex 32400/tcp

[Question] Exposing host ports

First of all, thank you for creating this guide. Running into this problem and trying to find a good solution is very challenging, as many other sources say to disable Docker's iptables support, which doesn't seem like a good solution. Having a good explanation of what you're trying to achieve and how it works is very helpful.

I want to set up my server so that when I run ufw allow 80, that opens the host's port 80, not the container's. Reading the "The reason for choosing ufw-user-forward, not ufw-user-input" section, it seems like I could use ufw-user-input instead to achieve this, but I'm not sure if that is correct.

My question: if I use ufw-user-input, will that mean that when I run a command like ufw allow 80, it allows external connections to the service running on port 80 on the host, and not to any containers running on 80?

Edit

For clarity, I have my container set up using -p 8080:80, and I have Nginx running on the host forwarding external requests on port 80 to port 8080. I don't want anyone external to be able to access 8080 directly, only port 80.

ERROR: UFW is disabled or you are not root user

Even though everything is running and enabled, I keep receiving this error, so the script is not usable:
● ufw.service - Uncomplicated firewall
Loaded: loaded (/lib/systemd/system/ufw.service; enabled; vendor preset: enabled)
Active: active (exited) since Sun 2020-01-05 13:56:12 CET; 6min ago
Docs: man:ufw(8)
Process: 9017 ExecStart=/lib/ufw/ufw-init start quiet (code=exited, status=0/SUCCESS)
Main PID: 9017 (code=exited, status=0/SUCCESS)

gen 05 13:56:12 CORNERWS systemd[1]: Starting Uncomplicated firewall...
gen 05 13:56:12 CORNERWS ufw-init[9017]: Firewall already started, use 'force-reload'
gen 05 13:56:12 CORNERWS systemd[1]: Started Uncomplicated firewall.

Docker default IP

Hello,

the how-to didn't work for me because adding 172.16.0.0/12 to /etc/ufw/after.rules didn't allow the containers to communicate with each other.

Adding 172.17.0.0/12 to after.rules instead of 172.16.0.0/12 worked for me.

Am I missing something, or why is everyone using 172.16.0.0/12?

Best regards
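
For what it's worth, 172.16.0.0/12 is the RFC 1918 private block spanning 172.16.0.0 through 172.31.255.255, so it already covers Docker's default bridge (172.17.0.0/16) as well as the first user-defined networks (172.18.x.x, 172.19.x.x, …). Note also that "172.17.0.0/12" is not aligned to a /12 boundary: masking off the host bits yields exactly the same network, 172.16.0.0/12, which may be why both spellings appear to behave identically. A quick check with Python's standard `ipaddress` module (used here only for illustration, since ufw itself needs root):

```python
# Verify that Docker's default networks fall inside 172.16.0.0/12,
# the RFC 1918 block used in the after.rules snippet.
from ipaddress import ip_address, ip_network

rfc1918 = ip_network("172.16.0.0/12")
print(rfc1918)                                    # 172.16.0.0/12
print(ip_address("172.17.0.1") in rfc1918)        # True  (default bridge)
print(ip_address("172.18.0.1") in rfc1918)        # True  (first user-defined network)

# "172.17.0.0/12" has host bits set; normalizing it gives the same /12.
print(ip_network("172.17.0.0/12", strict=False))  # 172.16.0.0/12
```

If containers still cannot reach each other with the /12 rule in place, the cause is more likely a rule-ordering problem in after.rules than the prefix itself.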

ufw/iptables and containers connected to multiple networks

This is more of a question rather than a specific bug or issue with the ufw-docker script, but I'm hoping this can be solved using ufw-docker.

When I want to expose a port from a container connected to multiple networks, the iptables rules created by docker don't seem to update correctly.

For example, here are the network settings for a test container connected to a default network created by compose (compose_default), and also manually connected to a network called frontend:

"NetworkSettings": {
            "Bridge": "",
            "SandboxID": "07088f91dd0c58e26a33389ca25903516e34552e1fb9f441ac3e89b342920d02",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5000/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "5000"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "5000"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/07088f91dd0c",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "compose_default": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "test",
                        "eb2326805f45"
                    ],
                    "NetworkID": "21ecdc8b989d86537fd4b87e09b4ebab4041de60261d83b4f4c64541e0a6a919",
                    "EndpointID": "a19da7d18d5ed659b2df167245f324cbc333ebd92c132fc91a0e224914b44dc9",
                    "Gateway": "172.18.0.1",
                    "IPAddress": "172.18.0.47",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ####,
                    "DriverOpts": null
                },
                "frontend": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "eb2326805f45"
                    ],
                    "NetworkID": "dda768c0a205587c3da94511181c5c106c8ce7f5c3b2ed5b27f8f493825c53ca",
                    "EndpointID": "4d114c0866ac97c0b034f9cfdaca9ea917cc2cd5ba20167dee93f2add14ef3d8",
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ####,
                    "DriverOpts": {}
                }
            }
        }
    }

I have already modified the existing ufw-docker script to be able to specify which network to grab the IP address from. So I can add a rule with sudo ufw-docker allow test 5000/tcp frontend (note the network specified as the last argument), resulting in the following rule being added:

ufw route allow proto tcp from any to 172.19.0.2 port 5000 comment allow test 5000/tcp

However, while I initially assumed this would open up port 5000 for external connections (as it would normally), this is not the case:

Jul  9 11:22:19 #### kernel: [257879.761018] [UFW DOCKER BLOCK] IN=eno1 OUT=br-21ecdc8b989d MAC=##### SRC=##### DST=172.18.0.47 LEN=52 TOS=0x00 PREC=0x00 TTL=118 ID=26924 DF PROTO=TCP SPT=28173 DPT=5000 WINDOW=64240 RES=0x00 SYN URGP=0
Jul  9 11:22:20 #### kernel: [257880.756708] [UFW DOCKER BLOCK] IN=eno1 OUT=br-21ecdc8b989d MAC=##### SRC=##### DST=172.18.0.47 LEN=52 TOS=0x00 PREC=0x00 TTL=118 ID=26925 DF PROTO=TCP SPT=28173 DPT=5000 WINDOW=64240 RES=0x00 SYN URGP=0
Jul  9 11:22:22 #### kernel: [257882.755631] [UFW DOCKER BLOCK] IN=eno1 OUT=br-21ecdc8b989d MAC=##### SRC=##### DST=172.18.0.47 LEN=52 TOS=0x00 PREC=0x00 TTL=118 ID=26928 DF PROTO=TCP SPT=28173 DPT=5000 WINDOW=64240 RES=0x00 SYN URGP=0
Jul  9 11:22:26 #### kernel: [257886.761023] [UFW DOCKER BLOCK] IN=eno1 OUT=br-21ecdc8b989d MAC=##### SRC=##### DST=172.18.0.47 LEN=52 TOS=0x00 PREC=0x00 TTL=118 ID=26935 DF PROTO=TCP SPT=28173 DPT=5000 WINDOW=64240 RES=0x00 SYN URGP=0

This is because for whatever reason, the packets are being directed to the container's ip within the compose_default network, which are then obviously dropped because no rules were added for that IP. I believe the issue lies not necessarily within ufw-docker, but rather docker's iptables rules itself. Looking at the DOCKER chain, there are only ACCEPT rules for the compose_default network IP, not for the frontend network:

Chain DOCKER (9 references)
pkts bytes target     prot opt in     out     source               destination         
.....
0     0 ACCEPT     tcp  --  !br-21ecdc8b989d br-21ecdc8b989d  0.0.0.0/0            172.18.0.47          tcp dpt:5000
....

So what should be the way to go here? Should I add a ufw rule for each network the container is connected to? Or is there a way to tell Docker which network it should create its iptables rules for? Or is there a way within ufw-docker to do this?
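
The per-network lookup described above can be sketched as a small JSON walk over the "Networks" section of the `docker inspect` output shown earlier. This is a minimal illustration, not code from ufw-docker itself; the helper name `container_ip` and the inline sample data are hypothetical:

```python
# A minimal sketch: given the "Networks" section of `docker inspect`
# output, return the container's IP on one named network so a
# `ufw route allow` rule can target that address directly.
import json

def container_ip(inspect_networks: dict, network: str) -> str:
    try:
        return inspect_networks[network]["IPAddress"]
    except KeyError:
        raise KeyError(f"container is not attached to network {network!r}")

# Trimmed-down sample matching the inspect output above.
networks = json.loads("""{
    "compose_default": {"IPAddress": "172.18.0.47"},
    "frontend":        {"IPAddress": "172.19.0.2"}
}""")

print(container_ip(networks, "frontend"))         # 172.19.0.2
print(container_ip(networks, "compose_default"))  # 172.18.0.47
```

Since Docker only installs its DNAT/ACCEPT rules for the network the published port is bound to (compose_default here), a forward rule added for the frontend address alone will never see the externally routed packets; adding a rule per attached network, or publishing the port on the intended network, are the two obvious directions to explore.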

How to block outgoing traffic from container

I have a container which is listening on port 80, but I don't want the container to be able to establish outbound connections to this port, or any other port for that matter. Is this the right place to ask this question? Sorry if it is not; I'm rather new to Docker, UFW, and TCP security.

Logging of forwarded connections

Hi,

when adding the "workaround" for Docker from your manual, I was unable to get any logging of what was going wrong there.

I've replaced it like this:

# BEGIN UFW AND DOCKER
*filter
:ufw-user-forward - [0:0]
:ufw-logging-route-deny - [0:0]
:DOCKER-USER - [0:0]
:DOCKER-USER-DENY - [0:0]
-A DOCKER-USER -s 172.16.0.0/12 -j RETURN
...
-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12 -j ufw-logging-route-deny
-A DOCKER-USER -p udp -m udp --dport 0:32767 -d 172.16.0.0/12 -j ufw-logging-route-deny

-A DOCKER-USER -j RETURN

-A ufw-logging-route-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW ROUTE BLOCK] "
-A ufw-logging-route-deny -j DROP

COMMIT
# END UFW AND DOCKER

(and a similar one for ipv6)

With this change I get

[UFW ROUTE BLOCK] IN=enp0s3 OUT=br-...

messages whenever someone tries to access a "dockerized" port for which I forgot to add a proper ufw route allow rule, instead of the attempt being silently dropped.

Perhaps this is helpful for someone, or could even be integrated into the README?

Allow traffic on specific subnet/interface

Hello!

I'm a user of Tailscale, and I need to connect to my container only from within the Tailscale network. I usually achieve this by only allowing connections to the tailscale0 which is the interface it uses.

This is my goto command to achieve this
sudo ufw allow in on tailscale0 to any port 9003

I don't use it, but Tailscale works with IPs from 100.64.0.0/10 subnet[1], so this would also work
sudo ufw allow from 100.64.0.0/10 to any port 9003

How could I achieve this with ufw-docker?
Thank you.

[1] What are these 100.x.y.z addresses?
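
One note on the subnet approach: Tailscale assigns addresses from the CGNAT range 100.64.0.0/10 (RFC 6598), which spans 100.64.0.0 through 100.127.255.255. A quick containment check, for illustration only:

```python
# Tailscale hands out addresses from the CGNAT range 100.64.0.0/10
# (RFC 6598).  Check which 100.x.y.z addresses actually fall inside it.
from ipaddress import ip_address, ip_network

cgnat = ip_network("100.64.0.0/10")
print(cgnat.network_address, "-", cgnat.broadcast_address)
# 100.64.0.0 - 100.127.255.255
print(ip_address("100.100.7.42") in cgnat)    # True
print(ip_address("100.63.255.255") in cgnat)  # False (just below the range)
```

Since ufw's route syntax accepts a source restriction, a rule along the lines of `ufw route allow proto tcp from 100.64.0.0/10 to <container-ip> port 9003` should limit forwarded access to Tailscale peers; whether the ufw-docker wrapper can pass a source through, or whether the route rule must be added manually, is worth checking against the README.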

Block specific ip from accessing port 443

I have a service on port 443.
How can I block a specific IP from accessing port 443?

I tried this:

ufw insert 1 deny from 89.179.6.129 to any port 443
iptables -A INPUT -s 89.179.6.129 -p tcp --destination-port 443 -j DROP

the user can still access the service exposed with:

ufw route allow proto tcp from any to any port 443

Thanks for the help!

Changing subnet doesn't "take"

My Docker interface sits at 172.18.0.1, not 172.16.0.1, but I didn't notice this until after running ufw-docker and seeing it not work. If I edit /etc/ufw/after.rules and run ufw reload, I still get this from iptables -L DOCKER-USER:

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  10.0.0.0/8           anywhere            
RETURN     all  --  172.16.0.0/12        anywhere            
RETURN     all  --  192.168.0.0/16       anywhere            
RETURN     udp  --  anywhere             anywhere             udp spt:domain dpts:1024:65535
ufw-user-forward  all  --  anywhere             anywhere            
DROP       tcp  --  anywhere             192.168.0.0/16       tcp flags:FIN,SYN,RST,ACK/SYN
DROP       tcp  --  anywhere             10.0.0.0/8           tcp flags:FIN,SYN,RST,ACK/SYN
DROP       tcp  --  anywhere             172.16.0.0/12        tcp flags:FIN,SYN,RST,ACK/SYN
DROP       udp  --  anywhere             192.168.0.0/16       udp dpts:0:32767
DROP       udp  --  anywhere             10.0.0.0/8           udp dpts:0:32767
DROP       udp  --  anywhere             172.16.0.0/12        udp dpts:0:32767
RETURN     all  --  anywhere             anywhere            

I saw your note in README.md about this, and rebooted... same thing.

Any ideas why? I'm wondering if it has something to do with the netmask being 12 bits instead of 16 (and therefore not covering 172.18?). This is probably more of a ufw/iptables issue, but I'd appreciate any insights.
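
Assuming the concern is whether the /12 reaches a bridge at 172.18.0.1: it does. A 12-bit mask covers 172.16.0.0 through 172.31.255.255, so 172.18.0.0/16 is a subnet of it and the shipped rules need no editing for this address. A quick check with Python's `ipaddress` module:

```python
# Confirm that a Docker bridge at 172.18.0.1 is already covered by the
# default 172.16.0.0/12 rules in after.rules.
from ipaddress import ip_address, ip_network

block = ip_network("172.16.0.0/12")
print(block.network_address, "-", block.broadcast_address)
# 172.16.0.0 - 172.31.255.255
print(ip_address("172.18.0.1") in block)             # True
print(ip_network("172.18.0.0/16").subnet_of(block))  # True
```

So the DOCKER-USER output above is in fact the expected result: the 172.16.0.0/12 lines already match the 172.18 bridge, which would explain why editing the subnet appears not to "take".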

IP addresses in ufw after.rules

In the ufw after.rules you use three specific IP ranges. Are these global for Docker, or do I need to change them to fit my setup?
At the moment, I'm on a VPS that has a direct public IP on eth0. Are any modifications required if I'm not on a local network behind a router?

Thanks

UFW is disabled or you are not root user

Hi, in a test server with Ubuntu 18.04 I've tried to use your tool and I've got this error:

root@stage:/opt/ufw-docker/ufw-docker# ufw-docker install
ERROR: UFW is disabled or you are not root user.

Ufw is enabled with these rules:
Anywhere ALLOW 79.8.127.0
Anywhere ALLOW 192.168.169.0/24

Could you help me?
Thanks

Does not work after reboot ?

Hi @chaifeng and thank you for your work.
The trick seems to work until you reboot the server.
I applied the modifications to after.rules, enabled the firewall, and limited access to an httpd container (added ufw route allow 80 plus SSH) => no problems here.
However, once I reboot the server, despite ufw still being active (systemctl shows Active: active (exited) and ufw Status: active), the container is exposed to public traffic.
Host ports are still protected.

any ideas ?
docker@ 19.03.3
ufw @ 0.36

Container restart requires new rules

Restarting a container (system reboot, docker restart, etc.) causes containers to start with different IPs, so the rules stop working. This means that if a service fails and has to restart, unsupervised, this will cause downtime.

The point of containers is being ephemeral, so hopefully there's a workaround?

Allow mapped port

My service has container port 5566 mapped to host port 22 (-p 22:5566). How do I go about allowing port 22 through the firewall?
