Implements

The image sets up the following on a Debian Stretch Docker image:

- Libreswan IPsec L2TP and XAuth server
- Libreswan IPsec payment gateway (PGW) client
- OpenVPN server
The image is adapted from:

- https://www.github.com/adamwalach/docker-openvpn
- https://github.com/hwdsl2/docker-ipsec-vpn-server/blob/master/Dockerfile

The built image is published at https://hub.docker.com/r/kgathi2/ipsec-ovpn-server.
Configuration is done through:

- docker-compose.yaml
- vpn.env file for environment variables
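For orientation, here is a minimal docker-compose.yaml sketch. It assumes the published image, the standard IPsec/OpenVPN ports, and the /lib/modules mount discussed in the logging section below; the exact ports, volumes, and privileges depend on your deployment.

```yaml
# Illustrative compose file -- adjust ports/volumes/privileges to your setup
version: '3'
services:
  vpn:
    image: kgathi2/ipsec-ovpn-server
    env_file:
      - ./vpn.env
    privileged: true            # IPsec typically needs extended privileges (assumption)
    ports:
      - "500:500/udp"           # IKE
      - "4500:4500/udp"         # IPsec NAT traversal
      - "1194:1194/udp"         # OpenVPN default port (assumption)
    volumes:
      - /lib/modules:/lib/modules:ro   # NFLOG kernel modules (see logging section)
```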
This server/node will be running OpenVPN as well as Libreswan. Below are the requirements:

- CPU requirements are high due to encryption and decryption. As a rule of thumb, on a modern CPU with AES-NI support you need about 20 MHz of CPU for every megabit per second of traffic (in one direction); a 100 Mbit/s tunnel therefore needs roughly 2 GHz.
- Memory depends on the number of connected devices. Start at 1 GB; it could go lower.
- Bandwidth requirements are entirely dependent on how much data you wish to push through the VPN tunnels in total.
- Disk requirements are fairly low. A minimal Linux installation can fit in as little as 2 GB.
You may generate client certificates by running the following in the container:

```bash
CLIENT_CERT=vpn-client-01

# Build the client certificate
easyrsa build-client-full $CLIENT_CERT nopass

# Get a combined certificate printed on stdout
ovpn_getclient $CLIENT_CERT combined

# Get a combined certificate saved on the server
ovpn_getclient $CLIENT_CERT combined-save

# Get the certificate, key, and config saved as separate files on the server
ovpn_getclient $CLIENT_CERT separated
```
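On the client machine, the retrieved profile can then be used directly. A minimal usage sketch, assuming the combined output was saved as vpn-client-01.ovpn:

```bash
# Start the OpenVPN client with the generated profile
openvpn --config vpn-client-01.ovpn
```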
```bash
# Specify a static IP for an OVPN client via its ccd file
CLIENT_CERT=vpn-client-01
CLIENT_STATIC_IP=173.12.2.90

cat > ${OPENVPN}/ccd/${CLIENT_CERT} <<EOF
ifconfig-push $CLIENT_STATIC_IP $OVPN_CONF_IFCONFIG_INET
EOF
```
On your local Docker/kubectl host, copy the files if you selected separated or combined-save:

```bash
kubectl cp <some-namespace>/<some-pod>:/etc/openvpn/clients/vpn-client-01 /tmp/clients/vpn-client-01
docker cp <container>:/etc/openvpn/clients/vpn-client-01 /tmp/clients/vpn-client-01
```
Logging is important for troubleshooting iptables and other things. However, the iptables LOG target does not work as prescribed on the internet using rsyslog and family; netfilter logging (NFLOG) had to be used instead (here is a good reference). NFLOG uses kernel modules, so /lib/modules must be mounted from the Docker/Kubernetes host. For Kubernetes, use an Ubuntu node type.
Then install ulogd2 in the container/pod:

```bash
apt-get update && apt-get install -y ulogd2
```

Then copy extra/ulogd.conf to the container's ulogd config file, /etc/ulogd.conf. This sets up two netfilter NFLOG interfaces, nflog:11 and nflog:12; you can add more if needed.

```bash
kubectl cp path_to/extra/ulogd.conf container_id:/etc/ulogd.conf
```
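For orientation, a sketch of what the relevant stanzas of such a ulogd.conf could look like. The plugin paths vary by distribution and architecture, and the group-to-logfile mapping shown (11 → firewall.log, 12 → firewall-ssh-drop.log) is an assumption based on the tail example further below:

```ini
; /etc/ulogd.conf (illustrative excerpt)
[global]
; plugin paths are distro/arch dependent
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_inppkt_NFLOG.so"
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_raw2packet_BASE.so"
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_filter_IP2STR.so"
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_filter_PRINTPKT.so"
plugin="/usr/lib/x86_64-linux-gnu/ulogd/ulogd_output_LOGEMU.so"

; one stack per NFLOG group
stack=log11:NFLOG,base1:BASE,ip2str1:IP2STR,print1:PRINTPKT,emu11:LOGEMU
stack=log12:NFLOG,base1:BASE,ip2str1:IP2STR,print1:PRINTPKT,emu12:LOGEMU

[log11]
group=11

[log12]
group=12

[emu11]
file="/var/log/ulog/firewall.log"

[emu12]
file="/var/log/ulog/firewall-ssh-drop.log"
```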
To log iptables, add a LOG_DROP chain, for example as follows:

```bash
iptables -N LOG_DROP
iptables -A LOG_DROP -j NFLOG --nflog-prefix "[fw-inc-drop]:" --nflog-group 12
iptables -A LOG_DROP -j DROP
iptables -A FORWARD -m conntrack --ctstate INVALID -j LOG_DROP
```
Then you can capture the generated logs using tail -f, or tcpdump on the netfilter interface:

```bash
# Capture with tcpdump on the nflog interface (the ulogd2 daemon must be stopped)
tcpdump -i nflog:11
tcpdump -i nflog:12

# Or tail the log files written by ulogd2
service ulogd2 start
tail -f /var/log/ulog/firewall.log /var/log/ulog/firewall-ssh-drop.log
```
In order to make sure that the VPN as well as the tunnel is working well, care must be taken to set a proper MTU value. The MTU is the largest packet size that can be transmitted without fragmentation. Discovering the correct MTU is straightforward and can be achieved using ping. Use the respective commands below (change www.example.com to the PGW IP).

On Windows:

```
ping -n 1 -l 1500 -f www.example.com
```

On Linux:

```
ping -M do -s 1500 -c 1 www.example.com
```

On Mac:

```
ping -D -v -s 1500 -c 1 www.example.com
```

Decrease the 1500 value by ~10 each time until the ping succeeds. The highest payload size at which the ping succeeds, plus the 28 bytes of IP/ICMP headers, is the path MTU you should use.
Then set the MTU value in the vpn.env variable OVPN_MTU.
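As an illustration, the Linux probe can be automated with a small loop. This is a sketch, not part of the image; PGW_HOST is a placeholder for your PGW IP:

```bash
# Walk the ICMP payload size down from 1500 until a non-fragmenting ping succeeds
PGW_HOST=www.example.com   # placeholder: use your PGW IP
SIZE=1500
until ping -M do -s "$SIZE" -c 1 -W 2 "$PGW_HOST" >/dev/null 2>&1; do
  SIZE=$((SIZE - 10))
done
# Path MTU = largest payload + 20-byte IP header + 8-byte ICMP header
echo "Largest payload: $SIZE, path MTU: $((SIZE + 28))"
```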
Configure iptables for your use case if needed. Important to note are the IPsec issues referenced here and here. It is also important to optimize your IPsec tunnel and iptables for speed.
To forward traffic between the subnets (OVPN_NET <-> PGW_NET) and to fix the IPsec MTU, the following rules were added to the iptables configuration:

```bash
# Clamp TCP MSS to the path MTU so tunneled TCP sessions don't stall
iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# Allow forwarding between the OpenVPN and payment gateway subnets
iptables -I FORWARD 7 -s "$OVPN_NET" -d "$PGW_NET" -j ACCEPT
iptables -I FORWARD 8 -s "$PGW_NET" -d "$OVPN_NET" -j ACCEPT
```
iptables rules take effect immediately once applied. While logged into the terminal via kubectl, you can run the commands and test different settings. Here are a few helpful commands:
```bash
# Reset iptables (flush rules, delete user-defined chains)
iptables -F
iptables -X

# Reset all other tables
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -t raw -F
iptables -t raw -X
iptables -t security -F
iptables -t security -X

# Set default ACCEPT policies. No firewall if these are the only rules
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Dump the current rules (redirect to a file to persist them)
iptables-save

# List iptables stats, including packet counters
iptables -L -nv --line-numbers
```
The following benchmarking and performance testing link is also important in helping to optimize IPsec speed for packet transfer. Here is a link for faster iptables, and one about stateless firewalls here. Here is a link for faster OpenVPN.
Disabling rp_filter for IPsec. To check the current settings:

```bash
sysctl -a | grep '\.ip_forward'
sysctl -a | grep '\.rp_filter'

# If a flag is wrong, fix it in /etc/sysctl.conf, similar to this
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
```
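The sed example above only covers ip_forward; a sketch of the rp_filter side at runtime, assuming the all/default keys are sufficient for your interface set (per-interface keys may also be needed):

```bash
# Disable reverse-path filtering, which drops asymmetric IPsec traffic
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0
```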
Some links are useful in troubleshooting IPsec client connectivity. For further troubleshooting, please see the original repos. For configuring additional L2TP or Cisco XAuth client users, see here.
You may run into some issues when setting up the tunnel (here is an example). In most cases you need to analyse the raw traffic to and from your servers and clients. Tools that help with troubleshooting include:

- tcpdump
- ssldump
- ulogd
- Wireshark or tshark

For packet tracing, use tcpdump and ssldump to check for issues and troubleshoot iptables.
Examples:

```bash
apt-get update && apt-get install -y tcpdump ssldump

# TCPDUMP
tcpdump -D                    # list available interfaces
tcpdump -i tun0 -nn -A
tcpdump -i tun0 -c 5
tcpdump -i tun0 -nn "src host X.X.X.X or dst host X.X.X.X"
tcpdump -i tun0 -nn -w webserver.pcap   # for analysis in Wireshark

# SSLDUMP
ssldump -i tun0 port 18423
```
When connecting the IPsec client from behind a firewall, the IPsec peer expects a static IP address from your pod/host to be configured in its firewall. Kubernetes pods are SNATed to the IP of the node they are scheduled on, which is not consistent. So in order to give all your cluster pods one static egress IP, the pods need to sit behind a NAT gateway that SNATs all pod traffic leaving the cluster. A couple of options were attempted:
- https://github.com/nirmata/kube-static-egress-ip
- https://ritazh.com/whitelist-egress-traffic-from-kubernetes-8a3adefd94b2
- https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
- https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-36475925a560
- https://kubernetes.io/docs/concepts/cluster-administration/networking/
- https://medium.com/bluekiri/setup-a-kubernetes-cluster-on-gcp-with-cloud-nat-efe6aa5780c6
- https://medium.com/bluekiri/high-availability-nat-gateway-at-google-cloud-platform-with-cloud-nat-8a792b1c4cc4
- https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
- https://cloud.google.com/nat/docs/using-nat
- https://cloud.google.com/nat/docs/gke-example
Eventually went for the quick win with a Google Cloud NAT gateway.
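For reference, a rough sketch of that setup with gcloud; the resource names and region are placeholders, and the flags should be verified against the Cloud NAT docs linked above:

```bash
# Reserve a static egress IP, then route all subnet traffic through Cloud NAT
gcloud compute addresses create vpn-egress-ip --region=us-central1
gcloud compute routers create vpn-router --network=default --region=us-central1
gcloud compute routers nats create vpn-nat \
  --router=vpn-router --region=us-central1 \
  --nat-external-ip-pool=vpn-egress-ip \
  --nat-all-subnet-ip-ranges
```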
Remaining items:

- Get CNI NATing working on Flannel
- Kubernetes security best practices