
klipper-lb's Introduction

Klipper Service Load Balancer

NOTE: this repository has been recently (2020-11-18) moved out of the github.com/rancher org to github.com/k3s-io supporting the acceptance of K3s as a CNCF sandbox project.


This is the runtime image for the integrated service load balancer in klipper. This works by using a host port for each service load balancer and setting up iptables to forward the request to the cluster IP. The regular k8s scheduler will find a free host port. If there are no free host ports, the service load balancer will stay in pending.
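For reference, the rules the entry script programs for each service port look roughly like this (reconstructed from the entry-script excerpts and container logs quoted in the issues below; DEST_IP, DEST_PORT, DEST_PROTO and SRC_PORT are the environment variables the service controller injects):

# Make sure the kernel forwards packets
echo 1 > /proc/sys/net/ipv4/ip_forward || true
# DNAT traffic arriving on the host port to the service's cluster IP
iptables -t nat -I PREROUTING ! -s ${DEST_IP}/32 -p ${DEST_PROTO} --dport ${SRC_PORT} -j DNAT --to ${DEST_IP}:${DEST_PORT}
# Masquerade the forwarded traffic so replies come back through this node
iptables -t nat -I POSTROUTING -d ${DEST_IP}/32 -p ${DEST_PROTO} -j MASQUERADE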

Building

make

License

Copyright (c) 2019 Rancher Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

klipper-lb's People

Contributors

brandond, dependabot[bot], dweomer, galal-hussein, github-actions[bot], ibuildthecloud, macedogm, manuelbuil, matttrach, motoki317, rohitsakala, thomasferrandiz, vadorovsky


klipper-lb's Issues

Alpine 3.15.4 - Release?

Hi,

I see that the latest version of Klipper-LB is v0.3.5, which uses Alpine 3.12 as the base image in its Dockerfile. This version of Alpine has known security issues, notably around the BusyBox SSL client.

In the master branch of this project, Alpine has been bumped to v3.15.4 in the Dockerfile.

When could we expect a new release of Klipper-LB that uses this newer version of Alpine?

Thanks
Chris

Bind to specific interface?

Is it possible to configure Klipper to bind to a specific host interface, so that it is exposed only on that interface?

How to disable KlipperLB from Nginx Ingress?

Hi, I installed Nginx Ingress on my k3s cluster using its Helm chart. It enabled KlipperLB on all of my nodes, and I would like to disable it so that I can use the ingress with my external LB. Please let me know what information I need to provide to get further assistance.

No balancing when node goes down

Hello,

I have an issue where, when one node suddenly becomes unavailable, the load balancer simply stops working for every deployment on the cluster. Even deployments running on other nodes are unreachable until the node comes back up.

I am running a k3s cluster of 5 Raspberry Pis.

I am not sure what information to provide for this issue, but I am available to help diagnose it.

DaemonSet does not get deleted when unused; stale iptables entries

(I use k3os with klipper-lb installed by default)

I just had an issue where I had to restart my node twice and flush iptables both times:

The problem

I kept constantly connecting to an internal pod's ssh port because of a LoadBalancer service that hooked it up to that port.

No problem, I just set that service's type to ClusterIP or NodePort.

I still kept connecting to the pod's sshd service.

I dug into iptables by running iptables-save and grepping through the output, and saw that it still contained the rules forwarding that "hostport" to the service.

Fine, I'll take note of that, run iptables -F, and then immediately restart.

After restarting I could briefly connect to the "right" sshd service, but after disconnecting and reconnecting I was greeted by a "wrong" host key warning, signaling that the node had somehow hijacked my ssh port again.

At this stage I looked at my pods and found that the "sidecar" pod for the former LoadBalancer service still existed. I deleted it, but because it was owned by a DaemonSet, I had to delete that too.

I tested the port again; the rule was still in iptables, so I flushed and restarted once more.

This time, the port finally connected properly even after the pods went up.

Observation

klipper-lb creates "sidecar" DaemonSet pods for every LoadBalancer service it observes, but it does not remove them once they are no longer needed, nor does klipper clean up the leftover iptables rules.
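A quick way to check for such leftovers on the node (a sketch; substitute the host port in question, here 22 purely as an example):

# Dump the NAT table and look for rules still forwarding the old host port
iptables-save -t nat | grep -- '--dport 22'
# A matching rule can be removed by re-running it with -D instead of -A/-I, e.g.:
# iptables -t nat -D PREROUTING ! -s <cluster-ip>/32 -p TCP --dport 22 -j DNAT --to <cluster-ip>:22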

klipper-lb IPv6 compliance issue

Context
Trying to build a k3os / k3s single node IPv6 only cluster

Describe the issue
As stated in k3s IPv6 issue k3s-io/k3s#284

It seems that the svclb-traefik-xxx pod fails to set up correct iptables rules for IPv6, judging by the container logs:

# kubectl logs -n kube-system --all-containers pod/svclb-traefik-g4tfl 
+ trap exit TERM INT
/usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 2001:db8:3111:80:5e45:1ce5:0:42a8/32 -p TCP --dport 80 -j DNAT --to 2001:db8:3111:80:5e45:1ce5:0:42a8:80
iptables v1.6.2: Invalid port:port syntax - use dash

Try `iptables -h' or 'iptables --help' for more information.
+ trap exit TERM INT
/usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 2001:db8:3111:80:5e45:1ce5:0:42a8/32 -p TCP --dport 443 -j DNAT --to 2001:db8:3111:80:5e45:1ce5:0:42a8:443
iptables v1.6.2: Invalid port:port syntax - use dash

Try `iptables -h' or 'iptables --help' for more information

2001:db8:3111:80:5e45:1ce5:0:42a8:80 and 2001:db8:3111:80:5e45:1ce5:0:42a8:443 are not correct!! This is the usual misunderstanding about appending a port to an IPv6 address: the address should be bracket-enclosed, e.g. [2001:db8:3111:80:5e45:1ce5:0:42a8]:443, or the destination port should be given explicitly via --dport. Also, the IPv4 prefix length /32 in "-s 2001:db8:3111:80:5e45:1ce5:0:42a8/32" is strangely short for an IPv6 address; shouldn't it be /128 or /64?

klipper-lb doesn't set correct IPv6 iptables rules, because IPv6 addresses have to be bracket-enclosed when a port is specified as part of the address.
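For comparison, a bracket-enclosed rule would look roughly like this (a sketch only; note that IPv6 NAT rules generally have to be installed through ip6tables rather than iptables, and the /128 prefix length here is an assumption):

# DNAT an IPv6 host port to the service's cluster IP, with the address bracketed
ip6tables -t nat -I PREROUTING ! -s 2001:db8:3111:80:5e45:1ce5:0:42a8/128 -p TCP --dport 443 \
  -j DNAT --to-destination '[2001:db8:3111:80:5e45:1ce5:0:42a8]:443'
ip6tables -t nat -I POSTROUTING -d 2001:db8:3111:80:5e45:1ce5:0:42a8/128 -p TCP -j MASQUERADE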

Describe alternatives you've considered
After cloning klipper-lb, I built my own klipper-lb image and tried to handle the IPv6 address case by patching the entry script:

--- entry.orig  2020-08-13 17:11:23.306338739 +0200
+++ entry       2020-08-18 09:45:20.694567832 +0200
@@ -3,15 +3,36 @@
 
 trap exit TERM INT
 
-echo 1 > /proc/sys/net/ipv4/ip_forward || true
+# 20200818 IPv6 address compliance
+# try to manage IPv6 address case where address has to be square bracket enclosed when specifiying DEST_PORT
+# Nota : bash regex to determine IP address type was found at https://helloacm.com/how-to-valid-ipv6-addresses-using-bash-and-regex/
+#        and translated from bash to sh with example found at https://stackoverflow.com/questions/30647654/how-to-write-and-match-regular-expressions-in-bin-sh-script
+#        alternative could be to use sipcalc and egrep v4 or v6, [ `sipcalc $DEST_IP | egrep v6` ], probably more robust to validate IP address version,
+#        but need to install sipcalc and egrep apk packages in the container image
 
-if [ `cat /proc/sys/net/ipv4/ip_forward` != 1 ]; then
+#if [[ $DEST_IP =~ ^([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$ ]]; then
+if  echo $DEST_IP | grep -Eq '^([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$'; then
+    # echo IPv4 address
+    echo 1 > /proc/sys/net/ipv4/ip_forward || true
+    if [ `cat /proc/sys/net/ipv4/ip_forward` != 1 ]; then
+        exit 1
+    fi
+    iptables -t nat -I PREROUTING ! -s ${DEST_IP}/32 -p ${DEST_PROTO} --dport ${SRC_PORT} -j DNAT --to ${DEST_IP}:${DEST_PORT}
+    iptables -t nat -I POSTROUTING -d ${DEST_IP}/32 -p ${DEST_PROTO} -j MASQUERADE
+#elif [[ $DEST_IP =~ ^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$ ]]; then
+elif echo $DEST_IP | grep -Eq '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$'; then
+    # echo IPv6 address
+    echo 1 > /proc/sys/net/ipv6/conf/all/forwarding || true
+    if [ `cat /proc/sys/net/ipv6/conf/all/forwarding` != 1 ]; then
+        exit 1
+    fi
+    iptables -t nat -I PREROUTING ! -s ${DEST_IP}/128 -p ${DEST_PROTO} --dport ${SRC_PORT} -j DNAT --to [${DEST_IP}]:${DEST_PORT}
+    iptables -t nat -I POSTROUTING -d ${DEST_IP}/128 -p ${DEST_PROTO} -j MASQUERADE
+else
+    echo $DEST_IP  " Neither IPv4, nor IPv6 address !!"
     exit 1
 fi
 
-iptables -t nat -I PREROUTING ! -s ${DEST_IP}/32 -p ${DEST_PROTO} --dport ${SRC_PORT} -j DNAT --to ${DEST_IP}:${DEST_PORT}
-iptables -t nat -I POSTROUTING -d ${DEST_IP}/32 -p ${DEST_PROTO} -j MASQUERADE
-
 if [ ! -e /pause ]; then
     mkfifo /pause
 fi

The script now detects the IPv6 address, but exits without setting the IPv6 iptables rules because /proc/sys/net/ipv6/conf/all/forwarding is read-only, so the svclb-traefik-g4tfl pod is still in CrashLoopBackOff.

kubectl logs -n kube-system pod/svclb-traefik-g4tfl --all-containers
+ trap exit TERM INT
+ echo 2001:db8:3111:80:5e45:1ce5:0:42a8
+ grep -Eq '^([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$'
+ echo 2001:db8:3111:80:5e45:1ce5:0:42a8
+ grep -Eq '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$'
/usr/bin/entry: line 34: can't create /proc/sys/net/ipv6/conf/all/forwarding: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv6/conf/all/forwarding
+ '[' 0 '!=' 1 ]
+ exit 1
+ trap exit TERM INT
+ echo 2001:db8:3111:80:5e45:1ce5:0:42a8
+ grep -Eq '^([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\.([0-9]{1,2}|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$'
+ echo 2001:db8:3111:80:5e45:1ce5:0:42a8
+ grep -Eq '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$'
/usr/bin/entry: line 34: can't create /proc/sys/net/ipv6/conf/all/forwarding: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv6/conf/all/forwarding
+ '[' 0 '!=' 1 ]
+ exit 1

I can't figure out why /proc/sys/net/ipv4/ip_forward would be writable (the original script seems to work in an IPv4 context) while /proc/sys/net/ipv6/conf/all/forwarding is read-only. Help needed to investigate further...
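One thing that may be worth checking (a hedged suggestion, not a verified fix): the read inside the container returns 0, which suggests IPv6 forwarding is simply disabled on the node, and the pod cannot change it because /proc/sys is mounted read-only. Enabling it on the host before the pod starts would make the script's check pass:

# On the node (not inside the pod):
sysctl -w net.ipv6.conf.all.forwarding=1
# Persist across reboots:
echo 'net.ipv6.conf.all.forwarding = 1' > /etc/sysctl.d/99-ipv6-forwarding.conf
sysctl --system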

Specifying port range

Is it possible to specify the host port range used by klipper-lb? Without being able to specify the range, I do not think there is a way to avoid port conflicts with NodePort services that start up after LoadBalancer services.

Possible to "bind" to multiple IPs?

I realize that klipper-lb doesn't actually bind to an interface; it just creates iptables rules. What I would like to know is whether there is a way to configure klipper to create rules for additional IPs on a particular host. I would very much like the tailscale interfaces on my nodes to also route load balancer traffic.

Is this possible? If not, would it be hard to add?

Nat not always working

Hello everyone.

I'm experimenting with multi-node k3s on public VPSes and found a strange behavior when using LoadBalancer with externalTrafficPolicy: Local.

That setting lets traffic reaching the pod behind a service keep its original (public) IP address. In my case it only works about 50% of the time, and I don't know why.

2 Nodes:
node1 (public_ip1)
node2 (public_ip2)

Every node forwards port 25 to a service whose pod is scheduled on node1 (only 1 replica).
Balancing is handled by klipper-lb's pods on each host.

When I do a tcping from a third VPS (outside the k3s network, a totally unrelated VM):
If I run tcping public_ip1 25 (the node hosting the pod that receives the traffic), the pod sees the correct public IP of the third VPS (which is not part of the cluster).
If I run tcping public_ip2 25 (the node that does not host the receiving pod), the pod sees an internal IP that corresponds to the svclb pod's IP.
Is this already addressed somehow? Could you point me to some documentation?
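This looks consistent with the MASQUERADE rule the svclb container installs (see the entry-script output quoted in the issues below): on the node without a local endpoint, the packet is DNATed to the cluster IP and then masqueraded before being forwarded on, so the backend only sees an internal address instead of the client's. A rough sketch of how to confirm which rules are involved (substitute the actual port):

# On node2, list the NAT rules svclb installed for the port in question
iptables -t nat -S PREROUTING | grep -w 25
# Packet/byte counters show whether the MASQUERADE rule is being hit
iptables -t nat -L POSTROUTING -v -n | grep MASQUERADE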

Thanks

iptables: review base image

There are two issues here:

  • arm builds from a specific sha256 image on arm, which diverges from the
  • very old alpine 3.8 base for amd64/arm64

CrashLoopBackOff on Calico and Canal

When using Calico or Canal as the cluster network driver, klipper-lb fails to start up pods for LoadBalancer services.

From a default k3s installation on Ubuntu 20.04 with only Flannel disabled, and either Calico or Canal added:

:~$ kubectl -n kube-system get ds/svclb-traefik
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
svclb-traefik   1         1         0       1            0           <none>          13m
:~$ kubectl -n kube-system get pod -l app=svclb-traefik
NAME                  READY   STATUS             RESTARTS   AGE
svclb-traefik-85mxm   0/2     CrashLoopBackOff   14         13m
:~$ kubectl -n kube-system logs ds/svclb-traefik lb-port-80
+ trap exit TERM INT
/usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 0 '!=' 1 ]
+ exit 1
:~$

Requests coming from zerotier-one don't preserve Source IP

I'm trying to implement IP whitelisting with traefik in my cluster for requests coming from my zerotier-one network, but I'm facing an issue: klipper-lb does not seem to preserve the source IP address when traffic comes in over the zerotier-one interface.

Traffic from the local network or from my external home IP is fine; those addresses are preserved. But when the zerotier-one network is used, the X-Forwarded-For header contains the node IP address.

Using the traefik/whoami app to debug below:

Example local access:

Hostname: web-65f84c6bc4-m9js7
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.78
IP: fe80::5051:71ff:fe04:34e
RemoteAddr: 10.42.0.131:40592
GET / HTTP/1.1
Host: whoami.k3s.local
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Dnt: 1
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Sec-Gpc: 1
Te: trailers
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 192.168.2.12
X-Forwarded-Host: whoami.k3s.local
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-68ccf99dcd-7lttf
X-Real-Ip: 192.168.2.12

Example zerotier-one access:

Hostname: web-65f84c6bc4-m9js7
IP: 127.0.0.1
IP: ::1
IP: 10.42.1.78
IP: fe80::5051:71ff:fe04:34e
RemoteAddr: 10.42.0.131:56840
GET / HTTP/1.1
Host: whoami.k3s.local
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:124.0) Gecko/20100101 Firefox/124.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5
Cache-Control: no-cache
Dnt: 1
Pragma: no-cache
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Sec-Gpc: 1
Te: trailers
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 10.42.0.245
X-Forwarded-Host: whoami.k3s.local
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-68ccf99dcd-7lttf
X-Real-Ip: 10.42.0.245

Any ideas on how to fix this with klipper?

Note:
I already have externalTrafficPolicy: Local and an affinity rule to ensure the traefik pod is on the same node as klipper.

I guess I could just whitelist 10.42.0.0/24 and it would work, but I would prefer to have the correct IP forwarded!

How does this work?

How does this work? I just installed a k3s cluster and klipper-lb came installed with it. Is there a YAML manifest to apply it in other clusters?
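There is no standalone manifest in this repo; the svclb-* DaemonSets are generated on the fly by the ServiceLB controller built into k3s whenever it sees a Service of type LoadBalancer. Judging from the pod specs quoted further down on this page, each container is just this image started with a few environment variables and the NET_ADMIN capability, so its behaviour can be reproduced by hand roughly like this (an illustrative sketch only; the IP and ports are placeholders):

# Forward host port 8080 to a hypothetical cluster IP, the way a svclb container would
docker run --rm --cap-add=NET_ADMIN --net=host \
  -e SRC_PORT=8080 -e DEST_PROTO=TCP \
  -e DEST_IP=10.43.152.110 -e DEST_PORT=80 \
  rancher/klipper-lb:v0.1.2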

Support proxying client IPv6 connections to IPv4 nodes

In our situation, we need to allow IPv6-only client hosts to reach Kubernetes services, but klipper-lb does not support this.

Currently, we are using socat to proxy the connections manually. The command looks like this:

socat TCP6-LISTEN:1234,fork TCP4:1.2.3.4:1234

I think it would be nice if klipper-lb could do this automatically.

How can inbound traffic be routed exclusively to the current node's Traefik pod?

Currently, I have the following setup and requirements:
Two nodes: node1 and node2.
Services are running on node1, and Traefik's pod is running on node2, both with ServiceLB enabled.
What I want to achieve: when traffic arrives at node1 or node2, the Traefik on that node should exclusively handle the inbound traffic, rather than Traefik on other nodes (if no Traefik pod is scheduled on the current node, the request should simply fail).
However, when both nodes have Traefik and ServiceLB enabled, all requests are load-balanced before entering Traefik and then distributed evenly across the Traefik pods (since the nodes in the cluster are on different networks, this leads to bandwidth and latency issues).
I understand that a Service of type LoadBalancer in Kubernetes can set externalTrafficPolicy and internalTrafficPolicy to Local. However, this still does not solve the problem.

Here is the configuration of the LoadBalancer service:

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
  externalTrafficPolicy: Local
  internalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 80
      name: web80
      targetPort: 80
    - protocol: TCP
      port: 443
      name: https
      targetPort: 443
    - protocol: TCP
      port: 21115
      name: hbbs-1
    - protocol: TCP
      port: 21116
      name: hbbs-2
    - protocol: UDP
      port: 21116
      name: hbbs-3
    - protocol: TCP
      port: 21117
      name: hbbr-1
    - protocol: UDP
      port: 3478
      name: derper

no route to CoreDNS in k3s

Maybe an iptables issue.

I have reset iptables and restarted the svclb-traefik DaemonSet, but there is still no route to 10.43.0.0/16.


k3s v0.7.0 (v1.14.4-k3s.1)
traefik (1.7.9)
coredns (1.3.1)
worker os (ubuntu 16.04.5 desktop amd64)
master os (ubuntu 18.04 server amd64)
worker environment: edge, behind nat, floating ip (4G LTE)
master environment: aws ec2, public subnet

Klipper fails when k3s runs inside LXC container

Hi,
I'm running k3s inside an LXC container. It starts fine; the only missing bit is the pod named svclb-traefik-xxx, which uses the image rancher/klipper-lb:v0.1.2. It doesn't start, and shows this error in the logs:

2020-12-07T07:14:51.134808188Z + trap exit TERM INT
2020-12-07T07:14:51.135173011Z /usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
2020-12-07T07:14:51.135212862Z + echo 1
2020-12-07T07:14:51.135229374Z + true
2020-12-07T07:14:51.135381090Z + cat /proc/sys/net/ipv4/ip_forward
2020-12-07T07:14:51.136264347Z + '[' 0 '!=' 1 ]
2020-12-07T07:14:51.136442799Z + exit 1

Looking at the command output from inside the container:

# cat /proc/sys/net/ipv4/ip_forward
1

shows that forwarding is already enabled.

Looking at the source

klipper-lb/entry

Lines 6 to 10 in 824f44a

echo 1 > /proc/sys/net/ipv4/ip_forward || true
if [ `cat /proc/sys/net/ipv4/ip_forward` != 1 ]; then
exit 1
fi

I think this check doesn't just check; it also tries to set forwarding even when it is already enabled.

I think the solution to my problem would be to first check whether forwarding is already enabled, and only set it when it is not (rather than always writing to it, as above). Something like the sketch below.
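A rough sketch of that change in the entry script (untested, purely illustrative):

# Only write the sysctl when forwarding is not already on
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
    echo 1 > /proc/sys/net/ipv4/ip_forward || true
fi
# Bail out only if forwarding is still disabled after the attempt
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
    exit 1
fi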

Edit: I've just found a similar issue:
#4

Klipper in wrong namespace and not able to kill the pods

We are using the KlipperLB load balancer and are finding that the svclb pods are getting created in the kube-system namespace. In addition, we cannot delete the pods in that namespace: the initial pod is deleted, but then it gets recreated, so it won't die.
Initial pods in kube-system namespace:

k get pods -n kube-system
NAME                                                        READY   STATUS    RESTARTS       AGE
coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h9m
svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h9m
svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h27m
svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h27m
svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              64m
svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              64m
svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              47m
svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              47m
svclb-test-f79880dc-gxrcx                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-l9b9k                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              4m35s
svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              4m35s
svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              4m35s

After deleting a couple of pods:

k delete pods -n kube-system svclb-test-f79880dc-gxrcx svclb-test-f79880dc-l9b9k
pod "svclb-test-f79880dc-gxrcx" deleted
pod "svclb-test-f79880dc-l9b9k" deleted
(envgen)$ k get pods -n kube-system
NAME                                                        READY   STATUS    RESTARTS       AGE
coredns-d76bd69b-bnx9m                                      1/1     Running   3 (9d ago)     9d
metrics-server-7cd5fcb6b7-gflch                             1/1     Running   3 (9d ago)     9d
local-path-provisioner-6c79684f77-hlwsg                     1/1     Running   5 (9d ago)     9d
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-k7wsp   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-df8cj   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-sxb88   1/1     Running   0              7d23h
svclb-rook-ceph-mgr-dashboard-loadbalancer-1364e2b9-7nqq6   1/1     Running   1 (7d6h ago)   7d23h
svclb-istio-ingressgateway-c9776a98-m6hzn                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-kwvr7                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-4jn4m                   3/3     Running   0              3d2h
svclb-istio-ingressgateway-c9776a98-hd4jw                   3/3     Running   0              3d2h
svclb-keycloak-fb0704bb-pnq85                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-xfh2p                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-llc4r                               1/1     Running   0              2d23h
svclb-keycloak-fb0704bb-cjx4v                               1/1     Running   0              2d23h
svclb-postgres-db-40a02dd4-7h7q7                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-svf68                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-cn25l                            1/1     Running   0              2d22h
svclb-postgres-db-40a02dd4-tskt4                            1/1     Running   0              2d22h
svclb-stackgres-restapi-e9fc8149-xqhgw                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jp4cx                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-444fj                      0/1     Pending   0              23h
svclb-stackgres-restapi-e9fc8149-jxgjm                      0/1     Pending   0              23h
svclb-nginx-service-2fcd8743-zbxv6                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-dpw6q                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-hj4gk                          1/1     Running   0              3h11m
svclb-nginx-service-2fcd8743-dt5pn                          1/1     Running   0              3h11m
svclb-nginx-b1379761-dhw6w                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-zcgnd                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-czflw                                  1/1     Running   0              3h29m
svclb-nginx-b1379761-jbpmv                                  1/1     Running   0              3h29m
svclb-west2-service-9d3852cf-rjhfw                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-9p29w                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-t2s5h                          1/1     Running   0              66m
svclb-west2-service-9d3852cf-7z2xw                          1/1     Running   0              66m
svclb-test-service-ac1dfd76-6qj28                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-ckfdz                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-cmdgz                           1/1     Running   0              48m
svclb-test-service-ac1dfd76-47j66                           1/1     Running   0              48m
svclb-test-f79880dc-x2bcz                                   0/2     Pending   0              6m23s
svclb-test-f79880dc-zkwdz                                   0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-l2xfv                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-zctcp                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-lgj2v                          0/2     Pending   0              6m23s
svclb-test-replicas-2fef0816-gjgwv                          0/2     Pending   0              6m23s
svclb-test-f79880dc-bfhcc                                   0/2     Pending   0              3s
svclb-test-f79880dc-vz52q                                   0/2     Pending   0              3s

svclb pod not returning SSL Certificates.

I am using k3d v4.2.0, but I have narrowed this down to being a Klipper svclb issue. I am using the Istio proxy service, and port 80 is working fine. However, when I enable SSL/TLS for routing to 443, I cannot connect properly because the SSL certificate is not being returned to the client.

I am starting my k3d cluster with this command:

k3d cluster create --registry-create --k3s-server-arg '--no-deploy=traefik' -p "9080:80@loadbalancer" -p "9443:43@loadbalancer" istio-workshop

If I connect to the istio-ingressgateway directly, it's fine. If I connect to svclb-istio-ingressgateway, that is where the problem begins.

Connecting to svclb-istio-ingressgateway with openssl. No certificate returned. Error.

k port-forward svclb-istio-ingressgateway-xnxb4 7443:43 -n istio-system

openssl s_client -cipher ALL -servername istioinaction.io -connect localhost:7443
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 414 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

Connecting to istio-ingressgateway with openssl. Certificate returned. Correct.

k port-forward istio-ingressgateway-5686db779c-z2hk7 7443:43 -n istio-system

openssl s_client -cipher ALL -servername istioinaction.io -connect localhost:7443
CONNECTED(00000003)
depth=0 CN = istioinaction.io
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = istioinaction.io
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 CN = istioinaction.io
verify return:1
---
Certificate chain
 0 s:CN = istioinaction.io
   i:CN = istio-workshop-ca
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIDUTCCAjmgAwIBAgIQW8bMG/ndnqBqqT3ItMqukjANBgkqhkiG9w0BAQsFADAc
MRowGAYDVQQDExFpc3Rpby13b3Jrc2hvcC1jYTAeFw0yMTAyMjUxNzE2MzJaFw0z
MTAyMjMxNzE2MzJaMBsxGTAXBgNVBAMTEGlzdGlvaW5hY3Rpb24uaW8wggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCazmNTsvJa/yNcncyeN/V3HSCYU5p3
/vi38KdWiZXkFnaAXhdQtaKD/cOqzVPAWRm7hFVUnNYIgBXeYgsubUwjd9/pot12
u343pFYD+8BSZd0/dRUjLHi4R4wE2+GgX4u0uKgGupl4p7FMpIp0l0bknpIFxYVi
/RP3jnIli09YzHTdhtsY+b4iyl6XKhqOeKO0WqRnKLr6Z2PV/1U2xe+McB1Z9ELC
2bF9/d/wj/+hUrheS3EMMZxPgv4H/cXv5v8u5nskneWr1QSVxt6tXc1fAb5oVR/M
vrKZnxIMMez3AmB0gcJyGoLMBN6JUlmXLSKnCiN0KIMiiMrgjv6n5ej3AgMBAAGj
gY8wgYwwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEF
BQcDAjAdBgNVHQ4EFgQUNqSZ1h5/Pzfvgb0yz/GNm8sP/8QwHwYDVR0jBBgwFoAU
JSpD5fHAjP/YLycK4mABIsY/y/YwGwYDVR0RBBQwEoIQaXN0aW9pbmFjdGlvbi5p
bzANBgkqhkiG9w0BAQsFAAOCAQEAh58Osb17EpCc2+qbToMiE4uaFiWISPMva+MV
WGPRgwk26lKN8TA2rgxB65qTtxfZTtmoB55OWuKAIvzWrcNnPw4GzIIi8dhX7k9a
NVZlKBVqNXrk284uXXrqycXKFZyTcwVE0IALS4ckIrDREl5L+N/EoGsAukFWKxny
Oh2Qua/qUi8XFylN3Um919kQq2TCzZe2KtEA02I0WC2y6b+rNwEZgyOC9AxN3d7S
4+fU3bUAofEx27DC4aXj52GliTrQvMEeY2wT9k8Oxjs/t5kT7/uz7zxxPZ9A6OYJ
DFd/vIrk2FHlrznfkRYYKCxLhNnsdHY+J9paO/VF8GOhPbSpIQ==
-----END CERTIFICATE-----
subject=CN = istioinaction.io

issuer=CN = istio-workshop-ca

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1343 bytes and written 494 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---

Logs from svclb-istio-ingressgateway.

k logs svclb-istio-ingressgateway-xnxb4 -c lb-port-443 -n istio-system
+ trap exit TERM INT
/usr/bin/entry: line 6: can't create /proc/sys/net/ipv4/ip_forward: Read-only file system
+ echo 1
+ true
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 10.43.152.110/32 -p TCP --dport 443 -j DNAT --to 10.43.152.110:443
+ iptables -t nat -I POSTROUTING -d 10.43.152.110/32 -p TCP -j MASQUERADE
+ '[' '!' -e /pause ]
+ mkfifo /pause

svclb-istio-ingressgateway pod spec.

k get pod svclb-istio-ingressgateway-xnxb4 -o yaml -n istio-system       
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-03-09T20:53:37Z"
  generateName: svclb-istio-ingressgateway-
  labels:
    app: svclb-istio-ingressgateway
    controller-revision-hash: 64c454b8cb
    pod-template-generation: "1"
    svccontroller.k3s.cattle.io/svcname: istio-ingressgateway
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:controller-revision-hash: {}
          f:pod-template-generation: {}
          f:svccontroller.k3s.cattle.io/svcname: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"6629db22-fc1a-4261-9c90-fff35a96c0ad"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:affinity:
          .: {}
          f:nodeAffinity:
            .: {}
            f:requiredDuringSchedulingIgnoredDuringExecution:
              .: {}
              f:nodeSelectorTerms: {}
        f:containers:
          k:{"name":"lb-port-80"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"DEST_IP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PROTO"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"SRC_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":80,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:hostPort: {}
                f:name: {}
                f:protocol: {}
            f:resources: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
          k:{"name":"lb-port-443"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"DEST_IP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PROTO"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"SRC_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":443,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:hostPort: {}
                f:name: {}
                f:protocol: {}
            f:resources: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
          k:{"name":"lb-port-15012"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"DEST_IP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PROTO"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"SRC_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":15012,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:hostPort: {}
                f:name: {}
                f:protocol: {}
            f:resources: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
          k:{"name":"lb-port-15021"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"DEST_IP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PROTO"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"SRC_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":15021,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:hostPort: {}
                f:name: {}
                f:protocol: {}
            f:resources: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
          k:{"name":"lb-port-15443"}:
            .: {}
            f:env:
              .: {}
              k:{"name":"DEST_IP"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"DEST_PROTO"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"SRC_PORT"}:
                .: {}
                f:name: {}
                f:value: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":15443,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:hostPort: {}
                f:name: {}
                f:protocol: {}
            f:resources: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
        f:tolerations: {}
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.42.0.12"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: k3s
    operation: Update
    time: "2021-03-09T20:53:51Z"
  name: svclb-istio-ingressgateway-xnxb4
  namespace: istio-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: svclb-istio-ingressgateway
    uid: 6629db22-fc1a-4261-9c90-fff35a96c0ad
  resourceVersion: "1221"
  uid: bdc816f5-17b8-417d-9a91-6afd73789356
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - k3d-istio-workshop-server-0
  containers:
  - env:
    - name: SRC_PORT
      value: "15021"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "15021"
    - name: DEST_IP
      value: 10.43.152.110
    image: rancher/klipper-lb:v0.1.2
    imagePullPolicy: IfNotPresent
    name: lb-port-15021
    ports:
    - containerPort: 15021
      hostPort: 15021
      name: lb-port-15021
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kbcwx
      readOnly: true
  - env:
    - name: SRC_PORT
      value: "80"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "80"
    - name: DEST_IP
      value: 10.43.152.110
    image: rancher/klipper-lb:v0.1.2
    imagePullPolicy: IfNotPresent
    name: lb-port-80
    ports:
    - containerPort: 80
      hostPort: 80
      name: lb-port-80
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kbcwx
      readOnly: true
  - env:
    - name: SRC_PORT
      value: "443"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "443"
    - name: DEST_IP
      value: 10.43.152.110
    image: rancher/klipper-lb:v0.1.2
    imagePullPolicy: IfNotPresent
    name: lb-port-443
    ports:
    - containerPort: 443
      hostPort: 443
      name: lb-port-443
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kbcwx
      readOnly: true
  - env:
    - name: SRC_PORT
      value: "15012"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "15012"
    - name: DEST_IP
      value: 10.43.152.110
    image: rancher/klipper-lb:v0.1.2
    imagePullPolicy: IfNotPresent
    name: lb-port-15012
    ports:
    - containerPort: 15012
      hostPort: 15012
      name: lb-port-15012
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kbcwx
      readOnly: true
  - env:
    - name: SRC_PORT
      value: "15443"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "15443"
    - name: DEST_IP
      value: 10.43.152.110
    image: rancher/klipper-lb:v0.1.2
    imagePullPolicy: IfNotPresent
    name: lb-port-15443
    ports:
    - containerPort: 15443
      hostPort: 15443
      name: lb-port-15443
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-kbcwx
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k3d-istio-workshop-server-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/pid-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  volumes:
  - name: default-token-kbcwx
    secret:
      defaultMode: 420
      secretName: default-token-kbcwx
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:37Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:51Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:51Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://61cf11f9ae1667a5f4fd3c4055cd42b6d5904e2fde1f03bc228946334816336c
    image: docker.io/rancher/klipper-lb:v0.1.2
    imageID: docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a
    lastState: {}
    name: lb-port-15012
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:50Z"
  - containerID: containerd://19b255038f99ec223de724a4693f2d04b2400099991997e5bd0828e42486d224
    image: docker.io/rancher/klipper-lb:v0.1.2
    imageID: docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a
    lastState: {}
    name: lb-port-15021
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:50Z"
  - containerID: containerd://66bbb06af9b587a5ad3295396ebe170967171c21d9e8673040603a44b2a40753
    image: docker.io/rancher/klipper-lb:v0.1.2
    imageID: docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a
    lastState: {}
    name: lb-port-15443
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:50Z"
  - containerID: containerd://f403495ccf69bb2e401ee88f1f924df9423659a645d979b5556d5760d4cafe74
    image: docker.io/rancher/klipper-lb:v0.1.2
    imageID: docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a
    lastState: {}
    name: lb-port-443
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:50Z"
  - containerID: containerd://8fa8f3b38ae4a461d79e5e8fd4174452f6a9464930c8893964873309f3658aa2
    image: docker.io/rancher/klipper-lb:v0.1.2
    imageID: docker.io/rancher/klipper-lb@sha256:2fb97818f5d64096d635bc72501a6cb2c8b88d5d16bc031cf71b5b6460925e4a
    lastState: {}
    name: lb-port-80
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:50Z"
  hostIP: 172.26.0.2
  phase: Running
  podIP: 10.42.0.12
  podIPs:
  - ip: 10.42.0.12
  qosClass: BestEffort
  startTime: "2021-03-09T20:53:37Z"

istio-ingressgateway pod spec.

 k get pod istio-ingressgateway-5686db779c-z2hk7 -o yaml -n istio-system       
apiVersion: v1
kind: Pod
metadata:
  annotations:
    prometheus.io/path: /stats/prometheus
    prometheus.io/port: "15020"
    prometheus.io/scrape: "true"
    sidecar.istio.io/inject: "false"
  creationTimestamp: "2021-03-09T20:53:37Z"
  generateName: istio-ingressgateway-5686db779c-
  labels:
    app: istio-ingressgateway
    chart: gateways
    heritage: Tiller
    install.operator.istio.io/owning-resource: unknown
    istio: ingressgateway
    istio.io/rev: 1-8-3
    operator.istio.io/component: IngressGateways
    pod-template-hash: 5686db779c
    release: istio
    service.istio.io/canonical-name: istio-ingressgateway
    service.istio.io/canonical-revision: 1-8-3
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:prometheus.io/path: {}
          f:prometheus.io/port: {}
          f:prometheus.io/scrape: {}
          f:sidecar.istio.io/inject: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:chart: {}
          f:heritage: {}
          f:install.operator.istio.io/owning-resource: {}
          f:istio: {}
          f:istio.io/rev: {}
          f:operator.istio.io/component: {}
          f:pod-template-hash: {}
          f:release: {}
          f:service.istio.io/canonical-name: {}
          f:service.istio.io/canonical-revision: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"c7f93765-ead6-427e-86b9-be304827145c"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:affinity:
          .: {}
          f:nodeAffinity:
            .: {}
            f:preferredDuringSchedulingIgnoredDuringExecution: {}
            f:requiredDuringSchedulingIgnoredDuringExecution:
              .: {}
              f:nodeSelectorTerms: {}
        f:containers:
          k:{"name":"istio-proxy"}:
            .: {}
            f:args: {}
            f:env:
              .: {}
              k:{"name":"CA_ADDR"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"CANONICAL_REVISION"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"CANONICAL_SERVICE"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"HOST_IP"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"INSTANCE_IP"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"ISTIO_META_CLUSTER_ID"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"ISTIO_META_OWNER"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"ISTIO_META_ROUTER_MODE"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"ISTIO_META_WORKLOAD_NAME"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"JWT_POLICY"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"NODE_NAME"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"PILOT_CERT_PROVIDER"}:
                .: {}
                f:name: {}
                f:value: {}
              k:{"name":"POD_NAME"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"POD_NAMESPACE"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
              k:{"name":"SERVICE_ACCOUNT"}:
                .: {}
                f:name: {}
                f:valueFrom:
                  .: {}
                  f:fieldRef:
                    .: {}
                    f:apiVersion: {}
                    f:fieldPath: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:lifecycle:
              .: {}
              f:preStop:
                .: {}
                f:exec:
                  .: {}
                  f:command: {}
            f:name: {}
            f:ports:
              .: {}
              k:{"containerPort":8080,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
              k:{"containerPort":8443,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
              k:{"containerPort":15012,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
              k:{"containerPort":15021,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
              k:{"containerPort":15090,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:name: {}
                f:protocol: {}
              k:{"containerPort":15443,"protocol":"TCP"}:
                .: {}
                f:containerPort: {}
                f:protocol: {}
            f:readinessProbe:
              .: {}
              f:failureThreshold: {}
              f:httpGet:
                .: {}
                f:path: {}
                f:port: {}
                f:scheme: {}
              f:initialDelaySeconds: {}
              f:periodSeconds: {}
              f:successThreshold: {}
              f:timeoutSeconds: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:cpu: {}
                f:memory: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:memory: {}
            f:securityContext:
              .: {}
              f:allowPrivilegeEscalation: {}
              f:capabilities:
                .: {}
                f:drop: {}
              f:privileged: {}
              f:readOnlyRootFilesystem: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/etc/istio/config"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/etc/istio/ingressgateway-ca-certs"}:
                .: {}
                f:mountPath: {}
                f:name: {}
                f:readOnly: {}
              k:{"mountPath":"/etc/istio/ingressgateway-certs"}:
                .: {}
                f:mountPath: {}
                f:name: {}
                f:readOnly: {}
              k:{"mountPath":"/etc/istio/pod"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/etc/istio/proxy"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/lib/istio/data"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/run/ingress_gateway"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/run/secrets/istio"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/run/secrets/tokens"}:
                .: {}
                f:mountPath: {}
                f:name: {}
                f:readOnly: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
          f:runAsGroup: {}
          f:runAsNonRoot: {}
          f:runAsUser: {}
        f:serviceAccount: {}
        f:serviceAccountName: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"config-volume"}:
            .: {}
            f:configMap:
              .: {}
              f:defaultMode: {}
              f:name: {}
              f:optional: {}
            f:name: {}
          k:{"name":"gatewaysdsudspath"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"ingressgateway-ca-certs"}:
            .: {}
            f:name: {}
            f:secret:
              .: {}
              f:defaultMode: {}
              f:optional: {}
              f:secretName: {}
          k:{"name":"ingressgateway-certs"}:
            .: {}
            f:name: {}
            f:secret:
              .: {}
              f:defaultMode: {}
              f:optional: {}
              f:secretName: {}
          k:{"name":"istio-data"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"istio-envoy"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"istio-token"}:
            .: {}
            f:name: {}
            f:projected:
              .: {}
              f:defaultMode: {}
              f:sources: {}
          k:{"name":"istiod-ca-cert"}:
            .: {}
            f:configMap:
              .: {}
              f:defaultMode: {}
              f:name: {}
            f:name: {}
          k:{"name":"podinfo"}:
            .: {}
            f:downwardAPI:
              .: {}
              f:defaultMode: {}
              f:items: {}
            f:name: {}
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.42.0.11"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: k3s
    operation: Update
    time: "2021-03-09T20:53:39Z"
  name: istio-ingressgateway-5686db779c-z2hk7
  namespace: istio-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: istio-ingressgateway-5686db779c
    uid: c7f93765-ead6-427e-86b9-be304827145c
  resourceVersion: "1186"
  uid: a5638e42-ab1e-4e4e-9a5b-7afc57165b74
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
        weight: 2
      - preference:
          matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - ppc64le
        weight: 2
      - preference:
          matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - s390x
        weight: 2
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/arch
            operator: In
            values:
            - amd64
            - ppc64le
            - s390x
  containers:
  - args:
    - proxy
    - router
    - --domain
    - $(POD_NAMESPACE).svc.cluster.local
    - --proxyLogLevel=warning
    - --proxyComponentLogLevel=misc:error
    - --log_output_level=default:info
    - --serviceCluster
    - istio-ingressgateway
    env:
    - name: JWT_POLICY
      value: third-party-jwt
    - name: PILOT_CERT_PROVIDER
      value: istiod
    - name: CA_ADDR
      value: istiod-1-8-3.istio-system.svc:15012
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: HOST_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.serviceAccountName
    - name: CANONICAL_SERVICE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels['service.istio.io/canonical-name']
    - name: CANONICAL_REVISION
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels['service.istio.io/canonical-revision']
    - name: ISTIO_META_WORKLOAD_NAME
      value: istio-ingressgateway
    - name: ISTIO_META_OWNER
      value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway
    - name: ISTIO_META_ROUTER_MODE
      value: standard
    - name: ISTIO_META_CLUSTER_ID
      value: Kubernetes
    image: docker.io/istio/proxyv2:1.8.3
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - sh
          - -c
          - sleep 5
    name: istio-proxy
    ports:
    - containerPort: 15021
      protocol: TCP
    - containerPort: 8080
      protocol: TCP
    - containerPort: 8443
      protocol: TCP
    - containerPort: 15012
      protocol: TCP
    - containerPort: 15443
      protocol: TCP
    - containerPort: 15090
      name: http-envoy-prom
      protocol: TCP
    readinessProbe:
      failureThreshold: 30
      httpGet:
        path: /healthz/ready
        port: 15021
        scheme: HTTP
      initialDelaySeconds: 1
      periodSeconds: 2
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/istio/config
      name: config-volume
    - mountPath: /var/run/secrets/istio
      name: istiod-ca-cert
    - mountPath: /var/run/secrets/tokens
      name: istio-token
      readOnly: true
    - mountPath: /var/run/ingress_gateway
      name: gatewaysdsudspath
    - mountPath: /var/lib/istio/data
      name: istio-data
    - mountPath: /etc/istio/pod
      name: podinfo
    - mountPath: /etc/istio/ingressgateway-certs
      name: ingressgateway-certs
      readOnly: true
    - mountPath: /etc/istio/ingressgateway-ca-certs
      name: ingressgateway-ca-certs
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: istio-ingressgateway-service-account-token-ht8zm
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k3d-istio-workshop-server-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1337
    runAsGroup: 1337
    runAsNonRoot: true
    runAsUser: 1337
  serviceAccount: istio-ingressgateway-service-account
  serviceAccountName: istio-ingressgateway-service-account
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: istio-ca-root-cert
    name: istiod-ca-cert
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        path: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    name: podinfo
  - emptyDir: {}
    name: istio-envoy
  - emptyDir: {}
    name: gatewaysdsudspath
  - emptyDir: {}
    name: istio-data
  - name: istio-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: istio-ca
          expirationSeconds: 43200
          path: istio-token
  - configMap:
      defaultMode: 420
      name: istio-1-8-3
      optional: true
    name: config-volume
  - name: ingressgateway-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio-ingressgateway-certs
  - name: ingressgateway-ca-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio-ingressgateway-ca-certs
  - name: istio-ingressgateway-service-account-token-ht8zm
    secret:
      defaultMode: 420
      secretName: istio-ingressgateway-service-account-token-ht8zm
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:37Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:39Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:39Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-03-09T20:53:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://3f8d6e55d111efcdd31f113e73cbd07ee4f8ffd8ba26481460546b22533c960c
    image: docker.io/istio/proxyv2:1.8.3
    imageID: docker.io/istio/proxyv2@sha256:5cfde7ffd5b921cf805f4cf18013d3f1b825f41fe1bd1d977d805c45ca955d5a
    lastState: {}
    name: istio-proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-03-09T20:53:37Z"
  hostIP: 172.26.0.2
  phase: Running
  podIP: 10.42.0.11
  podIPs:
  - ip: 10.42.0.11
  qosClass: Burstable
  startTime: "2021-03-09T20:53:37Z"

[BUG] svclb-traefik* won't start after host crash and restart.

What did you do

  • How was the cluster created?

    • only 1 node, with a volume mapping for /var/rancher.../storage.
  • What did you do afterwards?
    My host crashed and after restarting it and restarting k3d, I am no longer able to connect to any app service through ingress.

What did you expect to happen

Ingress should work

Screenshots or terminal output

[rockylinux@rockylinux8 infra_k3d]$ kubectl -n kube-system logs svclb-traefik-dkgkq lb-port-80
+ trap exit TERM INT
+ echo 10.43.70.41
+ grep -Eq :
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 10.43.70.41/32 -p TCP --dport 80 -j DNAT --to 10.43.70.41:80
modprobe: can't change directory to '/lib/modules': No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded. 
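The modprobe errors show that the container cannot load kernel modules itself, so if the host lost the legacy iptables NAT modules on reboot, they have to be loaded on the Docker host rather than inside the k3d node container. A hedged sketch (the module names are the usual legacy-iptables ones and may already be built into your kernel; <cluster-name> is a placeholder):

# run on the Docker host, not inside the k3d node container
sudo modprobe ip_tables
sudo modprobe iptable_nat
sudo modprobe iptable_filter
# confirm the modules are present
lsmod | grep -E '^ip_tables|^iptable_'
# then restart the k3d cluster containers
k3d cluster stop <cluster-name> && k3d cluster start <cluster-name>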

Which OS & Architecture

  • Linux, Windows, MacOS / amd64, x86, ...?
    Linux rockylinux8.linuxvmimages.local 4.18.0-348.20.1.el8_5.x86_64 #1 SMP Thu Mar 10 20:59:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Which version of k3d

  • output of k3d version
k3d version v5.3.0
k3s version v1.22.6-k3s1 (default)

Which version of docker

  • output of docker version and docker info
    [rockylinux@rockylinux8 infra_k3d]$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.0-docker)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 3
  Running: 2
  Paused: 0
  Stopped: 1
 Images: 5
 Server Version: 20.10.13
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc version: v1.0.3-0-gf46b6ba
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-348.20.1.el8_5.x86_64
 Operating System: Rocky Linux 8.5 (Green Obsidian)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.19GiB
 Name: rockylinux8.linuxvmimages.local
 ID: RI32:V7KA:PDQG:Q2Z2:DNET:CMMP:3MMG:23OF:RMTN:W6J2:WOQO:N4YA
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Support externalTrafficPolicy: Local

When using klipper-lb, even when setting externalTrafficPolicy: Local, the source IP shows as 172.16.18.1 (the first IP in the cluster CIDR range).

It would be nice if it could preserve the client's IP address; this is useful when you need to do a reverse DNS lookup of the client IP.
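For reference, this is how the policy is typically set on an existing LoadBalancer Service (a minimal sketch; my-app is a placeholder name). Note that klipper-lb still DNATs traffic to the cluster IP from inside the svclb pod, so the original client address may be rewritten regardless of this setting:

kubectl patch svc my-app -p '{"spec":{"externalTrafficPolicy":"Local"}}'
# verify the field was applied
kubectl get svc my-app -o jsonpath='{.spec.externalTrafficPolicy}'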

svclb pods state is pending

Default Traefik deployed on k3OS.
svclb pods are expected to be running on all the machines,
but the pods are not scheduled, with the error below:
Warning FailedScheduling 20m default-scheduler 0/15 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {sonarqube: true}, that the pod didn't tolerate, 13 node(s) didn't match Pod's node affinity/selector.

The reason for this is clearly the node affinity. What I don't understand is how metadata.name matches the node name in the affinity. What is going wrong here, please?

[screenshot: affinity-issue]
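A couple of hedged checks that usually explain this (the label keys follow the k3s ServiceLB convention, and the DaemonSet name is assumed from the usual svclb-<service> naming; adjust to your cluster):

# Do any nodes carry the ServiceLB opt-in / pool labels? If at least one node is
# labeled svccontroller.k3s.cattle.io/enablelb=true, svclb pods only schedule on labeled nodes.
kubectl get nodes -L svccontroller.k3s.cattle.io/enablelb -L svccontroller.k3s.cattle.io/lbpool
# Inspect the node affinity the controller generated for the svclb workload
kubectl -n kube-system get daemonset svclb-traefik -o yaml | grep -A 20 'affinity:'
# On the node reported as having no free ports, see what already listens on 80/443
sudo ss -ltnp | grep -E ':(80|443) '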

LB crashloop

Hi,

I have clean-installed K3s on my SBC running Ubuntu Mantic.

K3s runs fine and the traefik pod runs fine; however, the load balancers for ports 80 and 443 crash with the message: iptables not found.

Maybe there is nothing wrong with ServiceLB but with my configuration. Any tips/suggestions?

kubectl get pod -A gives:

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system local-path-provisioner-6d44f4f9d7-tl5bp 1/1 Running 0 6h40m
kube-system coredns-97b598894-8c5gh 1/1 Running 0 6h40m
kube-system metrics-server-7c55d89d5d-pv4kz 1/1 Running 0 6h40m
kube-system helm-install-traefik-crd-r8j7t 0/1 Completed 0 6h40m
kube-system helm-install-traefik-6pwvf 0/1 Completed 2 6h40m
kube-system traefik-8657d6b9f4-ctwtz 1/1 Running 0 6h37m
kube-system svclb-traefik-4ea49843-qhf9g 0/2 CrashLoopBackOff 164 (2m19s ago) 6h37m

kubectl logs svclb-traefik-4ea49843-qhf9g lb-tcp-80 -n kube-system gives:

+ trap exit TERM INT
+ BIN_DIR=/sbin
+ check_iptables_mode
+ set +e
+ grep nf_tables
+ lsmod
+ '[' 1 '=' 0 ]
+ mode=legacy
+ set -e
[INFO] legacy mode detected
+ info 'legacy mode detected'
+ echo '[INFO] ' 'legacy mode detected'
+ set_legacy
+ ln -sf /sbin/xtables-legacy-multi /sbin/iptables
+ ln -sf /sbin/xtables-legacy-multi /sbin/iptables-save
+ ln -sf /sbin/xtables-legacy-multi /sbin/iptables-restore
+ ln -sf /sbin/xtables-legacy-multi /sbin/ip6tables
+ start_proxy
+ grep -Eq :
+ echo 0.0.0.0/0
+ iptables -t filter -I FORWARD -s 0.0.0.0/0 -p TCP --dport 80 -j ACCEPT
/usr/bin/entry: line 46: iptables: not found
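Since the entry script creates the /sbin/iptables symlink but the shell then still cannot find iptables, the xtables binaries may simply be missing from the image built for this architecture, leaving a dangling symlink. A quick hedged check (the image tag is a placeholder; use whatever the first command prints):

sudo k3s ctr images ls | grep klipper-lb
sudo k3s ctr run --rm docker.io/rancher/klipper-lb:<tag> lbcheck sh -c 'which iptables; ls -l /sbin | grep -i xtables'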

k3s --version gives:

k3s version v1.27.3+k3s-9d376dfb-dirty (9d376dfb)
go version go1.20.5

The SBC is a StarFive VisionFive 2 (v1.2A).

Help appreciated!

Kind regards, Allard Krings

[suggestion] support for labeling node with several pools

Not sure if this is already implemented; the k3s documentation only mentions one pool per node,
but it would be cool to allow a node to be a member of several pools.

example: svccontroller.k3s.cattle.io/lbpool='["pool1","pool2"]' or svccontroller.k3s.cattle.io/lbpool='pool1,pool2'

Would that be possible?

I have made pools based on what should be exposed on specific nodes:
one node can expose several load balancer services, while other nodes only expose some.
I set up affinity so my load balancer workloads schedule on nodes labeled with the matching svccontroller.k3s.cattle.io/lbpool value, so it works with the externalTrafficPolicy: Local policy.
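For context, the current single-pool flow looks roughly like this (hedged: the label usage follows the k3s ServiceLB docs, and worker-1 / my-app are placeholder names). A node advertises exactly one pool, and a Service opts into it with the same label:

kubectl label node worker-1 svccontroller.k3s.cattle.io/lbpool=pool1
kubectl label svc my-app svccontroller.k3s.cattle.io/lbpool=pool1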

Standalone version

Hi,
currently klipper-lb is only available as part of k3s. It would be great if it were also available as a standalone version that could be used as an easy-to-use load balancer for any Kubernetes cluster.
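In the meantime, the image can in principle be run on its own, since it only programs DNAT rules from a few environment variables. A rough sketch, with the caveat that the variable names (SRC_PORT, SRC_RANGES, DEST_PROTO, DEST_PORT, DEST_IPS) are assumed from recent entry scripts and vary between releases, and the image tag and destination IP are placeholders:

docker run -d --name klipper-lb --cap-add NET_ADMIN -p 8080:8080 \
  -e SRC_PORT=8080 -e SRC_RANGES=0.0.0.0/0 -e DEST_PROTO=TCP \
  -e DEST_PORT=80 -e DEST_IPS=10.0.0.10 \
  rancher/klipper-lb:<tag>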
