compumike / hairpin-proxy

PROXY protocol support for internal-to-LoadBalancer traffic for Kubernetes Ingress users. If you've had problems with ingress-nginx, cert-manager, LetsEncrypt ACME HTTP01 self-check failures, and the PROXY protocol, read on.

License: MIT License

Languages: Ruby 85.52%, Dockerfile 10.40%, Shell 4.08%
Topics: cert-manager, coredns, ingress-controller, ingress-nginx, kubernetes, kubernetes-controller, load-balancer, proxy-protocol

hairpin-proxy's People

Contributors

compumike, nnqq, themightychris

hairpin-proxy's Issues

Just wanted to say thanks

Like the subject says, I just wanted to thank you for this brilliant solution to an annoying problem! I am starting to use Linode's managed Kubernetes, and they don't yet have an annotation on the load balancer Service that forces traffic through the load balancer with the PROXY protocol enabled, like other providers do.

This solution of yours works perfectly, and cert-manager was able to issue a test certificate.

Thanks!

causes requests to time out

I had the same problem with nginx-ingress and cert-manager and tried this solution. However, when I try to curl the external domain from inside one of my pods, it times out.
[Screenshot: curl request timing out, 2021-04-08]

This happens both with and without the --haproxy-protocol flag; curling localhost works fine, of course.

Any ideas?
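
One way to narrow this down (a diagnostic sketch, assuming the standard deploy.yml install; the domain below is a placeholder): confirm that the external domain resolves to the hairpin-proxy Service from inside the cluster, and that the haproxy pod is actually running.

# From inside a pod, both names should return the hairpin-proxy ClusterIP;
# if the first returns the external load balancer IP, the CoreDNS rewrite is not applied
dig +short <your-external-domain>
dig +short hairpin-proxy.hairpin-proxy.svc.cluster.local

# A CrashLoopBackOff on the haproxy pod would also show up as timeouts
kubectl get pods -n hairpin-proxy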

fresh install fails

I am using nginx-ingress in a cluster on DigitalOcean.

Pods failed with:

/usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:63:in `find_api_resource': Unknown resource ingresses for networking.k8s.io/v1 (K8s::Error::UndefinedResource)
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:73:in `resource'
	from /app/src/main.rb:30:in `block in fetch_ingress_hosts'
	from /app/src/main.rb:28:in `map'
	from /app/src/main.rb:28:in `fetch_ingress_hosts'
	from /app/src/main.rb:62:in `check_and_rewrite_coredns'
	from /app/src/main.rb:134:in `block in main_loop'
	from /app/src/main.rb:132:in `loop'
	from /app/src/main.rb:132:in `main_loop'
	from /app/src/main.rb:144:in `<main>'

k8s version v1.18.8, nginx ingress is v0.40.2
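
For context, Ingress was only added to networking.k8s.io/v1 in Kubernetes 1.19; on 1.18 the group exists but Ingress is still served under v1beta1, which matches this error. A quick way to check which Ingress APIs a cluster actually serves:

# Shows the API groups/versions the cluster serves; on 1.18 expect networking.k8s.io/v1beta1
# and extensions/v1beta1 for Ingress, but no Ingress under networking.k8s.io/v1
kubectl api-versions | grep -E 'networking|extensions'
kubectl api-resources --api-group=networking.k8s.io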

Add support for different in-cluster DNS names

Hi!

Currently the hairpin-proxy controller assumes the cluster's DNS suffix is 'cluster.local', which is the default but not mandatory in Kubernetes.

Could you please see about making this configurable?
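
For anyone affected, the suffix a cluster actually uses can be read from the kubernetes plugin line of the CoreDNS Corefile, or from a pod's resolv.conf search path (shown here only as a way to find the value such an option would need; the pod name is a placeholder):

# The first argument of the kubernetes plugin is the cluster domain (often cluster.local)
kubectl get configmap -n kube-system coredns -o jsonpath='{.data.Corefile}' | grep kubernetes

# The search domains inside any pod also reveal the suffix
kubectl exec <any-pod> -- cat /etc/resolv.conf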

Compatibility with kubernetes 1.29

Hi,

I have tried hairpin-proxy on Kubernetes 1.29, but the pod crash-loops with this error:

GET /api/v1/namespaces/kube-system/configmaps/coredns => HTTP 404 Not Found

On version 1.28 it was working as expected.

Can you help me, please?
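
The 404 means no ConfigMap named coredns exists in kube-system on that cluster; some distributions and managed offerings name or place it differently. A first diagnostic (not a confirmed fix) is to look for whatever CoreDNS ConfigMap the cluster actually has:

# Look for the CoreDNS ConfigMap under whatever name/namespace the distribution uses
kubectl get configmap -A | grep -i coredns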

CoreDNS configmap not updated

I've noticed the proper values in the controller output:

I, [2021-03-26T21:57:48.058982 #1]  INFO -- : Corefile has changed! New contents:
import /etc/coredns/extra-configs/*Corefile
      .:53 {
    rewrite name alertmanager.stl.xxx.dev hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    rewrite name grafana.stl.xxx.dev hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    rewrite name prometheus.stl.xxx.dev hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
          errors
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
             pods insecure
             fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          forward . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }
Sending updated ConfigMap to Kubernetes API server...

However, the coredns ConfigMap does not get updated and still holds only the original values:

kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'

      import /etc/coredns/extra-configs/*Corefile
      .:53 {
          errors
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
             pods insecure
             fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          forward . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }

The controller container does not throw any errors; it just says it is updating the ConfigMap, but the update never appears...
I'm running k8s version 1.20.4
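
Two things worth ruling out here (guesses, not confirmed causes): the update being rejected by RBAC, or the ConfigMap being written back by something else, such as a managed-addon operator or a GitOps tool. The ServiceAccount name below is a placeholder; check what the controller Deployment actually uses.

# Find the ServiceAccount the controller runs as
kubectl -n hairpin-proxy get deploy hairpin-proxy-controller \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# Verify it is allowed to update the coredns ConfigMap (replace the placeholder)
kubectl auth can-i update configmaps -n kube-system \
  --as=system:serviceaccount:hairpin-proxy:<controller-serviceaccount>

# Watch the ConfigMap for a while to see whether it is updated and then reverted
kubectl get configmap -n kube-system coredns -o yaml --watch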

hairpin-proxy-haproxy container CrashLoopBackoff

Ran kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.2.1/deploy.yml

The hairpin-proxy-haproxy container keeps crashing with the following in the logs:

[WARNING] 312/172115 (1) : config : missing timeouts for frontend 'fe_80'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 312/172115 (1) : config : missing timeouts for backend 'be_ingress_80'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 312/172115 (1) : config : missing timeouts for frontend 'fe_443'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 312/172115 (1) : config : missing timeouts for backend 'be_ingress_443'.
   | While not properly invalid, you will certainly encounter various problems
   | with such a configuration. To fix this, please ensure that all following
   | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[NOTICE] 312/172115 (1) : haproxy version is 2.3.9-53945bf
[ALERT] 312/172115 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:7] : 'server my_server' : could not resolve address 'ingress-nginx-controller.ingress-nginx.svc.cluster.local'.
[ALERT] 312/172115 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:16] : 'server my_server' : could not resolve address 'ingress-nginx-controller.ingress-nginx.svc.cluster.local'.
[ALERT] 312/172115 (1) : Failed to initialize server(s) addr.   
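
The ALERT lines mean the default target name does not resolve in this cluster, so haproxy refuses to start. Assuming the TARGET_SERVER override used elsewhere in this tracker, the usual fix is to find the real ingress controller Service and point the haproxy Deployment at it (names below are placeholders):

# Find the actual ingress-nginx Service name and namespace in this cluster
kubectl get svc --all-namespaces | grep -i ingress

# Point hairpin-proxy-haproxy at it (substitute the name/namespace found above)
kubectl -n hairpin-proxy set env deployment/hairpin-proxy-haproxy \
  TARGET_SERVER=<service-name>.<namespace>.svc.cluster.local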

Compatibility with kubernetes 1.22

It seems this no longer works since k8s 1.22:

I, [2023-04-28T07:30:06.221620 #1]  INFO -- : Starting in CoreDNS mode. (Indended to be run as a Deployment: one instance per cluster.)
I, [2023-04-28T07:30:06.316336 #1]  INFO -- : Starting main_loop with 15s polling interval.
I, [2023-04-28T07:30:06.316833 #1]  INFO -- : Polling all Ingress resources and CoreDNS configuration...
W, [2023-04-28T07:30:06.518745 #1]  WARN -- : Warning: Unable to list ingresses in extensions/v1beta1

Which seems logical. Is there any chance of seeing compatibility?

Issue with multiple nginx-ingress controllers

Summary:
When there are multiple ingress controllers, for example one with the PROXY protocol enabled and one without, hairpin-proxy creates rewrite rules for all of them. This results in 404 errors for the non-proxied ingresses.

Steps to reproduce:

  • Make an ingress controller with the PROXY protocol enabled. Name it "nginx-ingress-w-proxy-protocol".
  • Make an ingress controller without the PROXY protocol. Name it "nginx-ingress".
  • Install the hairpin proxy.
  • Create some ingresses on both.
  • From inside the cluster, try to reach a site that is served by the plain nginx ingress.
  • Find that the site returns a 404 Not Found.
  • Check the rewrite rules: kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'
  • Notice that the site that should not be proxied is also rewritten, and is therefore sent to the nginx-ingress-w-proxy-protocol ingress controller. That controller does not know the site, so the request fails. Ultimately cert-manager fails to renew the certificates.

Suggested fix
Include the ingress class in the polling of ingress resources.

Now that I'm writing this, I realize this may be because I'm still on K8s 1.18, which doesn't support the new Ingress definitions (with ingress classes). Will need to test.
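
For reference, on clusters that serve networking.k8s.io/v1 the class of each Ingress is visible directly, which is the information a class-aware version of the rewrite polling would have to filter on (shown only as an inspection, not as the fix itself):

# Shows which ingress class each Ingress uses and which hosts it serves;
# only hosts on the proxy-protocol class should receive CoreDNS rewrites
kubectl get ingress -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName,HOSTS:.spec.rules[*].host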

Not able to make it work with microk8s ingress (nginx)

First of all, this is an awesome project! Thank you for adding this. I have an issue, though, because I want to use microk8s.

ingress-nginx-controller.ingress-nginx.svc.cluster.local does not resolve on the microk8s installation of Kubernetes. Any idea what this should be, or how to get it to work? I'm not sure what I should change it to on microk8s. I'm a little new to Kubernetes. Thanks for the help!

Here is more info for debugging.

microk8s enable ingress

microk8s kubectl get pods -n hairpin-proxy
NAME READY STATUS RESTARTS AGE
hairpin-proxy-controller-7b48d47458-jnspd 1/1 Running 0 16h
hairpin-proxy-haproxy-5957c6fdc-k8rnv 0/1 CrashLoopBackOff 198 16h

microk8s kubectl logs hairpin-proxy-haproxy-5957c6fdc-k8rnv -n hairpin-proxy
[NOTICE] 044/183415 (1) : haproxy version is 2.2.4-de45672
[ALERT] 044/183415 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:7] : 'server my_server' : could not resolve address 'ingress-nginx-controller.ingress-nginx.svc.cluster.local'.
[ALERT] 044/183415 (1) : parsing [/usr/local/etc/haproxy/haproxy.cfg:16] : 'server my_server' : could not resolve address 'ingress-nginx-controller.ingress-nginx.svc.cluster.local'.
[ALERT] 044/183415 (1) : Failed to initialize server(s) addr.

and even more from services
root@ku1:~/cert# k get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.152.183.1 443/TCP 20h
kube-system kube-dns ClusterIP 10.152.183.10 53/UDP,53/TCP,9153/TCP 20h
default httpbin ClusterIP 10.152.183.148 8080/TCP 17h
hairpin-proxy hairpin-proxy ClusterIP 10.152.183.220 80/TCP,443/TCP 16h
default cm-acme-http-solver-qtrpl NodePort 10.152.183.161 8089:31375/TCP 16h
cert-manager cert-manager ClusterIP 10.152.183.21 9402/TCP 15h
cert-manager cert-manager-webhook ClusterIP 10.152.183.200 443/TCP 15h
default web NodePort 10.152.183.221 8080:31771/TCP 3h14m
default test NodePort 10.152.183.81 80:30208/TCP 162m
default test2 NodePort 10.152.183.128 8080:32560/TCP 153m
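
From the Service listing above there is no ingress-nginx-controller Service at all, which is why the default target name cannot resolve: the microk8s ingress addon runs the controller as a DaemonSet without a ClusterIP Service in front of it. A possible workaround (a sketch, not a verified microk8s recipe; the selector label is an assumption to be checked against what --show-labels reports) is to create such a Service and point TARGET_SERVER at it:

# Inspect what the microk8s ingress addon created; note the namespace and pod labels
microk8s kubectl get svc,daemonset,pods -n ingress --show-labels

# Create a ClusterIP Service in front of the ingress pods (adjust the selector to
# the label shown above), then set TARGET_SERVER to its in-cluster DNS name
microk8s kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s   # assumption: label on the addon's ingress pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
EOF

microk8s kubectl -n hairpin-proxy set env deployment/hairpin-proxy-haproxy \
  TARGET_SERVER=ingress-nginx-controller.ingress.svc.cluster.local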

hairpin proxy controller does not update coredns config map

I, [2022-04-08T15:35:38.303359 #1]  INFO -- : Corefile has changed! New contents:
.:53 {
    rewrite name foo.foo.ch hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
          errors
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
             pods insecure
             fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          forward . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }
Sending updated ConfigMap to Kubernetes API server...

but the resulting config map then does not contain the updated config with rewrites:

kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'

      .:53 {
          errors
          health
          kubernetes cluster.local in-addr.arpa ip6.arpa {
             pods insecure
             fallthrough in-addr.arpa ip6.arpa
          }
          prometheus :9153
          forward . /etc/resolv.conf
          cache 30
          loop
          reload
          loadbalance
      }
    

What am I missing? I use the standard ingress-nginx (no target override necessary).

Getting 400 Bad Request

I've completed the installation with the following commands:

kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.2/deploy.yml
kubectl -n hairpin-proxy set env deployment/hairpin-proxy-haproxy TARGET_SERVER=ingress-nginx-controller.kube-system.svc.cluster.local

Before installation, the request would time out but now I'm getting 400 Bad Request.

I couldn't find much, so I'm sharing all I have right now. Here is the log from my debug pod:

$ dig mysub.domain.com
; <<>> DiG 9.12.4-P2 <<>> mysub.domain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12261
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: e77c2f9dd19d168d (echoed)
;; QUESTION SECTION:
;mysub.domain.com.		IN	A

;; ANSWER SECTION:
mysub.domain.com.	25	IN	A	10.106.209.95

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Dec 27 22:40:58 UTC 2020
;; MSG SIZE  rcvd: 97
$ dig hairpin-proxy.hairpin-proxy.svc.cluster.local

; <<>> DiG 9.12.4-P2 <<>> hairpin-proxy.hairpin-proxy.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62161
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 5b49c52bd3a873ea (echoed)
;; QUESTION SECTION:
;hairpin-proxy.hairpin-proxy.svc.cluster.local. IN A

;; ANSWER SECTION:
hairpin-proxy.hairpin-proxy.svc.cluster.local. 30 IN A 10.106.209.95

;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Dec 27 22:41:16 UTC 2020
;; MSG SIZE  rcvd: 147
$ curl mysub.domain.com
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>

Hairpin proxy pods show no logs. Here is the log line I see in the nginx controller when I send the curl request:

10.44.0.0 - - [27/Dec/2020:22:50:35 +0000] "PROXY TCP4 10.44.0.0 10.44.0.21 1886 80" 400 150 "-" "-" 0 0.001 [] [] - - - - ed328050593d01bbd17e87c3eb3a255d

Here is the raw curl; I've confirmed that 10.106.209.95 belongs to the hairpin-proxy service:

$ curl -iv --raw mysub.domain.com

* Rebuilt URL to: mysub.domain.com/
*   Trying 10.106.209.95...
* TCP_NODELAY set
* Connected to mysub.domain.com (10.106.209.95) port 80 (#0)
> GET / HTTP/1.1
> Host: mysub.domain.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
HTTP/1.1 400 Bad Request
< Date: Sun, 27 Dec 2020 22:51:47 GMT
Date: Sun, 27 Dec 2020 22:51:47 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 150
Content-Length: 150
< Connection: close
Connection: close

<
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Closing connection 0

FWIW, I'm able to access the URL from the public internet, and I get the following in response to kubectl describe challenge:

Status:
  Presented:   true
  Processing:  true
  Reason:      Waiting for HTTP-01 challenge propagation: wrong status code '400', expected '200'
  State:       pending
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Started    21m   cert-manager  Challenge scheduled for processing
  Normal  Presented  20m   cert-manager  Presented challenge using HTTP-01 challenge mechanism
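
The nginx access log above shows the PROXY header being logged as if it were the HTTP request line, which usually means the listener hairpin-proxy forwards to is not configured to accept the PROXY protocol. A hedged first check (the ConfigMap name and namespace below are guesses based on this install's TARGET_SERVER; adjust them to your ingress-nginx install):

# ingress-nginx only parses the PROXY protocol when use-proxy-protocol is enabled
kubectl -n kube-system get configmap ingress-nginx-controller -o yaml | grep use-proxy-protocol

# If it is missing or "false", enable it; ingress-nginx reloads automatically
kubectl -n kube-system patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"use-proxy-protocol":"true"}}'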

`map': undefined method `hosts' for nil:NilClass (NoMethodError)

2020-10-27 19:32:22 +0000: Fetching...
/app/src/main.rb:9:in `map': undefined method `hosts' for nil:NilClass (NoMethodError)
	from /app/src/main.rb:9:in `ingress_hosts'
	from /app/src/main.rb:31:in `block in main'
	from /app/src/main.rb:28:in `loop'
	from /app/src/main.rb:28:in `main'
	from /app/src/main.rb:50:in `<main>'

k8s.api("extensions/v1beta1").resource("ingresses").list.map { |r| r.spec.tls }.flatten.map(&:hosts).flatten.sort.uniq

I do have two extensions/v1beta1 ingresses in different namespaces in k8s 1.16:

spec:
  tls:
  - hosts:
    - foo
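
The nil comes from an Ingress whose spec has no tls section at all: for such an Ingress, r.spec.tls is nil, and calling .hosts on it after the flatten raises the NoMethodError. A way to spot which Ingress is responsible (a diagnostic only, not a code fix):

# Prints each extensions/v1beta1 Ingress and its tls section; entries showing <none>
# in the TLS column are the ones that make r.spec.tls nil
kubectl get ingresses.extensions -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TLS:.spec.tls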

cert-manager DNS-01 challenges attempting to use cluster.local domain

After successfully installing hairpin-proxy and using it as a workaround for the PROXY protocol issues, our cert-manager DNS-01 challenges began failing because they're attempting to write DNS records for a cluster.local domain.

If this has been encountered before, is there a simple fix? I'm not sure why cert-manager would be using the rewritten DNS name instead of what's exposed from the ingress.

I've included what I think is relevant state below. I've replaced our domain name with example.com and all possibly-secret things with different values:

State

Hairpin Proxy Rewrites

kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'

rewrite name www.example.com hairpin-proxy.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy

Failing Challenge

kubectl describe -n com-example challenge prod-com-example-tls-abcd-12345-6789

Spec:
  Authorization URL:  https://acme-v02.api.letsencrypt.org/acme/authz-v3/[snip]
  Dns Name:           www.example.com
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   Issuer
    Name:   letsencrypt-prod-dns
  Key:      [snip]
  Solver:
    dns01:
      Digitalocean:
        Token Secret Ref:
          Key:   access-token
          Name:  digitalocean-dns
  Token:         [snip]
  Type:          DNS-01
  URL:           https://acme-v02.api.letsencrypt.org/acme/chall-v3/[snip]/[snip]
  Wildcard:      false
Status:
  Presented:   false
  Processing:  true
  Reason:      GET https://api.digitalocean.com/v2/domains/cluster.local/records: 429 Too many requests
  State:       pending
Events:
  Type     Reason        Age   From          Message
  ----     ------        ----  ----          -------
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "546fc37a-8095-4e3e-b978-ace4a6234861") Resource not found
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "116e6237-7f6c-4cb5-8f7c-dd0d31061325") Resource not found
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "424fea63-ce7f-4d70-a231-7bedba66bc54") Resource not found
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "f16f41e6-51c8-41d8-8546-35209d670055") Resource not found
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "c8944dc5-8cca-42de-b80c-9f5b331f8ca7") Resource not found
  Warning  PresentError  50m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "ae2ebaf0-957a-4edb-8349-91c77dbb00df") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "b36fb409-4c7a-48ec-b9f7-60abb3a8ab76") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "c4faed5a-c42c-47fe-951f-858833add550") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "c16d4fb6-9e54-4ca3-957b-5620c053c476") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "385657c3-d0f0-4c60-9c6f-5eb26f612cd7") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "44dd0d27-1310-4143-89fc-9ade6fe4005c") Resource not found
  Warning  PresentError  20m   cert-manager  Error presenting challenge: GET https://api.digitalocean.com/v2/domains/cluster.local/records: 404 (request "4237eabf-eac4-453b-a789-f1eeb8af36d9") Resource not found

Cert Manager Config

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-prod-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        digitalocean:
          tokenSecretRef:
            name: digitalocean-dns
            key: access-token

Ingress

kubectl describe -n com-example ingress

Name:             com-example-prod-web
Namespace:        com-example
Address:          123.45.67.89
Default backend:  default-http-backend:80 (<none>)
TLS:
  prod-com-example-tls terminates www.example.com
Rules:
  Host                       Path  Backends
  ----                       ----  --------
  www.example.com  
                             /   com-example-prod-web:80 (123.45.67.89:80)
Annotations:
  kubernetes.io/ingress.class:                nginx
  meta.helm.sh/release-namespace:             com-example
  cert-manager.io/issuer:                     letsencrypt-prod-dns
  ingress.kubernetes.io/ssl-redirect:         "true"
  external-dns/managed:                       true
  meta.helm.sh/release-name:                  com-example-prod
Events:                                       <none>
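
One possible explanation (a guess, with a diagnostic sketch): cert-manager's dns01 solver finds the zone for a domain by walking SOA records through the resolvers it is given, which by default is the in-cluster DNS, so with the CoreDNS rewrite in place the walk ends at cluster.local. That can be checked from inside the cluster, and mitigated on the cert-manager side rather than in hairpin-proxy:

# From any pod using cluster DNS: with the rewrite active, the authority for the
# domain comes back from the cluster.local zone instead of the real public zone
dig SOA www.example.com
dig +short www.example.com

# Possible mitigation: make cert-manager's dns01 zone detection use external
# resolvers (flags on the cert-manager controller, independent of hairpin-proxy):
#   --dns01-recursive-nameservers=1.1.1.1:53
#   --dns01-recursive-nameservers-only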

Incompatibility with node-local-dns

Hi,

Recently I ran into an incompatibility between hairpin-proxy and node-local-dns: hairpin-proxy successfully updates configmap/coredns, but the rewrites never reach node-local-dns.

It seems hairpin-proxy should also keep configmap/node-local-dns updated, appending the rewrite section there as well.

kubectl get configmap -n kube-system coredns -o=jsonpath='{.data.Corefile}'

.:53 {
    rewrite name example.com proxy-c24e38fb75b2c.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    rewrite name www.example.com proxy-c24e38fb75b2c.hairpin-proxy.svc.cluster.local # Added by hairpin-proxy
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
      ttl 60
      reload 15s
      fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
    import /etc/coredns/custom/*.override
}
import /etc/coredns/custom/*.server

kubectl get configmap -n kube-system node-local-dns -o=jsonpath='{.data.Corefile}'

cluster.local:53 {
    errors
    cache {
            success 9984 30
            denial 9984 5
    }
    reload
    loop
    bind 169.254.20.10 10.43.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    health 169.254.20.10:8080
    }
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.43.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.43.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
.:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.43.0.10
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
    }

This is also visible through DNS resolution inside the Pod:

# dig +short example.com @10.244.0.12
10.43.108.37
# dig +short example.com @10.43.0.10
167.235.116.70

Here node-local-dns responds with the external LB IP, while a direct query to the CoreDNS pod returns the correctly rewritten IP of the hairpin proxy.

I'm running a fork of hairpin-proxy, but the same applies to this one.

I hope this info saves someone some time; it would be great if anybody has an idea of how to implement this feature.

Allow other ConfigMap names than "coredns"

Hi!

When deploying CoreDNS using RKE2 in Rancher, the CoreDNS ConfigMap is named "rke2-coredns". But since it is hardcoded here as:

cm = @k8s.api.resource("configmaps", namespace: "kube-system").get("coredns")

... I am unable to use this for my CoreDNS instance.

Perhaps the name of the ConfigMap should be an option that defaults to "coredns" but can be overridden?

CrashLoopBackoff caused by error reading k8s api

I get a CrashLoopBackoff in the controller because of the following error:

It seems to be a transient error, so I don't think the controller should stop when it happens. Can hairpin-proxy be changed to report unhealthy instead of quitting?

This happens when the controller lists ingresses: @k8s.api(api_version).resource("ingresses").list

/usr/local/lib/ruby/2.7.0/openssl/buffering.rb:182:in `sysread_nonblock': Connection reset by peer (Errno::ECONNRESET) (Excon::Error::Socket)
	from /usr/local/lib/ruby/2.7.0/openssl/buffering.rb:182:in `read_nonblock'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/socket.rb:179:in `read_nonblock'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/socket.rb:63:in `readline'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:64:in `block in parse'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:63:in `loop'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:63:in `parse'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/middlewares/response_parser.rb:7:in `response_call'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/middlewares/redirect_follower.rb:82:in `response_call'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/connection.rb:456:in `response'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/connection.rb:287:in `request'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:284:in `request'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:363:in `get'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:46:in `api_resources!'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:54:in `api_resources'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:61:in `find_api_resource'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:73:in `resource'
	from /app/src/main.rb:30:in `block in fetch_ingress_hosts'
	from /app/src/main.rb:28:in `map'
	from /app/src/main.rb:28:in `fetch_ingress_hosts'
	from /app/src/main.rb:62:in `check_and_rewrite_coredns'
	from /app/src/main.rb:134:in `block in main_loop'
	from /app/src/main.rb:132:in `loop'
	from /app/src/main.rb:132:in `main_loop'
	from /app/src/main.rb:144:in `<main>'
/usr/local/lib/ruby/2.7.0/openssl/buffering.rb:182:in `sysread_nonblock': Connection reset by peer (Errno::ECONNRESET)
	from /usr/local/lib/ruby/2.7.0/openssl/buffering.rb:182:in `read_nonblock'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/socket.rb:179:in `read_nonblock'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/socket.rb:63:in `readline'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:64:in `block in parse'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:63:in `loop'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/response.rb:63:in `parse'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/middlewares/response_parser.rb:7:in `response_call'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/middlewares/redirect_follower.rb:82:in `response_call'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/connection.rb:456:in `response'
	from /usr/local/bundle/gems/excon-0.79.0/lib/excon/connection.rb:287:in `request'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:284:in `request'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/transport.rb:363:in `get'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:46:in `api_resources!'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:54:in `api_resources'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:61:in `find_api_resource'
	from /usr/local/bundle/gems/k8s-ruby-0.10.5/lib/k8s/api_client.rb:73:in `resource'
	from /app/src/main.rb:30:in `block in fetch_ingress_hosts'
	from /app/src/main.rb:28:in `map'
	from /app/src/main.rb:28:in `fetch_ingress_hosts'
	from /app/src/main.rb:62:in `check_and_rewrite_coredns'
	from /app/src/main.rb:134:in `block in main_loop'
	from /app/src/main.rb:132:in `loop'
	from /app/src/main.rb:132:in `main_loop'
	from /app/src/main.rb:144:in `<main>'

wish this worked for Arm

Unfortunately this doesn't help me on microk8s on a Raspberry Pi; any chance of an armv8 build?
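
Until official arm images are published, the images can in principle be rebuilt for arm64 with Docker buildx from a checkout of this repo (a sketch; the registry, tag, and Dockerfile path are placeholders, not the project's published images):

# One-time: create a buildx builder that can cross-build
docker buildx create --use

# Build and push a multi-arch controller image, then point deploy.yml at it
docker buildx build --platform linux/arm64,linux/amd64 \
  -t <your-registry>/hairpin-proxy-controller:arm64-test \
  --push <path-to-controller-dockerfile-directory>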
