
kubernetes-ingress's Introduction

HAProxy

HAProxy Kubernetes Ingress Controller

Contributors License Go Report Card

Description

An ingress controller is a Kubernetes component that routes traffic from outside your cluster to services within the cluster, based on Ingress resources.

Detailed documentation can be found within the Official Documentation.

You can also find in this repository a list of all available Ingress annotations.

Usage

The Docker image is available on Docker Hub: haproxytech/kubernetes-ingress

If you prefer to build it from source, use the following (the default platform is linux/amd64; change it with TARGETPLATFORM if needed):

make build

For a non-default platform, set the appropriate TARGETPLATFORM:

make build TARGETPLATFORM=linux/arm/v6

An example environment can be created with

make example

Please see controller.md for all available arguments of the controller image.

Available customisations are described in the doc.

A basic setup to run the controller is described in the yaml file:

kubectl apply -f deploy/haproxy-ingress.yaml

HAProxy Helm Charts

Official HAProxy Technologies Helm Charts for deploying on Kubernetes are available in the haproxytech/helm-charts repository.

Contributing

Thanks for your interest in the project and your willingness to contribute:

Discussion

A GitHub issue is the right place to discuss feature requests, bug reports or any other subject that needs tracking.

To ask questions, get some help or even have a little chat, you can join our #ingress-controller channel in the HAProxy Community Slack.

License

Apache License 2.0

kubernetes-ingress's People

Contributors

andreasdrougge, bedis, cristian-aldea, daniel-corbett, dkorunic, dosmanak, easkay, fabianonunes, frankkkkk, haproxytechblog, hdurand0710, imatmati, interone-ms, ivanmatmati, jaraics, jgranieczny, mjuraga, mo3m3n, monrax, nickmramirez, ocdi, oktalz, oliwer, petuomin, prometherion, rmaticevic, rubycut, toshokan, vgramer, yann-soubeyrand


kubernetes-ingress's Issues

stick table peering support

Does this controller already support stick tables, preferably with peer syncing?
It looks like the models are there, but I'm not sure what the peering configuration would look like.
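For context, a peer-synced stick table in raw HAProxy configuration looks roughly like the sketch below (peer names, addresses and backend names are hypothetical; the controller would need to generate something equivalent):

peers ingress-peers
  peer ingress-0 10.0.0.10:10000
  peer ingress-1 10.0.0.11:10000

backend default-myapp-8080
  stick-table type ip size 100k expire 30m peers ingress-peers
  stick on src
  server SRV_1 10.244.0.12:8080 check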

Basic-auth doesn't work

Hi,

I deployed the haproxy ingress controller and everything seems to work fine except the basic-auth.

For one of my Ingress I need to set basic-auth. My ingress file looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/auth-realm: Authentication Required
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-type: basic
  name: prometheus
  namespace: monitoring
spec:
  rules:
  - host: prometheus.example.fr
    http:
      paths:
      - backend:
          serviceName: prometheus-k8s
          servicePort: web

And when I access my Prometheus dashboard I don't need to enter a password. It doesn't seem to consider my annotations.

Did I miss something, or is it not possible to set basic auth with the HAProxy ingress controller?
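For what it's worth, if the controller expects an htpasswd-style secret (an assumption borrowed from other ingress controllers, not confirmed for this one), the referenced secret would be created roughly like this:

htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth --namespace=monitoring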

thoughts on performance with many ingresses

First of all, I'm really excited about this project! Thanks for all the work so far.

The company I work at manages in the range of ~15000 domains for many small businesses. Currently, a large portion of these domains route to various backends through an haproxy server using ACL rules reading from regularly updated lists of hostnames. We also do SSL termination via haproxy, with certs for every hostname stored in a directory and loaded on startup.

We’re in the process of migrating our services to kubernetes, and would love to manage these domains using the haproxy ingress. Up until now we’ve used the community nginx-ingress-controller inside kubernetes, and had performance issues as it needs to write to a .conf file whenever any ingress is changed, which for even a couple hundred ingresses causes slowdown and even 502's.

Is the haproxy controller a good fit for this use-case, and is there a better way to approach it inside kubernetes than our current approach outside of k8s? There’s a regular amount of addition and subtraction to hostnames and SSL certs (acquired via cert-manager), so we’re looking for the cleanest way to pick up these changes with zero-downtime.

If there is a better place to ask this kind of question just let me know!

Host matching ACLs missing <host>:<port> form

Hi,

I think the ACLs matching Host headers for routing are inaccurate:
use_backend service1 if { req.hdr(host) -i foo.bar.com } { path_beg /foo }

The Host header can have the following forms:

  • foo.bar.com
  • foo.bar.com:<port>

The ACL above will not match the second form.
So it would be good to extend this ACL so that it also matches the host with the port configured on the frontend appended.
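A sketch of what the extended ACL might look like, assuming the frontend is bound to port 80 (generated names are illustrative):

use_backend service1 if { req.hdr(host) -i foo.bar.com foo.bar.com:80 } { path_beg /foo }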

haproxy doesn't start

Hey,

I am running kubernetes on dcos 1.13.3.
Kubernetes version is 1.14.3

kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/master/deploy/haproxy-ingress.yaml
When I try to start the haproxy ingress controller, the health check is not responding.

When I disable it and exec /bin/sh inside the pod, I see only these processes:

PID USER TIME COMMAND
  1 root 0:00 /dumb-init -- /start.sh --configmap=default/haproxy-configmap --defau...
  6 root 0:00 /haproxy-ingress-controller --configmap=default/haproxy-configm...
 38 root 0:00 /bin/sh
 46 root 0:00 ps

netstat -l doesn't show any listening port.

And when I try to launch haproxy manually:

/ # haproxy -d -f /etc/haproxy/haproxy.cfg
[WARNING] 213/140024 (78) : config : missing timeouts for frontend 'healthz'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 213/140024 (78) : config : missing timeouts for frontend 'http'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 213/140024 (78) : config : missing timeouts for frontend 'https'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 213/140024 (78) : config : missing timeouts for frontend 'stats'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 213/140024 (78) : config : missing timeouts for backend 'default-nginx-service-80'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
[WARNING] 213/140024 (78) : config : missing timeouts for backend 'kube-system-ingress-default-backend-8080'.
| While not properly invalid, you will certainly encounter various problems
| with such a configuration. To fix this, please ensure that all following
| timeouts are set to a non-zero value: 'client', 'connect', 'server'.
Note: setting global.maxconn to 536870846.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
Killed

http2 support for backends

Hello!

Are there any plans to add the ability to add http2 support to the backends via annotations or otherwise?

If not, is there a suggested way to proxy gRPC traffic using this ingress controller?
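Not confirmed for the controller itself, but in raw HAProxy 2.x, HTTP/2 (and therefore gRPC) towards a backend is enabled with proto h2 on the server line; a sketch with hypothetical names and addresses:

backend default-grpc-service-50051
  mode http
  server SRV_1 10.244.1.15:50051 proto h2 check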

ingress.class implemented?

It looks like the haproxy ingress controller is not honoring any "ingress.class" config. From this file:

"ingress.class": &StringW{Value: ""},

I would think this configmap would set the "ingress.class"

apiVersion: v1
kind: ConfigMap
metadata:
  name:  CONFIGMAP_NAME
data:
  ingress.class: MY_CLASS

However, from the logs it looks like the controller is trying to set up servers for everything in my cluster. Is this supported yet?
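For reference, with controllers that do honor ingress classes, each Ingress is usually tagged with the class via an annotation like the one below; whether this controller reads it is exactly what this issue is asking:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: MY_CLASS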

Ingress is not serving on service's targetPort

Hi,
Given the following set of manifests (shortened to relevant parts)

kind: Ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: foo-app
          servicePort: http
        path: /

---
kind: Service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http

---
kind: Deployment
spec:
  template:
    spec:
      containers:
      - ports:
        - name: http
          containerPort: 8080
          protocol: TCP

the resulting backend is

backend default-app-foo-80
  mode http
  balance roundrobin
  option forwardfor
  server SRV_qZUbo 100.96.3.89:80 disabled check weight 128

Since it is using pod IPs directly, it should use port 8080; port 80 is the service port, which a call to app-foo.default.cluster.local would use and kube-proxy would forward to 8080.
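For comparison, a sketch of the expected backend, targeting the containerPort:

backend default-app-foo-80
  mode http
  balance roundrobin
  option forwardfor
  server SRV_qZUbo 100.96.3.89:8080 check weight 128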

Haproxy return 503 when service call haproxy ingress controller on different node

Hello,

I have a really big problem. I have investigated it a lot, but I don't know where to look now.

My problem:
Randomly, when I try to call my ingresses, I get a 503.
My ingress configuration:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"galleries","namespace":"default"},"spec":{"rules":[{"host":"galleries.k8s.com","http":{"paths":[{"backend":{"serviceName":"galleries","servicePort":8080},"path":"/"}]}}]}}
  creationTimestamp: "2019-07-16T09:47:51Z"
  generation: 1
  name: galleries
  namespace: default
  resourceVersion: "537902"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/galleries
  uid: 41d08268-3b85-429a-82c8-a8aecaaffa1d
spec:
  rules:
  - host: galleries.k8s.com
    http:
      paths:
      - backend:
          serviceName: galleries
          servicePort: 8080
        path: /
status:
  loadBalancer: {}

My haproxy configmap:

apiVersion: v1
data:
  check: enabled
  dynamic-scaling: "true"
  forwarded-for: enabled
  load-balance: roundrobin
  maxconn: "2000"
  nbthread: "1"
  rate-limit: "OFF"
  rate-limit-expire: 30m
  rate-limit-interval: 10s
  rate-limit-size: 100k
  servers-increment: "8"
  servers-increment-max-disabled: "66"
  ssl-numproc: "0"
  ssl-redirect: "OFF"
  syslog-endpoint: 10.109.129.58:514
  timeout-client: 50s
  timeout-connect: 5s
  timeout-http-keep-alive: 1m
  timeout-http-request: 5s
  timeout-queue: 5s
  timeout-server: 50s
  timeout-tunnel: 1h
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"check":"enabled","forwarded-for":"enabled","load-balance":"roundrobin","maxconn":"2000","nbthread":"1","rate-limit":"OFF","rate-limit-expire":"30m","rate-limit-interval":"10s","rate-limit-size":"100k","servers-increment":"42","servers-increment-max-disabled":"66","ssl-certificate":"default/tls-secret","ssl-numproc":"1","ssl-redirect":"OFF","ssl-redirect-code":"302","timeout-client":"50s","timeout-connect":"5s","timeout-http-keep-alive":"1m","timeout-http-request":"5s","timeout-queue":"5s","timeout-server":"50s","timeout-tunnel":"1h"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"haproxy-configmap","namespace":"default","resourceVersion":"539852"}}
  creationTimestamp: "2019-07-16T11:49:16Z"
  name: haproxy-configmap
  namespace: default
  resourceVersion: "2474693"
  selfLink: /api/v1/namespaces/default/configmaps/haproxy-configmap
  uid: 89cbe998-f611-4bdd-a95c-4eb4f6499bbf

My haproxy deployment:

root@k8s-qa001:~# kubectl get deploy haproxy-ingress -o yaml 
Error from server (NotFound): deployments.extensions "haproxy-ingress" not found
root@k8s-qa001:~# kubectl get deploy haproxy-ingress -o yaml  -n haproxy-controller
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"run":"haproxy-ingress"},"name":"haproxy-ingress","namespace":"haproxy-controller"},"spec":{"replicas":3,"selector":{"matchLabels":{"run":"haproxy-ingress"}},"template":{"metadata":{"labels":{"run":"haproxy-ingress"}},"spec":{"containers":[{"args":["--configmap=default/haproxy-configmap","--default-backend-service=$(POD_NAMESPACE)/ingress-default-backend"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"image":"haproxytech/kubernetes-ingress","livenessProbe":{"httpGet":{"path":"/healthz","port":1042}},"name":"haproxy-ingress","ports":[{"containerPort":80,"name":"http"},{"containerPort":443,"name":"https"},{"containerPort":1024,"name":"stat"}],"resources":{"requests":{"cpu":"500m","memory":"50Mi"}}}],"serviceAccountName":"haproxy-ingress-service-account"}}}}
  creationTimestamp: "2019-07-26T15:32:06Z"
  generation: 20
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
  resourceVersion: "2484985"
  selfLink: /apis/extensions/v1beta1/namespaces/haproxy-controller/deployments/haproxy-ingress
  uid: bf077817-cf37-45f6-9db7-4ea8162db899
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: haproxy-ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - args:
        - --configmap=default/haproxy-configmap
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: haproxytech/kubernetes-ingress
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 1042
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: haproxy-ingress
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 1024
          name: stat
          protocol: TCP
        resources:
          requests:
            cpu: 500m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - SYS_PTRACE
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: haproxy-ingress-service-account
      serviceAccountName: haproxy-ingress-service-account
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2019-07-26T15:32:06Z"
    lastUpdateTime: "2019-07-30T13:42:50Z"
    message: ReplicaSet "haproxy-ingress-6bd9fbdf6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2019-07-30T14:56:30Z"
    lastUpdateTime: "2019-07-30T14:56:30Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 20
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

My investigations :

1 - The problem happens ONLY when I scale the haproxy-controller pods.
2 - The problem happens ONLY when the haproxy service calls an haproxy pod outside of the current node (I've already tried setting externalTrafficPolicy to Local or Cluster; anyway that should only matter for NodePort access, not ingress?).
I can prove it by watching the iptables rules packet matching:

Chain KUBE-SVC-POGEE3ZVCPTG4ZOO (8 references)
 pkts bytes target     prot opt in     out     source               destination         
Other node => HTTP 503 NOK:
    0     0 KUBE-SEP-UHN4AYUCBJBZCMCX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.33332999982
Other node => HTTP 503 NOK:
    0     0 KUBE-SEP-DXIK5R47GRH46BRL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.50000000000
Pod on node where I call ingress => HTTP 200 OK:
    0     0 KUBE-SEP-3ZVFJIVQPV4GJPB7  all  --  *      *       0.0.0.0/0            0.0.0.0/0

3 - Packets can reach the other node's haproxy controller.
Using tcpdump inside the haproxy pod on the other node, I can see that it receives the request (source-NATed here because externalTrafficPolicy is Cluster):

15:21:40.869646 IP (tos 0x0, ttl 62, id 21860, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.88.192.34824 > 192.168.167.125.80: Flags [S], cksum 0x614c (correct), seq 2332571645, win 29200, options [mss 1460,sackOK,TS val 1159553689 ecr 0,nop,wscale 7], length 0
	0x0000:  4500 003c 5564 4000 3e06 65c9 c0a8 58c0  E..<Ud@.>.e...X.
	0x0010:  c0a8 a77d 8808 0050 8b08 37fd 0000 0000  ...}...P..7.....
	0x0020:  a002 7210 614c 0000 0204 05b4 0402 080a  ..r.aL..........
	0x0030:  451d 6299 0000 0000 0103 0307            E.b.........
15:21:40.869671 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.167.125.80 > 192.168.88.192.34824: Flags [S.], cksum 0x81bd (incorrect -> 0x308c), seq 2402364309, ack 2332571646, win 27760, options [mss 1400,sackOK,TS val 275540824 ecr 1159553689,nop,wscale 7], length 0
	0x0000:  4500 003c 0000 4000 4006 b92d c0a8 a77d  E..<..@[email protected]...}
	0x0010:  c0a8 58c0 0050 8808 8f31 2b95 8b08 37fe  ..X..P...1+...7.
	0x0020:  a012 6c70 81bd 0000 0204 0578 0402 080a  ..lp.......x....
	0x0030:  106c 6b58 451d 6299 0103 0307            .lkXE.b.....
15:21:41.892087 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.167.125.80 > 192.168.88.192.34824: Flags [S.], cksum 0x81bd (incorrect -> 0x2c8d), seq 2402364309, ack 2332571646, win 27760, options [mss 1400,sackOK,TS val 275541847 ecr 1159553689,nop,wscale 7], length 0
	0x0000:  4500 003c 0000 4000 4006 b92d c0a8 a77d  E..<..@[email protected]...}
	0x0010:  c0a8 58c0 0050 8808 8f31 2b95 8b08 37fe  ..X..P...1+...7.
	0x0020:  a012 6c70 81bd 0000 0204 0578 0402 080a  ..lp.......x....
	0x0030:  106c 6f57 451d 6299 0103 0307            .loWE.b.....
15:21:43.908068 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.167.125.80 > 192.168.88.192.34824: Flags [S.], cksum 0x81bd (incorrect -> 0x24ad), seq 2402364309, ack 2332571646, win 27760, options [mss 1400,sackOK,TS val 275543863 ecr 1159553689,nop,wscale 7], length 0
	0x0000:  4500 003c 0000 4000 4006 b92d c0a8 a77d  E..<..@[email protected]...}
	0x0010:  c0a8 58c0 0050 8808 8f31 2b95 8b08 37fe  ..X..P...1+...7.
	0x0020:  a012 6c70 81bd 0000 0204 0578 0402 080a  ..lp.......x....
	0x0030:  106c 7737 451d 6299 0103 0307            .lw7E.b.....
15:21:47.940069 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    192.168.167.125.80 > 192.168.88.192.34824: Flags [S.], cksum 0x81bd (incorrect -> 0x14ed), seq 2402364309, ack 2332571646, win 27760, options [mss 1400,sackOK,TS val 275547895 ecr 1159553689,nop,wscale 7], length 0
	0x0000:  4500 003c 0000 4000 4006 b92d c0a8 a77d  E..<..@[email protected]...}
	0x0010:  c0a8 58c0 0050 8808 8f31 2b95 8b08 37fe  ..X..P...1+...7.
	0x0020:  a012 6c70 81bd 0000 0204 0578 0402 080a  ..lp.......x....
	0x0030:  106c 86f7 451d 6299 0103 0307            .l..E.b.....

4 - The haproxy process can't see the request.
Let's strace the process during the requests:
/ # strace -p 387 2>&1 | tee /tmp/trace
Let's analyze it after the request.
I can see the haproxy health checks:

/ # grep 192.168.167.108 /tmp/trace 
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = -1 EINPROGRESS (Operation in progress)
connect(15, {sa_family=AF_INET, sin_port=htons(9000), sin_addr=inet_addr("192.168.167.108")}, 16) = 0

BUT, I can't find any connection from my remote node IP ...

/ # grep 192.168.88.192 /tmp/trace
/ #

Can someone help me?

Kubernetes Ingress cannot fetch cluster ip of service

I set up a single-node Kubernetes control plane with Calico and HAProxy. Now whenever I create an Ingress, the address remains empty and the server returns a 503 error.

The following shows my kubernetes deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: wordpress
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: wordpress.example.org
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-service
              servicePort: 8080

This is the output from the Kubernetes CLI.

NAME                             HOSTS                   ADDRESS   PORTS   AGE
ingress.extensions/web-ingress   wordpress.example.org             80      35s

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    10h
service/nginx-service   ClusterIP   10.97.189.233   <none>        8080/TCP   35s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-deployment   1/1     1            1           35s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-7798df5dd5-gnwf2   1/1     Running   0          35s

NAME                      ENDPOINTS              AGE
endpoints/kubernetes      164.68.103.199:6443    10h
endpoints/nginx-service   192.168.104.150:8080   36s
Pascals-MBP-c8a4:api-gateway pascal$

I expect the Ingress to receive the cluster IP of the service, listen on the given host, and serve something other than the 503 errors it returns now.

// Edit: It's a standalone node, not a desktop version or minikube installation!

Can I use this controller for MQTT?

First of all, amazing work on HAProxy 2.0!

I was wondering if I can use this controller for MQTT ingress.

Currently I have a custom haproxy image to provide TLS termination with a configuration similar to this:

listen mqtt
  bind *:1883
  bind *:8883 ssl crt /certs/server.pem ca-file /certs/client.crt verify required
  mode tcp
  option clitcpka
  timeout client 3h
  timeout server 3h
  server mqtt_server ${MQTT_SERVER}:${MQTT_PORT}

I point to the mqtt cluster service with the environment variables MQTT_SERVER and MQTT_PORT.

Thanks in advance for your help.

ingress can not find new apps backends

When I add a new app to the running ingress, it is not added to haproxy.cfg:

2019/09/06 13:41:57 controller.go:278 11: backend default-app5-80 does not exist
2019/09/06 13:41:57 controller-haproxy.go:151: 14: ERR transactionId=f249fd2b-4c5c-4873-8206-1fe036cb8a1a 
msg="Proxy 'http': unable to find required use_backend: 'default-app5-80'."
msg="Proxy 'https': unable to find required use_backend: 'default-app5-80'."
2019/09/06 13:41:57 controller-monitor.go:121: 14: ERR transactionId=f249fd2b-4c5c-4873-8206-1fe036cb8a1a 
msg="Proxy 'http': unable to find required use_backend: 'default-app5-80'."
msg="Proxy 'https': unable to find required use_backend: 'default-app5-80'."

After restarting the ingress controller, it is fine.

I am applying the app:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app5
  name: app5
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app5
  template:
    metadata:
      labels:
        run: app5
    spec:
      containers:
      - name: app5
        image: httpd
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app5
  name: app5
  annotations:
    haproxy.org/check: "enabled"
    haproxy.org/forwarded-for: "enabled"
    kubernetes.io/ingress.class: "haproxy"
spec:
  selector:
    run: app5
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 80

and then I update the ingress by adding another host entry:

.....
  - host: bla.example.de
    http:
      paths:
      - backend:
          serviceName: app5
          servicePort: 80
        path: /
....

I am using v1.2.1.

Or is it mandatory to give every app its own ingress?

[rate-limit] missing track rule

The configuration generated by the rate-limit feature is missing a "track" rule, which means it currently does not rate-limit anything:

  acl ratelimit_cnt_abuse src_get_gpc0(RateLimit) gt 0
  acl ratelimit_inc_cnt_abuse src_inc_gpc0(RateLimit) gt 0
  acl ratelimit_is_abuse src_http_req_rate(RateLimit) ge 10
  http-request deny deny_status 0 if ratelimit_is_abuse ratelimit_inc_cnt_abuse
  http-request deny deny_status 0 if ratelimit_cnt_abuse
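A sketch of the kind of track rule that appears to be missing (using the table name from the snippet above; exact placement would be up to the controller):

  http-request track-sc0 src table RateLimit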

add support for selectorless services

Given the existing codebase, you filter all pods by the service selector and may optionally apply the "check/check-interval" annotations.

I see the following issues:

  1. You do not honor static (selectorless) Services. Manual Endpoints are not balanced to, since they have no pods associated with them (a minimal example follows this list).
  2. There could be an issue where the pod readiness check changes between versions but the service annotation remains the same. Kubernetes (and the Deployment mechanism) counts the new pods as operational, whereas the ingress controller stops routing traffic to the pods in question, bringing the service down.
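A minimal selectorless Service with manually managed Endpoints, for reference (names and addresses are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 10.1.2.3
  ports:
  - port: 5432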

The ingress controller is unable to handle a large list of IP addresses

Hello!

When you provide a long list of IPs to whitelist through ingress annotations, haproxy crashes upon being reloaded, ending up in a crash loop.

2019/07/29 10:57:54 controller-haproxy.go:150: 14: ERR transactionId=7bb0ca3b-59b5-4acc-8420-af3a8520cb46                                                                                                                                                                       
msg="parsing [/tmp/haproxy/haproxy.cfg.7bb0ca3b-59b5-4acc-8420-af3a8520cb46:16]: line too long, truncating at word 65, position 920: <192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50
.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50
.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 192.168.50.1/24 }>."      
line=16 msg="error detected while parsing an 'http-request allow' condition missing closing '}' in condition."

This stems from the fact that haproxy itself has a limit on how many words a single line of haproxy.cfg can include.

I've tested around and a valid workaround is to put the whitelist in a separate file and use the { src -f /path/to/whitelist.lst } syntax in the config file (assuming whitelist.lst is the file with CIDRs separated by newlines).

Would it be desirable to make the Ingress controller do it that way rather than putting the IP ranges inline?
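A sketch of what the file-based variant might look like (the path is hypothetical; the file would contain one CIDR per line):

http-request allow if { src -f /etc/haproxy/whitelist.lst }
http-request deny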

Filter incoming traffic with annotations

Hello,

I need to implement whitelist filters using annotations.
I do not need to implement blacklist filters right now.
I need to match the client IP at the connection layer (src), not in the HTTP headers (X-Forwarded-For or any customizable header).
IMHO, this issue is linked with issue #50, as I think the annotation I need would be named whitelist, but the existing whitelist annotation does not do what I need.

Wrong port in haproxy.cfg

I've followed an example setup presented in https://www.haproxy.com/blog/dissecting-the-haproxy-kubernetes-ingress-controller/. I'm using minikube:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

My config:

---
apiVersion: v1
kind: Namespace
metadata:
  name: haproxy-controller

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: haproxy-ingress-service-account
  namespace: haproxy-controller

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: haproxy-ingress-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - services
  - namespaces
  - events
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  - ingresses/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
  - create
  - patch
  - update
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: haproxy-ingress-cluster-role-binding
  namespace: haproxy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: haproxy-ingress-cluster-role
subjects:
- kind: ServiceAccount
  name: haproxy-ingress-service-account
  namespace: haproxy-controller

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-configmap
  namespace: default
data:
  servers-increment: "2"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: haproxy-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: haproxy-controller
spec:
  selector:
    run: ingress-default-backend
  ports:
  - name: port-1
    port: 8080
    protocol: TCP
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: haproxy-ingress-service-account
      containers:
      - name: haproxy-ingress
        image: haproxytech/kubernetes-ingress:1.1.4
        args:
          # - --default-ssl-certificate=default/tls-secret
          - --configmap=default/haproxy-configmap
          - --default-backend-service=haproxy-controller/ingress-default-backend
        resources:
          requests:
            cpu: "500m"
            memory: "50Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 1042
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1024
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace

---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: haproxy-controller
spec:
  selector:
    run: haproxy-ingress
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: stat
    port: 1024
    protocol: TCP
    targetPort: 1024

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: jmalloc/echo-server
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: app
  name: app
  annotations:
    haproxy.org/check: "enabled"
    haproxy.org/forwarded-for: "enabled"
    haproxy.org/load-balance: "roundrobin"
spec:
  selector:
    run: app
  ports:
  - name: port-1
    port: 80
    protocol: TCP
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: default
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80

Let's see what's inside haproxy.cfg.

$ kubectl -n haproxy-controller exec haproxy-ingress-7669c9dcb6-c9f58 cat /etc/haproxy/haproxy.cfg
# [...]
backend default-app-80
  mode http
  balance roundrobin
  option forwardfor
  server SRV_YkL17 172.17.0.6:80 check weight 128
  server SRV_kIZ9P 172.17.0.7:80 check weight 128
# [...]

The backend servers have the wrong port (80). Obviously HAProxy can't connect:
[screenshot omitted]

Why is the targetPort from the app Service ignored? If I change the app container to listen on port 80, then everything works fine. These are the "fixing" changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: app
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: app
        image: jmalloc/echo-server
        env:
          - name: PORT
            value: "80"
        ports:
        - containerPort: 80

However, I guess this is not a desired solution.

Am I doing something wrong?

Concerning messages in logs

I was evaluating this and, among other issues, I saw this in the logs:

2019/06/27 15:02:34 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-42hcm
2019/06/27 15:02:34 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-cxx46
2019/06/27 15:02:34 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-sz2w9
2019/06/27 15:02:34 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-clusterchecks-6b99b5f5bd-zw2gv
2019/06/27 15:02:36 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-wqk4x
2019/06/27 15:02:37 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-clusterchecks-6b99b5f5bd-2m5sj
2019/06/27 15:02:38 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-c7j5j
2019/06/27 15:02:40 controller-events.go:229: Pod not registered with controller, cannot modify ! prometheus-prometheus-operator-prometheus-master-0
2019/06/27 15:02:41 controller-events.go:229: Pod not registered with controller, cannot modify ! datadog-lpf2k

Why would the ingress be doing anything at all with these pods, which are unrelated to any ingress record or deployment, and why would it be trying to modify them in any way?

Error when enabling rate-limiting

I set the following YAML in my configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress-external
  labels:
    app.kubernetes.io/name: haproxy-ingress
    helm.sh/chart: haproxy-ingress-0.1.0
    app.kubernetes.io/instance: external
    app.kubernetes.io/managed-by: Tiller
data:
  servers-increment: "10"
  servers-increment-max-disabled: "10"
  timeout-connect: "250ms"
  rate-limit: "ON"
  rate-limit-expire: "1m"
  rate-limit-interval: "10s"
  rate-limit-size: "100k"

And got the following error logs from the controller:

2019/07/05 09:03:08 controller-haproxy.go:140: 14: ERR transactionId=14481e20-00f6-4e3b-899c-03b26a747e67 
line=16 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_get_gpc0' in proxy 'http'."
line=17 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_inc_gpc0' in proxy 'http'."
line=18 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_http_req_rate' in proxy 'http'."
line=28 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_get_gpc0' in proxy 'https'."
line=29 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_inc_gpc0' in proxy 'https'."
line=30 msg="unable to find table 'RateLimit' referenced in arg 1 of ACL keyword 'src_http_req_rate' in proxy 'https'."
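For reference, the table those ACLs refer to would typically be defined in a dedicated backend, roughly like this (a sketch of standard HAProxy syntax, not necessarily what the controller should generate):

backend RateLimit
  stick-table type ip size 100k expire 1m store gpc0,http_req_rate(10s)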

[rate-limit] deny status set to 0

In the configuration generated by the rate-limit feature, the deny status code is set to 0:

  acl ratelimit_cnt_abuse src_get_gpc0(RateLimit) gt 0
  acl ratelimit_inc_cnt_abuse src_inc_gpc0(RateLimit) gt 0
  acl ratelimit_is_abuse src_http_req_rate(RateLimit) ge 10
  http-request deny deny_status 0 if ratelimit_is_abuse ratelimit_inc_cnt_abuse
  http-request deny deny_status 0 if ratelimit_cnt_abuse
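One would expect a real HTTP status to be emitted instead, e.g. 403 (or 429), roughly:

  http-request deny deny_status 403 if ratelimit_is_abuse ratelimit_inc_cnt_abuse
  http-request deny deny_status 403 if ratelimit_cnt_abuse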

Support for "pathless" ingress rules

Hi,

Kubernetes allows writing such type of rule:

  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: http

Based on the rule above, the current Ingress Controller (1.0.1) generates the following configuration:
use_backend service1 if { req.hdr(host) -i foo.bar.com } { path_beg }

Unfortunately, the ACL { path_beg } will never match, so this backend will never receive any traffic.

Basically, if no path is provided, we should not emit any path_beg ACL.
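In other words, the expected output for the rule above would simply be:

use_backend service1 if { req.hdr(host) -i foo.bar.com }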

[ssl] use `crt-list` to map SNI to certs

Kubernetes ingress rules allow associating domains with certificates, e.g.:

kind: Ingress
metadata:
  name: foo-tls
  namespace: default
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foobar
  - hosts:
    - bar.baz.com
    secretName: barbaz

This would work out of the box with current implementation because there is no overlap between foo.bar.com and bar.baz.com.

Now, let's try this:

kind: Ingress
metadata:
  name: foo-tls
  namespace: default
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: bar-wildcard
  - hosts:
    - www.bar.com
    secretName: bar-www

The configuration above can be "translated" into HAProxy's crt-list feature: https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#5.1-crt-list.

The beauty is that this would allow implementing some SSL/TLS options "per host" (they could be set as annotations in the ingress rule).
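For illustration, the second example might translate into a crt-list roughly like this (file paths are hypothetical):

# /etc/haproxy/crt-list.txt
/etc/haproxy/certs/bar-wildcard.pem foo.bar.com
/etc/haproxy/certs/bar-www.pem www.bar.com

frontend https
  bind *:443 ssl crt-list /etc/haproxy/crt-list.txt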

Helm chart?

It'd be great to have the ability to deploy using a Helm Chart.

Named service port set to "0" in backend name

Hi,

Imagine I create the following service:

kind: Service
spec:
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http

And the following ingress rule:

kind: Ingress
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: http

Then the name of the relevant backend in HAProxy configuration would be <namespacename>-<servicename>-<port number>

In the case of the service above, <port number> would be replaced by 0, which is inaccurate and could lead to backend name collisions.

Ingress controller not listening on port 80/443

Hi,

So I'm having issues implementing the HAProxy ingress controller. I thought that Ingress controllers were meant to listen on ports 80 & 443 - however, these aren't exposed on the nodes.

If I list out the services I can get the node ports -

[root@master server]$ kubectl get svc -o wide -n haproxy-controller
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE   SELECTOR
haproxy-ingress           NodePort    10.102.75.131   <none>        80:30990/TCP,443:30486/TCP,1024:31359/TCP   27m   run=haproxy-ingress
ingress-default-backend   ClusterIP   10.104.89.140   <none>        8080/TCP                                    27m   run=ingress-default-backend

So from here I can see that port 80 translates to 30990 and 443 to 30486. I've configured my external HAProxy load balancer to point to these ports, and it suggests they are up and running. I can also see the HAProxy stats on port 1024:31359.

I'm running a service called geoip -

[screenshot: HAProxy stats page]

My ingress rules look like this -

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: geoip
spec:
  rules:
    - host: geoip.example.com
      http:
        paths:
          - backend:
              serviceName: geoip
              servicePort: 8080

Port 8080 is the port my image listens on, and I can confirm this is working. I've also tested with the nginx ingress with HAProxy as an external LB and it works well.

However, when I try to access geoip.example.com via the external load balancer I just get an empty response. Any ideas why this would be?

Thanks,
Chris.

Support caching CORS

HAProxy has a built-in, "maintenance-free" HTTP cache that has supported caching OPTIONS requests since this commit: haproxy/haproxy@1263540.

When load-balancing APIs, a lot of OPTIONS requests related to CORS are issued to the application server.
The idea here is to configure HAProxy to cache these responses to offload processing on the server side and improve performance of the entire application.

Note that as of today, the HAProxy cache does not yet support the Vary header, so this would be useful only when the server sends access-control-allow-origin: *.

The HAProxy configuration should be stored in the relevant backend and would look like:

backend b_myapp
  http-request cache-use cors if METH_OPTIONS
  http-response cache-store cors if METH_OPTIONS

cache cors
  total-max-size 64
  max-object-size 1024
  max-age 60

Note that the server must send a Cache-Control header which allows caching (e.g. max-age=60,private) and, of course, no Vary header.

To start with, I think we need the following annotations:

  • cors-cache-enable: boolean, default false

I think a single cache bucket (called "cors" in the example above) is sufficient, with reasonable default values. Maybe we could allow tuning the cors cache bucket parameters through annotations in the controller's configmap:

  • cachebucket-cors-total-max-size: max size in bytes, default 64
  • cachebucket-cors-max-object-size: max object size in bytes, default 1024
  • cachebucket-cors-max-age: how long HAProxy will keep an object in the cache in seconds, default 60

/cc @vgallissot
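A sketch of how the proposed keys might look once implemented (nothing here exists yet; the annotation name follows the proposal above and the haproxy.org/ prefix is assumed):

# Service or Ingress annotation
metadata:
  annotations:
    haproxy.org/cors-cache-enable: "true"

# Controller configmap entry
data:
  cachebucket-cors-max-age: "60"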

Service port change not reflected

hi,

When I change a service port, the ingress controller does not follow up and HAProxy is still configured with the old port.

Given the following Ingress rule:

kind: Ingress
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: http

And Service definition:

kind: Service
spec:
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http

All of this works.

Now, I update my Service to match the following:

kind: Service
spec:
  ports:
    - port: 10000
      targetPort: http
      protocol: TCP
      name: http

Then my HAProxy configuration is not updated and the port in this backend is still set to the old value (8080).

Traffic shadowing

Is it possible to use a traffic shadowing feature with the HAProxy Kubernetes ingress? Looking at the annotations, nothing looks like it could be used for this.

docs: pin version in docs and yaml

For the sake of reproducibility, I would suggest pinning versions both in the docs and in the yaml. E.g., based on the releases, it should read:

$ kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/v1.1.1/deploy/haproxy-ingress.yaml

and the version of the image inside that yaml should probably be pinned in any case.

[ingress class] don't apply ingress rules with empty ingress class

Hi,
When an ingress rule is deployed with no class, the current ingress controller will apply it, regardless of its own ingress class ownership.

I'd like to change this behavior and treat "no class" ingress rules on their own, so that only an ingress controller with no ingress class applies them.

haproxy keeps serving default ssl.

Hey there,

I am using haproxy, cert-manager and let's encrypt in my k8s tech stack.

Whenever I am deploying an ingress with another domain like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  namespace: cloud-system
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: "http01"
spec:
  tls:
    - hosts:
        - api.example.me
      secretName: api-example-me-cert
  rules:
    - host: api.example.me
      http:
        paths:
          - path: /
            backend:
              serviceName: "api-service"
              servicePort: 80

My haproxy setup always returns the default cert (set up in the haproxy-ingress deployment as an argument, e.g. --default-ssl-certificate=default/example-tld-tls), and I don't know why.

What would be the best way to alter the base haproxy.cfg?

To be more specific, I'm running the ingress with hostNetwork: true as my cluster is deployed on bare metal.

I'd like to lock down the stats/metrics port to only internal cluster IP's.

I was thinking that I could create my own base config and use a startup script to copy my config in over the base one, or spin my own image.

It would be great if that kind of config data could be pulled from the configmap as well.

Rename "whitelist" and remove "whitelist-with-rate-limit" annotation

The name of this annotation is confusing, since it applies to the "rate limit" function only.
It would be good to rename it to rate-limit-whitelist.
In the meantime, I think the annotation whitelist-with-rate-limit is redundant and that we could apply the whitelist to the rate limit only when rate-limit-whitelist is set.

haproxy-ingress-controller just serves the default-ssl-certificate set on the yaml

When using the haproxy-ingress-controller, it just serves the default-ssl-certificate set in the yaml.

I have verified the secret which contains the needed certificate (generated by cert-manager) and it is correct.

This secret is not linked anywhere. There is no cert file in /etc/haproxy/certs other than the one specified in default-ssl-certificate.

The certificate is set in the ingress section of my chart like below:

 ssl-certificate: lool/sub-server-tls
    hosts:
    - sub.mydomain.com
  tls:
    - secretName: lool-server-tls
      hosts:
        - sub.mydomain.com

I made a few changes to haproxy-ingress.yaml to adjust it to my needs and system. Even if I don't see how these changes could interfere with the SSL part, see the attachment:
haproxy-ingress.yaml.txt
This is the haproxy.cfg from inside the pod:
haproxy.cfg.txt

Confirm haproxy supports ingress spec.tls[].secretName?

Does this ingress controller support specifying the TLS cert via the spec.tls[].secretName key on the Ingress specification? I can't find any evidence for it in the documentation or code.

From the official documentation at https://kubernetes.io/docs/concepts/services-networking/ingress/:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
    - host: sslexample.foo.com
      http:
        paths:
        - path: /
          backend:
            serviceName: service1
            servicePort: 80

Support for proxy-protocol

I have use cases where I use HAProxy as an external load-balancer.
In this use case, the external load-balancer uses the proxy protocol and TCP mode only to connect to the ingress controllers.

This is dependent on the following issue in client-native, I guess:
haproxytech/client-native#4
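In raw HAProxy terms, accepting proxy-protocol connections from the external load-balancer would mean the controller's frontends bind with accept-proxy, roughly like this sketch:

frontend http
  bind *:80 accept-proxy

frontend https
  bind *:443 ssl crt /etc/haproxy/certs accept-proxy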
