
f5-ipam-controller's Issues

ip-range should accept comma separated list of IPs

Description

This is more or less a question. I haven't seen any docs that describe the accepted format of ip-range. Can it accept a comma-separated list? I have a block of IPs, some of which are already in use in the middle, and I would like to use an IP range like so:

{"dev":"192.168.1.2-192.168.1.50,192.168.1.55-192.168.1.200"}

If this is already possible that would be great to know!
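For illustration, here is a minimal sketch of how a comma-separated multi-range value like the one above could be parsed into per-label IP pools. This is a hypothetical parser, not FIC's actual implementation; it assumes each label maps to one or more start-end IPv4 ranges joined by commas:

```python
import ipaddress
import json

def parse_ip_range(value: str) -> dict:
    """Parse an ip-range value where each label maps to one or more
    'start-end' IPv4 ranges separated by commas (hypothetical format)."""
    pools = {}
    for label, ranges in json.loads(value).items():
        ips = []
        for rng in ranges.split(","):
            start, end = (ipaddress.ip_address(p.strip()) for p in rng.split("-"))
            addr = start
            while addr <= end:
                ips.append(str(addr))
                addr += 1  # IPv4Address supports integer arithmetic
        pools[label] = ips
    return pools

dev = parse_ip_range('{"dev":"192.168.1.2-192.168.1.4,192.168.1.55-192.168.1.56"}')["dev"]
# dev == ["192.168.1.2", "192.168.1.3", "192.168.1.4", "192.168.1.55", "192.168.1.56"]
```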

Deploying 20 Type LB services takes more than 3-4 minutes

Setup Details

FIC Version : 0.1.6
CIS Version : 2.8.0
Orchestration: K8S
Orchestration Version: 1.21

Description

A customer is deploying 45 Type LB services (using 3 different ipamLabels) and reported times exceeding 15 minutes to add, and even longer to delete.
I replicated the environment and observed that with 20 LB services (with a single ipamLabel) it takes more than 4 minutes to deploy. From the logs I observed that CIS takes 4 minutes to send the declaration to BIG-IP, so the bottleneck is in CIS.
The IP itself is assigned very quickly by FIC.

Steps To Reproduce

  1. Create an ipamLabel (Prod) and deploy the attached configuration:
    20-lb.yaml.txt

Actual Result

The whole process takes more than 4 minutes, although it should not take more than a few seconds.

Diagnostic Information

The IPAM deployment can be found below:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: f5-ipam
  name: f5-ipam
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-ipam
  template:
    metadata:
      labels:
        app: f5-ipam
    spec:
      containers:
      - args:
        - --orchestration=kubernetes
        - --ip-range='{"Dev":"192.168.8.10-192.168.8.40","Prod":"192.168.8.60-192.168.8.142","oam":"192.168.8.143-192.168.8.150","sig":"192.168.8.151-192.168.8.160","prov":"192.168.8.161-192.168.8.172"}'
        - --log-level=DEBUG
        command:
        - /app/bin/f5-ipam-controller
        image: f5networks/f5-ipam-controller:0.1.6
        imagePullPolicy: IfNotPresent
        name: f5-ipam-controller
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /app/ipamdb
          name: samplevol
      securityContext:
        fsGroup: 1200
        runAsGroup: 1200
        runAsUser: 1200
      serviceAccount: bigip-ctlr
      serviceAccountName: bigip-ctlr
      volumes:
      - name: samplevol
        persistentVolumeClaim:
          claimName: pvc-local
          


BIG-IP partitions with uppercase names generate an error when creating the IPAM resource: metadata.name: Invalid value

Setup Details

FIC Version : 0.1.2
CIS Version : 2.4
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP 15.1
AS3 Version: 3.26
Orchestration: OSCP

Description

When you use a BIG-IP partition with an uppercase name (say, "OCP"), FIC throws the following error:

[ipam] error while creating IPAM custom resource. F5IPAM.fic.f5.com "ipam.truncated.mycompany.local.OCP" is invalid: metadata.name: Invalid value: "ipam.truncated.mycompany.local.OCP": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character

Steps To Reproduce

  1. use uppercase partition name
  2. create LB service type resource using IPAM

Expected Result

CIS or FIC should convert any BIG-IP partition name to lower case in order to comply with RFC 1123, rather than throwing an error.

Actual Result

FIC throws an "Invalid value" error.
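The lower-casing the expected result describes is straightforward. A minimal sketch (not FIC's actual code) of normalizing a name to an RFC 1123 subdomain:

```python
import re

def to_rfc1123(name: str) -> str:
    """Lower-case a name, replace characters outside the RFC 1123
    subdomain alphabet (lowercase alphanumerics, '-' and '.') with '-',
    then trim so it starts and ends on an alphanumeric character."""
    name = re.sub(r"[^a-z0-9.-]", "-", name.lower())
    return name.strip("-.")

print(to_rfc1123("ipam.truncated.mycompany.local.OCP"))
# → ipam.truncated.mycompany.local.ocp
```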

IPAM deployment fails to write to PVC and crashes

Setup Details

FIC Version : 0.1.5
CIS Version : 2.7.1
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP x.x.x
AS3 Version: 3.x
Orchestration: K8S/OSCP
Orchestration Version:
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>

Description

The IPAM controller fails to write to its PVC and subsequently crashes on OpenShift 4.8.

The likely reason is the securityContext on the deployment, which sets fsGroup, runAsUser and runAsGroup; this cannot be handled by CSI drivers that do not support fsGroup changes.

For this use case, however, specifying fsGroup should not be needed at all. We suggest removing the securityContext altogether, at least for OpenShift, as nothing seems to require running under a specific user, and especially not under a specific fsGroup.

These are logs:

2022/02/09 11:00:16 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.5, BuildInfo: azure-1035-1bb5b0bc70546b7546ad2b1f42405b9aa867de2e
2022/02/09 11:00:16 [ERROR] [STORE] Unable to create IPAM DB file: open /app/ipamdb/cis_ipam.sqlite3: permission denied
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x5c7395]

goroutine 1 [running]:
github.com/F5Networks/f5-ipam-controller/pkg/provider.(*IPAMProvider).Init(0xc000792a60, 0x7ffc13bb2dd6, 0x24, 0x10)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/provider/provider.go:60 +0xf5
github.com/F5Networks/f5-ipam-controller/pkg/provider.NewProvider(0x7ffc13bb2dd6, 0x24, 0xc000207b38)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/provider/provider.go:44 +0xa5
github.com/F5Networks/f5-ipam-controller/pkg/manager.NewIPAMManager(0x7ffc13bb2dd6, 0x24, 0x28, 0xc000798490, 0x1)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/manager/f5ipammanager.go:39 +0x39
github.com/F5Networks/f5-ipam-controller/pkg/manager.NewManager(0x7ffc13bb2dbc, 0xe, 0x7ffc13bb2dd6, 0x24, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/manager/manager.go:53 +0x452
main.main()
/go/src/github.com/F5Networks/f5-ipam-controller/cmd/f5-ipam-controller/main.go:278 +0x4a5

Fail to get IP from IPAM

FIC Version : 0.1.3
CIS Version : 2.4.1
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP 15.1.2.1
AS3 Version: 3.28.0
Orchestration: K8S
Orchestration Version:
Additional Setup details:
CNI : Cilium + FRRouting

When creating a TransportServer CR (with ipamLabel included in the YAML file), the CIS pod logs "[ipam] error while retrieving IPAM custom resource" and fails to create the virtual server on the BIG-IP.

CIS Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bigip-ctlr
  template:
    metadata:
      name: bigip-ctlr
      labels:
        app: bigip-ctlr
    spec:
      serviceAccountName: bigip-ctlr
      containers:
        - name: bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr:latest"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=x.x.x.x",
            "--bigip-partition=CIS",
            "--custom-resource-mode=true",
            "--ipam=true",
            "--insecure",
            "--log-level=DEBUG"
            ]
      imagePullSecrets:
        - name: bigip-login

CIS RBAC

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole
rules:
  - apiGroups: ["", "extensions", "networking.k8s.io"]
    resources: ["nodes", "services", "endpoints", "namespaces", "ingresses", "pods", "ingressclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["", "extensions", "networking.k8s.io"]
    resources: ["configmaps", "events", "ingresses/status", "services/status"]
    verbs: ["get", "list", "watch", "update", "create", "patch"]
  - apiGroups: ["cis.f5.com"]
    resources: ["virtualservers","virtualservers/status", "tlsprofiles", "transportservers", "ingresslinks", "externaldnss"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["fic.f5.com"]
    resources: ["f5ipams", "f5ipams/status"]
    verbs: ["get", "list", "watch", "update", "create", "patch", "delete"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch", "update", "create", "patch"]
  - apiGroups: ["", "extensions"]
    resources: ["secrets"]
    resourceNames: ["<secret-containing-bigip-login>"]
    verbs: ["get", "list", "watch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: bigip-ctlr-clusterrole-binding
  namespace: <controller_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: bigip-ctlr-clusterrole
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: bigip-ctlr
    namespace: kube-system

IPAM Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: f5-ipam-controller
  name: f5-ipam-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-ipam-controller
  template:
    metadata:
      labels:
        app: f5-ipam-controller
    spec:
      containers:
      - args:
        - --orchestration=kubernetes
        - --ip-range='{"CAD":"x.x.x.x-x.x.x.x"}'
        - --log-level=DEBUG
        command:
        - /app/bin/f5-ipam-controller
        image: f5networks/f5-ipam-controller:latest
        imagePullPolicy: IfNotPresent
        name: f5-ipam-controller
      serviceAccount: ipam-ctlr
      serviceAccountName: ipam-ctlr

IPAM RBAC

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipam-ctlr-clusterrole
rules:
  - apiGroups: ["fic.f5.com"]
    resources: ["f5ipams","f5ipams/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipam-ctlr-clusterrole-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ipam-ctlr-clusterrole
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: ipam-ctlr
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ipam-ctlr
  namespace: kube-system

TransportServer

apiVersion: cis.f5.com/v1
kind: TransportServer
metadata:
  name: vs-l4
  namespace: test
  labels:
    f5cr: "true"
spec:
  ipamLabel: CAD
  virtualServerPort: 80
  mode: performance
  pool:
    monitor:
      interval: 10
      timeout: 31
      type: tcp
    service: nginx
    servicePort: 80
  snat: auto
  type: tcp

CIS Log: (screenshot attached to the issue)

IPAM Log: (screenshot attached to the issue)

F5 CIS in nodeport mode unable to create a VIP for "VirtualServer" crd

Before you raise a new bug, please ensure you have visited the troubleshooting guide

Setup Details

FIC Version : Version: 0.1.8
CIS Version : 2.10.1
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP 17.0.0.1-0.0.4.ALL
AS3 Version: 3.39.0.7
Orchestration: Tanzu
Orchestration Version: 1.5.4
Additional Setup details: Antrea CNI

Description

When we deploy the F5 CIS controller in nodeport mode in a Tanzu K8s cluster, we are able to deploy the K8s native L4 LoadBalancer service. However, when we try to deploy the F5 CRD "VirtualServer" for L7 applications, it is unable to create the VIP object on the BIG-IP. K8s shows the VirtualServer CR is created and IPAM assigns an IP address to the VS, but the object is not created in the BIG-IP partition.

Steps To Reproduce

  1. Deploy the F5 CIS controller in nodeport mode
  2. Deploy the F5 IPAM controller
  3. Deploy an F5 VirtualServer object

Diagnostic Information

The F5 CIS controller pod logs show the virtual server config is missing the servicePort; however, we have confirmed that it exists, yet we still see the error. Please find below the Service, the VirtualServer config, and the error from the pod logs:

root@photon-JB [ ~/f5/L4 ]# cat 2-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: f5-hello-world
  name: f5-hello-world
spec:
  ports:
    - name: f5-hello-world
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: f5-hello-world
  type: ClusterIP

root@photon-JB [ ~/f5/L4 ]# cat 3-vs.yaml
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
 name: cafe-virtual-server
 labels:
   f5cr: "true"
spec:
 host: cafe.tanzu.lab
 ipamLabel: Prod
 pools:
 - path: /coffee
   service: f5-hello-world
   servicePort: 8080

Pod logs:

2022/10/26 09:29:52 [DEBUG] [AS3] posting request to https://172.16.2.244/mgmt/shared/appsvcs/declare/bigip-partition
2022/10/26 09:29:53 [ERROR] [AS3] Raw response from Big-IP: map[code:422 declarationFullId: errors:[/bigip-partition/Shared/f5_hello_world_8080_default_cafe_tanzu_lab/members/0: should have required property 'servicePort'] message:declaration is invalid] {"$schema":"https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/master/schema/3.38.0/as3-schema-3.38.0-3.json","class":"AS3","declaration":{"bigip-partition":{"Shared":{"class":"Application","coffee_lb_8080_default":{"class":"Pool","members":[{"addressDiscovery":"static","serverAddresses":["172.16.48.36"],"servicePort":32764},{"addressDiscovery":"static","serverAddresses":["172.16.48.44"],"servicePort":32764},{"addressDiscovery":"static","serverAddresses":["172.16.48.35"],"servicePort":32764},{"addressDiscovery":"static","serverAddresses":["172.16.48.40"],"servicePort":32764},{"addressDiscovery":"static","serverAddresses":["172.16.48.47"],"servicePort":32764},{"addressDiscovery":"static","serverAddresses":["172.16.48.49"],"servicePort":32764}]},"crd_172_16_48_146_80":{"source":"0.0.0.0/0","translateServerAddress":true,"translateServerPort":true,"class":"Service_HTTP","virtualAddresses":["172.16.48.146"],"virtualPort":80,"snat":"auto","policyEndpoint":"/bigip-partition/Shared/crd_172_16_48_146_80_cafe_tanzu_lab_policy"},"crd_172_16_48_146_80_cafe_tanzu_lab_policy":{"class":"Endpoint_Policy","rules":[{"name":"vs_cafe_tanzu_lab_coffee_f5_hello_world_8080_default_cafe_tanzu_lab","conditions":[{"type":"httpHeader","name":"host","event":"request","all":{"values":["cafe.tanzu.lab"],"operand":"equals"}},{"type":"httpUri","name":"1","event":"request","index":1,"pathSegment":{"values":["coffee"],"operand":"equals"}}],"actions":[{"type":"forward","event":"request","select":{"pool":{"use":"f5_hello_world_8080_default_cafe_tanzu_lab"}}}]}],"strategy":"first-match"},"f5_hello_world_8080_default_cafe_tanzu_lab":{"class":"Pool","members":[{"addressDiscovery":"static","serverAddresses":["172.16.48.36"]},{"addressDiscov
ery":"static","serverAddresses":["172.16.48.44"]},{"addressDiscovery":"static","serverAddresses":["172.16.48.35"]},{"addressDiscovery":"static","serverAddresses":["172.16.48.40"]},{"addressDiscovery":"static","serverAddresses":["172.16.48.47"]},{"addressDiscovery":"static","serverAddresses":["172.16.48.49"]}]},"template":"shared","vs_lb_svc_default_coffee_lb_172_16_48_145_8080":{"class":"Service_TCP","virtualAddresses":["172.16.48.145"],"virtualPort":8080,"snat":"auto","pool":"coffee_lb_8080_default","profileL4":"basic"}},"class":"Tenant","defaultRouteDomain":0},"class":"ADC","controls":{"class":"Controls","userAgent":"CIS/v2.10.1 K8S/v1.22.9+vmware.1"},"id":"urn:uuid:85626792-9ee7-46bb-8fc8-4ba708cfdc1d","label":"CIS Declaration","remark":"Auto-generated by CIS","schemaVersion":"3.38.0"}}
2022/10/26 09:29:53 [ERROR] [AS3] Big-IP Responded with code: 422
2022/10/26 09:29:53 [DEBUG] [AS3] Posting failed tenants configuration in 30s seconds
2022/10/26 09:29:53 [DEBUG] Updating VirtualServer Status with {172.16.48.146 Ok} for resource name:cafe-virtual-server , namespace: default
 

Observations (if any)

When we deploy the F5 CIS controller in nodeportlocal mode and create the VirtualServer object with the same configuration, it works fine. However, nodeportlocal mode does not support the K8s native L4 LoadBalancer.

L4 is supported with nodeport only, so we want to deploy CIS in nodeport mode and create both L4 and L7 services.
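The 422 above pinpoints pool members missing the required servicePort property. A pre-flight walk over the AS3 declaration can surface these paths before posting; this is a sketch assuming the standard Tenant/Application/Pool nesting, not CIS's actual validation code:

```python
def missing_service_ports(as3: dict) -> list:
    """Return AS3-pointer-style paths of pool members that lack
    the required 'servicePort' property."""
    problems = []
    for tenant, tbody in as3.get("declaration", {}).items():
        if not isinstance(tbody, dict) or tbody.get("class") != "Tenant":
            continue
        for app, abody in tbody.items():
            if not isinstance(abody, dict) or abody.get("class") != "Application":
                continue
            for name, obj in abody.items():
                if isinstance(obj, dict) and obj.get("class") == "Pool":
                    for i, member in enumerate(obj.get("members", [])):
                        if "servicePort" not in member:
                            problems.append(f"/{tenant}/{app}/{name}/members/{i}")
    return problems

# Hypothetical minimal declaration with one offending pool member:
decl = {"class": "AS3", "declaration": {"bigip-partition": {"class": "Tenant", "Shared": {
    "class": "Application",
    "web_pool": {"class": "Pool", "members": [{"addressDiscovery": "static",
                                               "serverAddresses": ["10.0.0.1"]}]}}}}}
print(missing_service_ports(decl))  # ['/bigip-partition/Shared/web_pool/members/0']
```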

FIC f5-bigip-ctlr.k8s.ipam not found

Setup Details

FIC Version : 0.1.5
CIS Version : 2.6.0
FIC Build: f5networks/f5-ipam-controller:0.1.5
CIS Build: f5networks/k8s-bigip-ctlr:2.6.0
BIGIP Version: Big IP 15.0
AS3 Version: 3.28
Orchestration: K8S
Orchestration Version:
Additional Setup details: Cilium

Description

Deploying a type LoadBalancer service with the ipamLabel results in the following FIC error:
[ERROR] Unable to Update IPAM: kube-system/f5-bigip-ctlr.k8s.ipam Error: ipams.fic.f5.com "f5-bigip-ctlr.k8s.ipam" not found

And, due to the above, CIS logs:
IP address not available, yet, for service service: nginx/nginx-lb

Steps To Reproduce

CIS is installed using Helm, and all objects that do not require FIC work when the IP is specified manually.
Roles are updated according to the docs.

kubectl describe clusterrole f5-bigip-ctlr

  ipams.fic.f5.com/status                         []                 []              [get list watch update create patch delete]
  ipams.fic.f5.com                                []                 []              [get list watch update create patch delete]

kubectl describe clusterrole ipam-ctlr-clusterrole

  ipams.fic.f5.com/status  []                 []              [get list watch update patch delete create]
  ipams.fic.f5.com         []                 []              [get list watch update patch delete create]

kubectl describe ipam -n kube-system

Name:         f5-bigip-ctlr.k8s.ipam
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
API Version:  fic.f5.com/v1
Kind:         IPAM
Metadata:
  Creation Timestamp:  2021-10-12T15:36:24Z
  Generation:          45
  Managed Fields:
    API Version:  fic.f5.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:hostSpecs:
      f:status:
    Manager:         k8s-bigip-ctlr.real
    Operation:       Update
    Time:            2021-10-13T08:21:41Z
  Resource Version:  3617083
  UID:               7a38d2ae-77bd-4de7-85d3-f219fa4674ed
Spec:
  Host Specs:
    Ipam Label:  Production
    Key:         nginx/nginx-lb_svc
Status:
Events:  <none>

kubectl describe svc -n nginx

Name:              nginx
Namespace:         nginx
Labels:            name=nginx
                   role=public
Annotations:       <none>
Selector:          name=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.103.63.53
IPs:               10.103.63.53
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.245.2.250:80,10.245.2.254:80,10.245.2.66:80
Session Affinity:  None
Events:            <none>


Name:                     nginx-lb
Namespace:                nginx
Labels:                   app=nginx
Annotations:              cis.f5.com/health: {"interval": 10, "timeout": 31}
                          cis.f5.com/ipamLabel: Production
Selector:                 name=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.127.243
IPs:                      10.105.127.243
Port:                     svc-lb1-80  80/TCP
TargetPort:               80/TCP
NodePort:                 svc-lb1-80  32479/TCP
Endpoints:                10.245.2.250:80,10.245.2.254:80,10.245.2.66:80
Port:                     svc-lb1-8080  8080/TCP
TargetPort:               8080/TCP
NodePort:                 svc-lb1-8080  30797/TCP
Endpoints:                10.245.2.250:8080,10.245.2.254:8080,10.245.2.66:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

FIC logs:

2021/10/13 14:51:18 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.5, BuildInfo: azure-1035-1bb5b0bc70546b7546ad2b1f42405b9aa867de2e
2021/10/13 14:51:18 [DEBUG] Creating IPAM Kubernetes Client
2021/10/13 14:51:18 [DEBUG] [ipam] Creating Informers for Namespace kube-system
2021/10/13 14:51:18 [DEBUG] Created New IPAM Client
2021/10/13 14:51:18 [DEBUG] [MGR] Creating Manager with Provider: f5-ip-provider
2021/10/13 14:51:18 [DEBUG] [STORE] Using IPAM DB file from mount path
2021/10/13 14:51:18 [DEBUG] [STORE] [ipaddress status ipam_label reference]
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.209 1 Stage Uv38ByGCZU8WP18P
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.210 1 Stage lWbHTRADfE17uwQH
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.211 1 Stage gYVa2GgdDYbR6R4A
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.212 1 Stage ZpTSxCKs0gigByk5
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.213 1 Stage 650YpEeEBF2H88Z8
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.214 1 Stage la9aJTZ5Ubqi/2zU
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.215 1 Stage X7kLrbN8WCG22VUm
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.201 0 Production nginx/nginx-lb_svc
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.202 1 Production YyUlP+xzjdep4ov5
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.203 1 Production DwcCRIYVu9oIMT9q
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.204 1 Production C/UFmHWSHmaKW98s
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.205 1 Production ktJXK80GaNLWxS9Q
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.206 1 Production a/hMcXTLdHY2TMPb
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.207 1 Production Fy7YV5S7NYsMO1Jd
2021/10/13 14:51:18 [DEBUG] [STORE] 10.100.100.208 1 Production /wlCedsZROvXoZ0P
2021/10/13 14:51:18 [DEBUG] [PROV] Provider Initialised
2021/10/13 14:51:18 [INFO] [CORE] Controller started
2021/10/13 14:51:18 [INFO] Starting IPAMClient Informer
I1013 14:51:18.234151       1 shared_informer.go:240] Waiting for caches to sync for F5 IPAMClient Controller
2021/10/13 14:51:18 [DEBUG] Enqueueing on Create: kube-system/f5-bigip-ctlr.k8s.ipam
I1013 14:51:18.334297       1 shared_informer.go:247] Caches are synced for F5 IPAMClient Controller 
2021/10/13 14:51:18 [DEBUG] K8S Orchestrator Started
2021/10/13 14:51:18 [DEBUG] Starting Response Worker
2021/10/13 14:51:18 [DEBUG] Starting Custom Resource Worker
2021/10/13 14:51:18 [DEBUG] Processing Key: &{0xc0004d6420 <nil> Create}
2021/10/13 14:51:18 [ERROR] Unable to Update IPAM: kube-system/f5-bigip-ctlr.k8s.ipam	 Error: ipams.fic.f5.com "f5-bigip-ctlr.k8s.ipam" not found
2021/10/13 14:51:18 [DEBUG] Updated: kube-system/f5-bigip-ctlr.k8s.ipam with Status. With IP: 10.100.100.201 for Request: 
Hostname: 	Key: nginx/nginx-lb_svc	IPAMLabel: Production	IPAddr: 	Operation: Create

f5-ipam-controller update: the current release has multiple high vulnerabilities

Requesting an f5-ipam-controller update: the current release of the F5 IPAM controller has multiple high-severity vulnerabilities when scanned with a vulnerability scanner such as Trivy.

Our most recent scan of the F5 IPAM controller through Harbor using Trivy shows the following high vulnerabilities:
CVE-2023-38545, CVE-2023-2491, CVE-2023-28617, CVE-2023-4911, CVE-2023-4911, CVE-2023-4911, CVE-2023-38545, CVE-2023-30079, CVE-2023-44487, CVE-2023-24329, CVE-2023-40217, CVE-2023-24329, CVE-2023-40217, CVE-2023-24329, CVE-2023-40217, CVE-2022-41723, CVE-2023-39325

LoadBalancer Service Stuck in Pending State

Setup Details

FIC Version : 0.1.9
CIS Version : 2.15.0
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP x.x.x
AS3 Version: 3.x
Orchestration: K8S
Orchestration Version:
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>

Description

The IPAM controller pod is crashing with the message below:
unable to establish connection to DB, database is locked

Also, none of the LoadBalancers are getting provisioned. They are stuck in the Pending state, and the f5-cis-ctlr logs show: "skipping service '{ servicename namespace}' as its not used by any CIS monitored resource".
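"database is locked" is the standard SQLite error when a second writer collides with an open write transaction (or with a stale lock left on shared storage). A minimal reproduction of the symptom with plain sqlite3, illustrating generic SQLite locking behavior rather than FIC's actual code:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cis_ipam.sqlite3")

# Autocommit mode so we can control transactions explicitly.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE ipam (ip TEXT, status INTEGER)")
writer.execute("BEGIN IMMEDIATE")  # hold the write lock, as a second pod might

# timeout=0 fails fast; a positive timeout would wait for the lock instead.
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("INSERT INTO ipam VALUES ('10.0.0.1', 0)")
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

writer.rollback()  # release the lock; the insert would now succeed
print(locked)  # True
```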

Expected Result

Successfully provisioned LB with an external IP from the range allocated to IPAM.

Actual Result

Service stuck in Pending state.


Cannot find the requested resource (get f5ipams.fic.f5.com)

Setup Details

FIC Version : 0.1.3
CIS Version : 2.4.1
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP 15.1
AS3 Version: 3.28
Orchestration: K8S
Orchestration Version: 1.20.8
Additional Setup details:
1 standalone BIGIP
1 K8s cluster
Flannel

Description

I deployed the F5 IPAM Controller as per the instructions at https://clouddocs.f5.com/containers/latest/userguide/ipam/ but I am getting the following error in the controller logs.
Whatever configuration I apply using an ipamLabel is never pushed to the F5 BIG-IP.

***************************************************************************************
2021/07/05 12:08:17 [DEBUG] [STORE]  37  10.1.10.46 1 Production
2021/07/05 12:08:17 [DEBUG] [STORE]  38  10.1.10.47 1 Production
2021/07/05 12:08:17 [DEBUG] [STORE]  39  10.1.10.48 1 Production
2021/07/05 12:08:17 [DEBUG] [STORE]  40  10.1.10.49 1 Production
2021/07/05 12:08:17 [INFO] [CORE] Controller started
2021/07/05 12:08:17 [INFO] Starting IPAMClient Informer
I0705 12:08:17.519011       1 shared_informer.go:240] Waiting for caches to sync for F5 IPAMClient Controller
E0705 12:08:17.536543       1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: the server could not find the requested resource (get f5ipams.fic.f5.com)
E0705 12:08:18.826250       1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: the server could not find the requested resource (get f5ipams.fic.f5.com)
E0705 12:08:21.941636       1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: the server could not find the requested resource (get f5ipams.fic.f5.com)
E0705 12:08:27.279356       1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: the server could not find the requested resource (get f5ipams.fic.f5.com)

****************************************************************************************************************************

I am getting a similar log from the CIS controller when I enable F5 IPAM ("--ipam=true"):

*******************************************************************************************
2021/07/05 12:05:35 [DEBUG] [CORE] NodePoller (0xc0007faa20) notifying listener: {l:0xc0005acc60 s:0xc0005accc0}
2021/07/05 12:05:35 [DEBUG] [CORE] NodePoller (0xc0007faa20) listener callback - num items: 4 err: <nil>
2021/07/05 12:06:02 [DEBUG] [2021-07-05 12:06:02,237 __main__ DEBUG] config handler woken for reset
2021/07/05 12:06:02 [DEBUG] [2021-07-05 12:06:02,238 __main__ DEBUG] loaded configuration file successfully
2021/07/05 12:06:02 [DEBUG] [2021-07-05 12:06:02,238 __main__ DEBUG] NET Config: {}
2021/07/05 12:06:02 [DEBUG] [2021-07-05 12:06:02,238 __main__ DEBUG] loaded configuration file successfully
2021/07/05 12:06:02 [DEBUG] [2021-07-05 12:06:02,238 __main__ DEBUG] updating tasks finished, took 0.0006988048553
466797 seconds
E0705 12:06:02.546954       1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informe
rs.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: the server could not find the requested resource 
(get f5ipams.fic.f5.com)
2021/07/05 12:06:05 [DEBUG] [CORE] NodePoller (0xc0007faa20) ready to poll, last wait: 30s

2021/07/05 12:06:05 [DEBUG] [CORE] NodePoller (0xc0007faa20) notifying listener: {l:0xc0005acc60 s:0xc0005accc0}
2021/07/05 12:06:05 [DEBUG] [CORE] NodePoller (0xc0007faa20) listener callback - num items: 4 err: <nil>

*****************************************************************************************************************************

Steps To Reproduce

  1. https://clouddocs.f5.com/containers/latest/userguide/ipam/
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipam-ctlr-clusterrole-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ipam-ctlr-clusterrole
subjects:
  - apiGroup: ""
    kind: ServiceAccount
    name: ipam-ctlr
    namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipam-ctlr-clusterrole
rules:
  - apiGroups: ["fic.f5.com"]
    resources: ["f5ipams","f5ipams/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ipam-ctlr
  namespace: kube-system

---

CRDs
https://raw.githubusercontent.com/F5Networks/f5-ipam-controller/main/docs/_static/schemas/ipam_schema.yaml


Add Netbox IPAM provider support

Title

Add Netbox IPAM provider support

Description

As NetBox is widely popular for IPAM and is well documented, I propose adding such a provider.

Actual Problem

Not able to automatically assign a valid IP from Netbox

Solution Proposed

Add Netbox provider

Alternatives

None

Cannot deploy using an NFS volume due to the forced securityContext in the Helm chart

Setup Details

FIC Version : HelmChart 0.0.3, 0.1.8
CIS Version : 2.10.1
FIC Build: f5networks/f5-ipam-controller:0.1.8
CIS Build: f5networks/k8s-bigip-ctlr:2.10.1
AS3 Version: 3.4
Orchestration: K8S
Orchestration Version: 1.24.4
Additional Setup details: nfs.csi.k8s.io (https://github.com/kubernetes-csi/csi-driver-nfs)

Description

When using an NFS storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: f5-ipam-shared-sc
provisioner: nfs.csi.k8s.io
parameters:
  server: 172.17.20.200
  share: /f5-ipam
  csi.storage.k8s.io/provisioner-secret-name: "f5-ipam-shared-csi-secret"
  csi.storage.k8s.io/provisioner-secret-namespace: "kube-system"
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: f5-ipam-controller
  namespace: kube-system
spec:
  repo: https://f5networks.github.io/f5-ipam-controller/helm-charts/stable
  chart: f5-ipam-controller
  version: 0.0.3
  targetNamespace: kube-system
  valuesContent: |-
    nodeSelector:
      kubernetes.io/os: linux
    pvc:
      create: true
      name: f5-ipam-controller-pvc
      storageClassName: f5-ipam-shared-sc
      accessMode: ReadWriteOnce
      storage: 512Mi
    args:
      orchestration: "kubernetes"
      provider: "f5-ip-provider"
      ip_range: '{"iprange":"172.17.20.100-172.17.20.199"}'
      log_level: DEBUG
    image:
      version: 0.1.8
      pullPolicy: IfNotPresent
    securityContext:
      runAsUser: 1200
      runAsGroup: 1200
      fsGroup: 1200

The Helm install works; however, the IPAM controller pod will not run due to a permission issue:

2022/12/02 12:59:31 [DEBUG] Creating IPAM Kubernetes Client
2022/12/02 12:59:31 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.8, BuildInfo: azure-2661-f66ad6d2a4a94e0f0a8619191303af556f45dd0d
2022/12/02 12:59:31 [DEBUG] [ipam] Creating Informers for Namespace kube-system
2022/12/02 12:59:31 [DEBUG] Created New IPAM Client
2022/12/02 12:59:31 [DEBUG] [MGR] Creating Manager with Provider: f5-ip-provider
2022/12/02 12:59:31 [ERROR] [STORE] Unable to read IPAM DB file due to permission issue: stat /app/ipamdb/cis_ipam.sqlite3: permission denied
2022/12/02 12:59:31 [ERROR] [PROV] Store not initialized
2022/12/02 12:59:31 [ERROR] [PROV] Failed to Initialize Provider
2022/12/02 12:59:31 [ERROR] Unable to initialize manager: [IPMG] Unable to create Provider

Even if I create a Job that sets the correct permissions, it won't do the trick:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: f5-ipam-set-volume-permissions
  namespace: kube-system
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      volumes:
        - name: f5-ipam-controller-pvc
          persistentVolumeClaim:
            claimName: f5-ipam-controller-pvc
      containers:
        - name: f5-ipam-init-chown-data
          image: busybox
          securityContext:
            runAsNonRoot: false
            runAsUser: 0
          command: ["chown",  "-R", "1200:1200", "/app/ipamdb/"]
          volumeMounts:
            - name: f5-ipam-controller-pvc
              mountPath: /app/ipamdb/

Steps To Reproduce

  1. Install a K8s cluster
  2. Install the NFS CSI plugin
  3. Create a NFS export on the NFS server
  4. Create a storage class with the NFS server IP and path
  5. Use helm to install the F5 IPAM controller

Expected Result

  • Helm should have an option to exclude the securityContext whenever NFS is used

Actual Result

2022/12/02 12:40:44 [ERROR] [STORE] Unable to read IPAM DB file due to permission issue: stat /app/ipamdb/cis_ipam.sqlite3: permission denied

BloxOne API key support

Is the current IPAM solution able to work with the BloxOne Username/API Key authentication? If not, is there any plan to add this support?

IPAM breaks if ipamLabel is changed for a typeLB service

Setup Details

FIC Version : 0.1.6
CIS Version : 2.7.1

Description

When I deploy an LB service with an ipamLabel that is wrong, correcting the label later is not reflected on the service, and its external IP stays pending.
To make it work I need to change the name of the service.

Steps To Reproduce

  1. Deploy a service with an ipamLabel that doesn't exists
  2. Redeploy the service with the correct label.
  3. Check that the service is in pending mode.

Diagnostic Information

Even after I have replaced the ipamLabel 'doesntexist', the IPAM logs keep showing this:

2022/04/01 09:27:14 [DEBUG] Enqueueing on Update: kube-system/f5-cis-crd.cis-crd.ipam
2022/04/01 09:27:14 [DEBUG] Processing Key: &{0xc00045c840 0xc00045c580 Update}
2022/04/01 09:27:14 [DEBUG] [PROV] IPAM LABEL: doesntexist Not Found
2022/04/01 09:27:14 [DEBUG] Updated: kube-system/f5-cis-crd.cis-crd.ipam with Status. Removed 
Hostname:       Key: default/test_svc   IPAMLabel: doesntexist  IPAddr:         Operation: Delete

2022/04/01 09:27:14 [DEBUG] Enqueueing on Update: kube-system/f5-cis-crd.cis-crd.ipam
2022/04/01 09:27:14 [DEBUG] Processing Key: &{0xc00045d340 0xc00045c840 Update}
2022/04/01 09:27:14 [DEBUG] [PROV] IPAM LABEL: doesntexist Not Found
2022/04/01 09:27:14 [DEBUG] [PROV] Unsupported IPAM LABEL: doesntexist
2022/04/01 09:27:14 [DEBUG] Enqueueing on Update: kube-system/f5-cis-crd.cis-crd.ipam
2022/04/01 09:27:14 [DEBUG] Processing Key: &{0xc0004c2000 0xc00045d340 Update}
2022/04/01 09:27:14 [DEBUG] [PROV] IPAM LABEL: doesntexist Not Found
2022/04/01 09:27:14 [DEBUG] Updated: kube-system/f5-cis-crd.cis-crd.ipam with Status. Removed 
Hostname:       Key: default/test_svc   IPAMLabel: doesntexist  IPAddr:         Operation: Delete

2022/04/01 09:27:15 [DEBUG] Enqueueing on Update: kube-system/f5-cis-crd.cis-crd.ipam
2022/04/01 09:27:15 [DEBUG] Processing Key: &{0xc0000c7b80 0xc0004c2000 Update}
2022/04/01 09:27:15 [DEBUG] [PROV] IPAM LABEL: doesntexist Not Found
2022/04/01 09:27:15 [DEBUG] [PROV] Unsupported IPAM LABEL: doesntexist
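The loop above suggests the stale label is replayed from an internal queue rather than re-read from the updated spec. A minimal sketch of the lookup the provider performs (hypothetical helper; the label map mirrors a typical --ip-range value) shows how an unknown label resolves to nothing, which is what should be surfaced on the resource status:

```python
import json

# Labels as configured via --ip-range (example values, not the real cluster's)
ip_ranges = json.loads('{"Dev":"172.16.3.21-172.16.3.30","Test":"172.16.3.31-172.16.3.40"}')

def resolve_label(label):
    # Returns the configured range, or None for an unsupported ipamLabel;
    # a None here is what produces the "IPAM LABEL ... Not Found" log line.
    return ip_ranges.get(label)

print(resolve_label("doesntexist"))  # None
print(resolve_label("Test"))         # 172.16.3.31-172.16.3.40
```

The fix being asked for is that a later spec update with a valid label should re-enter this lookup instead of retrying the old value forever.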

Observations (if any)

Helm Installation in OpenShift 4.12 fails due to securityContext

Setup Details

Helm chart: f5-ipam-controller-0.0.4.tgz
FIC Version : 0.1.5

Description

I had to remove the securityContext section in the Deployment manifest otherwise I had the following errors:

  - lastTransitionTime: "2023-11-14T10:56:03Z"
    lastUpdateTime: "2023-11-14T10:56:03Z"
    message: 'pods "f5-ipam-controller-5f87c554f9-" is forbidden: unable to validate
      against any security context constraint: [provider "anyuid": Forbidden: not
      usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.fsGroup:
      Invalid value: []int64{1000}: 1000 is not an allowed group, provider restricted-v2:
      .containers[0].runAsUser: Invalid value: 1000: must be in the ranges: [1000760000,
      1000769999], provider "restricted": Forbidden: not usable by user or serviceaccount,
      provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider
      "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid":
      Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler":
      Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2":
      Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden:
      not usable by user or serviceaccount, provider "hostaccess": Forbidden: not
      usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable
      by user or serviceaccount, provider "privileged": Forbidden: not usable by user
      or serviceaccount]'

Steps To Reproduce

Install following the official instructions on OpenShift.

I used the attached values.yaml file and tried different options for the securityContext setting, hoping to eliminate the securityContext section, but in the end the solution was to modify the Helm chart.

values.yaml.txt

Add PVC creation to Helm chart

Title

Add PVC creation to Helm chart

Description

To simplify deployment of the IPAM controller, I suggest adding PVC creation to the chart as well.

Actual Problem

When deploying FIC using Helm, you must first create the PVC separately and point to it in the FIC Helm values file.

Solution Proposed

I suggest adding something like what Grafana offers in its Helm values file, so that FIC can be deployed in one go using Helm only:

persistence:
  type: pvc
  enabled: false
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  size: 0.1Gi

If there is a use case for not creating the PVC via Helm, then provide an option to enable/disable it.

Helm YAML parse error

Description

The Helm chart produces a YAML parse error when specifying your own IP range.

Steps To Reproduce

  1. Create a values.yml with the content:
args:
  ip_range: '{"dev": "1.1.1.1-1.1.1.2"}'
  2. helm install -f values.yml ...
  3. Notice the error

Expected Result

The IP range should be rendered properly

Actual Result

Error: YAML parse error on f5-ipam-controller/templates/f5-ipam-controller.deploy.yaml: error converting YAML to JSON: yaml: line 44: did not find expected key
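The error may come from how the embedded JSON is quoted in values.yml rather than from the range itself. A stdlib-only sketch (hypothetical check, not part of the chart) verifying that the ip_range string survives quoting as valid JSON:

```python
import json

# The --ip-range value is a JSON object wrapped in a YAML string; single
# quotes in values.yml keep the inner double quotes intact.
ip_range = '{"dev": "1.1.1.1-1.1.1.2"}'
labels = json.loads(ip_range)  # raises ValueError if quoting mangled the string
print(labels["dev"])  # 1.1.1.1-1.1.1.2
```

If this parses, the remaining suspect is how the chart template indents the rendered argument at line 44 of the deployment manifest.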

Unable to update IPAM (ipams.fic.f5.com not found)

Setup Details

FIC Version : 0.1.6
CIS Version : 2.7.0
FIC Build: f5networks/f5-ipam-controller:0.1.6
CIS Build: f5networks/k8s-bigip-ctlr:2.7.0
BIGIP Version: Big IP 15.1
AS3 Version: 3.33
Orchestration: K8S
Orchestration Version: 1.20
Additional Setup details: Calico

Description

The IPAM doesn't provide an IP address.
After updating from 2.6.1 I came across this issue: IPs are no longer provided by the IPAM controller.

Diagnostic Information

The logs from the IPAM controller show that IPAM controller is unable to update the "kube-system/f5cis.cis.ipam" as it is not found.

2022/01/18 05:12:11 [DEBUG] K8S Orchestrator Started
2022/01/18 05:12:11 [DEBUG] Starting Response Worker
2022/01/18 05:12:11 [DEBUG] Starting Custom Resource Worker
2022/01/18 05:12:11 [DEBUG] Processing Key: &{0xc00015c420 <nil> Create}
2022/01/18 05:12:11 [ERROR] Unable to Update IPAM: kube-system/f5cis.cis.ipam    Error: ipams.fic.f5.com "f5cis.cis.ipam" not found
2022/01/18 05:12:11 [DEBUG] Updated: kube-system/f5cis.cis.ipam with Status. With IP: 172.16.3.31 for Request: 
Hostname: test1.demo.com        Key:    IPAMLabel: test IPAddr:         Operation: Create


The object actually exists and has been created by CIS.

kostas@master:~$ kubectl get ipams -A
NAMESPACE     NAME             AGE
kube-system   f5cis.cis.ipam   10m

Describing it we can see that the Host Specs are defined by CIS

kostas@master:~$ kubectl describe ipams f5cis.cis.ipam -n kube-system
Name:         f5cis.cis.ipam
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
API Version:  fic.f5.com/v1
Kind:         IPAM
Metadata:
  Creation Timestamp:  2022-01-18T05:10:23Z
  Generation:          2
  Managed Fields:
    API Version:  fic.f5.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:hostSpecs:
      f:status:
    Manager:         k8s-bigip-ctlr.real
    Operation:       Update
    Time:            2022-01-18T05:10:23Z
  Resource Version:  1001935
  UID:               4c93b89d-37b6-4bdb-b65e-3f6102c6ddfc
Spec:
  Host Specs:
    Host:        test1.demo.com
    Ipam Label:  test
Status:
Events:  <none>

Steps To Reproduce

I have included all manifests for the CIS, IPAM, and VirtualServer deployments.
Deploy CIS first and then IPAM.
Once both have been deployed, configure a VirtualServer with ipamLabel: test (you can find it in the attached file vs.yaml)


Helm chart does not allow setting tolerations

Description

Although you can set node affinity, you can't actually schedule this on control-plane nodes because there is no way to add tolerations.

This should be possible so that the workload can be scheduled on system nodes, which tend to be more stable since they do not scale up and down as often.

IPAM controller operator crashes on Openshift

Setup Details

FIC Version : 0.1.5
CIS Version : 2.7.1
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP x.x.x
AS3 Version: 3.x
Orchestration: K8S/OSCP
Orchestration Version:
Additional Setup details: Operator version from OLM is 0.0.1

Description

The IPAM controller operator manager gets OOMKilled when reconciling F5IpamCtlr resources. The memory request/limit should be raised for the operator pod.

Steps To Reproduce

  1. Install F5 IPAM controller operator
  2. Create an F5IpamCtlr resource
  3. The controller gets OOMKilled by OpenShift during reconciliation.

Expected Result

Operator is not OOMKilled.

Incorrect installation instructions for Helm


It appears the installation instructions for helm are wrong.
The existing line
helm install -f values.yaml <new-chart-name> f5-stable/f5-ipam-controller
Should be
helm install -f values.yaml <new-chart-name> f5-ipam-stable/f5-ipam-controller

Based on the helm repo add command above
helm repo add f5-ipam-stable https://f5networks.github.io/f5-ipam-controller/helm-charts/stable

Helm chart should create the PVC

Title

The Helm chart requires you to pre-create the PVC. It should have an option to create it as part of the deployment.

Additional context

When using a tool like Terraform with Rancher, you do not have the option of creating a PV, but you can install Helm charts. This makes it difficult to deploy this chart unless it also creates the PVC for you.

IPAM Helm Deployment strategy should be recreate

Description

In the helm chart for the IPAM controller, it uses the default rolling update strategy. This is a problem for two reasons:

  1. The PVC should only be accessed by one pod at a time, because multiple pods editing the same DB will cause issues
  2. Not all PV types support multiple binds at the same time.

Steps To Reproduce

  1. Deploy IPAM with the helm chart using a storage class that only supports one bind at a time
  2. Upgrade the existing deployment
  3. Notice that the deployment never completes because the new pod cannot start while the existing one is holding the PVC

Expected Result

Upgrades should happen transparently.

Actual Result

You need to scale the deployment to 0, then back up to 1.

FIC throws Errors after adding a new label


Setup Details

FIC Version : 0.1.7
CIS Version: 2.8.0
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP x.x.x
AS3 Version: 3.x
Orchestration: K8S/OSCP
Orchestration Version:
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>

Description

The FIC contains error log

2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference
2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference
2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference
2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference
2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference
2022/04/28 09:37:06 [ERROR] [STORE] Unable to Insert row in Table 'ipaddress_range': UNIQUE constraint failed: ipaddress_range.reference

but it works as expected. Should this kind of ERROR be logged at Warning level instead?

The FIC deployment is:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: f5-ipam-controller
  name: f5-ipam-controller
  namespace: bigip-ctlr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-ipam-controller
  template:
    metadata:
      labels:
        app: f5-ipam-controller
    spec:
      containers:
      - args:
        - --orchestration
        - kubernetes
        - --ip-range
        - '{"Dev":"172.16.3.21-172.16.3.30","Test":"172.16.3.31-172.16.3.40","Production":"172.16.3.41-172.16.3.50","Default":"172.16.3.51-172.16.3.60" } '
        - --log-level
        - INFO
        command:
        - /app/bin/f5-ipam-controller
        image: f5networks/f5-ipam-controller:0.1.7
        imagePullPolicy: IfNotPresent
        name: f5-ipam-controller
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /app/ipamdb
          name: samplevol
      securityContext:
        fsGroup: 1200
        runAsGroup: 1200
        runAsUser: 1200
      serviceAccount: ipam-ctlr
      volumes:
      - name: samplevol
        persistentVolumeClaim:
          claimName: pvc-local
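The errors appear when the controller re-inserts rows that already exist in the persisted database after a restart. A sketch with a hypothetical schema (table and column names taken from the log) showing how INSERT OR IGNORE would make that re-population idempotent, i.e. a silent no-op rather than an ERROR:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ipaddress_range (
    ipaddress TEXT, status INTEGER, ipam_label TEXT, reference TEXT UNIQUE)""")

row = ("172.16.3.21", 1, "Dev", "Dev/172.16.3.21")
conn.execute("INSERT INTO ipaddress_range VALUES (?, ?, ?, ?)", row)

# A plain re-INSERT on restart raises:
#   sqlite3.IntegrityError: UNIQUE constraint failed: ipaddress_range.reference
# INSERT OR IGNORE skips rows that already exist instead:
conn.execute("INSERT OR IGNORE INTO ipaddress_range VALUES (?, ?, ?, ?)", row)
count = conn.execute("SELECT COUNT(*) FROM ipaddress_range").fetchone()[0]
print(count)  # 1
```

Either way, since the constraint violation is expected and harmless on restart, logging it at Warning (or Debug) level seems appropriate.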

Steps To Reproduce

  1. Add a new label with a range of IP addresses to the FIC deployment
  2. The controller restarts and logs the above

Observations (if any)

[IPv6] unable to create VS using IPv6 range

Setup Details

FIC Version : 0.1.2
CIS Version : 2.4.0
FIC Build: f5networks/f5-ipam-controller:latest
CIS Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP 15.1.2.1
AS3 Version: 3.26
Orchestration: K8S
Orchestration Version: 1.20
Additional Setup details: Calico, NodePort

Description

Unable to create a VirtualServer when using an IPv6 range.

Steps To Reproduce

  1. Create range using IPv6 addresses
  2. Deploy VS referencing the range
  3. Confirm error

Expected Result

IPv6 address allocated from ipv6 range

Actual Result

Error

Diagnostic Information

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: f5-ipam-controller
  name: f5-ipam-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-ipam-controller
  template:
    metadata:
      labels:
        app: f5-ipam-controller
    spec:
      containers:
      - args:
        - --orchestration=kubernetes
        - --ip-range='{"Test":"10.192.75.113-10.192.75.116","Production":"10.192.125.30-10.192.125.50","ipv6":"240b:c0e0:105:2870:6518:0002:1:0001-240b:c0e0:105:2870:6518:0002:1:0020"}'
        - --log-level=DEBUG
        command:
        - /app/bin/f5-ipam-controller
        image:  f5networks/f5-ipam-controller:0.1.2
        imagePullPolicy: IfNotPresent
        name: f5-ipam-controller
      serviceAccount: ipam-ctlr
      serviceAccountName: ipam-ctlr
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: f5-demo-mysite
  labels:
    f5cr: "true"
spec:
  host: mysite.f5demo.com
  ipamLabel: ipv6
  pools:
  - monitor:
      interval: 20
      recv: ""
      send: /
      timeout: 31
      type: http
    path: /
    service: f5-demo
    servicePort: 80

FIC error:

# kubectl logs f5-ipam-controller-6c9bcdcd8d-rxpqx -n kube-system
2021/05/16 13:55:26 [DEBUG] Creating IPAM Kubernetes Client
2021/05/16 13:55:26 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.2, BuildInfo: azure-215-77b1d12be7e9c4c344e1747451d18be704a3b30b
2021/05/16 13:55:26 [DEBUG] [ipam] Creating Informers for Namespace kube-system
2021/05/16 13:55:26 [DEBUG] Created New IPAM Client
2021/05/16 13:55:26 [DEBUG] [MGR] Creating Manager with Provider: f5-ip-provider
2021/05/16 13:55:26 [DEBUG] [STORE] [id ipaddress status ipam_label]
2021/05/16 13:55:26 [DEBUG] [STORE]  1	 10.192.75.113 1 Test
2021/05/16 13:55:26 [DEBUG] [STORE]  2	 10.192.75.114 1 Test
2021/05/16 13:55:26 [DEBUG] [STORE]  3	 10.192.75.115 1 Test
2021/05/16 13:55:26 [DEBUG] [STORE]  4	 10.192.125.30 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  5	 10.192.125.31 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  6	 10.192.125.32 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  7	 10.192.125.33 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  8	 10.192.125.34 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  9	 10.192.125.35 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  10	 10.192.125.36 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  11	 10.192.125.37 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  12	 10.192.125.38 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  13	 10.192.125.39 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  14	 10.192.125.40 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  15	 10.192.125.41 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  16	 10.192.125.42 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  17	 10.192.125.43 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  18	 10.192.125.44 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  19	 10.192.125.45 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  20	 10.192.125.46 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  21	 10.192.125.47 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  22	 10.192.125.48 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  23	 10.192.125.49 1 Production
2021/05/16 13:55:26 [DEBUG] [STORE]  24	 240b:c0e0:105:2870:6518:2:1:1 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  25	 240b:c0e0:105:2870:6518:2:1:2 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  26	 240b:c0e0:105:2870:6518:2:1:3 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  27	 240b:c0e0:105:2870:6518:2:1:4 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  28	 240b:c0e0:105:2870:6518:2:1:5 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  29	 240b:c0e0:105:2870:6518:2:1:6 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  30	 240b:c0e0:105:2870:6518:2:1:7 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  31	 240b:c0e0:105:2870:6518:2:1:8 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  32	 240b:c0e0:105:2870:6518:2:1:9 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  33	 240b:c0e0:105:2870:6518:2:1:a 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  34	 240b:c0e0:105:2870:6518:2:1:b 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  35	 240b:c0e0:105:2870:6518:2:1:c 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  36	 240b:c0e0:105:2870:6518:2:1:d 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  37	 240b:c0e0:105:2870:6518:2:1:e 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  38	 240b:c0e0:105:2870:6518:2:1:f 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  39	 240b:c0e0:105:2870:6518:2:1:10 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  40	 240b:c0e0:105:2870:6518:2:1:11 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  41	 240b:c0e0:105:2870:6518:2:1:12 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  42	 240b:c0e0:105:2870:6518:2:1:13 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  43	 240b:c0e0:105:2870:6518:2:1:14 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  44	 240b:c0e0:105:2870:6518:2:1:15 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  45	 240b:c0e0:105:2870:6518:2:1:16 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  46	 240b:c0e0:105:2870:6518:2:1:17 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  47	 240b:c0e0:105:2870:6518:2:1:18 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  48	 240b:c0e0:105:2870:6518:2:1:19 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  49	 240b:c0e0:105:2870:6518:2:1:1a 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  50	 240b:c0e0:105:2870:6518:2:1:1b 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  51	 240b:c0e0:105:2870:6518:2:1:1c 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  52	 240b:c0e0:105:2870:6518:2:1:1d 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  53	 240b:c0e0:105:2870:6518:2:1:1e 1 ipv6
2021/05/16 13:55:26 [DEBUG] [STORE]  54	 240b:c0e0:105:2870:6518:2:1:1f 1 ipv6
2021/05/16 13:55:26 [INFO] [CORE] Controller started
2021/05/16 13:55:26 [INFO] Starting IPAMClient Informer
I0516 13:55:26.362609       1 shared_informer.go:240] Waiting for caches to sync for F5 IPAMClient Controller
2021/05/16 13:55:26 [DEBUG] Enqueueing on Create: kube-system/ipam.172.28.15.147.k8s
2021/05/16 13:55:26 [DEBUG] Enqueueing on Create: kube-system/ipam.k8s
I0516 13:55:26.463485       1 shared_informer.go:247] Caches are synced for F5 IPAMClient Controller
2021/05/16 13:55:26 [DEBUG] K8S Orchestrator Started
2021/05/16 13:55:26 [DEBUG] Starting Response Worker
2021/05/16 13:55:26 [DEBUG] Starting Custom Resource Worker
2021/05/16 13:55:26 [DEBUG] Processing Key: &{0xc00016e000 <nil> Create}
2021/05/16 13:55:26 [DEBUG] Processing Key: &{0xc00016e148 <nil> Create}
2021/05/16 13:55:26 [DEBUG] [CORE] Allocated IP: 240b:c0e0:105:2870:6518:2:1:1 for Request:
Hostname: mysite.f5demo.com	Key: 	CIDR: 	IPAMLabel: ipv6	IPAddr: 	Operation: Create

2021/05/16 13:55:26 [ERROR] [IPMG] Unable to Create 'A' Record, as Invalid IP Address Provided
2021/05/16 13:55:26 [ERROR] Unable to Update F5IPAM: kube-system/ipam.172.28.15.147.k8s	 Error: F5IPAM.fic.f5.com "ipam.172.28.15.147.k8s" is invalid: status.IPStatus.ip: Invalid value: "": status.IPStatus.ip in body should match '^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$'
2021/05/16 13:55:26 [DEBUG] Updated: kube-system/ipam.172.28.15.147.k8s with Status. With IP: 240b:c0e0:105:2870:6518:2:1:1 for Request:
Hostname: mysite.f5demo.com	Key: 	CIDR: 	IPAMLabel: ipv6	IPAddr: 240b:c0e0:105:2870:6518:2:1:1	Operation: Create
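The failure is in the CRD status validation, not in allocation: the allocated address is valid IPv6, but the status.IPStatus.ip pattern in the F5IPAM CRD only matches dotted-quad IPv4. The regex from the error message demonstrates this:

```python
import ipaddress
import re

# IPv4-only pattern copied from the validation error above
ipv4_pattern = (r'^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}'
                r'([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$')

addr = "240b:c0e0:105:2870:6518:2:1:1"
print(re.match(ipv4_pattern, addr))        # None -> CRD status update is rejected
print(ipaddress.ip_address(addr).version)  # 6   -> the address itself is valid
```

Supporting IPv6 would require relaxing or extending that pattern in the CRD schema.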

Observations (if any)

IPAM default deployment in GKE (w/o local storage) fails with releases >0.1.4

Setup Details

FIC Version : 0.1.5 and 0.1.6
CIS Version : not relevant
FIC Build: f5networks/f5-ipam-controller:latest // 0.1.5 and 0.1.6
CIS Build: not relevant
BIGIP Version: not relevant
AS3 Version: not relevant
Orchestration: K8S
Orchestration Version: v1.21.5-gke.1302
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>

Description

The container deployment fails with the default deployment YAML: https://clouddocs.f5.com/containers/latest/userguide/ipam/#f5-ipam-controller-deployment

sudo kubectl get pod -n kube-system | grep ipam
f5-ipam-controller-6f7d67b9b-rctvj 0/1 Error 2 17s

sudo kubectl logs -n kube-system f5-ipam-controller-6f7d67b9b-rctvj
2022/01/03 14:37:46 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.6, BuildInfo: azure-1677-f86d2913adf51b4c8ebc04cac919203623abe5d6
2022/01/03 14:37:46 [ERROR] [STORE] Unable to create IPAM DB file: open /app/ipamdb/cis_ipam.sqlite3: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x5c73b5]

goroutine 1 [running]:
github.com/F5Networks/f5-ipam-controller/pkg/provider.(*IPAMProvider).Init(0xc000393ee0, 0x7ffe9359b96e, 0x8f, 0x7ff7df6fb098)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/provider/provider.go:60 +0xf5
github.com/F5Networks/f5-ipam-controller/pkg/provider.NewProvider(0x7ffe9359b96e, 0x8f, 0x2000107)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/provider/provider.go:44 +0xa5
github.com/F5Networks/f5-ipam-controller/pkg/manager.NewIPAMManager(0x7ffe9359b96e, 0x8f, 0x28, 0xc0003aaa70, 0x1)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/manager/f5ipammanager.go:43 +0x39
github.com/F5Networks/f5-ipam-controller/pkg/manager.NewManager(0x17cde2c, 0xe, 0x7ffe9359b96e, 0x8f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/F5Networks/f5-ipam-controller/pkg/manager/manager.go:53 +0x452
main.main()
/go/src/github.com/F5Networks/f5-ipam-controller/cmd/f5-ipam-controller/main.go:278 +0x4a5

Older releases (<0.1.5) work as expected.
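The panic follows directly from the failed file open: from 0.1.5 onward the controller expects a writable directory at /app/ipamdb, which suggests the directory only exists when a volume is mounted there. A small sketch of the open sequence (a temp directory stands in for /app/ipamdb) showing that the directory must exist before SQLite can create the DB file:

```python
import os
import sqlite3
import tempfile

db_dir = os.path.join(tempfile.mkdtemp(), "ipamdb")  # stand-in for /app/ipamdb
# Without this step (or a mounted volume providing the directory), connect()
# fails with "no such file or directory", matching the controller's error.
os.makedirs(db_dir, exist_ok=True)
db_path = os.path.join(db_dir, "cis_ipam.sqlite3")
sqlite3.connect(db_path).close()  # creates the DB file on first connect
print(os.path.exists(db_path))   # True
```

In other words, the default deployment YAML needs a volume (even an emptyDir) mounted at /app/ipamdb for releases after 0.1.4, and the controller should fail gracefully rather than panic when it is missing.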

Steps To Reproduce - failed deployment

  1. try to deploy with latest image based on: https://clouddocs.f5.com/containers/latest/userguide/ipam/#f5-ipam-controller-deployment

Expected Result

The deployment should create an IPAM container running in one pod

Actual Result

The pod cannot be created

Diagnostic Information

2022/01/03 14:37:46 [ERROR] [STORE] Unable to create IPAM DB file: open /app/ipamdb/cis_ipam.sqlite3: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference

Steps To Reproduce - running deployment

  1. try to deploy with image 'f5networks/f5-ipam-controller:0.1.4' based on: https://clouddocs.f5.com/containers/latest/userguide/ipam/#f5-ipam-controller-deployment

Expected Result

The deployment should create an IPAM container running in one pod

Actual Result

The pod is up and running

Diagnostic Information

sudo kubectl get pod -n kube-system | grep ipam
f5-ipam-controller-6f7d67b9b-rctvj 0/1 Terminating 4 2m38s
f5-ipam-controller-86757b4596-hc9dj 1/1 Running 0 13s

sudo kubectl logs -n kube-system f5-ipam-controller-86757b4596-hc9dj
2022/01/03 14:39:38 [INFO] [INIT] Starting: F5 IPAM Controller - Version: 0.1.4, BuildInfo: azure-453-9f505dd510b697a3b0058aefa7aace9ec4b519c3
2022/01/03 14:39:38 [INFO] [CORE] Controller started
2022/01/03 14:39:38 [INFO] Starting IPAMClient Informer
I0103 14:39:38.906049 1 shared_informer.go:240] Waiting for caches to sync for F5 IPAMClient Controller
E0103 14:39:39.019560 1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: f5ipams.fic.f5.com is forbidden: User "system:serviceaccount:kube-system:ipam-ctlr" cannot list resource "f5ipams" in API group "fic.f5.com" in the namespace "kube-system"
E0103 14:39:40.309671 1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: f5ipams.fic.f5.com is forbidden: User "system:serviceaccount:kube-system:ipam-ctlr" cannot list resource "f5ipams" in API group "fic.f5.com" in the namespace "kube-system"
E0103 14:39:43.422518 1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: f5ipams.fic.f5.com is forbidden: User "system:serviceaccount:kube-system:ipam-ctlr" cannot list resource "f5ipams" in API group "fic.f5.com" in the namespace "kube-system"
E0103 14:39:48.758814 1 reflector.go:138] github.com/F5Networks/f5-ipam-controller/pkg/ipammachinery/informers.go:35: Failed to watch *v1.F5IPAM: failed to list *v1.F5IPAM: f5ipams.fic.f5.com is forbidden: User "system:serviceaccount:kube-system:ipam-ctlr" cannot list resource "f5ipams" in API group "fic.f5.com" in the namespace "kube-system"

Observations (if any)

Documentation RBAC - misleading

Why is this example on the main page of the IPAM user guide? I assume it is not needed, as the best practice is to use the ServiceAccount from the CIS controller.


Also, on the installation page, can you please provide an installation example with manifests and not only with Helm?
