orange-opensource / nifikop

The NiFiKop NiFi Kubernetes operator makes it easy to run Apache NiFi on Kubernetes. Apache NiFi is a free, open-source solution that supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.

Home Page: https://orange-opensource.github.io/nifikop/

License: Apache License 2.0

Smarty 0.12% Makefile 1.06% Dockerfile 0.76% Shell 0.51% Go 90.97% JavaScript 3.47% CSS 0.29% SCSS 2.69% Mustache 0.13%
nifi-operator kubernetes-operator golang nifi kubernetes

nifikop's Introduction

nifikop's People

Contributors

arttii, comtef, dependabot[bot], erdrix, fdehay, jstewart612, juldrixx, juldrixxbis, mertkayhan, mh013370, npapapietro


nifikop's Issues

[Feature/Operator] Multi-k8s support

Feature Request

Is your feature request related to a problem? Please describe.

In a multi-cluster Kubernetes situation, it would be great if the operator could support multi-Kubernetes deployment. It should allow:

  • Multi-site deployment for stateless dataflows
  • A NiFi cluster deployed across multiple sites (performance benchmarking needed to validate it)
  • One operator to manage them all!

Describe the solution you'd like to see

There are at least two ways of doing so:

  • Use the Admiralty SDK, as in casskop
  • Check the Istio operator implementation for remote clusters (seems more flexible).

Nodes State entry not removed after Scaledown

Bug Report

What did you do?
Scaledown: gracefully removed a node

Screenshot 2021-06-14 at 4 28 47 PM

What did you expect to see?
The node's entry in the Status should be removed.

What did you see instead? Under which circumstances?
The entry still exists with state POD_REMOVING, even though the pod has already been removed and no pod is running.

Screenshot 2021-06-14 at 4 15 00 PM

Environment

  • nifikop version: 0.6.0

  • go version: go1.13.15

  • Kubernetes version information: v1.18.14

  • Kubernetes cluster kind:

  • NiFi version: 1.12.1

NiFi Cluster doesn't spin up

Bug Report

Getting an error while deploying a simple NiFi cluster.
{"level":"info","ts":1607960170.9530003,"logger":"cmd","msg":"Operator Version: 0.3.1"}
{"level":"info","ts":1607960170.9530435,"logger":"cmd","msg":"Go Version: go1.14.4"}
{"level":"info","ts":1607960170.95305,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1607960170.9530544,"logger":"cmd","msg":"Version of operator-sdk: v0.18.1"}
{"level":"info","ts":1607960170.9534504,"logger":"leader","msg":"Trying to become the leader."}
I1214 15:36:12.004143 1 request.go:621] Throttling request took 1.034656236s, request: GET:https://10.19.240.1:443/apis/scheduling.k8s.io/v1?timeout=32s
{"level":"info","ts":1607960172.0770478,"logger":"leader","msg":"Found existing lock with my name. I was likely restarted."}
{"level":"info","ts":1607960172.077083,"logger":"leader","msg":"Continuing as the leader."}
time="2020-12-14T15:36:12Z" level=info msg="Writing ready file."
{"level":"info","ts":1607960173.1828148,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"0.0.0.0:8383"}
{"level":"info","ts":1607960173.1836252,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1607960173.1841946,"logger":"cmd","msg":"Starting manager."}
{"level":"info","ts":1607960173.1846204,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1607960173.184939,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nifiregistryclient-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.1851072,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nifiuser-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.1854053,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nifiparametercontext-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.1857305,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nificlustertask-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.1848311,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nificluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.1863284,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nifidataflow-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.2862976,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nifiparametercontext-controller"}
{"level":"info","ts":1607960173.2863894,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nifiparametercontext-controller","worker count":1}
{"level":"info","ts":1607960173.286246,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nifiregistryclient-controller"}
{"level":"info","ts":1607960173.2864208,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nifiregistryclient-controller","worker count":1}
{"level":"info","ts":1607960173.2872162,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nifidataflow-controller"}
{"level":"info","ts":1607960173.2873166,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nifidataflow-controller","worker count":1}
{"level":"info","ts":1607960173.2871523,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nificlustertask-controller"}
{"level":"info","ts":1607960173.2877488,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nificlustertask-controller","worker count":1}
{"level":"info","ts":1607960173.2880201,"logger":"controller_nificlustertask","msg":"Reconciling NifiCluster","Request.Namespace":"nifi","Request.Name":"simplenifi"}
{"level":"info","ts":1607960173.2876427,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nificluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.2873774,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nifiuser-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.388855,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nificluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.3896422,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nifiuser-controller"}
{"level":"info","ts":1607960173.4897692,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nifiuser-controller","worker count":1}
{"level":"info","ts":1607960173.4897482,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"nificluster-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1607960173.5903778,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"nificluster-controller"}
{"level":"info","ts":1607960173.5907698,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"nificluster-controller","worker count":1}
{"level":"info","ts":1607960173.5911188,"logger":"controller_nificluster","msg":"Reconciling NifiCluster","Request.Namespace":"nifi","Request.Name":"simplenifi"}
{"level":"info","ts":1607960173.604301,"logger":"controller_nificluster","msg":"CR status updated","Request.Namespace":"nifi","Request.Name":"simplenifi","status":"ClusterReconciling"}
E1214 15:36:13.713654 1 runtime.go:78] Observed a panic: runtime.boundsError{x:1, y:1, signed:true, code:0x0} (runtime error: index out of range [1] with length 1)
goroutine 512 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x17436e0, 0xc0006ad1a0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x17436e0, 0xc0006ad1a0)
/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/Orange-OpenSource/nifikop/pkg/util/zookeeper.GetPortAddress(...)
nifikop/pkg/util/zookeeper/common.go:32
github.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).pod(0xc000945d40, 0xc000000001, 0xc00037e480, 0xc000850c40, 0x1, 0x1, 0x1abca00, 0xc0008dd000, 0x1, 0x1)
nifikop/pkg/resources/nifi/pod.go:59 +0x35ad
github.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).Reconcile(0xc000945d40, 0x1abca00, 0xc0008dd000, 0x15ccee0, 0x1a5fb40)
nifikop/pkg/resources/nifi/nifi.go:179 +0xa09
github.com/Orange-OpenSource/nifikop/pkg/controller/nificluster.(*ReconcileNifiCluster).Reconcile(0xc0005fd300, 0xc00060a78c, 0x4, 0xc00060a770, 0xa, 0x0, 0xbfedff7b63399626, 0xc000162ea0, 0xc0000c06c8)
nifikop/pkg/controller/nificluster/nificluster_controller.go:174 +0x452
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0002126c0, 0x16ae860, 0xc0009b4a40, 0x17b3c00)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x161
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0002126c0, 0x203000)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0002126c0)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006f0d80)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006f0d80, 0x1a7b080, 0xc00076a030, 0xc00000fe01, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006f0d80, 0x3b9aca00, 0x0, 0x18e5701, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(0xc0006f0d80, 0x3b9aca00, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x305
panic: runtime error: index out of range [1] with length 1 [recovered]
panic: runtime error: index out of range [1] with length 1

goroutine 512 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x17436e0, 0xc0006ad1a0)
/usr/local/go/src/runtime/panic.go:969 +0x166
github.com/Orange-OpenSource/nifikop/pkg/util/zookeeper.GetPortAddress(...)
nifikop/pkg/util/zookeeper/common.go:32
github.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).pod(0xc000945d40, 0xc000000001, 0xc00037e480, 0xc000850c40, 0x1, 0x1, 0x1abca00, 0xc0008dd000, 0x1, 0x1)
nifikop/pkg/resources/nifi/pod.go:59 +0x35ad
github.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).Reconcile(0xc000945d40, 0x1abca00, 0xc0008dd000, 0x15ccee0, 0x1a5fb40)
nifikop/pkg/resources/nifi/nifi.go:179 +0xa09
github.com/Orange-OpenSource/nifikop/pkg/controller/nificluster.(*ReconcileNifiCluster).Reconcile(0xc0005fd300, 0xc00060a78c, 0x4, 0xc00060a770, 0xa, 0x0, 0xbfedff7b63399626, 0xc000162ea0, 0xc0000c06c8)
nifikop/pkg/controller/nificluster/nificluster_controller.go:174 +0x452
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0002126c0, 0x16ae860, 0xc0009b4a40, 0x17b3c00)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256 +0x161
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0002126c0, 0x203000)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232 +0xae
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0002126c0)
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211 +0x2b
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006f0d80)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006f0d80, 0x1a7b080, 0xc00076a030, 0xc00000fe01, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006f0d80, 0x3b9aca00, 0x0, 0x18e5701, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(0xc0006f0d80, 0x3b9aca00, 0xc0001698c0)
nifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
nifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:193 +0x305

What did you do?
#! /bin/bash
export GCP_PROJECT=${1}
export GCP_ZONE=us-central1-a
export CLUSTER_NAME=nifi-cluster

gcloud container clusters create $CLUSTER_NAME \
--cluster-version latest \
--machine-type=e2-medium \
--num-nodes 3 \
--zone $GCP_ZONE \
--project $GCP_PROJECT

gcloud container clusters get-credentials $CLUSTER_NAME \
--zone $GCP_ZONE \
--project $GCP_PROJECT

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value core/account)

kubectl create namespace nifi

kubectl create namespace zookeeper

kubectl create namespace cert-manager

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nifikop
EOF

helm install nifikop-zk bitnami/zookeeper \
--namespace=nifi \
--set resources.requests.memory=256Mi \
--set resources.requests.cpu=250m \
--set resources.limits.memory=256Mi \
--set resources.limits.cpu=250m \
--set networkPolicy.enabled=true \
--set replicaCount=3 \
--set namespaces={"nifi"}

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.0/cert-manager.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nificlusters_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiusers_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiusergroups_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifidataflows_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiparametercontexts_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiregistryclients_crd.yaml

helm install nifikop \
orange-incubator/nifikop \
--namespace=nifi \
--set namespaces={"nifi"} \
--set resources.requests.memory=256Mi \
--set resources.requests.cpu=250m \
--set resources.limits.memory=256Mi \
--set resources.limits.cpu=250m

cat <<EOF | kubectl create -n nifi -f -
apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: simplenifi
spec:
  service:
    headlessEnabled: true
  zkAddress: "nifikop-zk-zookeeper:2181"
  zkPath: "/simplenifi"
  clusterImage: "apache/nifi:1.12,1"
  oneNifiNodePerNode: false
  nodeConfigGroups:
    default_group:
      isNode: true
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "local-storage"
            resources:
              requests:
                storage: 10Gi
      serviceAccountName: "nifikop"
      resourcesRequirements:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: "1"
          memory: 1Gi
  nodes:
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  listenersConfig:
    internalListeners:
      - type: "http"
        name: "http"
        containerPort: 8080
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
EOF

What did you expect to see?
NiFi is up and running

What did you see instead? Under which circumstances?

Environment

  • nifikop version:
    0.3.1 - (runtime error: index out of range [1] with length 1)
    0.4.1-alpha, 0.4.2-alpha - No service is created

  • Kubernetes version information:

kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.9", GitCommit:"4fb7ed12476d57b8437ada90b4f93b17ffaeed99", GitTreeState:"clean", BuildDate:"2020-07-15T16:18:16Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.14-gke.1200", GitCommit:"7c407f5cc8632f9af5a2657f220963aa7f1c46e7", GitTreeState:"clean", BuildDate:"2020-12-01T09:20:59Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    GCP Kubernetes Engine
    Master version 1.17.14-gke.1200

  • NiFi version:
    apache/nifi:1.12.1

nifikop tries to delete nifiusers in other namespaces while deleting nificluster

Bug Report

What did you do?

  • the operator is scoped to a specific namespace
  • the nificluster has nifiusers and nifigroups configured
  • when deleting the NifiCluster (CR), the following error message appears:
2021-05-25T13:08:26.505Z	ERROR	controller-runtime.manager.controller.nificluster	Reconciler error	{"reconciler group": "nifi.orange.com", "reconciler kind": "NifiCluster", "name": "nifi", "namespace": "WHATEVER", "error": "nifiusers.nifi.orange.com is forbidden: User \"MYUSER\" cannot deletecollection resource \"nifiusers\" in API group \"nifi.orange.com\" in the namespace \"openshift-kube-controller-manager\""}
  • a deletion is not possible because the operator is stuck at this point

What did you expect to see?
Nifikop shouldn't try to delete nifiusers in namespaces that the operator is not scoped to.
The deletion of the nificluster should just work.

Environment

  • nifikop version: 0.6.1 and below

[NiFiDataflowTest] Extends Operator to manage Flow test validation

Feature Request

Is your feature request related to a problem? Please describe.

There is no way to automate NiFi dataflow validation tests.

Describe the solution you'd like to see

Define a resource like NifiDataflowFunctionalTest that describes a list of tests to run.
Something like:

nifiDataflowTest:
  nifiDataflowRef: NifiDataflowSpec
  inputsData:
    - content: string
      attributes: map[string]string
      injectComponentRef: string
  checkAssertions:
    - connectionRef: string
      content: 
        kind: [exactlyMatch | regexMatch]
        value: string
      attributes: map[string]{kind: [exactlyMatch | regexMatch], value: string}
  disableComponentRefs: list(string)

With the following logic:

  • Deploy the NiFi flow specified in nifiDataflowTest.nifiDataflowRef,
  • Disable all components referenced in nifiDataflowTest.disableComponentRefs,
  • Stop all components with an incoming connection listed in nifiDataflowTest.checkAssertions[*].connectionRef,
  • Create a GenerateFlowFile for each element of nifiDataflowTest.inputsData and create a connection to the component referenced in nifiDataflowTest.inputsData[*].injectComponentRef,
  • Then start all the other components,
  • Check, for each element of nifiDataflowTest.checkAssertions[*].connectionRef, whether the connection contains a flowfile; if so, compare the content and attributes of the flowfile with the associated assertion. If it doesn't match, the test fails; if it does match, start the output component of the connection. Repeat until one assertion fails or all assertions have passed.

Using own Certificate Example (NiFiKop)

Need a document describing how to run a secured cluster with user-provided certificates.
I tried spawning a cluster with my own certificates, but the pods do not come up; as far as I can see, the certificates are not created properly.
Doc Report
Screenshot 2021-07-12 at 10 04 42 AM

Changes in the CR:
Screenshot 2021-07-12 at 10 15 28 AM

Details in the screenshots:

  1. Screenshot 2021-07-12 at 10 28 38 AM

  2. cert-manager logs
     logs_cert_manager.log

What did you do?
In sslSecrets, create is set to false.
openssl genrsa -out MyRootCA.key 2048
openssl req -x509 -new -nodes -key MyRootCA.key -sha256 -days 1024 -out MyRootCA.pem

openssl genrsa -out MyClient1.key 2048
openssl req -new -key MyClient1.key -out MyClient1.csr
openssl x509 -req -in MyClient1.csr -CA MyRootCA.pem -CAkey MyRootCA.key -CAcreateserial -out MyClient1.pem -days 1024 -sha256

Finally, create a secret in the same namespace and set create to false.
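For illustration only, such a secret built from the files generated above might look roughly like the sketch below; the secret name and the key names are assumptions, since the exact keys NiFiKop expects for sslSecrets with create: false are not confirmed here:

apiVersion: v1
kind: Secret
metadata:
  name: my-nifi-certs      # hypothetical name, to be referenced from the NifiCluster sslSecrets
  namespace: nifi
type: Opaque
stringData:
  ca.crt: |                # assumed key name; contents of MyRootCA.pem
    -----BEGIN CERTIFICATE-----
    ...
  tls.crt: |               # assumed key name; contents of MyClient1.pem
    -----BEGIN CERTIFICATE-----
    ...
  tls.key: |               # assumed key name; contents of MyClient1.key
    -----BEGIN RSA PRIVATE KEY-----
    ...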

What did you expect to see?
Certificates are created properly and the cluster reaches the running state.

What did you see instead? Under which circumstances?
Certificates are not created properly and the cluster does not spawn.

Environment

  • nifikop version: 0.6.0


  • go version: go1.13.15

  • Kubernetes version information: v1.18.14


  • Kubernetes cluster kind:

  • NiFi version: 1.13.2

Additional context
My hands-on experience with certificates is limited; please guide me in the right direction and correct me if I'm doing something wrong.

[Feature/NiFiUser] Add pki manager / user support to operator

Feature Request

Is your feature request related to a problem? Please describe.

The aim of this feature is to add the ability to use different PKI / user managers.

Describe the solution you'd like to see

This could be done by moving the PKI type and issuerRef to the User CRD.

Additional context

Refer to the PR #337 and PR #354

Error retrieving resource lock nifi/f1c5ece8.example.com

I need some help fixing an access issue when retrieving the resource lock.

Question

What did you do?
I'm trying to run Nifi using Terraform in GKE.

What did you expect to see?
Nifi cluster running properly.

What did you see instead? Under which circumstances?
I'm getting the following error:
error retrieving resource lock nifi/f1c5ece8.example.com: configmaps "f1c5ece8.example.com" is forbidden: User "system:serviceaccount:nifi:nifikop" cannot get resource "configmaps" in API group "" in the namespace "nifi"
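For reference, the error says the nifikop ServiceAccount cannot get the leader-election ConfigMap; a minimal RBAC sketch that grants that access in the nifi namespace could look like the following (the exact verbs nifikop needs for leader election are an assumption):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nifikop-leader-election
  namespace: nifi
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    # Verbs are an assumption; leader election typically needs at least these.
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nifikop-leader-election
  namespace: nifi
subjects:
  - kind: ServiceAccount
    name: nifikop
    namespace: nifi
roleRef:
  kind: Role
  name: nifikop-leader-election
  apiGroup: rbac.authorization.k8s.io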

Environment

  • nifikop version:

v0.6.0-release

  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-gke.2100", GitCommit:"36d0b0a39224fef7a40df3d2bc61dfd96c8c7f6a", GitTreeState:"clean", BuildDate:"2021-03-16T09:15:29Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:

  • NiFi version:

1.13.2

Untrusted proxy error after logging in with OIDC

Type of question

General Context/ Troubleshooting

Question

What did you do?
Started a secured, 3-node cluster in EKS. OpenID Connect is configured.

What did you expect to see?
After connecting to the cluster from a web browser and logging in via OpenID Connect, I expected to see the NiFi UI.

What did you see instead? Under which circumstances?
Instead I see:

Untrusted proxy nifi-cluster-1-node.nifi-cluster-headless.nifi.svc.cluster.local, O=cert-manager

Environment

Additional context
I am using an ELB with stickiness turned on, so I think the issue here is with NiFi authorizers, not with authenticating via OpenID Connect. I'm using the self-signed certs issued by NiFi, and I have a separate internal load balancer with an ACM cert and external-dns configured. That component is working.

I have initial admin e-mail configured as well.
I'm also using the suggested NiFi properties mapping config from the nifikop docs.

I've tried researching this untrusted proxy error and it seems there are two common suggestions:

  1. Make sure the host name in authorizers.xml is an exact match of the cert. (Verified this multiple times; I even tried patching in explicitly what I saw on the cert.)
  2. Provide /proxy permission to the initial admin user. (haven't been able to try this)

Since authorizers.xml is generated by nifikop, I don't think it's a typo that's causing the issue here. Any thoughts on what to try next?
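On suggestion 2, if granting the proxy policy through the operator is an option, an accessPolicies entry for it might look roughly like the sketch below (modeled on the accessPolicies format shown in the componentId issue further down this page; whether nifikop accepts type: global with resource: /proxy exactly like this is an assumption):

accessPolicies:
  # Assumed mapping of NiFi's "proxy user requests" policy (write on /proxy).
  - type: global
    action: write
    resource: /proxy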

Thanks in advance!

Node Affinity attributes missing for Pods

External services are not coming up and the NiFi UI is not accessible; node affinity attributes are missing for the pods. Please let us know how node affinity can be set through values.yaml.

[Feature/Operator] Enable prometheus exporter

Feature Request

Is your feature request related to a problem? Please describe.

Create a generic way to enable metrics export for Prometheus.

Describe the solution you'd like to see

The operator could reconcile a reportingTask on the cluster side?

Error while deploying simple nifi cluster.

Type of question

Getting an error while deploying a simple NiFi cluster.
Unable to resolve the simplenifi-headless service within DNS. The message below is displayed in the operator logs.
"error":"Get "http://simplenifi-headless.nifi.svc.cluster.local:8080/nifi-api/controller/cluster\": dial tcp: lookup simplenifi-headless.nifi.svc.cluster.local on 10.96.0.10:53: no such host",

Question

What did you do?
Executed the steps below:

  1. Followed the getting started doc and installed the prerequisites, a 3-node ZooKeeper, and cert-manager using helm install.
  2. Deployed the CRDs manually:
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nificlusters_crd.yaml
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiusers_crd.yaml
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiusergroups_crd.yaml
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifidataflows_crd.yaml
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiparametercontexts_crd.yaml
    kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nifiregistryclients_crd.yaml
  3. Installed the NiFi operator using helm:
    helm install nifikop \
    orange-incubator/nifikop \
    --namespace=nifi \
    --set namespaces={"nifi"} \
    --set image.tag=v0.4.1-alpha-release
  4. Cloned the repo, edited simplenificluster.yaml, and then deployed a simple NiFi cluster.

Edited the properties below:
spec.zkAddress: "nifikop-zk-zookeeper:2181"
spec.nodeConfigGroups.default_group.serviceAccountName: "nifikop"
spec.nodeConfigGroups.default_group.storageConfigs[].pvcSpec.storageClassName: "nfs-client"

Executed the deployment:
kubectl create -n nifi -f config/samples/simplenificluster.yaml

What did you expect to see?
Expecting the two cluster nodes to be in the running state as part of the simplenifi deployment.

What did you see instead? Under which circumstances?
The simplenifi pods are stuck in the Init state, and there is no error reported in the pod descriptions.
Inspecting the operator logs, I observe the error below:
"error":"Get "http://simplenifi-headless.nifi.svc.cluster.local:8080/nifi-api/controller/cluster\": dial tcp: lookup simplenifi-headless.nifi.svc.cluster.local on 10.96.0.10:53: no such host",

Here is cluster status:

(base) ~/config/ams  kubectl -n nifi get all
NAME READY STATUS RESTARTS AGE
pod/nifikop-68646cd785-nxhkm 1/1 Running 0 17m
pod/nifikop-zk-zookeeper-0 1/1 Running 30 16d
pod/nifikop-zk-zookeeper-1 1/1 Running 30 16d
pod/nifikop-zk-zookeeper-2 1/1 Running 30 16d
pod/simplenifi-1-nodemn8fg 0/1 Init:0/1 0 6m11s
pod/simplenifi-2-node78nxj 0/1 Init:0/1 0 6m11s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nifikop-zk-zookeeper ClusterIP 10.98.53.129 2181/TCP,2888/TCP,3888/TCP 16d
service/nifikop-zk-zookeeper-headless ClusterIP None 2181/TCP,2888/TCP,3888/TCP 16d
service/simplenifi LoadBalancer 10.96.189.36 8080:32413/TCP,6007:32125/TCP,10000:32286/TCP 6m12s
service/simplenifi-headless ClusterIP None 8080/TCP,6007/TCP,10000/TCP 6m12s

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nifikop 1/1 1 1 17m

NAME DESIRED CURRENT READY AGE
replicaset.apps/nifikop-68646cd785 1 1 1 17m

NAME READY AGE
statefulset.apps/nifikop-zk-zookeeper 3/3 16d
(base) ~/config/ams 

Environment

  • nifikop version:
    0.4.1-alpha-release

  • Kubernetes version information:

(base) ~/config/ams  kubectl --kubeconfig version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:09:16Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
On-prem k8s cluster, 3 master nodes and 3 worker nodes

  • NiFi version:
    apache/nifi:1.12.1

Additional context
Observing similar behaviour in two separate environments.

[Bug] headlessServiceEnabled incorrect in cluster samples

Bug Report

What did you do?
Tried to deploy the sample cluster using the headless service spec as formatted here

What did you expect to see?
The sample cluster to deploy.

What did you see instead? Under which circumstances?
An error stating that headlessServiceEnabled must be included.

Environment

  • nifikop version:

0.2.0

Possible Solution
I refactored my NiFiCluster object to have a field headlessServiceEnabled instead of what the sample has here.

Looks like this is what should be in the config, based on this line in the CRD.
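For illustration, the workaround described above places the flag directly under spec (field name taken from this issue and the linked CRD line; note that the newer samples elsewhere on this page use service.headlessEnabled instead):

spec:
  headlessServiceEnabled: true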

Additional context
I'm surprised no one else has run into this issue trying to create the sample.

Seems like a simple fix to all 3 samples; I just would like confirmation that I'm not missing something here.

Dataflow creation not working

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?

how to implement a specific feature

Question

What did you do?
Deployed a nifi-registry, nifi cluster, registry client and parameter context. Created a process group and versioned it in nifi-registry. Now I'm trying to deploy a dataflow which references that process group as shown below.

apiVersion: nifi.orange.com/v1alpha1
kind: NifiDataflow
metadata:
  name: test
spec:
  parentProcessGroupID: "16cfd2ec-0174-1000-0000-00004b9b35cc"
  bucketId: "2f27bb26-83a4-4b2d-9ed9-78bebdd63c7b"
  flowId: "95f1ed2d-d194-41d0-b844-e5e20df02b3a"
  flowVersion: 1
  runOnce: false
  skipInvalidControllerService: true
  skipInvalidComponent: true
  clusterRef:
    name: nifi
    namespace: ns
  registryClientRef:
    name: nifi-registry-client
    namespace: ns
  parameterContextRef:
    name: dataflow-lifecycle-1
    namespace: ns
  updateStrategy: drain

What did you expect to see?
I expected to see the process group appear in the UI.

What did you see instead? Under which circumstances?

Internal error.

Nifi logs:

2021-02-10 12:02:45,936 INFO [NiFi Web Server-132] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifi-controller.ns.mgt.cluster.local 2021-02-10 12:02:45,938 INFO [NiFi Web Server-138] o.a.n.w.s.NiFiAuthenticationFilter Attempting request for (CN=nifi-controller.ns.mgt.cluster.local) POST https://nifi-headless.ns.svc.cluster.local:8443/nifi-api/process-groups/eb584e3d-e779-42aa-93bf-9ce5638d0398/process-groups (source ip: x.x.x.x) 2021-02-10 12:02:45,938 INFO [NiFi Web Server-138] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for nifi-controller.ns.mgt.cluster.local 2021-02-10 12:02:45,939 ERROR [NiFi Web Server-138] o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: java.lang.NullPointerException. Returning Internal Server Error response. java.lang.NullPointerException: null

Nifikop logs:

{"level":"info","ts":1612955178.3317182,"logger":"dataflow-method","msg":"Retrieving Nifi client for ns/nifi"} {"level":"info","ts":1612955178.34934,"logger":"accesspolicies-method","msg":"Retrieving Nifi client for ns/nifi"} {"level":"error","ts":1612955178.351857,"logger":"nifi_client","msg":"Error during talking to nifi node","error":"Non 201 response from nifi node: 500 Internal Server Error","errorVerbose":"Non 201 response from nifi node: 500 Internal Server Error\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.errorCreateOperation\n\tnifikop/pkg/nificlient/common.go:51\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.(*nifiClient).CreateProcessGroup\n\tnifikop/pkg/nificlient/processgroup.go:39\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers/dataflow.CreateDataflow\n\tnifikop/pkg/clientwrappers/dataflow/dataflow.go:73\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/nifidataflow.(*ReconcileNifiDataflow).Reconcile\n\tnifikop/pkg/controller/nifidataflow/nifidataflow_controller.go:226\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tnifikop/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.errorCreateOperation\n\tnifikop/pkg/nificlient/common.go:51\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.(*nifiClient).CreateProcessGroup\n\tnifikop/pkg/nificlient/processgroup.go:39\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers/dataflow.CreateDataflow\n\tnifikop/pkg/clientwrappers/dataflow/dataflow.go:73\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/nifidataflow.(*ReconcileNifiDataflow).Reconcile\n\tnifikop/pkg/controller/nifidataflow/nifidataflow_controller.go:226\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tnifikop/vendor/k8s.io/apima
chinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"} {"level":"error","ts":1612955178.3519518,"logger":"dataflow-method","msg":"Create registry-client request failed since Nifi node returned non 201","error":"non 201 response from NiFi cluster","errorVerbose":"non 201 response from NiFi cluster\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.init\n\tnifikop/pkg/nificlient/common.go:25\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5420\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:190\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tnifikop/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers.ErrorCreateOperation\n\tnifikop/pkg/clientwrappers/common.go:35\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers/dataflow.CreateDataflow\n\tnifikop/pkg/clientwrappers/dataflow/dataflow.go:74\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/nifidataflow.(*ReconcileNifiDataflow).Reconcile\n\tnifikop/pkg/controller/nifidataflow/nifidataflow_controller.go:226\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"} {"level":"error","ts":1612955178.3519928,"logger":"dataflow-method","msg":"could not communicate with nifi node","error":"non 201 response from NiFi cluster","errorVerbose":"non 201 response from NiFi 
cluster\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.init\n\tnifikop/pkg/nificlient/common.go:25\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5420\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:190\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tnifikop/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers.ErrorCreateOperation\n\tnifikop/pkg/clientwrappers/common.go:39\ngithub.com/Orange-OpenSource/nifikop/pkg/clientwrappers/dataflow.CreateDataflow\n\tnifikop/pkg/clientwrappers/dataflow/dataflow.go:74\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/nifidataflow.(*ReconcileNifiDataflow).Reconcile\n\tnifikop/pkg/controller/nifidataflow/nifidataflow_controller.go:226\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"} {"level":"info","ts":1612955178.352029,"logger":"controller_nifidataflow","msg":"failure creating dataflow","Request.Namespace":"ns","Request.Name":"test"} {"level":"error","ts":1612955178.3520458,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"nifidataflow-controller","request":"ns/test","error":"non 201 response from NiFi cluster","errorVerbose":"non 201 response from NiFi 
cluster\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.init\n\tnifikop/pkg/nificlient/common.go:25\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5420\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:5415\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:190\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tnifikop/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tnifikop/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tnifikop/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"} {"level":"info","ts":1612955178.3731368,"logger":"controller_nificluster","msg":"CR status updated","Request.Namespace":"ns","Request.Name":"nifi","status":"ClusterRunning"}

Environment

  • nifikop version: 0.4.2 alpha-release

  • Kubernetes version information: OpenShift 4.6.z, K8s 1.19

  • NiFi version: 1.12.1

Additional context
I also got the same error for nifi versions 1.11.4 and 1.13.0.

[Feature/Operator] Scaledown - Change Liveness & Readiness

Feature Request

Is your feature request related to a problem? Please describe.

The current readiness & liveness probes for a NiFi node are based on whether the port can be queried, but we don't check whether the node is connected and part of the cluster. This limitation comes from the fact that, if membership were checked, the pod would be detected as not Ready in a scale-down situation and lead to a "freeze" situation. Nonetheless, this is not ideal.

Describe the solution you'd like to see

It would be interesting, since we work on pods and not on a StatefulSet, to apply different liveness & readiness probes to the targeted node (and only to it) in a scale-down situation.
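For illustration, the kind of pod-level probe being discussed could look roughly like the exec-based sketch below; the nifi-api endpoint is taken from operator logs elsewhere on this page, and the command and timings are assumptions rather than the operator's actual probe:

readinessProbe:
  exec:
    command:
      - bash
      - -c
      # Illustrative only: succeeds when the local node answers on the cluster endpoint.
      - curl -kfs "http://localhost:8080/nifi-api/controller/cluster" > /dev/null
  initialDelaySeconds: 60
  periodSeconds: 20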

No initialAdminUser in NifiCluster CRD

Bug Report

What did you do?
Installed CRD with:

kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nificlusters.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nifiusers.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nifiusergroups.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nifidataflows.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nifiparametercontexts.yaml
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/config/crd/bases/nifi.orange.com_nifiregistryclients.yaml

And wanted to proceed with secured cluster installation, as per documentation:
https://orange-opensource.github.io/nifikop/docs/3_tasks/2_security/1_ssl

What did you expect to see?
Expected the documentation to be up to date.

What did you see instead? Under which circumstances?
kubectl explain nificluster --recursive | grep initialAdminUser does not show that the field is present

Environment

  • nifikop version:

0.6.0-release

Admin user not working with managedAdminUsers tag

Question

How can I fix this? Thanks.

What did you do?
I'm running Nifi inside GKE and using managedAdminUsers tag.

apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: nifi
  namespace: nifi
spec:
  service:
    headlessEnabled: true
  zkAddress: "zookeeper.default.svc.cluster.local:2181"
  zkPath: "/sec"
  clusterImage: "apache/nifi:1.12.1"
  oneNifiNodePerNode: false
  managedAdminUsers:
  - identity : "[email protected]"
    name: "user"
  propagateLabels: true
  ...

What did you expect to see?
I expected to see the admin user under nifi/data/user.xml and nifi/data/authorizations.xml.

What did you see instead? Under which circumstances?
There is no user created using the above tag.

Environment

  • nifikop version:

0.6.0-release

  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-gke.502", GitCommit:"a2a88ab32201dca596d0cdb116bbba3f765ebd36", GitTreeState:"clean", BuildDate:"2021-03-08T22:06:24Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:

GKE -> 1.18.16-gke.502

  • NiFi version:

1.12.1

no effect for componentId in NifiUser / NifiUserGroup (accessPolicies)

Bug Report

What did you do?
I want to grant permission on a specific "component" (let's call it "itam"; it is a process group under "NiFi Flow") via the CR "NifiUserGroup".
The doc is here: https://orange-opensource.github.io/nifikop/docs/v0.4.3/3_tasks/4_nifi_user_group

...
  - type: component
    action: read
    resource: /
    componentType: process-groups
    componentId: 1ac3ab15-0177-1000-0000-000017feb4b2
#    componentId: "1ac3ab15-0177-1000-0000-000017feb4b2"
#    componentId: "itam"

I tried different things for componentId.

What did you expect to see?
User policy should be for example:
"Component policy for process Group itam"

What did you see instead? Under which circumstances?
No such policy.
If I don't specify "componentId" then it is:
"Component policy for process Group NiFi Flow"

Environment

  • nifikop version: nifikop: v0.4.2-alpha-release

  • go version: -

  • Kubernetes version information: OpenShift 4.6.z, K8s 1.19

  • Kubernetes cluster kind: ?

  • NiFi version: nifi-1.11.4-RC1

Possible Solution
It seems the componentId is not set here:

It would be very nice if someone could help!

arm64 support

Feature Request

Is your feature request related to a problem? Please describe.
I'm always frustrated when a software package does not support the arm64 architecture.

Describe the solution you'd like to see
Release arm64 as part of your container build processes.

Additional context
arm64 is frequently found in Raspberry Pi Kubernetes clusters and on the new Mac computers with arm64 processors.

Missing externalServices field in NifiCluster CRD

Bug Report

What did you do?
Followed these installation steps to deploy the sample cluster using kubectl apply -n nifi -f simplenificluster.yaml

What did you expect to see?

nificluster.nifi.orange.com/simplenifi created

What did you see instead? Under which circumstances?

error: error validating "simple-nifi-cluster.yaml": error validating data: ValidationError(NifiCluster.spec): unknown field "externalServices" in com.orange.nifi.v1alpha1.NifiCluster.spec; if you choose to ignore these errors, turn validation off with --validate=false

Environment

  • nifikop version:
CHART           APP VERSION
nifikop-0.5.2   0.5.2-release
  • Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind:
    Vanilla Kubernetes deployed with kops.

Possible Solution

Either enable externalServices in the chart CRD or provide an alternative in the documentation.

cannot list resource "namespaces" in API group "" at the cluster scope

Is anyone running the nifikop operator scoped to a single namespace on OpenShift?

Since nifikop version v0.4.3-release, up to and including v0.6.1-release, there is an error message saying it does not have enough permission to list namespaces at the cluster level:

E0521 09:27:33.457184   11200 reflector.go:138] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "myUser" cannot list resource "namespaces" in API group "" at the cluster scope

It seems to be related to nifiuser and nifiusergroup.
I tried to debug it, but without success.
What I have identified is that it occurs in controllers/nifiusergroup_controller.go, around line 146:

	r.Recorder.Event(instance, corev1.EventTypeNormal, "Reconciling",
		fmt.Sprintf("Reconciling user group %s", instance.Name))

It seems a little bit random.

CRD Error

Bug Report

What did you see instead? Under which circumstances?
On applying the CRDs I got this error:

customresourcedefinition.apiextensions.k8s.io/nifiusers.nifi.orange.com configured
The CustomResourceDefinition "nificlusters.nifi.orange.com" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property

Environment

  • nifikop version:
  • go version:
  • Kubernetes version information:
  • Kubernetes cluster kind: 1.18

  • NiFi version: using master branch

Possible Solution

Additional context
Add any other context about the problem here.

mount/use existing pvc on nifi nodes

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?
Best practice: how to mount an existing PVC on NiFi nodes.

Question

What did you do?

First I created a PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fstp-pvc
  namespace: usecase
  labels:
    pvc: fstp
spec:
  storageClassName: "ceph-fs-storage"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Then I tried to mount it via labels through the nificlusters.nifi.orange.com resource:

...
    storageConfigs:
      - mountPath: "/opt/fstp"
        name: fstp-pvc
        pvcSpec:
          accessModes:
            - ReadWriteMany
          selector:
            matchLabels:
              pvc: fstp
...

What did you expect to see?
Nifi mounts the existing pvc.

What did you see instead? Under which circumstances?

No nifi node is scheduled by the operator.

logs from the operator:

PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value","Request.Namespace":"usecase","Request.Name":"nifi"}

{"level":"error","ts":1603277145.6576192,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"nificluster-controller","request":"usecase/nifi","error":"failed to reconcile resource: creating resource failed: PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value","errorVerbose":"creating resource failed: PersistentVolumeClaim \"nifi-0-storagebb7tt\" is invalid: spec.resources[storage]: Required value\nfailed to reconcile 

Environment

  • nifikop version:

image: orangeopensource/nifikop:v0.2.0-release

  • Kubernetes version information:

v1.16.7

  • Kubernetes cluster kind:

nificlusters.nifi.orange.com

  • NiFi version:

1.11.4

[Feature/Operator] Specific Liveness & Readiness command

Feature Request

Is your feature request related to a problem? Please describe.

In the current solution, the readiness & liveness probes simply call the nifi-api and check whether a response from the NiFi node comes back.
Because we need to check whether the node is in the cluster to say "it is ready", this will cause issues when we decommission the node.

Describe the solution you'd like to see

Since we are working at the pod level, we could imagine checking whether the node is part of the cluster in the normal case and, for a decommissioned node, changing the liveness and readiness script to only check whether we can reach it.

spec.commonName: Too long: must have at most 64 bytes

I got the error message:
"error":"could not create user certificate: admission webhook \"webhook.cert-manager.io\" denied the request: spec.commonName: Too long: must have at most 64 bytes"

I think you are aware of this possible problem (mentioned in #21).
I think it's not a good idea to allow more than 64 bytes for the CN or DNS names, because of RFC standards.

If I look at the CN "nifi-0-node.nifi-headless.name-space-longername.svc.cluster.local" (65 bytes),
I can separate it into the following pieces:

nifi (name already quite short)
-0-node (to have a unique name for the nifi nodes)
nifi-headless (quite long)
namespace-longer-name (=namespace)
.svc.cluster.local (k8s specific, probably not changeable)

What is your plan about that topic? Can we shorten something?

[NiFiParameterContext] Update configuration failed

Bug Report

What did you do?

Tried to update a NiFiParameterContext configuration, with controller services referencing some of its parameters.

What did you expect to see?

A clean and successful parameter context update

What did you see instead? Under which circumstances?

The update was blocked due to some active controller services.

Environment

  • nifikop version: 0.6.2
  • NiFi version: 1.13.2

Possible Solution

Test whether the current update is blocked by controller service constraints; if so, disable the controller services until the parameter context is successfully updated.

[Bug/Operator] Unable to remove a NifiCluster due to PKI Finalizer

Bug Report

What did you do?

I removed a secure NiFiCluster.

What did you expect to see?

The cluster and all associated resources removed.

What did you see instead? Under which circumstances?

The operator loops on a CA cert resource that is not found (unknown namespace).

Environment

  • nifikop version: 0.2.0

Nifi https cluster with certificate authentication

Type of question

Help around nifikop

Question

What did you do?

I'm trying to set up a NiFi HTTPS cluster with nifikop. I deployed the sample https://github.com/Orange-OpenSource/nifikop/blob/master/config/samples/tls_secured_nificluster.yaml without problems, but when I remove the OIDC settings from overrideConfigs, NiFi gives me Connection refused on port 8443.

The ZooKeeper cluster and cert-manager are deployed following the "Get Started" configuration:
https://orange-opensource.github.io/nifikop/docs/2_setup/1_getting_started
and the cluster is exposed as described in https://orange-opensource.github.io/nifikop/docs/5_references/1_nifi_cluster/7_external_service_config on port 8443.

I also tried to set this configuration https://orange-opensource.github.io/nifikop/docs/3_tasks/2_security/1_ssl but it doesn't work (if you set initialAdminUser you can't deploy, because that field doesn't exist anymore).

Do I need any specific configuration for certificate authentication?
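
For reference, a minimal sketch of what the admin declaration could look like with the newer field (managedAdminUsers, replacing initialAdminUser; the identity below is a placeholder):

spec:
  managedAdminUsers:
    - identity: "admin@example.com"
      name: "admin"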

What did you expect to see?

Nifi asking for authentication

What did you see instead? Under which circumstances?

Connection refused over exposed service.

Environment

  • nifikop version: 0.5.2

  • Kubernetes version information: 1.20.4

  • NiFi version: 1.12.1

Mounting configmaps into the pod

Question

I was wondering: is there a simple way to copy files from, say, a ConfigMap into the NiFi pod? I need to be able to copy a non-public root CA in there to be able to talk to my OIDC provider. This does not seem to be possible right now. I think the ability to define volumes would address this use case.
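
For illustration, this is what the desired outcome looks like in plain Kubernetes pod-spec terms (not an existing NifiCluster field, just the shape the feature would need to produce):

volumes:
  - name: private-root-ca
    configMap:
      name: private-root-ca
containers:
  - name: nifi
    volumeMounts:
      - name: private-root-ca
        mountPath: /opt/nifi/extra-certs
        readOnly: true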

[Feature/Operator] Support shareProcessNamespace feature

Feature Request

Is your feature request related to a problem? Please describe.

If we want to be able to debug the operator or the cluster's nodes, we need to use the kubectl alpha debug command with ephemeral containers instead of kubectl exec. To do so, we need to enable shareProcessNamespace on the pods.

Describe the solution you'd like to see

Add this field to the CRD and to the chart template, as shown in the sketch below.
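
For reference, this maps to the standard Kubernetes pod spec field below; the exact name of the corresponding CRD field is an assumption:

spec:
  # standard pod spec field the CRD would need to expose
  shareProcessNamespace: true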

Additional context

Check with the security team whether there are any restrictions on leaving it enabled!

Reconcile Error - Nifi cluster communication error: could not connect to nifi nodes

Bug Report

I created and installed the following resources prior to deploying the NiFi cluster:

kubectl create ns zookeeper
kubectl create ns nifi
kubectl create ns nifikop

helm install zookeeper bitnami/zookeeper \
    --set resources.requests.memory=256Mi \
    --set resources.requests.cpu=250m \
    --set resources.limits.memory=256Mi \
    --set resources.limits.cpu=250m \
    --set global.storageClass=standard \
    --set networkPolicy.enabled=true \
    --set replicaCount=3 \
    --namespace=zookeeper

### Install the CustomResourceDefinitions and cert-manager itself
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

# create the resources from:
https://github.com/Orange-OpenSource/nifikop/tree/master/helm/nifikop/crds
kubectl apply -f nifi.orange.com_nificlusters.yaml
kubectl apply -f nifi.orange.com_nifidataflows.yaml
kubectl apply -f nifi.orange.com_nifiparametercontexts.yaml
kubectl apply -f nifi.orange.com_nifiregistryclients.yaml
kubectl apply -f nifi.orange.com_nifiusergroups.yaml
kubectl apply -f nifi.orange.com_nifiusers.yaml

I set up the operator from the project by executing the following commands:

make build; make run

What did you do?
I am trying to deploy the operator and create a basic Nifi cluster with the manifest:

apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: simplenifi
  namespace: nifi
spec:
  service:
    headlessEnabled: true
  zkAddress: "zookeeper.zookeeper:2181"
  zkPath: "/simplenifi"
  clusterImage: "apache/nifi:1.12.1"
  oneNifiNodePerNode: false
  nodeConfigGroups:
    default_group:
      isNode: true
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
      serviceAccountName: "default"
      resourcesRequirements:
        limits:
          cpu: "2"
          memory: 3Gi
        requests:
          cpu: "1"
          memory: 1Gi
  nodes:
    - id: 0
      nodeConfigGroup: "default_group"
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  listenersConfig:
    internalListeners:
      - type: "http"
        name: "http"
        containerPort: 8080
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000

Once I deployed the previous manifest, I got the following error in the operator:

 ERROR   nifi_client     Error during talking to nifi node       {"error": "Get \"http://simplenifi-headless.nifi.svc.cluster.local:8080/nifi-api/controller/cluster\": dial tcp: lookup simplenifi-headless.nifi.svc.cluster.local: no such host"}
github.com/go-logr/zapr.(*zapLogger).Error

The pods and the services in the nifi namespace seem OK:

kubectl get all -n nifi
# output
NAME                         READY   STATUS    RESTARTS   AGE
pod/simplenifi-0-nodezt7dz   1/1     Running   0          48m
pod/simplenifi-1-node5jgxz   1/1     Running   0          48m
pod/simplenifi-2-node9w2xm   1/1     Running   0          48m

NAME                          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                       AGE
service/simplenifi-headless   ClusterIP   None         <none>        8080/TCP,6007/TCP,10000/TCP   48m

What did you expect to see?
No errors from the operator, and NiFi accessible by port-forwarding to the NiFi service.
(I use 8082 because the operator running in dev mode (make run) already uses 8080 locally.)

kubectl port-forward service/simplenifi-headless 8082:8080 -n nifi

I got:

E0623 09:53:31.095165  419131 portforward.go:400] an error occurred forwarding 8082 -> 8080: error forwarding port 8080 to pod 927babdcc7ac70b423116a70ab2ae202b5c4bf9b79198ba29821086c55fde040, uid : failed to execute portforward in network namespace "/var/run/netns/cni-bee50d24-38ae-5b57-bb9b-19bd2cf00ca6": failed to connect to localhost:8080 inside namespace "927babdcc7ac70b423116a70ab2ae202b5c4bf9b79198ba29821086c55fde040", IPv4: dial tcp4 127.0.0.1:8080: connect: connection refused IPv6 dial tcp6 [::1]:8080: connect: connection refused 

What did you see instead? Under which circumstances?
The error mentioned.

Environment

  • nifikop version: master

  • go version: 1.15 (dockerfile)

  • Kubernetes version information: v1.20.7 but also with 1.16, 1.18, 1.20
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-13T02:40:46Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-27T23:27:49Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    v1.20.7 but also with 1.16, 1.18, 1.20

  • NiFi version: 1.12.1

Thanks in advance

[BUG] NiFiKop doesn't manage several NiFiClusters with the same NiFiUser

Bug Report

What did you do?
I created 2 NiFiClusters with the same user as NiFi administrator.

What did you expect to see?
2 different NiFiUsers with differents resource name in Kubernetes.

What did you see instead? Under which circumstances?
Only one NiFiUser resource was created in Kubernetes (for the 1st NiFiCluster). For the 2nd NiFiCluster, the operator couldn't create a new NiFiUser because a resource with the same type and name already existed.

Environment

  • nifikop version:

v0.4.2-alpha

  • go version:

1.15

  • Kubernetes version information:

v1.16.15-gke.6000

  • Kubernetes cluster kind:

GKE

  • NiFi version:

1.12.1

Possible Solution
Add the NiFiCluster name to the NiFiUser resource name.

Need access to nifikop slack channel

Type of question

Unable to register with NifiKop Slack workspace?

Question

What did you do?
In community support, I visited the URL below:
https://nifikop.slack.com/
It needs credentials to register with the workspace.

What did you expect to see?
I expected the NiFiKop workspace to be added to my Slack, but the credentials didn't work.

What did you see instead? Under which circumstances?
It reports that my email ID is not registered.

Environment

  • nifikop version:

    insert release or Git SHA here

  • Kubernetes version information:

    insert output of kubectl version here

  • Kubernetes cluster kind:

  • NiFi version:

Additional context
Add any other context about the question here.

Accessing Nifi

Type of question

I'm not able to access Nifi from outside of containers.

Question

I followed the quick start page, but the external IP of the LoadBalancer stays in the Pending state forever.
I'm able to see the NiFi page when I run the following command inside a container: curl http://nifi-headless.nifi.svc.cluster.local:8080

Could you help me pinpoint what I'm doing wrong, please?

NAMESPACE      NAME                                           READY   STATUS    RESTARTS   AGE
cert-manager   pod/cert-manager-cainjector-6d9776489b-svvdf   1/1     Running   0          18m
cert-manager   pod/cert-manager-d7d8fb5c9-skk29               1/1     Running   0          18m
cert-manager   pod/cert-manager-webhook-6d6d6f9-grm8w         1/1     Running   0          18m
default        pod/zookeeper-0                                1/1     Running   0          19m
default        pod/zookeeper-1                                1/1     Running   0          19m
default        pod/zookeeper-2                                1/1     Running   0          19m
kube-system    pod/coredns-74ff55c5b-9fh6d                    1/1     Running   0          117m
kube-system    pod/etcd-minikube                              1/1     Running   0          117m
kube-system    pod/kube-apiserver-minikube                    1/1     Running   0          117m
kube-system    pod/kube-controller-manager-minikube           1/1     Running   0          117m
kube-system    pod/kube-proxy-c6pd7                           1/1     Running   0          117m
kube-system    pod/kube-scheduler-minikube                    1/1     Running   0          117m
kube-system    pod/storage-provisioner                        1/1     Running   1          117m
nifi           pod/nifi-1-nodehf8hv                           1/1     Running   0          14m
nifi           pod/nifikop-55b6f94469-qt4wr                   1/1     Running   0          16m

NAMESPACE      NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
cert-manager   service/cert-manager           ClusterIP      10.97.146.110    <none>        9402/TCP                      18m
cert-manager   service/cert-manager-webhook   ClusterIP      10.105.32.254    <none>        443/TCP                       18m
default        service/kubernetes             ClusterIP      10.96.0.1        <none>        443/TCP                       117m
default        service/zookeeper              ClusterIP      10.100.136.32    <none>        2181/TCP,2888/TCP,3888/TCP    19m
default        service/zookeeper-headless     ClusterIP      None             <none>        2181/TCP,2888/TCP,3888/TCP    19m
kube-system    service/kube-dns               ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP        117m
nifi           service/clusterip              ClusterIP      10.108.181.8     <none>        8080/TCP                      14m
nifi           service/loadbalancer           LoadBalancer   10.101.103.135   <pending>     8080:30779/TCP                14m
nifi           service/nifi-headless          ClusterIP      None             <none>        8080/TCP,6007/TCP,10000/TCP   14m
nifi           service/nodepart               NodePort       10.97.250.218    <none>        8080:31958/TCP                14m

Environment

  • nifikop version: v0.5.2-release

  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  • Kubernetes cluster kind: docker

  • NiFi version: v1.12.1

[Bug/Operator] Bad DNS names for services in all-node mode

Bug Report

What did you do?

I set up the NiFi cluster with Spec.Service.HeadlessEnabled set to false in secured mode.

What did you expect to see?

The DNS names for each node correctly configured.

What did you see instead? Under which circumstances?

The DNS names for the node services are formatted as:
<cluster-name>-<node id>-node.<cluster-name>-all-node.<namespace>.svc.<cluster domain>

But the service itself resolves as:
<cluster-name>-<node id>.<namespace>.svc.<cluster domain>

Environment

  • nifikop version: 929403c

  • go version: 1.14

  • Kubernetes version information: 1.18

  • Kubernetes cluster kind: GKE and Scratch

  • NiFi version: 1.11.4

Possible Solution

We need to rethink what is expected from the all-node service mode, and set the hostnames correctly.

[Feature/Operator] Sidecar configuration

Feature Request

Is your feature request related to a problem? Please describe.

If we want to add additional sidecars to handle some extra processes (logging, etc.), we need to give users the ability to do so.

Describe the solution you'd like to see

By replicating the way this has been done for CassKop.
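
A rough sketch of what this could look like in a node config group (the sidecarConfigs field name mirrors CassKop and is an assumption, not the current API):

nodeConfigGroups:
  default_group:
    sidecarConfigs:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.10.1
        volumeMounts:
          - name: logs
            mountPath: /var/log/nifi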

Simplenifi cluster is running but unaccessible

Bug Report

What did you do?
I've installed simple nifi cluster following https://orange-opensource.github.io/nifikop/docs/2_setup/1_getting_started

What did you expect to see?
Running nifi cluster with 2 nodes accessible through web UI

NAME READY STATUS RESTARTS AGE
pod/nifikop-586867994d-lkmgc 1/1 Running 0 6h56m
pod/nifikop-586867994d-pvnmn 0/1 Terminating 0 25h
pod/simplenifi-1-nodew5925 1/1 Running 0 6h52m
pod/simplenifi-2-nodegt8rh 1/1 Running 0 22h
pod/zookeeper-0 1/1 Running 1 6h52m
pod/zookeeper-1 1/1 Running 1 6h52m
pod/zookeeper-2 1/1 Running 1 6h52m

What did you see instead? Under which circumstances?
The UI is not accessible through the service simplenifi-all-node. Moreover, I failed to curl http://localhost:8080 from inside a container:

$ curl http://localhost:8080/nifi
curl: (7) Failed to connect to localhost port 8080: Connection refused

Environment

  • nifikop version: 0.5.1

  • Kubernetes version information:

1.18

  • Kubernetes cluster kind:
    Yandex cloud

I can't start the operator using manifest

Bug Report

What did you do?
I tried to start the operator using the manifests and nothing happens...
I executed the following commands:

# Create ns
kubectl create ns zookeeper
kubectl create ns nifi
kubectl create ns nifikop

# Install zk
helm install zookeeper bitnami/zookeeper \
    --set resources.requests.memory=256Mi \
    --set resources.requests.cpu=250m \
    --set resources.limits.memory=256Mi \
    --set resources.limits.cpu=250m \
    --set global.storageClass=standard \
    --set networkPolicy.enabled=true \
    --set replicaCount=3 \
    --namespace=zookeeper

# Install the CustomResourceDefinitions and cert-manager itself
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

# Apply the manifest
kubectl apply -f config/crd/bases/nifi.orange.com_nificlusters.yaml
kubectl apply -f config/crd/bases/nifi.orange.com_nifidataflows.yaml
kubectl apply -f config/crd/bases/nifi.orange.com_nifiparametercontexts.yaml
kubectl apply -f config/crd/bases/nifi.orange.com_nifiregistryclients.yaml
kubectl apply -f config/crd/bases/nifi.orange.com_nifiusers.yaml
kubectl apply -f config/crd/bases/nifi.orange.com_nifiusergroups.yaml

What did you expect to see?
The resources created by the operator; moreover, I cannot see anything in the events.

What did you see instead? Under which circumstances?
Nothing

Environment

  • nifikop version:
    0.6.2-release
    also with master

  • go version:
    1.15 - 1.16

local: go version go1.16.5 linux/amd64

  • Kubernetes version information:
    I tried with minikube and kind, in versions 1.16 - 1.18 - 1.21
  • Kubernetes cluster kind:

  • NiFi version:
    default

Possible Solution
I also tried to run the project in dev mode and that didn't work either:

 ERROR   controller-runtime.manager.controller.nifidataflow      Reconciler error        {"reconciler group": "nifi.orange.com", "reconciler kind": "NifiDataflow", "name": "dataflow-lifecycle", "namespace": "default", "error": "unable to get: nifikop/squidflow because of unknown namespace for the cache"} github.com/go-logr/zapr.(*zapLogger).Error

Additional context
Add any other context about the problem here.

Deploying Secure Cluster on AKS

Bug Report

Hello. This is a very interesting project 👍

I am trying to follow https://orange-opensource.github.io/nifikop/blog/secured_nifi_cluster_on_gcp/ , but deploy it on Azure Kubernetes Service.

I've deployed:

  • AKS cluster
  • zookeeper
  • cert-manager and issuer
  • storage class with WaitForFirstConsumer (and updated the yaml file)
  • registered a client with openid provider (using KeyCloak)

I've updated the nifi cluster resource yaml file with appropriate values from above.

When I try to deploy it, I don't see any pod resources being created at all.

Any suggestions? What's the best way to debug why no pods are even being created? kubectl describe on the nificluster resource doesn't provide any useful information.

I was able to deploy a working cluster on AKS using simple nifi cluster sample (not secured).

Thanks for any suggestions and help!

[BUG] Parameter Context with no value set when the value field is missing or empty

Bug Report

What did you do?
I deployed a NiFi Dataflow with a NiFi Parameter Context. In the NiFi Parameter Context, one of the parameters either had no value field or had an empty value field.

What did you expect to see?
In NiFi, I should have seen a parameter with an empty string as its value.

What did you see instead? Under which circumstances?
In NiFi, I saw a parameter with no value set.

Environment

  • nifikop version:

v0.5.2

  • Kubernetes version information:

v1.16.15-gke.7800

  • Kubernetes cluster kind:

GKE

  • NiFi version:

1.12.1

Possible Solution
Default the value to an empty string in every circumstance.

Example

apiVersion: nifi.orange.com/v1alpha1
kind: NifiParameterContext
metadata:
  creationTimestamp: "2021-02-27T17:45:05Z"
  finalizers:
  - finalizer.nifiparametercontexts.nifi.orange.com
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    nifiCluster: ojse54e49.instances
    release: squid-mapping-enrichment-c641f193d0ab6716e7e3
    uuid: 54e4901008be4dfebe88ef420efe75a7
  name: oem-jg-sqdf-enri-pc
  namespace: instances
  uid: ccf960b4-a59a-4315-b4d0-a65370974d79
spec:
  clusterRef:
    name: ojse54e49
    namespace: instances
  description: 'Parameter context for the cluster: ojse54e49'
  parameters:
  - description: SquidFlow instance name
    name: squidflow_instance_name
    value: squidflow
  - description: Record Path of the timestamp field dating the event
    name: ts_field
status:
  id: e49d25a6-0177-1000-0000-000027658bd4
  version: 2
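
A minimal sketch of the workaround on the user side, assuming the current operator behaviour: declare the empty value explicitly.

parameters:
  - description: Record Path of the timestamp field dating the event
    name: ts_field
    value: ""   # explicit empty string instead of omitting the field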

Custom Resource error message

When attempting to apply
kubectl apply -f https://raw.githubusercontent.com/Orange-OpenSource/nifikop/master/deploy/crds/v1/nifi.orange.com_nificlusters_crd.yaml

The error message below is returned
The CustomResourceDefinition "nificlusters.nifi.orange.com" is invalid: spec.validation.openAPIV3Schema.properties[spec].properties[initContainers].items.properties[ports].items.properties[protocol].default: Required value: this property is in x-kubernetes-list-map-keys, so it must have a default or be a required property

kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:51:19Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

[root@k8-1 v1]# kubelet --version
Kubernetes v1.20.0
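
This validation comes from newer Kubernetes versions requiring a default for properties listed in x-kubernetes-list-map-keys. A commonly applied local workaround (a sketch, assuming you edit the CRD file before applying it) is to add a default to the protocol property wherever the error points:

protocol:
  default: TCP
  type: string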

[Feature/Chart] Custom annotations for Operator deployment

Feature Request

Is your feature request related to a problem? Please describe.

The aim of this feature is to allow users of the Helm chart to provide custom annotations for the operator deployment. Via annotations, users can configure metadata processed by other parties, like log collectors, kube-janitor TTL, etc.
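
A possible shape for this in the chart (the values key and template wiring below are assumptions, not the chart's current API):

# values.yaml
operatorAnnotations:
  janitor/ttl: "7d"

# deployment.yaml template excerpt
metadata:
  annotations:
{{ toYaml .Values.operatorAnnotations | indent 8 }}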

[BUG] PVCs/PVs for one pod are created in different zones

Bug Report

What did you do?
Create the sample cluster

What did you expect to see?
Cluster nodes going up

What did you see instead? Under which circumstances?
Several nodes stay in "Pending" state

Environment
GKE, Nifikop 0.5.2

  • Kubernetes version information:
    GKE 1.18

  • Kubernetes cluster kind:
    Regional GKE cluster

  • NiFi version:
    irrelevant

Possible Solution
Enhance PVC management

Additional context
It seems the issue is linked to the PVC (cluster-wide) -> PV (zonal) mapping. When two PVCs are defined for the same pod, they may end up in two different availability zones.
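
A common mitigation on the storage side (a sketch, not an operator change; the GKE provisioner and parameters are illustrative) is a StorageClass with volumeBindingMode: WaitForFirstConsumer, so volumes are only provisioned once the pod has been scheduled to a zone:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-wait-for-consumer
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer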

Nodes unreachable certificate has expired, but certificate looks OK

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?

Context, Help around nifikop

Question

What did you do?
I set up a cluster with NiFiKop that has been running since August 7th. I haven't done anything / run any flows since then (I was working on other projects). Now that I've come back after 4 months, we are seeing invalid-certificate issues from the self-signed certificates on the NiFi nodes.

It looks like at the 60-day mark (October 5th) a certificate request was issued and fulfilled successfully. All certificates for the nodes and manager also reflect this and are not expired.

However, the cluster manager is showing:

2020-11-25T19:52:26.789089356Z {"level":"info","ts":1606333946.7889802,"logger":"controller_nificluster","msg":"Nodes unreachable, may still be starting up","Request.Namespace":"nifi","Request.Name":"nifi-cluster"}
2020-11-25T19:52:41.789209189Z {"level":"info","ts":1606333961.7891295,"logger":"controller_nificluster","msg":"Reconciling NifiCluster","Request.Namespace":"nifi","Request.Name":"nifi-cluster"}
2020-11-25T19:52:41.799961322Z {"level":"info","ts":1606333961.7998602,"logger":"controller_nificluster","msg":"CR status updated","Request.Namespace":"nifi","Request.Name":"nifi-cluster","status":"ClusterReconciling"}
2020-11-25T19:52:41.799987703Z {"level":"info","ts":1606333961.7999,"logger":"controller_nificluster","msg":"Reconciling cert-manager PKI","Request.Namespace":"nifi","Request.Name":"nifi-cluster","component":"nifi","clusterName":"nifi-cluster","clusterNamespace":"nifi"}
2020-11-25T19:52:41.824777142Z {"level":"info","ts":1606333961.8247044,"logger":"controller_nificluster","msg":"resource updated","Request.Namespace":"nifi","Request.Name":"nifi-cluster","component":"nifi","clusterName":"nifi-cluster","clusterNamespace":"nifi","kind":"*v1.Secret","name":"nifi-cluster-config-0"}
2020-11-25T19:52:41.849512258Z {"level":"info","ts":1606333961.8494139,"logger":"controller_nificluster","msg":"resource updated","Request.Namespace":"nifi","Request.Name":"nifi-cluster","component":"nifi","clusterName":"nifi-cluster","clusterNamespace":"nifi","kind":"*v1.Secret","name":"nifi-cluster-config-1"}
2020-11-25T19:52:41.873392674Z {"level":"info","ts":1606333961.8733115,"logger":"controller_nificluster","msg":"resource updated","Request.Namespace":"nifi","Request.Name":"nifi-cluster","component":"nifi","clusterName":"nifi-cluster","clusterNamespace":"nifi","kind":"*v1.Secret","name":"nifi-cluster-config-2"}
2020-11-25T19:52:41.879906021Z {"level":"info","ts":1606333961.8798394,"logger":"scale-methods","msg":"Retrieving Nifi client for nifi/nifi-cluster"}
2020-11-25T19:52:41.901447302Z {"level":"error","ts":1606333961.9012797,"logger":"nifi_client","msg":"Error during talking to nifi node","error":"Get \"https://nifi-cluster-headless.nifi.svc.cluster.local:8443/nifi-api/controller/cluster\": x509: certificate has expired or is not yet valid: current time 2020-11-25T19:52:41Z is after 2020-11-05T10:12:08Z","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/chris/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.(*nifiClient).DescribeCluster\n\tnifikop/pkg/nificlient/system.go:46\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.(*nifiClient).Build\n\tnifikop/pkg/nificlient/client.go:83\ngithub.com/Orange-OpenSource/nifikop/pkg/nificlient.NewFromCluster\n\tnifikop/pkg/nificlient/client.go:105\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/common.NewNodeConnection\n\tnifikop/pkg/controller/common/controller_common.go:68\ngithub.com/Orange-OpenSource/nifikop/pkg/scale.EnsureRemovedNodes\n\tnifikop/pkg/scale/scale.go:224\ngithub.com/Orange-OpenSource/nifikop/pkg/resources/nifi.(*Reconciler).Reconcile\n\tnifikop/pkg/resources/nifi/nifi.go:199\ngithub.com/Orange-OpenSource/nifikop/pkg/controller/nificluster.(*ReconcileNifiCluster).Reconcile\n\tnifikop/pkg/controller/nificluster/nificluster_controller.go:176\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/Users/chris/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/chris/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/Users/chris/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/Users/chris/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/Users/chris/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/chris/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/chris/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90"}
2020-11-25T19:52:41.901511637Z {"level":"info","ts":1606333961.9013927,"logger":"controller_nificluster","msg":"Nodes unreachable, may still be starting up","Request.Namespace":"nifi","Request.Name":"nifi-cluster"}

And when connecting to the cluster via a web browser I see "The Flow Controller is initializing the Data Flow." despite all the NiFi pods being ready.

What did you expect to see?
I expected the controller to recognize the certificates as having been renewed on the nodes.

What did you see instead? Under which circumstances?
Based on the fact that the certificates appear up to date when running kubectl describe on them, it seems like the controller is looking for an "old" certificate somewhere. I'm not sure which cert the controller is looking for, but given the headless address, I believe it's hitting one of the NiFi nodes.

Environment

  • nifikop version:

689aabb

  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:08:32Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-eks-31566f", GitCommit:"31566f851673e809d4d667b7235ed87587d37722", GitTreeState:"clean", BuildDate:"2020-10-20T23:25:14Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind: EKS

  • NiFi version:

Additional context

Listener config:

  • externalDNS: false
  • httpMode: https
  • clusterDoman: "cluster.local"
  • sslSecrets:
    • create: true
    • tlsSecretName: "nifi-tls"

Other Config:

  • clusterSecure: true
  • siteToSiteSecure: true

1. I don't see a secret nifi-tls anywhere; is this secretName needed when create is set to true? We are using the self-signed issuer and I see the issuers created OK, just not the secret.
2. Any idea why our certificates seem to have been renewed, but the NiFi controller is seeing a cert that expired November 5th (90 days from the issuance of the original cert)?

FYI: the secret in the log above is the ConfigMap; I modified it on a fork to be a Secret, to hide sensitive info in the config files when in version control. This was working back in August; I could connect to the secured cluster. I'm concerned as to why the certificate renewal didn't appear to take.

Any help appreciated, thanks!

[Documentation/Operator] Add documentation about external DNS

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?

It's about documentation

Question

What did you do?

I want to use external-dns with the operator

What did you expect to see?

How to use external-dns to work with the operator.

[Feature/Chart] Enable multi namespace scoped

Feature Request

Is your feature request related to a problem? Please describe.

Add to the chart the ability to declare multiple namespaces that the operator would watch.

Describe the solution you'd like to see

Simply add a loop over the role and role binding to create them in each namespace, as sketched below.
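
A rough sketch of what the chart template change could look like (assuming a namespaces list in values.yaml; not the chart's current template):

{{- range .Values.namespaces }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nifikop
  namespace: {{ . }}
rules: []   # the chart's existing rules would go here, unchanged
{{- end }}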

Plain secure cluster setup not working

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?

Question

What did you do?

Deploying nifikop + cert-manager and applying the following resource:

apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: sslnifi
  namespace: usecase
spec:
  service:
    headlessEnabled: true
  zkAddresse: "zookeeper.usecase:2181"
  zkPath: "/ssllnifi"
  clusterImage: "apache/nifi:1.11.4"
  clusterSecure: true
  siteToSiteSecure: true
  oneNifiNodePerNode: false
  initialAdminUser: [email protected]
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  readOnlyConfig:
    # NifiProperties configuration that will be applied to the node.
    nifiProperties:
      webProxyHosts:
        - some-url:8443
      # Additionnals nifi.properties configuration that will override the one produced based
      # on template and configurations.
      overrideConfigs: |
        nifi.security.user.oidc.discovery.url=https://keycloak.url/auth/realms/dapc/.well-known/openid-configuration
        nifi.security.user.oidc.client.id=nifi
        nifi.security.user.oidc.client.secret=token
        
        nifi.security.identity.mapping.pattern.dn=CN=([^,]*)(?:, (?:O|OU)=.*)?
        nifi.security.identity.mapping.value.dn=$1
        nifi.security.identity.mapping.transform.dn=NONE
  nodeConfigGroups:
    default_group:
      isNode: true
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/data"
          name: data
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/flowfile_repository"
          name: flowfile-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/nifi-current/conf"
          name: conf
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/content_repository"
          name: content-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/provenance_repository"
          name: provenance-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "ceph-block-storage"
            resources:
              requests:
                storage: 10Gi
      serviceAccountName: "default"
      resourcesRequirements:
        limits:
          cpu: "1"
          memory: 3Gi
        requests:
          cpu: "0.1"
          memory: 0.5Gi
  nodes:
    - id: 0
      nodeConfigGroup: "default_group"
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  listenersConfig:
    internalListeners:
      - type: "https"
        name: "https"
        containerPort: 8443
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
    sslSecrets:
      tlsSecretName: "test-nifikop"
      create: true


What did you expect to see?
A secure cluster starting and working

What did you see instead? Under which circumstances?
The NiFi nodes start, but nothing is actually reachable on 8443. When doing kubectl -n usecase port-forward svc/sslnifi-headless 8443, I get: An error occurred during a connection to localhost:8443. PR_END_OF_FILE_ERROR

What primarily confuses me and makes me think something is going wrong, is the following log from the operator:

2020-08-19T09:23:38.748733801Z {"level":"info","ts":1597829018.7486365,"logger":"controller_nificlustertask","msg":"nifi cluster communication error: could not connect to nifi nodes: sslnifi-headless.usecase.svc.cluster.local:8443: Non 200 response from nifi node: 403 Forbidden"}

Environment

  • nifikop version: v0.2.0-release

  • Kubernetes version information:
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6+k3s1", GitCommit:"6f56fa1d68a5a48b8b6fdefa8eb7ead2015a4b3a", GitTreeState:"clean", BuildDate:"2020-07-16T20:46:15Z", GoVersion:"go1.13.11", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind: kubespray

  • NiFi version: 1.11.4

OIDC: The login request identifier was not found in the request

Type of question

Need help around nifikop and OIDC

Question

What did you do?
Hello, I have a problem with NiFi & OIDC.
I'm getting this message: The login request identifier was not found in the request. Unable to continue.

2021-02-11 15:25:45,778 INFO [NiFi Web Server-19] o.a.n.w.a.c.IllegalArgumentExceptionMapper java.lang.IllegalArgumentException: The login request identifier was not found in the request. Unable to continue.}. Returning Bad Request} response.
java.lang.IllegalArgumentException: The login request identifier was not found in the request. Unable to continue.
	at org.apache.nifi.web.api.AccessResource.oidcExchange(AccessResource.java:306)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
...
2021-02-11 15:26:09,483 WARN [NiFi Web Server-18] o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: Cannot create group 'nifi-sslnifi.managed-nodes' with users that don't exist.. Returning Conflict response.
java.lang.IllegalStateException: Cannot create group 'nifi-sslnifi.managed-nodes' with users that don't exist.
	at org.apache.nifi.authorization.AuthorizerFactory$1$1$1.addGroup(AuthorizerFactory.java:275)
	at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO.createUserGroup(StandardPolicyBasedAuthorizerDAO.java:253)
	at org.apache.nifi.web.dao.impl.StandardPolicyBasedAuthorizerDAO$$FastClassBySpringCGLIB$$ea190383.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)

What did you expect to see?
Access to Nifi UI with my credentials
What did you see instead? Under which circumstances?
The login request identifier was not found in the request. Unable to continue.

Environment

  • nifikop version:
    v0.5.2-release
  • Kubernetes version information:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

  • Kubernetes cluster kind:
    docker-desktop

  • NiFi version:
    apache/nifi:1.12.1

Additional context

apiVersion: nifi.orange.com/v1alpha1
kind: NifiCluster
metadata:
  name: sslnifi
spec:
  service:
    headlessEnabled: true
  zkAddress: "nifi-cluster-zookeeper:2181"
  zkPath: "/ssllnifi"
  clusterImage: "apache/nifi:1.12.1"
  oneNifiNodePerNode: false
  managedAdminUsers:
    -  identity : "<myname1>@gmail.com"
       name: "<myname1>"
  managedReaderUsers:
     - identity : "<myname>@gmail.com"
       name: "<myname>"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  readOnlyConfig:
    # NifiProperties configuration that will be applied to the node.
    nifiProperties:
      webProxyHosts:
        - nifi.hhorbit.com
      # Additionnals nifi.properties configuration that will override the one produced based
      # on template and configurations.
      overrideConfigs: |
        nifi.security.user.oidc.discovery.url=https://accounts.google.com/.well-known/openid-configuration
        nifi.security.user.oidc.client.id=172548331581-cv249kc5s981mdvv4oajrkg21a6pv2en.apps.googleusercontent.com
        nifi.security.user.oidc.client.secret=l06y6ZL2m2Ivwe1JjDkHwTw-
        nifi.security.identity.mapping.pattern.dn=CN=([^,]*)(?:, (?:O|OU)=.*)?
        nifi.security.identity.mapping.value.dn=$1
        nifi.security.identity.mapping.transform.dn=NONE
  nodeConfigGroups:
    default_group:
      isNode: true
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/data"
          name: data
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/flowfile_repository"
          name: flowfile-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/nifi-current/conf"
          name: conf
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/content_repository"
          name: content-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/provenance_repository"
          name: provenance-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "standard"
            resources:
              requests:
                storage: 10Gi
      serviceAccountName: "default"
  nodes:
    - id: 0
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  listenersConfig:
    internalListeners:
      - type: "https"
        name: "https"
        containerPort: 8443
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
    sslSecrets:
     tlsSecretName: "test-nifikop"
     create: true

ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: nifi-ingress
  namespace: nifi
spec:
  rules:
  - host: nifi.hhorbit.com
    http:
      paths:
      - backend:
          service:
            name: sslnifi-headless
            port:
              number: 8443
        path: /
        pathType: Prefix

nifi@sslnifi-0-node:/opt/nifi/nifi-current$ cat ../data/users.xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
    <groups>
        <group identifier="91e20849-0177-1000-0000-00002729af34" name="nifi-sslnifi.managed-readers">
            <user identifier="91e20103-0177-1000-ffff-ffffb917ab0a"/>
        </group>
    </groups>
    <users>
        <user identifier="38ca2751-725d-39c1-a298-be545cb1a416" identity="sslnifi-0-node.sslnifi-headless.nifi.svc.cluster.local"/>
        <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032" identity="sslnifi-controller.nifi.mgt.cluster.local"/>
        <user identifier="ac4b3e9b-4220-36ec-b87a-8791580d3203" identity="sslnifi-2-node.sslnifi-headless.nifi.svc.cluster.local"/>
        <user identifier="91e1f922-0177-1000-ffff-fffff503fda3" identity="<myname>@gmail.com"/>
        <user identifier="91e20103-0177-1000-ffff-ffffb917ab0a" identity="<myname1>@gmail.com"/>
    </users>
</tenants>

nifi@sslnifi-0-node:/opt/nifi/nifi-current$ cat ../data/authorizations.xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizations>
    <policies>
        <policy identifier="f99bccd1-a30e-3e4a-98a2-dbc708edc67f" resource="/flow" action="R">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="b8775bd4-704a-34c6-987b-84f2daf7a515" resource="/restricted-components" action="W">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="627410be-1717-35b4-a06f-e9362b89e0b7" resource="/tenants" action="R">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="15e4e0bd-cb28-34fd-8587-f8d15162cba5" resource="/tenants" action="W">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="ff96062a-fa99-36dc-9942-0f6442ae7212" resource="/policies" action="R">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="ad99ea98-3af6-3561-ae27-5bf09e1d969d" resource="/policies" action="W">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="2e1015cb-0fed-3005-8e0d-722311f21a03" resource="/controller" action="R">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="c6322e6c-4cc1-3bcc-91b3-2ed2111674cf" resource="/controller" action="W">
            <user identifier="6b78b042-ccd2-398d-87a9-65476b87d032"/>
        </policy>
        <policy identifier="287edf48-da72-359b-8f61-da5d4c45a270" resource="/proxy" action="W">
            <user identifier="38ca2751-725d-39c1-a298-be545cb1a416"/>
            <user identifier="ac4b3e9b-4220-36ec-b87a-8791580d3203"/>
        </policy>
    </policies>
</authorizations>

[Documentation] NiFiKop and private GKE

Type of question

Are you asking about community best practices, how to implement a specific feature, or about general context and help around nifikop ?

About help around NiFiKop on GKE.

Question

What did you do?

I want to deploy NiFiKop on a private GKE cluster.

What did you expect to see?

Have a documentation explaining the requirements and configurations needed to deploy NiFiCluster in a private GKE cluster.
