cdwv / efk-stack-helm
Helm chart to deploy a working logging solution using the ElasticSearch - Fluentd - Kibana stack on Kubernetes
Just installed it, tried to open the Kibana dashboard, and got this:
Login is currently disabled. Administrators should consult the Kibana logs for more details.
The login form is non-interactive and I cannot log in. The pod logs only show 200 OKs up to that point.
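For reference, this is how I'm pulling the Kibana logs the error message refers to (the label selector is assumed from the chart's templates, release name efk-stack):
kubectl get pods -l app=kibana,release=efk-stack
kubectl logs <kibana-pod-name> --tail=100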
When running helm install -n efk-stack . --set "rbac.enabled=true" --set "kibana.ingress.enabled=true",
the Fluentd DaemonSet is not created properly.
kubectl describe ds efk-stack-fluentd-elasticsearch returns:
Name:           efk-stack-fluentd-elasticsearch
Selector:       app=fluentd-elasticsearch,release=efk-stack
Node-Selector:  <none>
Labels:         app=fluentd-elasticsearch
                chart=elasticsearch-fluentd-kibana
                heritage=Tiller
                release=efk-stack
Annotations:    <none>
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=fluentd-elasticsearch
                    release=efk-stack
  Annotations:      scheduler.alpha.kubernetes.io/critical-pod=
  Service Account:  efk-stack-fluentd-elasticsearch
  Containers:
   efk-stack-fluentd-elasticsearch:
    Image:      k8s.gcr.io/fluentd-elasticsearch:v2.0.4
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  500Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Environment:
      FLUENTD_ARGS:  --no-supervisor -q
    Mounts:
      /etc/fluent/config.d from config-volume (rw)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
  Volumes:
   varlog:
    Type:  HostPath (bare host directory volume)
    Path:  /var/log
    HostPathType:
   varlibdockercontainers:
    Type:  HostPath (bare host directory volume)
    Path:  /var/lib/docker/containers
    HostPathType:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      efk-stack-fluentd-elasticsearch-config
    Optional:  false
Events:
  Type     Reason        Age               From                  Message
  ----     ------        ----              ----                  -------
  Warning  FailedCreate  4s (x15 over 1m)  daemonset-controller  Error creating: pods "efk-stack-fluentd-elasticsearch-" is forbidden: error looking up service account default/efk-stack-fluentd-elasticsearch: serviceaccount "efk-stack-fluentd-elasticsearch" not found
At the same time, kubectl get sa gives
NAME                      SECRETS   AGE
default                   1         18m
efk-stack-elasticsearch   1         12m
so there seems to be a mix-up in the service account naming: the DaemonSet references efk-stack-fluentd-elasticsearch, but only efk-stack-elasticsearch was created.
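Until the template is fixed, a possible workaround is to create the missing ServiceAccount by hand under the name the DaemonSet expects, and, since rbac.enabled=true, bind it to whatever ClusterRole the chart created for Fluentd (the ClusterRole name below is an assumption; check kubectl get clusterroles):
kubectl create serviceaccount efk-stack-fluentd-elasticsearch
kubectl create clusterrolebinding efk-stack-fluentd-elasticsearch \
  --clusterrole=efk-stack-fluentd-elasticsearch \
  --serviceaccount=default:efk-stack-fluentd-elasticsearch
The DaemonSet controller retries on its own, so the pods should appear shortly after.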
Whenever I try to deploy this chart, the readiness probe fails.
helm install --name efk .
NAME: efk
LAST DEPLOYED: Fri Feb 22 12:32:37 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME                              DATA  AGE
efk-fluentd-elasticsearch-config  6     1s

==> v1/Service
NAME               TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
efk-elasticsearch  ClusterIP  172.20.190.54  <none>       9200/TCP  1s
efk-kibana         ClusterIP  172.20.207.84  <none>       5601/TCP  1s

==> v1beta2/Deployment
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
efk-kibana  1        1        1           0          1s

==> v1/Pod(related)
NAME                         READY  STATUS             RESTARTS  AGE
efk-kibana-69cdf67b6f-c2ncb  0/1    ContainerCreating  0         0s
NOTES:
kubectl describe services efk-kibana
Name:              efk-kibana
Namespace:         default
Labels:            app=kibana
                   chart=elasticsearch-fluentd-kibana
                   heritage=Tiller
                   release=efk
Annotations:
Selector:          app=kibana,release=efk
Type:              ClusterIP
IP:                172.20.207.84
Port:              kibana-ui  5601/TCP
TargetPort:        kibana-ui/TCP
Endpoints:
Session Affinity:  None
Events:
kubectl describe pods efk-kibana-69cdf67b6f-c2ncb
Name:               efk-kibana-69cdf67b6f-c2ncb
Namespace:          default
Priority:           0
PriorityClassName:
Node:               k8snode01/10.34.88.166
Start Time:         Fri, 22 Feb 2019 12:32:32 +0000
Labels:             app=kibana
                    pod-template-hash=69cdf67b6f
                    release=efk
Annotations:        cni.projectcalico.org/podIP=172.16.3.75/32
Status:             Running
IP:                 172.16.3.75
Controlled By:      ReplicaSet/efk-kibana-69cdf67b6f
Containers:
  efk-kibana:
    Container ID:   docker://f25dd9c82d300b6bb9b9810f3d1a437e2d39ee3244ca1675d185c499def462dd
    Image:          docker.elastic.co/kibana/kibana:6.2.4
    Image ID:       docker://sha256:327c6538ba4c2dd9a7bc509c29e7cb57a0f121a00935401bbe7e8a96b9a46ddf
    Port:           5601/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 22 Feb 2019 12:34:17 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Fri, 22 Feb 2019 12:32:33 +0000
      Finished:     Fri, 22 Feb 2019 12:34:16 +0000
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Liveness:   http-get http://:kibana-ui/ delay=45s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:kibana-ui/ delay=40s timeout=1s period=10s #success=1 #failure=3
    Environment:
      ELASTICSEARCH_URL:  http://efk-elasticsearch:9200
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-czthw (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  default-token-czthw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-czthw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                Message
  ----     ------     ----              ----                -------
  Normal   Scheduled  3m                default-scheduler   Successfully assigned default/efk-kibana-69cdf67b6f-c2ncb to k8snode01
  Normal   Pulled     1m (x2 over 3m)   kubelet, k8snode01  Container image "docker.elastic.co/kibana/kibana:6.2.4" already present on machine
  Normal   Created    1m (x2 over 3m)   kubelet, k8snode01  Created container
  Normal   Started    1m (x2 over 3m)   kubelet, k8snode01  Started container
  Normal   Killing    1m                kubelet, k8snode01  Killing container with id docker://efk-kibana:Container failed liveness probe.. Container will be killed and recreated.
  Warning  Unhealthy  24s (x6 over 2m)  kubelet, k8snode01  Liveness probe failed: Get http://172.16.3.75:5601/: dial tcp 172.16.3.75:5601: connect: connection refused
  Warning  Unhealthy  15s (x9 over 2m)  kubelet, k8snode01  Readiness probe failed: Get http://172.16.3.75:5601/: dial tcp 172.16.3.75:5601: connect: connection refused
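Looking at the timestamps, the container ran for about 100 seconds (12:32:33 to 12:34:16) before being killed, so it looks like Kibana simply wasn't listening on 5601 yet when the liveness probe (delay=45s, period=10s, 3 failures) gave up, which restarts the container and starts the cycle again. A possible workaround is to give the probes more headroom by patching the live Deployment (the delay values below are illustrative):
kubectl patch deployment efk-kibana --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/initialDelaySeconds", "value": 180},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 120}
]'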
Hi, apparently your templates/NOTES.txt is telling people to forward the wrong port to access the Kibana dashboard: it says kubectl port-forward $POD_NAME 8080:80, while port 80 should be 5601 AFAIK.
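For reference, the working command would be:
kubectl port-forward $POD_NAME 5601:5601
after which the dashboard is reachable at http://localhost:5601.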
It seems the template sets a default value of 1 for the Elasticsearch StatefulSet rather than taking it from the values, so the elasticsearch.replicaCount setting doesn't have any effect.
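If that's the case, the fix would be for the StatefulSet template to read the value instead of hardcoding it; a sketch, with the template path assumed:
# templates/elasticsearch-statefulset.yaml (path assumed)
spec:
  replicas: {{ .Values.elasticsearch.replicaCount }}   # instead of a hardcoded 1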
I'm trying to make use of Fluentd plugins that are not part of the image used. Is there any way to do that other than building my own image? The documentation doesn't cover this.
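One pattern that avoids maintaining a custom image is to install the plugin gems from an init container into a shared emptyDir and point the Fluentd container's GEM_PATH at it. A rough sketch for the DaemonSet pod spec (the plugin name is only an example, and whether this chart exposes a hook for extra init containers is not documented):
initContainers:
  - name: install-plugins
    image: k8s.gcr.io/fluentd-elasticsearch:v2.0.4   # same image as the main container
    command: ["sh", "-c", "gem install fluent-plugin-rewrite-tag-filter --install-dir /plugins"]
    volumeMounts:
      - name: extra-plugins
        mountPath: /plugins
containers:
  - name: efk-stack-fluentd-elasticsearch
    env:
      - name: GEM_PATH    # lets RubyGems (and thus fluentd) discover the gems installed above;
        value: /plugins   # depending on the image, you may need to append its default gem dir here
    volumeMounts:
      - name: extra-plugins
        mountPath: /plugins
volumes:
  - name: extra-plugins
    emptyDir: {}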