Kalm | Kubernetes Application Manager
Home Page: https://kalm.dev
License: Apache License 2.0
It seems I have messed up the "Finish setup" steps and can't get a username/password back. How can I cleanly uninstall kalm to try to install it again?
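For reference, a manual cleanup along these lines usually works for kubebuilder-style operators; the namespace and CRD group names below are inferred from the logs elsewhere in this thread, not from official uninstall docs, so treat this as a sketch:

```shell
# Sketch of a manual Kalm cleanup (names are assumptions based on the
# kalm-system / kalm-operator namespaces seen in the logs).
kubectl delete namespace kalm-system kalm-operator --ignore-not-found

# Remove Kalm's CRDs so a reinstall starts from a clean slate.
kubectl get crd -o name | grep 'kalm.dev' | xargs -r kubectl delete

# Webhook configurations can survive namespace deletion; clear them too.
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations \
  -o name | grep kalm | xargs -r kubectl delete
```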
Having the component imagePullPolicy set to IfNotPresent means that an update to a mutable image tag, e.g. latest, won't result in a proper refresh when the component is scaled or restarted.
Excerpt from a Kalm-created pod, via the kubectl describe command:
spec:
containers:
- image: $CONTAINER_REGISTRY/$CONTAINER_IMAGE:$IMAGE_TAG
imagePullPolicy: IfNotPresent
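A workaround until Kalm exposes this setting is to patch the pull policy to Always on the generated workload; a minimal sketch of the relevant fragment (everything except imagePullPolicy is a placeholder from the excerpt above):

```yaml
spec:
  containers:
  - image: $CONTAINER_REGISTRY/$CONTAINER_IMAGE:latest
    # Always re-resolves mutable tags such as :latest on every pod start.
    imagePullPolicy: Always
```

Alternatively, pushing immutable tags (or digests) per release makes IfNotPresent safe and avoids redundant pulls.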
Just as with Helm 3, the ability to generate YAML client-side could vastly improve tool acceptance and reduce the effort spent selling the idea to operations teams and the like.
Installing a cluster-wide operator is not as easy a sell. And if Kalm is about easy....
The issue I am seeing is that kalm is very opinionated. It doesn't give me the option to provide a namespace; for each app, a new namespace is created.
Also, how can I browse the existing applications and deployments on my cluster through kalm?
branch: https://github.com/kalmhq/kalm/tree/operator
Error:
Internal error occurred: failed calling webhook "vcomponent.kb.io": Post https://kalm-webhook-service.kalm-system.svc:443/validate-core-kalm-dev-v1alpha1-component?timeout=30s: x509: certificate signed by unknown authority
Details:
2020-08-03T17:57:49.403+0800 ERROR controller-runtime.controller Reconciler error {"controller": "kalmoperatorconfig", "name": "reconcile-caused-by-dp-change-in-essential-ns-kalm", "namespace": "kalm-system", "error": "Internal error occurred: failed calling webhook \"vcomponent.kb.io\": Post https://kalm-webhook-service.kalm-system.svc:443/validate-core-kalm-dev-v1alpha1-component?timeout=30s: x509: certificate signed by unknown authority"}
github.com/go-logr/zapr.(*zapLogger).Error
/Users/liumingmin/.go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:209
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/Users/liumingmin/.go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:188
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
k8s.io/apimachinery/pkg/util/wait.Until
/Users/liumingmin/.go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90
How to reproduce:
# in a new minikube cluster
# minikube delete
# minikube start
# in branch: operator
make install
# in repo root dir
kubectl apply -f kalm-install-kalmoperatorconfig.yaml
Wait until the last step of the install (installing Kalm as a component); the error will occur.
The installation eventually succeeds after several failures.
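A common recovery for this kind of stale webhook CA bundle (the names below are inferred from the error message and the typical kubebuilder layout, not verified against this repo) is to delete the webhook configuration and its serving-cert secret so the operator regenerates them:

```shell
# List the Kalm webhook configurations (names are assumptions).
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep kalm

# Delete the stale one so it is re-registered with a fresh CA bundle.
kubectl delete validatingwebhookconfiguration <name-from-above>

# kubebuilder-based operators usually store the serving cert here:
kubectl -n kalm-system delete secret webhook-server-cert

# Restart the controller so certs and webhooks are recreated.
kubectl -n kalm-system rollout restart deployment
```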
Started the "finish setup" process, but it broke and I don't have a user now. I can still get into the admin UI with port-forward, but I can't access apps (requests time out) or single sign-on. It seems the setup created the main domain with all its certs (e.g. kalm-cert and sso-domain-*), and I can go to the domain but can't log in. How can I create a user manually?
After the domain config, this error appears in the dex pod, which then restarts:
failed to initialize storage: failed to inspect service account token: jwt claim "kubernetes.io/serviceaccount/namespace" not found
Hi. The project looks exciting! Is there any plan to support multi-tenancy logic? If so, it could be a very lightweight alternative to OpenShift. Looking forward to seeing how this project evolves!
Here are the logs:
2020-09-06T14:48:08.699330038Z E0906 14:48:08.693285 1 leaderelection.go:320] error retrieving resource lock kalm-operator/kalm-operator: context deadline exceeded
2020-09-06T14:48:08.699399910Z I0906 14:48:08.693395 1 leaderelection.go:277] failed to renew lease kalm-operator/kalm-operator: timed out waiting for the condition
2020-09-06T14:48:08.699413232Z 2020-09-06T14:48:08.693Z ERROR setup problem running manager {"error": "leader election lost"}
2020-09-06T14:48:08.699440164Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:48:08.699452875Z /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:48:08.699460283Z main.main
2020-09-06T14:48:08.699468612Z /workspace/main.go:97
2020-09-06T14:48:08.699478565Z runtime.main
2020-09-06T14:48:08.699485542Z /usr/local/go/src/runtime/proc.go:203
2020-09-06T14:53:28.342223345Z 2020-09-06T14:53:28.341Z ERROR controller-runtime.manager Failed to get API Group-Resources {"error": "the server has received too many requests and has asked us to try again later"}
2020-09-06T14:53:28.342317663Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:53:28.342341490Z /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:53:28.342350340Z sigs.k8s.io/controller-runtime/pkg/manager.New
2020-09-06T14:53:28.342357754Z /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/manager.go:258
2020-09-06T14:53:28.342364539Z main.main
2020-09-06T14:53:28.342370112Z /workspace/main.go:70
2020-09-06T14:53:28.342374413Z runtime.main
2020-09-06T14:53:28.342378348Z /usr/local/go/src/runtime/proc.go:203
2020-09-06T14:53:28.342383493Z 2020-09-06T14:53:28.341Z ERROR setup unable to start manager {"error": "the server has received too many requests and has asked us to try again later"}
2020-09-06T14:53:28.342390498Z github.com/go-logr/zapr.(*zapLogger).Error
2020-09-06T14:53:28.342395748Z /go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
2020-09-06T14:53:28.342401600Z main.main
2020-09-06T14:53:28.342406018Z /workspace/main.go:80
2020-09-06T14:53:28.342410231Z runtime.main
2020-09-06T14:53:28.342414268Z /usr/local/go/src/runtime/proc.go:203
This is after a fresh install in k3s (without traefik). I can get into the admin web interface, but http://localhost:3010/applications just waits forever and never loads.
Is the project still installable? Running the script from your documentation did not do anything.
When I open the login page at http://localhost:3020/login, I'm presented with a request for a token.
The "View Instructions" link (https://kalm.dev/docs/install#step-4-admin-service-account) has no information on how to get a token.
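For anyone hitting this, the token prompt is the standard Kubernetes bearer-token login; a service-account token can be extracted roughly as follows (the account and binding names are assumptions, mirroring the usual dashboard setup, and the secret-based lookup applies to the Kubernetes versions from this era):

```shell
# Create an admin service account (names are illustrative).
kubectl create serviceaccount kalm-admin -n kube-system
kubectl create clusterrolebinding kalm-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:kalm-admin

# Print the token from the auto-generated secret.
kubectl -n kube-system get secret \
  "$(kubectl -n kube-system get serviceaccount kalm-admin \
     -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode
```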
I installed Kalm on a Rancher k8s cluster. Access via kubectl port-forward ... works fine, but when I tried to finish the setup steps, Kalm couldn't show the load balancer IP address, as shown below:
My k8s cluster is behind an nginx acting as a reverse proxy. I created an entry in my DNS pointing to this reverse proxy, and from there to the actual k8s cluster nodes. When I try to access the URL pointing to Kalm, I receive the following message in the browser:
When I click "check and continue", I receive the message in the image above.
If I click "continue anyway" on the Kalm setup screen, after a while it shows all green, but it still doesn't work.
Please help.
Unable to see the pods/deployments created via the Kalm dashboard through the command line.
Could somebody please help?
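One thing to check: as noted elsewhere in this thread, Kalm creates a dedicated namespace per application, so resources won't show up under default. A sketch (the namespace name is assumed to match the app name):

```shell
# List all namespaces; Kalm-created apps each get their own.
kubectl get namespaces

# Then inspect the app's namespace directly.
kubectl get pods,deployments -n <your-app-name>
```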
Just tried installing this using your curl command; it got to 3/4 steps and then just hangs. I notice that in the kalm-system namespace, the kalm pod gets the following errors before going into CrashLoopBackOff:
2020-09-09T03:39:42.229Z ERROR Error updating metrics {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
/workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.StartMetricScraper
/workspace/api/resources/metric_scraper.go:64
main.startMetricServer
/workspace/api/main.go:137
2020-09-09T03:39:47.229Z ERROR Error scraping pod metrics {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
/workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.update
/workspace/api/resources/metric_scraper.go:73
github.com/kalmhq/kalm/api/resources.StartMetricScraper
/workspace/api/resources/metric_scraper.go:62
main.startMetricServer
/workspace/api/main.go:137
2020-09-09T03:39:47.229Z ERROR Error updating metrics {"error": "Get https://10.96.0.1:443/apis/metrics.k8s.io/v1beta1/pods: dial tcp 10.96.0.1:443: connect: connection refused"}
github.com/go-logr/zapr.(*zapLogger).Error
/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
github.com/kalmhq/kalm/api/log.Error
/workspace/api/log/logger.go:42
github.com/kalmhq/kalm/api/resources.StartMetricScraper
/workspace/api/resources/metric_scraper.go:64
main.startMetricServer
/workspace/api/main.go:137
If I kill the pod and let it restart, sometimes it fails with CrashLoopBackOff, and sometimes the istio-proxy container fails with an OOM:
2020-09-09T03:39:45.136643Z info sds resource:default new connection
2020-09-09T03:39:45.136857Z info sds Skipping waiting for ingress gateway secret
2020-09-09T03:39:48.545442Z info cache Root cert has changed, start rotating root cert for SDS clients
2020-09-09T03:39:48.545524Z info cache GenerateSecret default
2020-09-09T03:39:48.545766Z info sds resource:default pushed key/cert pair to proxy
2020-09-09T03:39:51.237322Z info sds resource:ROOTCA new connection
2020-09-09T03:39:51.237468Z info sds Skipping waiting for ingress gateway secret
2020-09-09T03:39:51.237512Z info cache Loaded root cert from certificate ROOTCA
2020-09-09T03:39:51.237670Z info sds resource:ROOTCA pushed root cert to proxy
2020-09-09T03:39:57.831084Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-09-09T03:39:57.834243Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-09-09T03:39:58.531777Z info sds resource:ROOTCA connection is terminated: rpc error: code = Canceled desc = context canceled
2020-09-09T03:39:58.531765Z info transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-09-09T03:39:58.531780Z info sds resource:default connection is terminated: rpc error: code = Canceled desc = context canceled
2020-09-09T03:39:58.531880Z error sds Remote side closed connection
2020-09-09T03:39:58.531847Z error sds Remote side closed connection
2020-09-09T03:39:58.532186Z warn Envoy may have been out of memory killed. Check memory usage and limits.
2020-09-09T03:39:58.532231Z error Epoch 0 exited with error: signal: killed
2020-09-09T03:39:58.532241Z info No more active epochs, terminating
The kalm pod then goes into a CrashLoopBackOff.
My system has 64 GB of RAM free, so it's not a resource issue.
Normal Scheduled <unknown> default-scheduler Successfully assigned kalm-system/kalm-5f58d8bd9-6pqzt to homelab-a
Normal Pulling 10m kubelet, homelab-a Pulling image "docker.io/istio/proxyv2:1.6.1"
Normal Pulled 10m kubelet, homelab-a Successfully pulled image "docker.io/istio/proxyv2:1.6.1"
Normal Created 10m kubelet, homelab-a Created container istio-init
Normal Started 10m kubelet, homelab-a Started container istio-init
Normal Created 10m kubelet, homelab-a Created container kalm
Normal Pulled 10m kubelet, homelab-a Container image "kalmhq/kalm:v0.1.0-alpha.5" already present on machine
Normal Pulling 10m kubelet, homelab-a Pulling image "docker.io/istio/proxyv2:1.6.1"
Normal Started 10m kubelet, homelab-a Started container kalm
Normal Pulled 10m kubelet, homelab-a Successfully pulled image "docker.io/istio/proxyv2:1.6.1"
Normal Created 10m kubelet, homelab-a Created container istio-proxy
Normal Started 10m kubelet, homelab-a Started container istio-proxy
Warning Unhealthy 9m51s (x14 over 10m) kubelet, homelab-a Readiness probe failed: Get http://10.1.0.42:15021/healthz/ready: dial tcp 10.1.0.42:15021: connect: connection refused
Warning BackOff 21s (x35 over 9m21s) kubelet, homelab-a Back-off restarting failed container
free -m
total used free shared buff/cache available
Mem: 128713 61142 1289 2558 66281 65946
Swap: 0 0 0
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:10:16Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
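Since the istio-proxy sidecar appears to be OOM-killed despite ample free host memory, the sidecar container's own memory limit is the likelier culprit than the node. Istio supports per-pod annotations to adjust the injected sidecar's resources; a sketch (the values are assumptions, and the limit annotations may not exist in every Istio release, so check your version's docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    # Raise the injected sidecar's resource requests/limits.
    sidecar.istio.io/proxyCPU: "100m"
    sidecar.istio.io/proxyMemory: "256Mi"
    sidecar.istio.io/proxyMemoryLimit: "1Gi"
```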
This happens when applying with a supported Kubernetes server version but an unsupported Kubernetes client, and feels more like a UX issue to me.
Referencing the command from https://kalm.dev/docs/install:
curl -sL https://get.kalm.dev | bash
This results in an endless loop of:
Awaiting installation of CRDs
error: SchemaError(io.k8s.api.policy.v1beta1.PodDisruptionBudgetList): invalid object doesn't have additional properties
Output of kubectl version
follows:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:46:00Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
While the appropriate error is dumped to stdout:
error: SchemaError(io.k8s.api.policy.v1beta1.PodDisruptionBudgetList): invalid object doesn't have additional properties
it is quickly drowned out by the endless Awaiting installation of CRDs messages. I see a commented-out sleep 1, which would help with this problem, but a better fix would be to error out if the kubectl apply -f ... exits with a non-zero code. I could raise an MR to fix this behaviour if a maintainer indicates the project's preference.
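The suggested fix can be sketched as a bounded-retry wrapper around the apply step, failing loudly instead of looping forever (this is an illustration of the idea, not the installer's actual code; the manifest name is taken from the repro steps above):

```shell
# retry_apply runs a command, retrying a few times with a pause,
# and returns non-zero instead of looping forever on repeated failure.
retry_apply() {
  local attempts=0 max=3
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max" ]; then
      echo "giving up after $attempts failed attempts: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example: fail fast if the manifest cannot be applied.
# retry_apply kubectl apply -f kalm-install-kalmoperatorconfig.yaml || exit 1
```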
version: v0.1.0-alpha.5
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x197e49a]
goroutine 40070 [running]:
github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
/workspace/api/resources/common.go:266
github.com/kalmhq/kalm/api/resources.(*Builder).GetProtectedEndpointsChannel.func1(0x0, 0x0, 0x0, 0x0, 0xc001b2cb90)
/workspace/api/resources/sso.go:59 +0x4a
created by github.com/kalmhq/kalm/api/resources.(*Builder).GetProtectedEndpointsChannel
/workspace/api/resources/sso.go:57 +0xe0
stream closed
echo: http: panic serving 127.0.0.1:35600: runtime error: invalid memory address or nil pointer dereference
goroutine 40582 [running]:
net/http.(*conn).serve.func1(0xc0002f30e0)
/usr/local/go/src/net/http/server.go:1795 +0x139
panic(0x1d03fe0, 0x31dfe40)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
/workspace/api/resources/common.go:266
github.com/kalmhq/kalm/api/resources.(*Builder).ListNodes(0x0, 0x22dbce0, 0xc0005d05a0, 0x0)
/workspace/api/resources/node.go:164 +0x4a
github.com/kalmhq/kalm/api/handler.(*ApiHandler).handleListNodes(0xc00026e5a0, 0x22dbce0, 0xc0005d05a0, 0xc0005d0620, 0xc0005d0620)
/workspace/api/handler/nodes.go:8 +0x51
github.com/kalmhq/kalm/api/handler.(*ApiHandler).AuthClientMiddleware.func1(0x22dbce0, 0xc0005d05a0, 0xc0000ba040, 0xc0000ba040)
/workspace/api/handler/middleware.go:31 +0x13e
github.com/labstack/echo/v4.(*Echo).add.func1(0x22dbce0, 0xc0005d05a0, 0x223a440, 0xc0000ba040)
/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:512 +0x8a
github.com/labstack/echo/v4/middleware.StaticWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x1fa77c8, 0x20)
/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/static.go:169 +0x2b9
github.com/labstack/echo/v4/middleware.CORSWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0xf, 0xc000b43980)
/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/cors.go:121 +0x477
github.com/kalmhq/kalm/api/server.middlewareLogging.func1(0x22dbce0, 0xc0005d05a0, 0xffffffffffffffff, 0xc0011451e0)
/workspace/api/server/server.go:72 +0x233
github.com/labstack/echo/v4/middleware.GzipWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x0, 0x0)
/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/compress.go:92 +0x1eb
github.com/labstack/echo/v4.(*Echo).ServeHTTP.func1(0x22dbce0, 0xc0005d05a0, 0x1, 0x0)
/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:617 +0x110
github.com/labstack/echo/v4/middleware.RemoveTrailingSlashWithConfig.func1.1(0x22dbce0, 0xc0005d05a0, 0x1, 0x1)
/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/slash.go:118 +0x19f
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc00000c1e0, 0x2282640, 0xc00099f340, 0xc000035200)
/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:623 +0x16c
golang.org/x/net/http2/h2c.h2cHandler.ServeHTTP(0x223b320, 0xc00000c1e0, 0xc0000978c0, 0x2282640, 0xc00099f340, 0xc000035200)
/go/pkg/mod/golang.org/x/[email protected]/http2/h2c/h2c.go:98 +0x44b
net/http.serverHandler.ServeHTTP(0xc00012a0e0, 0x2282640, 0xc00099f340, 0xc000035200)
/usr/local/go/src/net/http/server.go:2831 +0xa4
net/http.(*conn).serve(0xc0002f30e0, 0x2288b80, 0xc0002e7940)
/usr/local/go/src/net/http/server.go:1919 +0x875
created by net/http.(*Server).Serve
/usr/local/go/src/net/http/server.go:2957 +0x384
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x197d2da]
goroutine 44500 [running]:
github.com/kalmhq/kalm/api/resources.(*Builder).List(...)
/workspace/api/resources/common.go:266
github.com/kalmhq/kalm/api/resources.(*Builder).getRoleBindingListChannel.func1(0x0, 0x0, 0x0, 0xc000dca270)
/workspace/api/resources/rolebinding.go:25 +0xba
created by github.com/kalmhq/kalm/api/resources.(*Builder).getRoleBindingListChannel
/workspace/api/resources/rolebinding.go:23 +0xd6
In the demo GIF on the GitHub homepage, I noticed the deployed app can be accessed through xxx.kapp.live.
This confuses me a little: how is this accomplished, since I couldn't resolve the name via any public DNS server?
Do I need to configure DNS on my desktop?
PS: I know this is not a bug, but I didn't find a better place to ask this question.
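For local testing without public DNS, the name can be pinned by hand, assuming the demo host ultimately points at the cluster's istio ingress gateway (the hostname here is a placeholder):

```shell
# Find the ingress gateway's external address.
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Test a single request without touching DNS at all.
curl -k --resolve myapp.kapp.live:443:"$INGRESS_IP" https://myapp.kapp.live/

# Or pin the name locally for browser access.
echo "$INGRESS_IP myapp.kapp.live" | sudo tee -a /etc/hosts
```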
I have created a cluster using RKE. Now I want to point a load balancer, provided by my hosting provider, at my Kalm instance. I created an LB from 443 to 3010 on each node, but have no success reaching the Kalm dashboard. Only the kubectl port-forward method works; I can't access it publicly via the LB. Am I missing something?
It has done kalm-operator and cert-manager, and now it just sits there doing nothing. How can I help diagnose this issue?
I'm looking for a way to use Kalm to deploy and manage pods where the HTTP-based container applications will have access to X-Forwarded-For, X-Originating-IP, X-Remote-IP, and/or X-Remote-Addr.
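Istio's ingress gateway, which Kalm fronts apps with, normally populates X-Forwarded-For on its own. One way to verify what actually reaches a container is to route a header-echo service and inspect a request (the image is a well-known public demo image; the deployment and host names are assumptions):

```shell
# Deploy a service that echoes request headers back to the caller.
kubectl create deployment header-echo --image=kennethreitz/httpbin
kubectl expose deployment header-echo --port=80

# After wiring a Kalm route to the service, check which headers arrive:
curl -s https://your-app.example.com/headers
```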
Similar to #138 but prometheus is failing with the following error:
$ kubectl get pods -A -w
NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager cert-manager-7cb75cf6b4-gbmfz 1/1 Running 0 2m11s
cert-manager cert-manager-cainjector-759496659c-76tm4 1/1 Running 0 2m11s
cert-manager cert-manager-webhook-7c75b89bf6-hkvzb 1/1 Running 0 2m11s
istio-operator istio-operator-7c96dd898b-9t9dz 1/1 Running 0 2m10s
istio-system istio-ingressgateway-7bf98d4db8-54sbf 1/1 Running 0 56s
istio-system istiod-d474486d7-7mvdg 1/1 Running 0 76s
istio-system prometheus-5767f54db5-hl57v 0/2 ContainerCreating 0 55s
istio-system prometheus-7dcd44bbcf-wr88t 0/2 ContainerCreating 0 54s
kalm-operator kalm-operator-559c67b785-87cnj 2/2 Running 0 2m39s
kube-system coredns-66bff467f8-wsmdr 1/1 Running 0 3m33s
kube-system coredns-66bff467f8-xv4b6 1/1 Running 0 3m33s
kube-system etcd-kalm-control-plane 1/1 Running 0 3m48s
kube-system kindnet-82fn9 1/1 Running 0 3m17s
kube-system kindnet-ckbhx 1/1 Running 0 3m33s
kube-system kindnet-j5xfx 1/1 Running 2 3m16s
kube-system kindnet-srtzq 1/1 Running 0 3m17s
kube-system kube-apiserver-kalm-control-plane 1/1 Running 0 3m48s
kube-system kube-controller-manager-kalm-control-plane 1/1 Running 0 3m48s
kube-system kube-proxy-5k7lp 1/1 Running 0 3m17s
kube-system kube-proxy-fbhcb 1/1 Running 0 3m33s
kube-system kube-proxy-jtdmx 1/1 Running 0 3m17s
kube-system kube-proxy-jzkfb 1/1 Running 0 3m16s
kube-system kube-scheduler-kalm-control-plane 1/1 Running 0 3m48s
local-path-storage local-path-provisioner-bd4bb6b75-znm7d 1/1 Running 0 3m33s
$ kubectl logs -f prometheus-5767f54db5-hl57v -n istio-system -c prometheus
level=warn ts=2020-09-09T15:09:52.183Z caller=main.go:283 deprecation_notice="'storage.tsdb.retention' flag is deprecated use 'storage.tsdb.retention.time' instead."
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:330 msg="Starting Prometheus" version="(version=2.15.1, branch=HEAD, revision=8744510c6391d3ef46d8294a7e1f46e57407ab13)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:331 build_context="(go=go1.13.5, user=root@4b1e33c71b9d, date=20191225-01:04:15)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:332 host_details="(Linux 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 prometheus-5767f54db5-hl57v (none))"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:333 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-09-09T15:09:52.183Z caller=main.go:334 vm_limits="(soft=unlimited, hard=unlimited)"
level=error ts=2020-09-09T15:09:52.183Z caller=query_logger.go:107 component=activeQueryTracker msg="Failed to create directory for logging active queries"
level=error ts=2020-09-09T15:09:52.184Z caller=query_logger.go:85 component=activeQueryTracker msg="Error opening query log file" file=data/queries.active err="open data/queries.active: no such file or directory"
panic: Unable to create mmap-ed active query log
goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x24dda5b, 0x5, 0x14, 0x2c62100, 0xc0006bf890, 0x2c62100)
/app/promql/query_logger.go:115 +0x48c
main.main()
/app/cmd/prometheus/main.go:362 +0x5229
I'm using a Kind cluster to install Kalm with:
$ cat kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
authorization-mode: "AlwaysAllow"
extraPortMappings:
- containerPort: 80
hostPort: 80
protocol: TCP
- containerPort: 443
hostPort: 443
protocol: TCP
- role: worker
- role: worker
- role: worker
$ kind create cluster --name kalm --config kind.yaml
...
$ curl -sL https://get.kalm.dev | bash
Initializing Kalm - 3/4 modules ready:
✔ kalm-operator
✔ cert-manager
✔ istio-system
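The panic itself ("Unable to create mmap-ed active query log") is Prometheus failing to write under its data directory. A common remedy is to give the container a writable volume with a matching fsGroup; a minimal sketch of the relevant pod-spec fragment (this is not Kalm's actual manifest, and the mount path must match Prometheus's storage path):

```yaml
spec:
  securityContext:
    fsGroup: 65534          # let the non-root prometheus user write the volume
  containers:
  - name: prometheus
    volumeMounts:
    - name: data
      mountPath: /data      # must match --storage.tsdb.path
  volumes:
  - name: data
    emptyDir: {}            # placeholder; use a PVC for real retention
```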
I went through "FINISH THE SETUP STEPS", but on the last step I clicked one time too many and forgot to record the generated username and password. (Is there any way to retrieve this information after leaving the screen?)
I tried to redo the FINISH THE SETUP steps, but could never get back to the state where the button is available. For example, I tried to toggle and delete things in the Admin/Single Sign-On page, but could not find any way to get back to a state where I can generate a new login.
Edit: after more experimentation, I found that localhost:3010/setup contains a "reset" button. However, I don't think there is a way to find this URL except by accident.