binaryYuki commented on July 21, 2024

Attached logs:

NAMESPACE      NAME                                             READY   STATUS             RESTARTS        AGE
cert-manager   cert-manager-655bf9748f-svr76                    1/1     Running            0               24m
cert-manager   cert-manager-cainjector-7985fb445b-zzm68         0/1     CrashLoopBackOff   9 (86s ago)     24m
cert-manager   cert-manager-webhook-6dc9656f89-2jqcq            0/1     CrashLoopBackOff   10 (102s ago)   24m
kube-system    cilium-4r4gx                                     1/1     Running            0               25m
kube-system    cilium-operator-59b49dc487-zphvr                 1/1     Running            1 (22m ago)     25m
kube-system    coredns-565d847f94-2cl2z                         0/1     Running            0               25m
kube-system    coredns-565d847f94-664pl                         0/1     Running            0               25m
kube-system    etcd-instance-20240409-1035                      1/1     Running            0               25m
kube-system    kube-apiserver-instance-20240409-1035            1/1     Running            0               25m
kube-system    kube-controller-manager-instance-20240409-1035   1/1     Running            1 (22m ago)     25m
kube-system    kube-proxy-k6wvl                                 1/1     Running            0               25m
kube-system    kube-scheduler-instance-20240409-1035            1/1     Running            1 (22m ago)     25m
kube-system    metrics-server-755f77d4d5-jm4lr                  0/1     CrashLoopBackOff   9 (59s ago)     24m
openebs        openebs-localpv-provisioner-79f4c678cd-9qc69     0/1     CrashLoopBackOff   9 (102s ago)    24m
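
For the pods stuck in CrashLoopBackOff, the crash reason is usually in the previous container's logs rather than in the describe output below. A quick check, as a sketch (standard kubectl flags; pod names are taken from the listing above and will change after rescheduling):

kubectl logs -n cert-manager cert-manager-cainjector-7985fb445b-zzm68 --previous   # output of the last crashed container
kubectl logs -n cert-manager cert-manager-webhook-6dc9656f89-2jqcq --previous
kubectl logs -n kube-system metrics-server-755f77d4d5-jm4lr --previous
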
root@instance-20240409-1035:/home/ubuntu# kubectl describe pod metrics-server-755f77d4d5-jm4lr 
Error from server (NotFound): pods "metrics-server-755f77d4d5-jm4lr" not found
root@instance-20240409-1035:/home/ubuntu# kubectl describe pod 
No resources found in default namespace.
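
The NotFound and empty results above come from querying the default namespace; metrics-server lives in kube-system, so describing it needs the namespace flag (standard kubectl usage, pod name from the listing above):

kubectl describe pod metrics-server-755f77d4d5-jm4lr -n kube-system
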
root@instance-20240409-1035:/home/ubuntu# kubectl describe pod -A
Name:             cert-manager-655bf9748f-svr76
Namespace:        cert-manager
Priority:         0
Service Account:  cert-manager
Node:             instance-20240409-1035/10.0.0.118
Start Time:       Tue, 09 Apr 2024 09:58:19 +0000
Labels:           app=cert-manager
                  app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=cert-manager
                  app.kubernetes.io/name=cert-manager
                  app.kubernetes.io/version=v1.8.0
                  pod-template-hash=655bf9748f
Annotations:      prometheus.io/path: /metrics
                  prometheus.io/port: 9402
                  prometheus.io/scrape: true
Status:           Running
IP:               10.0.0.153
IPs:
  IP:           10.0.0.153
Controlled By:  ReplicaSet/cert-manager-655bf9748f
Containers:
  cert-manager:
    Container ID:  containerd://4d5e1661db748d837a912289d83e1907267e206bd087d7ac6f9a7c9a45cc0d06
    Image:         quay.io/jetstack/cert-manager-controller:v1.8.0
    Image ID:      sealos.hub:5000/jetstack/cert-manager-controller@sha256:2927259596823c8ccb4ebec10aef48955880c9be13dbc5f70ccb05a11a91eee9
    Port:          9402/TCP
    Host Port:     0/TCP
    Args:
      --v=2
      --cluster-resource-namespace=$(POD_NAMESPACE)
      --leader-election-namespace=kube-system
    State:          Running
      Started:      Tue, 09 Apr 2024 09:58:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  cert-manager (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tng7d (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-tng7d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned cert-manager/cert-manager-655bf9748f-svr76 to instance-20240409-1035
  Normal  Pulling    25m   kubelet            Pulling image "quay.io/jetstack/cert-manager-controller:v1.8.0"
  Normal  Pulled     25m   kubelet            Successfully pulled image "quay.io/jetstack/cert-manager-controller:v1.8.0" in 2.549649125s (3.921623501s including waiting)
  Normal  Created    25m   kubelet            Created container cert-manager
  Normal  Started    25m   kubelet            Started container cert-manager


Name:             cert-manager-cainjector-7985fb445b-zzm68
Namespace:        cert-manager
Priority:         0
Service Account:  cert-manager-cainjector
Node:             instance-20240409-1035/10.0.0.118
Start Time:       Tue, 09 Apr 2024 09:58:19 +0000
Labels:           app=cainjector
                  app.kubernetes.io/component=cainjector
                  app.kubernetes.io/instance=cert-manager
                  app.kubernetes.io/name=cainjector
                  app.kubernetes.io/version=v1.8.0
                  pod-template-hash=7985fb445b
Annotations:      <none>
Status:           Running
IP:               10.0.0.192
IPs:
  IP:           10.0.0.192
Controlled By:  ReplicaSet/cert-manager-cainjector-7985fb445b
Containers:
  cert-manager:
    Container ID:  containerd://f4206b3d2f2fc46b40a3888ddba13b23a0e2e7ab8cff5a4cef8af810ebf97d9a
    Image:         quay.io/jetstack/cert-manager-cainjector:v1.8.0
    Image ID:      sealos.hub:5000/jetstack/cert-manager-cainjector@sha256:c747297ef5de75cb4b00d92edaee451e8068bc3d34b2e5e5502810935a1590c4
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=2
      --leader-election-namespace=kube-system
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2024 10:21:11 +0000
      Finished:     Tue, 09 Apr 2024 10:21:11 +0000
    Ready:          False
    Restart Count:  9
    Environment:
      POD_NAMESPACE:  cert-manager (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bfklh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-bfklh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  25m                 default-scheduler  Successfully assigned cert-manager/cert-manager-cainjector-7985fb445b-zzm68 to instance-20240409-1035
  Normal   Pulling    25m                 kubelet            Pulling image "quay.io/jetstack/cert-manager-cainjector:v1.8.0"
  Normal   Pulled     25m                 kubelet            Successfully pulled image "quay.io/jetstack/cert-manager-cainjector:v1.8.0" in 1.401366613s (1.401510614s including waiting)
  Normal   Created    21m (x5 over 25m)   kubelet            Created container cert-manager
  Normal   Started    21m (x5 over 25m)   kubelet            Started container cert-manager
  Normal   Pulled     21m (x4 over 25m)   kubelet            Container image "quay.io/jetstack/cert-manager-cainjector:v1.8.0" already present on machine
  Warning  BackOff    1s (x106 over 23m)  kubelet            Back-off restarting failed container


Name:             cert-manager-webhook-6dc9656f89-2jqcq
Namespace:        cert-manager
Priority:         0
Service Account:  cert-manager-webhook
Node:             instance-20240409-1035/10.0.0.118
Start Time:       Tue, 09 Apr 2024 09:58:19 +0000
Labels:           app=webhook
                  app.kubernetes.io/component=webhook
                  app.kubernetes.io/instance=cert-manager
                  app.kubernetes.io/name=webhook
                  app.kubernetes.io/version=v1.8.0
                  pod-template-hash=6dc9656f89
Annotations:      <none>
Status:           Running
IP:               10.0.0.110
IPs:
  IP:           10.0.0.110
Controlled By:  ReplicaSet/cert-manager-webhook-6dc9656f89
Containers:
  cert-manager:
    Container ID:  containerd://d2d8355c8f032f400fffefa09d01cf6d46b700997a957b74fbae374703e517d4
    Image:         quay.io/jetstack/cert-manager-webhook:v1.8.0
    Image ID:      sealos.hub:5000/jetstack/cert-manager-webhook@sha256:b0a6937263daf19958855ad4c687317ff96f5ab36f2a557231969a6ca352afac
    Port:          10250/TCP
    Host Port:     0/TCP
    Args:
      --v=2
      --secure-port=10250
      --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE)
      --dynamic-serving-ca-secret-name=cert-manager-webhook-ca
      --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.cert-manager,cert-manager-webhook.cert-manager.svc
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2024 10:20:55 +0000
      Finished:     Tue, 09 Apr 2024 10:20:55 +0000
    Ready:          False
    Restart Count:  10
    Liveness:       http-get http://:6080/livez delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:6080/healthz delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  cert-manager (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96d7w (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-96d7w:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned cert-manager/cert-manager-webhook-6dc9656f89-2jqcq to instance-20240409-1035
  Normal   Pulling    25m                  kubelet            Pulling image "quay.io/jetstack/cert-manager-webhook:v1.8.0"
  Normal   Pulled     24m                  kubelet            Successfully pulled image "quay.io/jetstack/cert-manager-webhook:v1.8.0" in 1.041179064s (4.892909431s including waiting)
  Normal   Created    24m                  kubelet            Created container cert-manager
  Normal   Started    24m                  kubelet            Started container cert-manager
  Warning  Unhealthy  23m (x17 over 24m)   kubelet            Readiness probe failed: Get "http://10.0.0.110:6080/healthz": dial tcp 10.0.0.110:6080: connect: connection refused
  Warning  Unhealthy  23m (x3 over 23m)    kubelet            Liveness probe failed: Get "http://10.0.0.110:6080/livez": dial tcp 10.0.0.110:6080: connect: connection refused
  Normal   Killing    23m                  kubelet            Container cert-manager failed liveness probe, will be restarted
  Warning  BackOff    5m1s (x95 over 23m)  kubelet            Back-off restarting failed container


Name:                 cilium-4r4gx
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      cilium
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:34 +0000
Labels:               controller-revision-hash=5b77d79447
                      k8s-app=cilium
                      pod-template-generation=1
Annotations:          container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
                      container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
                      container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
                      container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  DaemonSet/cilium
Init Containers:
  mount-cgroup:
    Container ID:  containerd://2884d23e2fbb3b243b90b029f29a27c81aa2edd30a3f5fd3d9553b57f3731055
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2024 09:57:53 +0000
      Finished:     Tue, 09 Apr 2024 09:57:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      CGROUP_ROOT:  /run/cilium/cgroupv2
      BIN_PATH:     /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
  apply-sysctl-overwrites:
    Container ID:  containerd://f75e23877dd9e8477c25fe4dfaa70dc4df5fd8f79e5aadff3094492a1937b53c
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
      nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
      rm /hostbin/cilium-sysctlfix
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2024 09:58:01 +0000
      Finished:     Tue, 09 Apr 2024 09:58:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      BIN_PATH:  /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
  mount-bpf-fs:
    Container ID:  containerd://98d684ff89657cd35be6cc6665abdc1e81c2e4c577e293a42181c274d3319804
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2024 09:58:02 +0000
      Finished:     Tue, 09 Apr 2024 09:58:02 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
  clean-cilium-state:
    Container ID:  containerd://4046039585c8cdec2cb6b98f2d79e2effe7228d9021c0c78cff4144ff058c50d
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-container.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2024 09:58:04 +0000
      Finished:     Tue, 09 Apr 2024 09:58:04 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      CILIUM_ALL_STATE:         <set to the key 'clean-cilium-state' of config map 'cilium-config'>      Optional: true
      CILIUM_BPF_STATE:         <set to the key 'clean-cilium-bpf-state' of config map 'cilium-config'>  Optional: true
      KUBERNETES_SERVICE_HOST:  apiserver.cluster.local
      KUBERNETES_SERVICE_PORT:  6443
    Mounts:
      /run/cilium/cgroupv2 from cilium-cgroup (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
  install-cni-binaries:
    Container ID:  containerd://4f1832231a8f6c167f63aeaf3342512d1bfa130062ee8983a51a5229128355d3
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      /install-plugin.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 09 Apr 2024 09:58:04 +0000
      Finished:     Tue, 09 Apr 2024 09:58:05 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cni-path (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
Containers:
  cilium-agent:
    Container ID:  containerd://77985d178df9343391c6b470c049030d024f8c72b41e9ffb19392ece8098bfd2
    Image:         quay.io/cilium/cilium:v1.12.14
    Image ID:      sealos.hub:5000/cilium/cilium@sha256:7ce0308f599539515252a06a34549d9861861fb6392beb98b7458683e47dca8e
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-agent
    Args:
      --config-dir=/tmp/cilium/config-map
    State:          Running
      Started:      Tue, 09 Apr 2024 09:58:06 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9879/healthz delay=0s timeout=1s period=2s #success=1 #failure=105
    Environment:
      K8S_NODE_NAME:               (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:       kube-system (v1:metadata.namespace)
      CILIUM_CLUSTERMESH_CONFIG:  /var/lib/cilium/clustermesh/
      CILIUM_CNI_CHAINING_MODE:   <set to the key 'cni-chaining-mode' of config map 'cilium-config'>  Optional: true
      CILIUM_CUSTOM_CNI_CONF:     <set to the key 'custom-cni-conf' of config map 'cilium-config'>    Optional: true
      KUBERNETES_SERVICE_HOST:    apiserver.cluster.local
      KUBERNETES_SERVICE_PORT:    6443
    Mounts:
      /host/etc/cni/net.d from etc-cni-netd (rw)
      /host/proc/sys/kernel from host-proc-sys-kernel (rw)
      /host/proc/sys/net from host-proc-sys-net (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /tmp/cilium/config-map from cilium-config-path (ro)
      /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
      /var/lib/cilium/tls/hubble from hubble-tls (ro)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ml8xf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  cilium-run:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium
    HostPathType:  DirectoryOrCreate
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  hostproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cilium-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /run/cilium/cgroupv2
    HostPathType:  DirectoryOrCreate
  cni-path:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  DirectoryOrCreate
  etc-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  DirectoryOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  clustermesh-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cilium-clustermesh
    Optional:    true
  cilium-config-path:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-config
    Optional:  false
  host-proc-sys-net:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/net
    HostPathType:  Directory
  host-proc-sys-kernel:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/kernel
    HostPathType:  Directory
  hubble-tls:
    Type:                Projected (a volume that contains injected data from multiple sources)
    SecretName:          hubble-server-certs
    SecretOptionalName:  0x40008f9c9a
  kube-api-access-ml8xf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  25m                default-scheduler  Successfully assigned kube-system/cilium-4r4gx to instance-20240409-1035
  Normal   Pulling    25m                kubelet            Pulling image "quay.io/cilium/cilium:v1.12.14"
  Normal   Pulled     25m                kubelet            Successfully pulled image "quay.io/cilium/cilium:v1.12.14" in 18.483941886s (18.483995566s including waiting)
  Normal   Created    25m                kubelet            Created container mount-cgroup
  Normal   Started    25m                kubelet            Started container mount-cgroup
  Normal   Pulled     25m                kubelet            Container image "quay.io/cilium/cilium:v1.12.14" already present on machine
  Normal   Created    25m                kubelet            Created container apply-sysctl-overwrites
  Normal   Started    25m                kubelet            Started container apply-sysctl-overwrites
  Normal   Pulled     25m                kubelet            Container image "quay.io/cilium/cilium:v1.12.14" already present on machine
  Normal   Created    25m                kubelet            Created container mount-bpf-fs
  Normal   Started    25m                kubelet            Started container mount-bpf-fs
  Normal   Pulled     25m                kubelet            Container image "quay.io/cilium/cilium:v1.12.14" already present on machine
  Normal   Created    25m                kubelet            Created container clean-cilium-state
  Normal   Started    25m                kubelet            Started container clean-cilium-state
  Normal   Pulled     25m                kubelet            Container image "quay.io/cilium/cilium:v1.12.14" already present on machine
  Normal   Created    25m                kubelet            Created container install-cni-binaries
  Normal   Started    25m                kubelet            Started container install-cni-binaries
  Normal   Pulled     25m                kubelet            Container image "quay.io/cilium/cilium:v1.12.14" already present on machine
  Normal   Created    25m                kubelet            Created container cilium-agent
  Normal   Started    25m                kubelet            Started container cilium-agent
  Warning  Unhealthy  25m (x6 over 25m)  kubelet            Startup probe failed: Get "http://127.0.0.1:9879/healthz": dial tcp 127.0.0.1:9879: connect: connection refused


Name:                 cilium-operator-59b49dc487-zphvr
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      cilium-operator
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:34 +0000
Labels:               io.cilium/app=operator
                      name=cilium-operator
                      pod-template-hash=59b49dc487
Annotations:          <none>
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  ReplicaSet/cilium-operator-59b49dc487
Containers:
  cilium-operator:
    Container ID:  containerd://7de041179905c067c96595a64f2ae5c9f6cbfafb79d9173be63fa02ad51f27ce
    Image:         quay.io/cilium/operator:v1.12.14
    Image ID:      sealos.hub:5000/cilium/operator@sha256:e29749f441e3450a4bf4a15033dedc55e8928c3075b229d4c65d2a9768a4db8c
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-operator
    Args:
      --config-dir=/tmp/cilium/config-map
      --debug=$(CILIUM_DEBUG)
    State:       Running
      Started:   Tue, 09 Apr 2024 10:00:05 +0000
    Last State:  Terminated
      Reason:    Error
      Message:   Rs="[10.0.0.0/8]" ipv6CIDRs="[]" subsys=ipam-allocator-clusterpool
level=info msg="Removing Cilium Node Taints or Setting Cilium Is Up Condition for Kubernetes Nodes" k8sNamespace=kube-system label-selector="k8s-app=cilium" remove-cilium-node-taints=true set-cilium-is-up-condition=true subsys=cilium-operator
level=info msg="Starting to synchronize CiliumNode custom resources" subsys=cilium-operator
level=info msg="Starting to garbage collect stale CiliumNode custom resources" subsys=watchers
level=info msg="Garbage collected status/nodes in Cilium Clusterwide Network Policies found=0, gc=0" subsys=cilium-operator
level=info msg="Garbage collected status/nodes in Cilium Network Policies found=0, gc=0" subsys=cilium-operator
level=info msg="CiliumNodes caches synced with Kubernetes" subsys=cilium-operator
level=info msg="Starting to garbage collect stale CiliumEndpoint custom resources" subsys=cilium-operator
level=info msg="Starting CRD identity garbage collector" interval=15m0s subsys=cilium-operator
level=info msg="Starting CNP derivative handler" subsys=cilium-operator
level=info msg="Starting CCNP derivative handler" subsys=cilium-operator
level=info msg="Initialization complete" subsys=cilium-operator
Failed to update lock: etcdserver: request timed out
level=error msg="Failed to update lock: etcdserver: request timed out" subsys=klog
level=info msg="failed to renew lease kube-system/cilium-operator-resource-lock: timed out waiting for the condition" subsys=klog
Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io "cilium-operator-resource-lock": the object has been modified; please apply your changes to the latest version and try again
level=error msg="Failed to release lock: Operation cannot be fulfilled on leases.coordination.k8s.io \"cilium-operator-resource-lock\": the object has been modified; please apply your changes to the latest version and try again" subsys=klog
level=info msg="Leader election lost" operator-id=instance-20240409-1035-mVfidDJfRI subsys=cilium-operator

      Exit Code:    1
      Started:      Tue, 09 Apr 2024 09:58:03 +0000
      Finished:     Tue, 09 Apr 2024 09:59:47 +0000
    Ready:          True
    Restart Count:  1
    Liveness:       http-get http://127.0.0.1:9234/healthz delay=60s timeout=3s period=10s #success=1 #failure=3
    Environment:
      K8S_NODE_NAME:             (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:     kube-system (v1:metadata.namespace)
      CILIUM_DEBUG:             <set to the key 'debug' of config map 'cilium-config'>  Optional: true
      KUBERNETES_SERVICE_HOST:  apiserver.cluster.local
      KUBERNETES_SERVICE_PORT:  6443
    Mounts:
      /tmp/cilium/config-map from cilium-config-path (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sp4jp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  cilium-config-path:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-config
    Optional:  false
  kube-api-access-sp4jp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  25m                default-scheduler  Successfully assigned kube-system/cilium-operator-59b49dc487-zphvr to instance-20240409-1035
  Normal   Pulling    25m                kubelet            Pulling image "quay.io/cilium/operator:v1.12.14"
  Normal   Pulled     25m                kubelet            Successfully pulled image "quay.io/cilium/operator:v1.12.14" in 8.590662366s (26.380157037s including waiting)
  Warning  Unhealthy  23m (x3 over 24m)  kubelet            Liveness probe failed: Get "http://127.0.0.1:9234/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    23m                kubelet            Container cilium-operator failed liveness probe, will be restarted
  Normal   Pulled     23m                kubelet            Container image "quay.io/cilium/operator:v1.12.14" already present on machine
  Normal   Created    23m (x2 over 25m)  kubelet            Created container cilium-operator
  Normal   Started    23m (x2 over 25m)  kubelet            Started container cilium-operator


Name:                 coredns-565d847f94-2cl2z
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:58:16 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=565d847f94
Annotations:          <none>
Status:               Running
IP:                   10.0.0.60
IPs:
  IP:           10.0.0.60
Controlled By:  ReplicaSet/coredns-565d847f94
Containers:
  coredns:
    Container ID:  containerd://cae4df91ea8354ab3b146bcf7b274a61297e567dffa124adbb5f0f2192a8e588
    Image:         registry.k8s.io/coredns/coredns:v1.9.3
    Image ID:      sealos.hub:5000/coredns/coredns@sha256:85c4677668f548da72cc0194663ae9d412cac8f239551ca90c0941c9c1ad8685
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Tue, 09 Apr 2024 09:58:22 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vndjm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-vndjm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  25m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         25m                 default-scheduler  Successfully assigned kube-system/coredns-565d847f94-2cl2z to instance-20240409-1035
  Normal   Pulled            25m                 kubelet            Container image "registry.k8s.io/coredns/coredns:v1.9.3" already present on machine
  Normal   Created           25m                 kubelet            Created container coredns
  Normal   Started           25m                 kubelet            Started container coredns
  Warning  Unhealthy         25m                 kubelet            Readiness probe failed: Get "http://10.0.0.60:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy         0s (x171 over 25m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503


Name:                 coredns-565d847f94-664pl
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      coredns
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:58:16 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=565d847f94
Annotations:          <none>
Status:               Running
IP:                   10.0.0.83
IPs:
  IP:           10.0.0.83
Controlled By:  ReplicaSet/coredns-565d847f94
Containers:
  coredns:
    Container ID:  containerd://62679cd5ea971640c21216ecfd1c069e5451a2ba6cf831577218b318ea543078
    Image:         registry.k8s.io/coredns/coredns:v1.9.3
    Image ID:      sealos.hub:5000/coredns/coredns@sha256:85c4677668f548da72cc0194663ae9d412cac8f239551ca90c0941c9c1ad8685
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Running
      Started:      Tue, 09 Apr 2024 09:58:21 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hzx6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-9hzx6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  25m (x2 over 25m)   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         25m                 default-scheduler  Successfully assigned kube-system/coredns-565d847f94-664pl to instance-20240409-1035
  Normal   Pulled            25m                 kubelet            Container image "registry.k8s.io/coredns/coredns:v1.9.3" already present on machine
  Normal   Created           25m                 kubelet            Created container coredns
  Normal   Started           25m                 kubelet            Started container coredns
  Warning  Unhealthy         25m (x2 over 25m)   kubelet            Readiness probe failed: Get "http://10.0.0.83:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy         0s (x170 over 25m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503


Name:                 etcd-instance-20240409-1035
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:10 +0000
Labels:               component=etcd
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.0.0.118:2379
                      kubernetes.io/config.hash: 255ac202a4b6285ecf9f95463fa3c5f3
                      kubernetes.io/config.mirror: 255ac202a4b6285ecf9f95463fa3c5f3
                      kubernetes.io/config.seen: 2024-04-09T09:57:00.446193904Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  Node/instance-20240409-1035
Containers:
  etcd:
    Container ID:  containerd://fb8b18f601c4f3460adf82c17a42220e9d3d7bf2b924f9e0599683427ed03d25
    Image:         registry.k8s.io/etcd:3.5.6-0
    Image ID:      sealos.hub:5000/etcd@sha256:d8aa60417894c5563344589602f2ba5c63613d112c34a694cb84ff1e529c1975
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://10.0.0.118:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --experimental-initial-corrupt-check=true
      --experimental-watch-progress-notify-interval=5s
      --initial-advertise-peer-urls=https://10.0.0.118:2380
      --initial-cluster=instance-20240409-1035=https://10.0.0.118:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://10.0.0.118:2379
      --listen-metrics-urls=http://0.0.0.0:2381
      --listen-peer-urls=https://10.0.0.118:2380
      --name=instance-20240409-1035
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Tue, 09 Apr 2024 09:57:01 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     100Mi
    Liveness:     http-get http://0.0.0.0:2381/health%3Fexclude=NOSPACE&serializable=true delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get http://0.0.0.0:2381/health%3Fserializable=false delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:            <none>


Name:                 kube-apiserver-instance-20240409-1035
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:10 +0000
Labels:               component=kube-apiserver
                      tier=control-plane
Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.0.0.118:6443
                      kubernetes.io/config.hash: 5d0e812d4276731d589ddecbc2f799fe
                      kubernetes.io/config.mirror: 5d0e812d4276731d589ddecbc2f799fe
                      kubernetes.io/config.seen: 2024-04-09T09:57:09.539430631Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  Node/instance-20240409-1035
Containers:
  kube-apiserver:
    Container ID:  containerd://8c0455b07745403222ac4461d4ca98a74c03c61fdbf7717e28ada76b8b680106
    Image:         registry.k8s.io/kube-apiserver:v1.25.6
    Image ID:      sealos.hub:5000/kube-apiserver@sha256:563e1fc5acd5e5f1b3e5914fe146a236f0c1feb92c3cc8966b5e1fd776fe2d4f
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --advertise-address=10.0.0.118
      --allow-privileged=true
      --audit-log-format=json
      --audit-log-maxage=7
      --audit-log-maxbackup=10
      --audit-log-maxsize=100
      --audit-log-path=/var/log/kubernetes/audit.log
      --audit-policy-file=/etc/kubernetes/audit-policy.yml
      --authorization-mode=Node,RBAC
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --enable-admission-plugins=NodeRestriction
      --enable-aggregator-routing=true
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --feature-gates=EphemeralContainers=true
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/22
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Tue, 09 Apr 2024 09:57:01 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
    Liveness:     http-get https://10.0.0.118:6443/livez delay=10s timeout=15s period=10s #success=1 #failure=8
    Readiness:    http-get https://10.0.0.118:6443/readyz delay=0s timeout=15s period=1s #success=1 #failure=3
    Startup:      http-get https://10.0.0.118:6443/livez delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes from audit (rw)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/localtime from localtime (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/log/kubernetes from audit-log (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  audit:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes
    HostPathType:  DirectoryOrCreate
  audit-log:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log/kubernetes
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  File
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type     Reason     Age                 From     Message
  ----     ------     ----                ----     -------
  Warning  Unhealthy  24m (x4 over 24m)   kubelet  Liveness probe failed: HTTP probe failed with statuscode: 500
  Warning  Unhealthy  10m (x35 over 25m)  kubelet  Readiness probe failed: HTTP probe failed with statuscode: 500


Name:                 kube-controller-manager-instance-20240409-1035
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:10 +0000
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 1de705f356ebbf18bae10f071a771ac6
                      kubernetes.io/config.mirror: 1de705f356ebbf18bae10f071a771ac6
                      kubernetes.io/config.seen: 2024-04-09T09:57:09.539431911Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  Node/instance-20240409-1035
Containers:
  kube-controller-manager:
    Container ID:  containerd://0393aa990fe41b2a5360ae4721abbb66198d0a0709069456032a9ac9c3acf9ae
    Image:         registry.k8s.io/kube-controller-manager:v1.25.6
    Image ID:      sealos.hub:5000/kube-controller-manager@sha256:6b8a8b7135b88f8d1ef427e8ff63385c0b05bc938d7aed208cac5d3b49adee2b
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=0.0.0.0
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=100.64.0.0/10
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-duration=876000h
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --feature-gates=EphemeralContainers=true
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/22
      --use-service-account-credentials=true
    State:          Running
      Started:      Tue, 09 Apr 2024 09:59:59 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2024 09:57:01 +0000
      Finished:     Tue, 09 Apr 2024 09:59:56 +0000
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        200m
    Liveness:     http-get https://:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://:10257/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/localtime from localtime (ro)
      /etc/pki from etc-pki (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-pki:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/pki
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  File
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type     Reason     Age                From     Message
  ----     ------     ----               ----     -------
  Warning  Unhealthy  23m (x4 over 24m)  kubelet  Liveness probe failed: Get "https://10.0.0.118:10257/healthz": net/http: TLS handshake timeout
  Warning  Unhealthy  23m                kubelet  Liveness probe failed: Get "https://10.0.0.118:10257/healthz": read tcp 10.0.0.118:2768->10.0.0.118:10257: read: connection reset by peer
  Warning  Unhealthy  23m                kubelet  Liveness probe failed: Get "https://10.0.0.118:10257/healthz": dial tcp 10.0.0.118:10257: connect: connection refused
  Normal   Pulled     23m                kubelet  Container image "registry.k8s.io/kube-controller-manager:v1.25.6" already present on machine
  Normal   Created    23m                kubelet  Created container kube-controller-manager
  Normal   Started    23m                kubelet  Started container kube-controller-manager


Name:                 kube-proxy-k6wvl
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:34 +0000
Labels:               controller-revision-hash=5f6bcf49c
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  containerd://cae85050bd6fe87e5b64a610e09dea726b7b381bcda28e2d1edd7bb9c0114f66
    Image:         registry.k8s.io/kube-proxy:v1.25.6
    Image ID:      sealos.hub:5000/kube-proxy@sha256:21a119d3450366e8b3405a4bb0c6585889086eed37f1c04a076f9d8d3db20597
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Running
      Started:      Tue, 09 Apr 2024 09:57:35 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6n877 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-6n877:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned kube-system/kube-proxy-k6wvl to instance-20240409-1035
  Normal  Pulled     25m   kubelet            Container image "registry.k8s.io/kube-proxy:v1.25.6" already present on machine
  Normal  Created    25m   kubelet            Created container kube-proxy
  Normal  Started    25m   kubelet            Started container kube-proxy


Name:                 kube-scheduler-instance-20240409-1035
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:57:09 +0000
Labels:               component=kube-scheduler
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: f6f8901ecf2a1baa407655f583adee6a
                      kubernetes.io/config.mirror: f6f8901ecf2a1baa407655f583adee6a
                      kubernetes.io/config.seen: 2024-04-09T09:57:09.539433271Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   10.0.0.118
IPs:
  IP:           10.0.0.118
Controlled By:  Node/instance-20240409-1035
Containers:
  kube-scheduler:
    Container ID:  containerd://4ebba71736ed3ebdfc6ffd40c714e33b6c106bd81e0c16948bba001e754b388a
    Image:         registry.k8s.io/kube-scheduler:v1.25.6
    Image ID:      sealos.hub:5000/kube-scheduler@sha256:2290d4128c2ffc0d5e8b3f21312703f02e4f94e46666e8c8143bbacb21c35269
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
      --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
      --bind-address=0.0.0.0
      --feature-gates=EphemeralContainers=true
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Running
      Started:      Tue, 09 Apr 2024 10:00:05 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2024 09:57:01 +0000
      Finished:     Tue, 09 Apr 2024 09:59:47 +0000
    Ready:          True
    Restart Count:  1
    Requests:
      cpu:        100m
    Liveness:     http-get https://:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
      /etc/localtime from localtime (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
  localtime:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/localtime
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type     Reason     Age                From     Message
  ----     ------     ----               ----     -------
  Warning  Unhealthy  23m (x5 over 24m)  kubelet  Liveness probe failed: Get "https://10.0.0.118:10259/healthz": net/http: TLS handshake timeout
  Warning  Unhealthy  23m                kubelet  Liveness probe failed: Get "https://10.0.0.118:10259/healthz": read tcp 10.0.0.118:9190->10.0.0.118:10259: read: connection reset by peer
  Warning  Unhealthy  23m                kubelet  Liveness probe failed: Get "https://10.0.0.118:10259/healthz": dial tcp 10.0.0.118:10259: connect: connection refused
  Normal   Pulled     23m                kubelet  Container image "registry.k8s.io/kube-scheduler:v1.25.6" already present on machine
  Normal   Created    23m                kubelet  Created container kube-scheduler
  Normal   Started    23m                kubelet  Started container kube-scheduler


Name:                 metrics-server-755f77d4d5-jm4lr
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      metrics-server
Node:                 instance-20240409-1035/10.0.0.118
Start Time:           Tue, 09 Apr 2024 09:58:26 +0000
Labels:               app.kubernetes.io/instance=metrics-server
                      app.kubernetes.io/name=metrics-server
                      pod-template-hash=755f77d4d5
Annotations:          <none>
Status:               Running
IP:                   10.0.0.46
IPs:
  IP:           10.0.0.46
Controlled By:  ReplicaSet/metrics-server-755f77d4d5
Containers:
  metrics-server:
    Container ID:  containerd://7a11751ddbeeaabe845d1de3c47619762743d7f959e0172d23747122464132fb
    Image:         registry.k8s.io/metrics-server/metrics-server:v0.6.4
    Image ID:      sealos.hub:5000/metrics-server/metrics-server@sha256:401a4b9796f3f80c1f03d22cd7b1a26839f515a36032ef49b682c237e5848ab3
    Port:          10250/TCP
    Host Port:     0/TCP
    Args:
      --secure-port=10250
      --cert-dir=/tmp
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --kubelet-use-node-status-port
      --metric-resolution=15s
      --kubelet-insecure-tls=true
      --kubelet-preferred-address-types=InternalIP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 09 Apr 2024 10:21:37 +0000
      Finished:     Tue, 09 Apr 2024 10:21:38 +0000
    Ready:          False
    Restart Count:  9
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvjpr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-nvjpr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  25m                   default-scheduler  Successfully assigned kube-system/metrics-server-755f77d4d5-jm4lr to instance-20240409-1035
  Normal   Pulling    25m                   kubelet            Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.6.4"
  Normal   Pulled     23m                   kubelet            Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.6.4" in 9.687327685s (1m48.296207273s including waiting)
  Warning  Unhealthy  23m                   kubelet            Liveness probe failed: Get "https://10.0.0.46:10250/livez": dial tcp 10.0.0.46:10250: connect: connection refused
  Normal   Started    22m (x4 over 23m)     kubelet            Started container metrics-server
  Normal   Created    21m (x5 over 23m)     kubelet            Created container metrics-server
  Normal   Pulled     21m (x4 over 23m)     kubelet            Container image "registry.k8s.io/metrics-server/metrics-server:v0.6.4" already present on machine
  Warning  BackOff    4m58s (x92 over 23m)  kubelet            Back-off restarting failed container


Name:             openebs-localpv-provisioner-79f4c678cd-9qc69
Namespace:        openebs
Priority:         0
Service Account:  openebs
Node:             instance-20240409-1035/10.0.0.118
Start Time:       Tue, 09 Apr 2024 09:58:22 +0000
Labels:           app=openebs
                  component=localpv-provisioner
                  name=openebs-localpv-provisioner
                  openebs.io/component-name=openebs-localpv-provisioner
                  openebs.io/version=3.4.0
                  pod-template-hash=79f4c678cd
                  release=openebs
Annotations:      <none>
Status:           Running
IP:               10.0.0.172
IPs:
  IP:           10.0.0.172
Controlled By:  ReplicaSet/openebs-localpv-provisioner-79f4c678cd
Containers:
  openebs-localpv-provisioner:
    Container ID:  containerd://9a5f7260419da747578ee2a96dcb0987828e04f7a89be8ad6b36662923b1c187
    Image:         openebs/provisioner-localpv:3.4.0
    Image ID:      sealos.hub:5000/openebs/provisioner-localpv@sha256:ac6da48d6f93933f735e86567d9d857349e4fb17e809afba39ebf9d5d9eb66d0
    Port:          <none>
    Host Port:     <none>
    Args:
      --bd-time-out=$(BDC_BD_BIND_RETRIES)
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 09 Apr 2024 10:20:55 +0000
      Finished:     Tue, 09 Apr 2024 10:20:55 +0000
    Ready:          False
    Restart Count:  9
    Liveness:       exec [sh -c test `pgrep -c "^provisioner-loc.*"` = 1] delay=30s timeout=1s period=60s #success=1 #failure=3
    Environment:
      BDC_BD_BIND_RETRIES:          12
      OPENEBS_NAMESPACE:            openebs
      NODE_NAME:                     (v1:spec.nodeName)
      OPENEBS_SERVICE_ACCOUNT:       (v1:spec.serviceAccountName)
      OPENEBS_IO_ENABLE_ANALYTICS:  true
      OPENEBS_IO_BASE_PATH:         /var/openebs/local
      OPENEBS_IO_HELPER_IMAGE:      openebs/linux-utils:3.4.0
      OPENEBS_IO_INSTALLER_TYPE:    charts-helm
      LEADER_ELECTION_ENABLED:      true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hs2l9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-hs2l9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned openebs/openebs-localpv-provisioner-79f4c678cd-9qc69 to instance-20240409-1035
  Normal   Pulling    25m                  kubelet            Pulling image "openebs/provisioner-localpv:3.4.0"
  Normal   Pulled     23m                  kubelet            Successfully pulled image "openebs/provisioner-localpv:3.4.0" in 1m38.32376667s (1m41.541746387s including waiting)
  Normal   Created    21m (x5 over 23m)    kubelet            Created container openebs-localpv-provisioner
  Normal   Started    21m (x5 over 23m)    kubelet            Started container openebs-localpv-provisioner
  Normal   Pulled     21m (x4 over 23m)    kubelet            Container image "openebs/provisioner-localpv:3.4.0" already present on machine
  Warning  BackOff    5m2s (x88 over 23m)  kubelet            Back-off restarting failed container


willzhang commented on July 21, 2024
  • When multiple components are installed in one go, sealos installs them in order, but sealos run returns quickly, so many components end up installing almost in parallel. The previous application "installing successfully" does not mean its pods are fully up: sealos cannot guarantee the dependencies between components, nor that the previous component is ready to serve.
  • For example, if metrics-server is not ready yet and a later component (or something else) depends on it, you may see odd errors such as couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request.
  • Our earlier approach was, on each sealos run, to wait for the previous component to be ready to serve before installing the next one, for example with kubectl rollout status xxx.
  • Alternatively, when packaging the sealos image, run helm install --wait in entrypoint.sh, or use sealos run xxx -e HELM_OPTS="--wait", and so on, to make sure each component's pods are fully Running before executing the next sealos run command (a minimal sketch follows this list).
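
A minimal entrypoint.sh-style sketch of that last approach (assumed names only: metrics-server stands in for whichever component the previous image installed, and <chart>/<next-image> are placeholders, not values from this issue). It simply blocks until the previous component's Deployment has finished rolling out before the next install step runs:

#!/usr/bin/env bash
# Sketch only: gate the next install step on the previous component being ready.
set -euo pipefail

# Option 1: let Helm itself block until the release's resources are ready.
# helm upgrade --install metrics-server <chart> -n kube-system --wait --timeout 5m

# Option 2: manifests are already applied; block until the rollout completes.
kubectl -n kube-system rollout status deployment/metrics-server --timeout=5m

# Only after the wait succeeds, install the next component, e.g.
# sealos run <next-image>

The same effect can be had without a wrapper script by passing the wait flag through sealos, e.g. sealos run xxx -e HELM_OPTS="--wait", as described above.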


binaryYuki commented on July 21, 2024

Hi,

Thank you for the explanation!

So far I have only seen this problem when trying to install sealos on aarch-architecture Ubuntu; I will try to work it out.

Thanks!


sealos-ci-robot commented on July 21, 2024

Bot detected the issue body's language is not English, translate it automatically. 👯👭🏻🧑‍🤝‍🧑👫🧑🏿‍🤝‍🧑🏻👩🏾‍🤝‍👨🏿👬🏿


Hi,

Thank you for your answer!

I only found that this problem will occur when trying to install sealos on aarch architecture ubuntu. I will try to solve it.

Thanks!

