
metrics-server's Introduction

Kubernetes Metrics Server

Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
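For example, once Metrics Server is running, current usage can be inspected through kubectl (these are stock kubectl commands; output shapes vary by version):

kubectl top nodes
kubectl top pods --all-namespaces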

Caution

Metrics Server is meant only for autoscaling purposes. For example, don't use it to forward metrics to monitoring solutions, or as a source of metrics for monitoring solutions. In such cases, collect metrics from the Kubelet /metrics/resource endpoint directly.
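For example, a monitoring system can read that Kubelet endpoint directly through the apiserver proxy (a sketch; <node-name> is a placeholder):

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/resource"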

Metrics Server offers:

  • A single deployment that works on most clusters (see Requirements)
  • Fast autoscaling, collecting metrics every 15 seconds.
  • Resource efficiency, using 1 milli core of CPU and 2 MB of memory for each node in a cluster.
  • Scalable support up to 5,000 node clusters.

Use cases

You can use Metrics Server for:

  • CPU/memory-based horizontal autoscaling (see Horizontal Pod Autoscaler)
  • Automatically adjusting or suggesting the resources needed by containers (see Vertical Pod Autoscaler)

Don't use Metrics Server when you need:

  • Non-Kubernetes clusters
  • An accurate source of resource usage metrics
  • Horizontal autoscaling based on resources other than CPU/memory

For unsupported use cases, check out full monitoring solutions like Prometheus.

Requirements

Metrics Server has specific requirements for cluster and network configuration. These requirements aren't the default for all cluster distributions. Please ensure that your cluster distribution supports these requirements before using Metrics Server:

  • The kube-apiserver must enable an aggregation layer.
  • Nodes must have Webhook authentication and authorization enabled.
  • The Kubelet certificate needs to be signed by the cluster Certificate Authority (or certificate validation must be disabled by passing --kubelet-insecure-tls to Metrics Server)
  • The container runtime must implement the container metrics RPCs (or have cAdvisor support)
  • The network should support the following communication:
    • Control plane to Metrics Server. The control plane node needs to reach Metrics Server's pod IP and port 10250 (or the node IP and custom port if hostNetwork is enabled). Read more about control plane to node communication.
    • Metrics Server to Kubelet on all nodes. Metrics Server needs to reach the node address and Kubelet port. Addresses and ports are configured in the Kubelet and published as part of the Node object, in the .status.addresses and .status.daemonEndpoints.kubeletEndpoint.port fields (default 10250). Metrics Server will pick the first node address based on the list provided by the kubelet-preferred-address-types command line flag (default InternalIP,ExternalIP,Hostname in manifests).
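For example, the addresses and Kubelet port that Metrics Server will use for a given node can be read from the Node object (<node-name> is a placeholder):

kubectl get node <node-name> -o jsonpath='{.status.addresses}'
kubectl get node <node-name> -o jsonpath='{.status.daemonEndpoints.kubeletEndpoint.port}'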

Installation

Metrics Server can be installed either directly from YAML manifest or via the official Helm chart. To install the latest Metrics Server release from the components.yaml manifest, run the following command.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
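Once the deployment is ready, you can check that the Metrics API has been registered and is serving (resource names as used in the released manifests):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get deployment metrics-server
kubectl top nodes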

Installation instructions for previous releases can be found in Metrics Server releases.

Compatibility Matrix

Metrics Server   Metrics API group/version   Supported Kubernetes version
0.7.x            metrics.k8s.io/v1beta1      1.19+
0.6.x            metrics.k8s.io/v1beta1      1.19+
0.5.x            metrics.k8s.io/v1beta1      *1.8+
0.4.x            metrics.k8s.io/v1beta1      *1.8+
0.3.x            metrics.k8s.io/v1beta1      1.8-1.21

*Kubernetes versions lower than v1.16 require passing the --authorization-always-allow-paths=/livez,/readyz command line flag

High Availability

Metrics Server can be installed in high availability mode directly from a YAML manifest, or via the official Helm chart by setting the replicas value to a number greater than 1. To install the latest Metrics Server release in high availability mode from the high-availability.yaml manifest, run the following command.

On Kubernetes v1.21+:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml

On Kubernetes v1.19-1.21:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

Note

This configuration requires having a cluster with at least 2 nodes on which Metrics Server can be scheduled.

Also, to maximize the efficiency of this highly available configuration, it is recommended to add the --enable-aggregator-routing=true CLI flag to the kube-apiserver so that requests sent to Metrics Server are load balanced between the 2 instances.
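To confirm that the replicas were scheduled onto different nodes, you can list the pods by the k8s-app=metrics-server label used in the manifests:

kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide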

Helm Chart

The Helm chart is maintained as an additional component within this repo and released into a chart repository backed by the gh-pages branch. A new version of the chart is released for each Metrics Server release, and can also be released independently if there is a need. The chart on the master branch shouldn't be referenced directly as it might contain modifications since it was last released; to view the chart code, use the chart release tag.
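A minimal Helm install might look like the following; the chart repository URL and release name here are a sketch, so check the chart's documentation for the current values:

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update
helm upgrade --install metrics-server metrics-server/metrics-server --namespace kube-system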

Security context

Metrics Server requires the CAP_NET_BIND_SERVICE capability in order to bind to a privileged port as non-root. If you are running Metrics Server in an environment that uses Pod Security Standards or other mechanisms to restrict pod capabilities, ensure that Metrics Server is allowed to use this capability. This applies even if you use the --secure-port flag to make Metrics Server bind to a non-privileged port.
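As a sketch, the container securityContext granting this capability looks roughly like the following (field values mirror the released manifests, but verify against the version you deploy):

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000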

Scaling

Starting from v0.5.0, Metrics Server comes with default resource requests that should guarantee good performance for most cluster configurations of up to 100 nodes:

  • 100m core of CPU
  • 200MiB of memory

Metrics Server resource usage depends on multiple independent dimensions, creating a Scalability Envelope. Default Metrics Server configuration should work in clusters that don't exceed any of the thresholds listed below:

Quantity                 Namespace threshold   Cluster threshold
#Nodes                   n/a                   100
#Pods per node           70                    70
#Deployments with HPAs   100                   100

Resources can be adjusted proportionally based on the number of nodes in the cluster. For clusters of more than 100 nodes, additionally allocate:

  • 1m core per node
  • 2MiB memory per node
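For example, under these rules a 500-node cluster needs roughly 100m + 400 x 1m = 500m of CPU and 200MiB + 400 x 2MiB = 1000MiB of memory, which would be expressed as the following requests (a sizing sketch, not a tested recommendation):

resources:
  requests:
    cpu: 500m
    memory: 1000Mi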

You can use the same approach to lower resource requests, but there is a boundary where this may impact other scalability dimensions like maximum number of pods per node.

Configuration

Depending on your cluster setup, you may also need to change the flags passed to the Metrics Server container. The most useful flags:

  • --kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
  • --kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
  • --requestheader-client-ca-file - Specify a root certificate bundle for verifying client certificates on incoming requests.
  • --node-selector - Scrape metrics only from nodes that match the specified label selector
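Flags are passed as container args in the Deployment; a sketch of what that looks like, with example values only:

containers:
- name: metrics-server
  image: registry.k8s.io/metrics-server/metrics-server:v0.7.0
  args:
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  - --secure-port=10250
  # Testing only: skips verification of Kubelet serving certificates.
  - --kubelet-insecure-tls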

You can get a full list of Metrics Server configuration flags by running:

docker run --rm registry.k8s.io/metrics-server/metrics-server:v0.7.0 --help

Design

Metrics Server is a component in the core metrics pipeline described in Kubernetes monitoring architecture.

For more information, see the Metrics API and Metrics Server design proposals.

Have a question?

Before posting an issue, first check out the Frequently Asked Questions and Known Issues.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project through the SIG Instrumentation communication channels.

This project is maintained by SIG Instrumentation.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.


metrics-server's Issues

README is missing

Please add a README to provide some more context for this repository.

401 unauthorized when accessing api from inside the cluster

I have a kubernetes python client with the following code running as a Pod:

from kubernetes import client, config

# Configure the client from the pod's service-account token and in-cluster endpoint.
config.load_incluster_config()
conf = client.Configuration()

# Issue a raw GET against the Metrics API through the apiserver.
rest_client = client.rest.RESTClientObject(conf)
metrics_response = rest_client.GET(conf.host + "/apis/metrics.k8s.io/v1beta1/nodes").data

This code works fine when running with config.load_kube_config() from outside the cluster, but gives a 401 Unauthorized error when using config.load_incluster_config() from inside the cluster.

What am I missing?

Thanks

Does metrics-server require read-only port

Does metrics-server still require the --kubelet-read-only-port=10255 to be set on Kubelets? Or is it possible to connect with TLS directly to the API now, providing the cert and key credentials?

[Kubeadm] "x509: certificate signed by unknown authority"

If we start metrics-server as the documentation states, on a Kubernetes cluster created with the kubeadm tool, we get certificate validation errors as shown below:

$ kubectl logs -f --namespace=kube-system metrics-server-7cc9c5496f-lsllm
I0601 15:46:27.425923 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0601 15:46:27.425977 1 heapster.go:72] Metrics Server version v0.2.1
I0601 15:46:27.426181 1 configs.go:61] Using Kubernetes client with master "https://10.96.0.1:443" and version
I0601 15:46:27.426192 1 configs.go:62] Using kubelet port 10255
I0601 15:46:27.426882 1 heapster.go:128] Starting with Metric Sink
I0601 15:46:27.931908 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0601 15:46:28.334906 1 heapster.go:101] Starting Heapster API server...
[restful] 2018/06/01 15:46:28 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/06/01 15:46:28 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0601 15:46:28.335923 1 serve.go:85] Serving securely on 0.0.0.0:443
E0601 15:46:31.627056 1 authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]
E0601 15:46:32.084203 1 authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes"), x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")]

I have not found any workaround for this.

Has anyone already run into this problem?

I'm using Kubernetes v1.10.1

Thanks.

exec user process caused "exec format error" - pod in CrashLoopBackOff

Hello,

what's the problem?
metrics-server crashes

Running a 5-node cluster on Raspberry Pi 3 B+ with Raspbian installed:

$ uname -a
Linux kube-master1 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Raspbian
Description:    Raspbian GNU/Linux 9.4 (stretch)
Release:        9.4
Codename:       stretch
$ kubectl get pods --namespace=kube-system
NAME                                   READY     STATUS             RESTARTS   AGE
etcd-kube-master1                      1/1       Running            5          5d
kube-apiserver-kube-master1            1/1       Running            5          5d
kube-controller-manager-kube-master1   1/1       Running            5          5d
kube-dns-76f4db7445-qqxrj              3/3       Running            15         5d
kube-flannel-ds-gbdcd                  1/1       Running            7          5d
kube-flannel-ds-gjgpx                  1/1       Running            10         5d
kube-flannel-ds-kb6c8                  1/1       Running            4          1d
kube-flannel-ds-qndl6                  1/1       Running            6          5d
kube-flannel-ds-sg2x5                  1/1       Running            7          5d
kube-proxy-76vj6                       1/1       Running            3          1d
kube-proxy-jwgqb                       1/1       Running            4          5d
kube-proxy-p49j5                       1/1       Running            5          5d
kube-proxy-sk7ch                       1/1       Running            4          5d
kube-proxy-wjtfn                       1/1       Running            4          5d
kube-scheduler-kube-master1            1/1       Running            5          5d
metrics-server-dd995679b-xnxjz         0/1       CrashLoopBackOff   3          1m

I have kubernetes version 1.9.7 running with docker 18.05.0-ce.

I tried to install the metrics-server as advised by cloning the repo and

$ kubectl create -f deploy/1.8+/

The pod was crashing with the following logs:

$ kubectl logs metrics-server-dd995679b-xnxjz --namespace=kube-system
standard_init_linux.go:190: exec user process caused "exec format error"

After a little googling I found that I might be using the wrong image for my architecture, and in fact I was.
So I changed the image in metrics-server-deployment.yaml to:
image: gcr.io/google_containers/metrics-server-arm64:v0.2.1
and applied the config.

After the rollout the pod is still crashing with the same error.
Here is the describe output:

$ kubectl describe pod metrics-server-dd995679b-xnxjz --namespace=kube-system
Name:           metrics-server-dd995679b-xnxjz
Namespace:      kube-system
Node:           kube-storage1/192.168.178.47
Start Time:     Fri, 15 Jun 2018 23:40:33 +0000
Labels:         k8s-app=metrics-server
                pod-template-hash=885512356
Annotations:    <none>
Status:         Running
IP:             10.244.4.4
Controlled By:  ReplicaSet/metrics-server-dd995679b
Containers:
  metrics-server:
    Container ID:  docker://31b9d93fa0e6c5a18309728421ec44e6ccc909251822f3749672319a17a6d072
    Image:         gcr.io/google_containers/metrics-server-arm64:v0.2.1
    Image ID:      docker-pullable://gcr.io/google_containers/metrics-server-arm64@sha256:4e7a8e2ac7b7ef0370405ee16d3fdec8b7d4ba50e061eecc14d01470cf1a7f1c
    Port:          <none>
    Command:
      /metrics-server
      --source=kubernetes.summary_api:''
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 16 Jun 2018 00:22:36 +0000
      Finished:     Sat, 16 Jun 2018 00:22:36 +0000
    Ready:          False
    Restart Count:  13
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-4lhd2 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  metrics-server-token-4lhd2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-server-token-4lhd2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                 From                    Message
  ----     ------                 ----                ----                    -------
  Normal   Scheduled              42m                 default-scheduler       Successfully assigned metrics-server-dd995679b-xnxjz to kube-storage1
  Normal   SuccessfulMountVolume  42m                 kubelet, kube-storage1  MountVolume.SetUp succeeded for volume "metrics-server-token-4lhd2"
  Normal   Pulled                 41m (x4 over 42m)   kubelet, kube-storage1  Successfully pulled image "gcr.io/google_containers/metrics-server-arm64:v0.2.1"
  Normal   Created                41m (x4 over 42m)   kubelet, kube-storage1  Created container
  Normal   Started                41m (x4 over 42m)   kubelet, kube-storage1  Started container
  Normal   Pulling                40m (x5 over 42m)   kubelet, kube-storage1  pulling image "gcr.io/google_containers/metrics-server-arm64:v0.2.1"
  Warning  BackOff                2m (x183 over 42m)  kubelet, kube-storage1  Back-off restarting failed container
$ kubectl describe deployment metrics-server --namespace=kube-system
Name:                   metrics-server
Namespace:              kube-system
CreationTimestamp:      Fri, 15 Jun 2018 23:40:33 +0000
Labels:                 k8s-app=metrics-server
Annotations:            deployment.kubernetes.io/revision=1
Selector:               k8s-app=metrics-server
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:           k8s-app=metrics-server
  Service Account:  metrics-server
  Containers:
   metrics-server:
    Image:  gcr.io/google_containers/metrics-server-arm64:v0.2.1
    Port:   <none>
    Command:
      /metrics-server
      --source=kubernetes.summary_api:''
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   metrics-server-dd995679b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  43m   deployment-controller  Scaled up replica set metrics-server-dd995679b to 1

Is there a problem with the image or the application itself?
Any help would be amazing since I would love to use it.

Secure source

How do I use a secure connection for the source parameter?

I'm trying this:

        command:
        - /metrics-server
        - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250

It says:

E0405 15:17:05.006840       1 summary.go:97] error while getting metrics summary from Kubelet garuda-kube1(123.123.123.123:10250): request failed - "401 Unauthorized", response: "Unauthorized"

metrics-server assumes same TLS config for kube-apiserver and kubelets

Presently, metrics-server re-uses the TLS config that it constructs for communication with kube-apiserver in its configuration for talking with the kubelets. This is bad because kube-apiserver and kubelet are supposed to (or at least can) use separate CAs. As it stands, bringing metrics-server into the mix requires you to use the same CA for kube-apiserver and your kubelets.

Problem line: https://github.com/kubernetes-incubator/metrics-server/blob/251f7b578894d3f9adfccd9b0cc2127321819fba/metrics/sources/kubelet/configs.go#L67

Error with `kubectl top node`

I deployed the metrics-server following the guide here.

I didn't enable the heapster from addons list.

The metrics-server was deployed successfully in minikube. But when I run the below command

kubectl top pod

or

kubectl top node

this results in the following error.

Error from server (NotFound): the server could not find the requested resource (get services >http:heapster:)

My local machine configurations:
Minikube Version (minikube version): v0.24.1
Kubernetes Version (kubectl version): 1.8

Undocumented/unused metric types

(This is more of a question than an issue.)

I noticed there are definitions for more specific metrics here, for example: memory.

However, they do not seem to be used at all in the code base - are they yet to be implemented?

Also, are the API endpoints listed here a complete list?

It would be cool to document a lot of this stuff in the metrics-server repo rather than linking to external sites, as they feel a bit stale. I am more than happy to help with this!

Problem with auth scopes in metrics-server

There's something wrong with the permissions (RBAC maybe?). Example errors:

github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope
github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope

Kubernetes 1.8 from HEAD on GCE

/cc @piosz

Make Metric-Server run as a standard process

I'm trying to replace Heapster with Metric-Server.
I currently run Heapster as a standalone process like this:
heapster --source=kubernetes.summary_api:http://localhost:8080?inClusterConfig=false --sink=influxdb:http://localhost:8086

However when I tried to do the same with Metric-Server I'm getting this error

I0428 04:52:08.129677      16 heapster.go:71] /metrics-server --source=kubernetes.summary_api:http://localhost:8080?inClusterConfig=false
I0428 04:52:08.129730      16 heapster.go:72] Metrics Server version v0.2.1
I0428 04:52:08.129745      16 configs.go:61] Using Kubernetes client with master "http://localhost:8080" and version 
I0428 04:52:08.129754      16 configs.go:62] Using kubelet port 10255
I0428 04:52:08.129908      16 heapster.go:128] Starting with Metric Sink
I0428 04:52:08.344438      16 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
E0428 04:52:08.684699      16 serving.go:189] Couldn't create in cluster config due to unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined. SharedInformerFactory will not be set.
W0428 04:52:08.684726      16 authentication.go:222] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
F0428 04:52:08.684733      16 heapster.go:97] Could not create the API server: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

What is the proper way to run the Metric-Server as a standard process and not a pod?

HA with 3 masters: kubectl top won't work after Custom Metrics API

What happened:
kubectl top only works on one master (HA with 3 masters)

[root@APP198 log]$ kubectl top pod --all-namespaces
error: You must be logged in to the server (Unauthorized)

What you expected to happen:
kubectl top to work on all masters as before (on 3 masters)

How to reproduce it (as minimally and precisely as possible):
After installing (Custom Metrics API) kubernetes-incubator/metrics-server, kubectl top only works on one master.

Environment:

  • Kubernetes version (use kubectl version): v1.10.2
  • Cloud provider or hardware configuration: vmware
  • OS (e.g. from /etc/os-release): centos 7.5
  • Kernel (e.g. uname -a): 3.10.0-862.2.3

Metrics gathering from pods with crashed containers

(continuing a discussion from #sig-autoscaling ...)

I've noticed some problems gathering metrics from multi-container pods when one of the containers has crashed. In my case, we had introduced a bug into one of our containers that caused it to crash regularly. This did not affect the liveness of the pod nor the normal operation of the pod but it did affect metrics-server's ability to get metrics from the pod.

If I'm understanding the situation correctly, one failed container will cause metrics gathering for the entire pod to abort.

CC: @DirectXMan12

v1beta1.metrics.k8s.io failed net/http: request canceled while waiting for connection

  1. kubectl create -f deploy/1.8+/
    master1 kube-apiserver: E0412 22:18:30.255424 2628 available_controller.go:295] v1beta1.metrics.k8s.io failed with: Get https://10.233.53.4:443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  2. listen port
    [root@master1 1.8+]# kubectl exec -it metrics-server-7bcc5bf8f-pk865 -n kube-system sh
    / # ps -ef
    PID   USER     TIME   COMMAND
        1 root     0:00   /metrics-server --source=kubernetes.summary_api:'' --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
       38 root     0:00   sh
       43 root     0:00   ps -ef
    / # netstat -anpt
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address        Foreign Address        State        PID/Program name
    tcp        0      0 10.233.53.4:49394    10.254.0.1:443         ESTABLISHED  1/metrics-server
    tcp        0      0 10.233.53.4:52990    192.168.200.52:10255   ESTABLISHED  1/metrics-server
    tcp        0      0 10.233.53.4:51762    192.168.200.53:10255   ESTABLISHED  1/metrics-server
    tcp        0      0 :::443               :::*                   LISTEN       1/metrics-server
    / #
  3. metrics-server log
    [root@master1 1.8+]# kubectl logs -f metrics-server-7bcc5bf8f-pk865 -n kube-system
    I0412 14:15:02.347895 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:'' --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
    I0412 14:15:02.347957 1 heapster.go:72] Metrics Server version v0.2.1
    I0412 14:15:02.348159 1 configs.go:61] Using Kubernetes client with master "https://10.254.0.1:443" and version
    I0412 14:15:02.348194 1 configs.go:62] Using kubelet port 10255
    I0412 14:15:02.349072 1 heapster.go:128] Starting with Metric Sink
    I0412 14:15:02.555531 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
    I0412 14:15:02.933037 1 heapster.go:101] Starting Heapster API server...
    [restful] 2018/04/12 14:15:02 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
    [restful] 2018/04/12 14:15:02 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
    I0412 14:15:02.934173 1 serve.go:85] Serving securely on 0.0.0.0:443
  4. kube-apiserver config
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/local/bin/kube-apiserver
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
--advertise-address=192.168.200.51
--allow-privileged=true
--anonymous-auth=false
--apiserver-count=1
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-maxage=30
--audit-log-maxbackup=3
--audit-log-maxsize=100
--audit-log-path=/var/log/kubernetes/audit.log
--authorization-mode=Node,RBAC
--bind-address=0.0.0.0
--secure-port=6443
--client-ca-file=/etc/kubernetes/ssl/ca.pem
--enable-swagger-ui=true
--etcd-cafile=/etc/kubernetes/ssl/ca.pem
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem
--etcd-servers=https://192.168.200.51:2379,https://192.168.200.52:2379,https://192.168.200.53:2379
--event-ttl=1h
--kubelet-https=true
--insecure-bind-address=192.168.200.51
--insecure-port=8080
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem
--service-cluster-ip-range=10.254.0.0/16
--service-node-port-range=30000-32000
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem
--enable-bootstrap-token-auth=true
--token-auth-file=/etc/kubernetes/token.csv
--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
--proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
--proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem
--requestheader-allowed-names=aggregator
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-username-headers=X-Remote-User
--runtime-config=admissionregistration.k8s.io/v1alpha1
--runtime-config=api/all=true
--enable-aggregator-routing=true
--v=0
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

  5. kube-controller-manager config

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager
--address=0.0.0.0
--master=http://192.168.200.51:8080
--service-cluster-ip-range=10.254.0.0/16
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem
--root-ca-file=/etc/kubernetes/ssl/ca.pem
--leader-elect=true
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Problem deploying Metric Server

Hi,

I'm trying to deploy the metric-server. I got the following error:

core@india-1-coreos-5706 ~/kubernetes/metrics-server/deploy/1.8+ $ kubectl create -f resource-reader.yaml
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" created
Error from server (Forbidden): error when creating "resource-reader.yaml": clusterroles.rbac.authorization.k8s.io "system:metrics-server" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["nodes/stats"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["nodes/stats"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["nodes/stats"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["watch"]} PolicyRule{APIGroups:["extensions"], Resources:["deployments"], Verbs:["get"]} PolicyRule{APIGroups:["extensions"], Resources:["deployments"], Verbs:["list"]} PolicyRule{APIGroups:["extensions"], Resources:["deployments"], Verbs:["watch"]}] user=&{kube-admin [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

Can someone point me to what the problem is?

Improve Prometheus metrics on metrics-server health

Many people use Prometheus metrics to monitor the health of their cluster.
Currently, we have some basic Prometheus metrics on request and collection
latency, but several crucial measures of collection health are missing.

We should figure out spots where we can improve metrics data, such as:

  • requests for which metrics were missing
  • skipped rate calculation (split up by cause)

For prometheus, where to scrape metrics from metrics-server?

Hi,

After setup, I only get these API endpoints from https://localhost:443/:

# curl -k -XGET -H "Authorization: Bearer $TOKEN" https://localhost:443/
{
  "paths": [
    "/apis",
    "/apis/metrics",
    "/apis/metrics/v1alpha1",
    "/healthz",
    "/healthz/healthz",
    "/healthz/ping",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/swaggerapi/"
  ]
}

Where does metrics-server expose metrics in Prometheus format?

I want to configure my prometheus to scrape metrics from metrics-server, because I need information from the kubelet /stats/summary endpoint.

Kubernetes will not start after installing metrics-server

Related: kubernetes/kubernetes#55271

After installing metrics-server, kubernetes will fail on cold start. Reproduced with minikube and on our test cluster.

It seems like the controller-manager is in an endless loop trying to connect to the metrics API, which is not available.

Steps to reproduce:

Expected: Kubernetes is running

Actual: Kubernetes is dead

Logs:

Feb 06 17:05:20 bespin localkube[27052]: I0206 17:05:20.919105   27052 manager.go:316] Recovery completed
Feb 06 17:05:21 bespin localkube[27052]: E0206 17:05:21.145665   27052 helpers.go:832] Could not find capacity information for resource ephemeral-storage
Feb 06 17:05:21 bespin localkube[27052]: W0206 17:05:21.145729   27052 helpers.go:843] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Feb 06 17:05:21 bespin localkube[27052]: kubelet is ready!
Feb 06 17:05:21 bespin localkube[27052]: Starting proxy...
Feb 06 17:05:21 bespin localkube[27052]: Waiting for proxy to be healthy...
Feb 06 17:05:21 bespin localkube[27052]: W0206 17:05:21.198170   27052 server_others.go:63] unable to register configz: register config "componentconfig" twice
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.205060   27052 server_others.go:117] Using iptables Proxier.
Feb 06 17:05:21 bespin localkube[27052]: W0206 17:05:21.214514   27052 proxier.go:473] clusterCIDR not specified, unable to distinguish between internal and external traffic
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.214790   27052 server_others.go:152] Tearing down inactive rules.
Feb 06 17:05:21 bespin localkube[27052]: E0206 17:05:21.248531   27052 proxier.go:699] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed
Feb 06 17:05:21 bespin localkube[27052]: )
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.249121   27052 config.go:202] Starting service config controller
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.249142   27052 controller_utils.go:1041] Waiting for caches to sync for service config controller
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.249165   27052 config.go:102] Starting endpoints config controller
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.249172   27052 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
Feb 06 17:05:21 bespin localkube[27052]: E0206 17:05:21.249202   27052 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.349406   27052 controller_utils.go:1048] Caches are synced for endpoints config controller
Feb 06 17:05:21 bespin localkube[27052]: I0206 17:05:21.349605   27052 controller_utils.go:1048] Caches are synced for service config controller
Feb 06 17:05:21 bespin localkube[27052]: E0206 17:05:21.384193   27052 proxier.go:1621] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Feb 06 17:05:22 bespin localkube[27052]: proxy is ready!
Feb 06 17:05:22 bespin localkube[27052]: E0206 17:05:22.320638   27052 controllermanager.go:480] Error starting "garbagecollector"
Feb 06 17:05:22 bespin localkube[27052]: F0206 17:05:22.320679   27052 controllermanager.go:156] error starting controllers: failed to get supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: an error on the server ("Error: 'dial tcp 10.101.88.156:443: getsockopt: connection refused'\nTrying to reach: 'https://10.101.88.156:443/apis/metrics.k8s.io/v1beta1'") has prevented the request from succeeding
Feb 06 17:05:22 bespin systemd[1]: localkube.service: Main process exited, code=exited, status=255/n/a
Feb 06 17:05:22 bespin systemd[1]: localkube.service: Unit entered failed state.
Feb 06 17:05:22 bespin systemd[1]: localkube.service: Failed with result 'exit-code'.
Feb 06 17:05:25 bespin systemd[1]: localkube.service: Service hold-off time over, scheduling restart.
Feb 06 17:05:25 bespin systemd[1]: Stopped Localkube.

Workaround:

To fix the system, we must delete the metrics-server:

kubectl -n=kube-system delete apiservice v1beta1.metrics.k8s.io

Now kubernetes will start normally. After that we can reinstall the metrics server, and it will be ok.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

kubectl top node gives error

I deployed the metrics-server in my minikube and have the deployment, pod, service, clusterrole, rolebindings and clusterrolebindings configured and running properly.

But when I say

kubectl top node

I am getting the below error:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

Why do I need heapster? Isn't metrics-server supposed to be the successor of heapster?

My Minikube Environment:
minikube version:

v0.24.1

kubectl version:

Client - 1.8
Server - 1.8

kubectl api-versions

admissionregistration.k8s.io/v1alpha1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
batch/v2alpha1
certificates.k8s.io/v1beta1
extensions/v1beta1
metrics.k8s.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1

How to get CPU usage in percents

Hello.
I have some API data:

/usr/bin/curl -sSk -H "Authorization: Bearer $TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/apis/metrics.k8s.io/v1beta1/nodes/ip-172-20-57-130.eu-west-2.compute.internal/

{
  "kind": "NodeMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "ip-172-20-57-130.eu-west-2.compute.internal",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-172-20-57-130.eu-west-2.compute.internal",
    "creationTimestamp": "2018-05-23T11:53:45Z"
  },
  "timestamp": "2018-05-23T11:53:00Z",
  "window": "1m0s",
  "usage": {
    "cpu": "62m",
    "memory": "1642780Ki"
  }

}

How do I convert millicores to percents?
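(As a rough worked example, not from this repo: CPU usage is reported in millicores, where 1000m is one full core, so percent = usage_millicores / (allocatable_cores * 1000) * 100. The 62m above, on an assumed 2-core node, would be 62 / 2000 * 100 = 3.1%.)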

Error when deploying metrics server to GKE

Hello, I'm following the instructions given in the README to install metrics-server into a GKE cluster (1.8.5) and I'm getting the following error; has anyone else encountered this before?

$ kubectl create -f deploy/
clusterrolebinding "metrics-server:system:auth-delegator" created
rolebinding "metrics-server-auth-reader" created
apiservice "v1beta1.metrics.k8s.io" created
serviceaccount "metrics-server" created
deployment "metrics-server" created
service "metrics-server" created
clusterrolebinding "system:metrics-server" created
Error from server (Forbidden): error when creating "deploy/resource-reader.yaml": clusterroles.rbac.authorization.k8s.io "system:metrics-server" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["deployments"], APIGroups:["extensions"], Verbs:["watch"]}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

OpenAPI spec does not exists

@DirectXMan12
So, I'm reopening this as a follow-up to issue #22:
I've added kubelet & kube-proxy to all the nodes (controller, etc.) and thus the weave overlay network device and addresses are available to the apiserver et al.

Any idea what is now going on by Nov 21 17:24:45?

Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.327639       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (5.12395ms) 409 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: E1121 22:21:05.328952       1 available_controller.go:225] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.332223       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (2.323296ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.339954       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (3.352474ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.421626       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (4.097894ms) 409 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: E1121 22:21:05.423560       1 available_controller.go:225] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.432152       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (5.594283ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.471791       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (14.25075ms) 409 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: E1121 22:21:05.473203       1 available_controller.go:225] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.478075       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (2.201772ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:05 localhost docker/kube-apiserver[891]: I1121 22:21:05.487308       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (7.405118ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:07 localhost docker/kube-apiserver[891]: I1121 22:21:07.973669       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (23.105986ms) 409 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:07 localhost docker/kube-apiserver[891]: E1121 22:21:07.974848       1 available_controller.go:225] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
Nov 21 17:21:07 localhost docker/kube-apiserver[891]: I1121 22:21:07.990372       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (10.307827ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:07 localhost docker/kube-apiserver[891]: I1121 22:21:07.993020       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (1.674686ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:21:11 localhost docker/kube-apiserver[891]: I1121 22:21:11.361404       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:21:11 localhost docker/kube-apiserver[891]: E1121 22:21:11.390341       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:21:11 localhost docker/kube-apiserver[891]: I1121 22:21:11.390397       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 21 17:22:11 localhost docker/kube-apiserver[891]: I1121 22:22:11.391155       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:22:11 localhost docker/kube-apiserver[891]: E1121 22:22:11.395587       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:22:11 localhost docker/kube-apiserver[891]: I1121 22:22:11.395604       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 21 17:24:11 localhost docker/kube-apiserver[891]: I1121 22:24:11.395958       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:24:11 localhost docker/kube-apiserver[891]: E1121 22:24:11.398517       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:24:11 localhost docker/kube-apiserver[891]: I1121 22:24:11.398543       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 21 17:24:45 localhost docker/kube-apiserver[891]: I1121 22:24:45.689134       1 wrap.go:42] PUT /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io/status: (1.802252ms) 200 [[kube-apiserver/v1.8.4 (linux/amd64) kubernetes/9befc2b] 127.0.0.1:53326]
Nov 21 17:24:49 localhost docker/kube-apiserver[891]: I1121 22:24:49.146330       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:24:49 localhost docker/kube-apiserver[891]: E1121 22:24:49.148977       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:24:49 localhost docker/kube-apiserver[891]: I1121 22:24:49.149034       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 21 17:25:49 localhost docker/kube-apiserver[891]: I1121 22:25:49.149473       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:25:49 localhost docker/kube-apiserver[891]: E1121 22:25:49.153288       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:25:49 localhost docker/kube-apiserver[891]: I1121 22:25:49.153311       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Nov 21 17:27:49 localhost docker/kube-apiserver[891]: I1121 22:27:49.153602       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
Nov 21 17:27:49 localhost docker/kube-apiserver[891]: E1121 22:27:49.157182       1 controller.go:111] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists
Nov 21 17:27:49 localhost docker/kube-apiserver[891]: I1121 22:27:49.157200       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

It seems to be able to talk to the API but then gets:
OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exists

kubectl top pod / node still says:
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

I seem to be getting closer, but still missing something... :-)

How to use the API to get data by some conditions?

Hello,
I use the API to get resource usage like this: http://localhost:8080/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/get-python-3456785654-k9h5l and it shows a result like this:

{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "get-python-3456785654-k9h5l",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/get-python-3456785654-k9h5l",
    "creationTimestamp": "2018-03-10T13:11:40Z"
  },
  "timestamp": "2018-03-10T13:11:00Z",
  "window": "1m0s",
  "containers": [
    {
      "name": "get-python",
      "usage": {
        "cpu": "0",
        "memory": "22744Ki"
      }
    }
  ]
}

I'd like to get data by some conditions.

  • getting data by duration, e.g. 1 hour instead of 1 minute
  • getting data by date range, e.g. from 12pm to 12am

Are these possible?

Need an immediate scrape at startup

I'm writing a metrics server that combines metrics-server and custom-metrics-server. I found that manager.realManager starts its first scrape at resolution+scrapeOffset seconds.

func (rm *realManager) Housekeep() {
	for {
		// Always try to get the newest metrics
		now := time.Now()
		start := now.Truncate(rm.resolution)
		end := start.Add(rm.resolution)
		timeToNextSync := end.Add(rm.scrapeOffset).Sub(now)

		select {
		case <-time.After(timeToNextSync):
			rm.housekeep(start, end)
		case <-rm.stopChan:
			rm.sink.Stop()
			return
		}
	}
}

I think it should scrape metrics immediately after starting (with start := now - resolution and end := now).
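A minimal sketch of that proposal, reusing the names from the snippet above (untested), would run one scrape before entering the loop:

// Proposed: scrape once immediately at startup, covering the most
// recent resolution window, before entering the periodic loop.
now := time.Now()
rm.housekeep(now.Add(-rm.resolution), now)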

What do you think about it?

/cc @DirectXMan12

metrics-server cannot run as non-root user

I attempted to run metrics-server as a non-root user by adding the following to the deployment:

       securityContext:
         runAsNonRoot: true
         runAsUser: 65534

However, it fails to start up with this error:

heapster.go:97] Could not create the API server: error creating self-signed certificates: mkdir apiserver.local.config: permission denied

Is there any reason that the metrics server must run as a root user in the container?

metrics-server API service is not listed in API groups

I have a Kubernetes cluster running version 1.9 and have deployed the metrics server for HPA. The issue I am facing is that I am not able to see the metrics API service in kubectl get --raw "/apis/". I can see the metrics server is getting data from heapster and can see it in its endpoint. When I check the HPA, I can see it failing with the below error.

Conditions:
  Type            Status  Reason                   Message
  ----            ------  ------                   -------
  AbleToScale     True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  ScalingLimited  True    TooFewReplicas           the desired replica count is increasing faster than the maximum scale rate
Events:
  Type     Reason                        Age                  From                       Message
  ----     ------                        ----                 ----                       -------
  Warning  FailedComputeMetricsReplicas  49m (x13 over 55m)   horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       28s (x111 over 55m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

API service status (metrics-server service IP: 10.107.166.120):

[sujith@demo-k8s-server]$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  creationTimestamp: 2018-06-13T12:52:00Z
  name: v1beta1.metrics.k8s.io
  resourceVersion: "15003136"
  selfLink: /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.metrics.k8s.io
  uid: 8ff5f853-6f08-11e8-ac48-0a4adf35930a
spec:
  caBundle: null
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: 2018-06-13T12:52:00Z
    message: 'no response from https://10.107.166.120:443: Get https://10.107.166.120:443:
      dial tcp 10.107.166.120:443: getsockopt: no route to host'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
[sujith@demo-k8s-server]$

When I try to access the metrics server from another pod, I get the output below:

[sujith@demo-k8s-server]$ k exec -it php-apache-7ccc68c5cd-qqdtf sh
# curl -k https://10.107.166.120:443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\".",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}#
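The FailedDiscoveryCheck condition above ("no route to host") means the kube-apiserver cannot reach the metrics-server service at all, which is why the group never shows up in discovery. Two standard checks worth running (plain kubectl, nothing version-specific):

    # does the service have a live backing pod?
    kubectl -n kube-system get endpoints metrics-server
    # has the APIService become Available since?
    kubectl get apiservice v1beta1.metrics.k8s.io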

metrics-server does not handle IPv6 addresses correctly

My Kubernetes v1.10.2 cluster is deployed on servers with both IPv4 and IPv6 addresses.

When I describe a node, the addresses look like:

Addresses:
  InternalIP:  10.8.10.9
  InternalIP:  2001:620:5ca1:4005:f816:3eff:fe25:ec8e
  Hostname:    k8s-2

The IPv6 address InternalIP is not handled correctly, as I can see from the logs of the metrics-server container:

E0502 19:40:05.001239       1 summary.go:97] error while getting metrics summary from Kubelet k8s-2(2001:620:5ca1:4005:f816:3eff:fe25:ec8e:10255): Get http://2001:620:5ca1:4005:f816:3eff:fe25:ec8e:10255/stats/summary/: invalid URL port "620:5ca1:4005:f816:3eff:fe25:ec8e:10255"
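The "invalid URL port" error is characteristic of building the URL by plain string concatenation: an IPv6 literal must be wrapped in brackets before a port is appended. A minimal standalone Go sketch of the difference (illustrative, not the actual metrics-server code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        host := "2001:620:5ca1:4005:f816:3eff:fe25:ec8e"
        port := "10255"

        // Naive concatenation: everything after the first colon group is
        // parsed as the port, producing the error seen in the logs.
        fmt.Println("http://" + host + ":" + port + "/stats/summary/")

        // net.JoinHostPort brackets IPv6 literals, yielding a valid host:port.
        fmt.Println("http://" + net.JoinHostPort(host, port) + "/stats/summary/")
    }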

Error: forbidden: User \"system:anonymous\" cannot get path \"/\".

Hi, I'm trying to run autoscaling in Kubernetes with metrics-server, but the target shows an error:

ubuntu@master:~/auto-scaling$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        0          10s

and when I try to reach the metrics-server with

> ubuntu@master:~/auto-scaling$ kubectl get svc --all-namespaces
> NAMESPACE     NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
> default       kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP         1d
> default       php-apache       ClusterIP   10.101.201.103   <none>        80/TCP          1m
> kube-system   kube-dns         ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   1d
> kube-system   metrics-server   ClusterIP   10.110.186.18    <none>        443/TCP         1d
> ubuntu@master:~/auto-scaling$ curl https://10.110.186.18 -k
> {
>   "kind": "Status",
>   "apiVersion": "v1",
>   "metadata": {},
>   "status": "Failure",
>   "message": "forbidden: User \"system:anonymous\" cannot get path \"/\".",
>   "reason": "Forbidden",
>   "details": {},
>   "code": 403
> }
> ubuntu@master:~/auto-scaling$ 

I can't access the metrics-server. I'm deploying a fresh Kubernetes cluster with kubeadm.

ubuntu@master:~/auto-scaling$ kubectl describe pod metrics-server-86bd9d7667-ghl8h -n kube-system
Name:           metrics-server-86bd9d7667-ghl8h
Namespace:      kube-system
Node:           worker0/10.200.200.20
Start Time:     Fri, 06 Jul 2018 04:48:37 +0200
Labels:         k8s-app=metrics-server
                pod-template-hash=4268583223
Annotations:    <none>
Status:         Running
IP:             10.244.1.30
Controlled By:  ReplicaSet/metrics-server-86bd9d7667
Containers:
  metrics-server:
    Container ID:  docker://7c7b6e4595225c479ae21d1075630402329c722eff93ad3534effe6bbaffea56
    Image:         gcr.io/google_containers/metrics-server-amd64:v0.2.1
    Image ID:      docker-pullable://gcr.io/google_containers/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892
    Port:          <none>
    Host Port:     <none>
    Command:
      /metrics-server
      --source=kubernetes.summary_api:''
    State:          Running
      Started:      Fri, 06 Jul 2018 04:48:49 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-8rgcx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  metrics-server-token-8rgcx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-server-token-8rgcx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
ubuntu@master:~/auto-scaling$ kubectl get node
NAME      STATUS    ROLES     AGE       VERSION                                                                                                                                                                     
master    Ready     master    1d        v1.11.0                                                                                                                                                                     
worker0   Ready     <none>    1d        v1.11.0                                                                                                                                                                     
ubuntu@master:~/auto-scaling$
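One note on the curl test above: an unauthenticated curl is always treated as system:anonymous, so that 403 on its own doesn't prove the deployment is broken. A sketch of an authenticated probe from inside a pod, assuming the default service-account token is mounted:

    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    curl -k -H "Authorization: Bearer $TOKEN" https://10.110.186.18/apis/metrics.k8s.io/v1beta1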

Crashed when I use it.

1 heapster.go:97] Could not create the API server: cluster doesn't provide requestheader-client-ca-file
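This error means the kube-apiserver has not been configured for API aggregation: metrics-server reads the request-header CA from the extension-apiserver-authentication configmap, which the apiserver only populates when the aggregation flags are set. A sketch of the kube-apiserver flags involved (the file paths are illustrative and must match your PKI layout):

    --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    --requestheader-allowed-names=front-proxy-client
    --requestheader-extra-headers-prefix=X-Remote-Extra-
    --requestheader-group-headers=X-Remote-Group
    --requestheader-username-headers=X-Remote-User
    --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key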

Metrics server API not getting registered

I have deployed the metrics API in Kubernetes following https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B. The metrics server is running fine, but I am not able to get metrics from it. I am using Kubernetes 1.9.

[demo@dev-demo metrics-server]$ kubectl get --raw "/apis/metrics.k8s.io"
Error from server (NotFound): the server could not find the requested resource
[demo@dev-demo metrics-server]$

[demo@dev-demo metrics-server]$ k get hpa -n default
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          19h
[demo@dev-demo metrics-server]$ k describe hpa -n default
Name:                                                  php-apache
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Wed, 21 Mar 2018 04:57:32 -0400
Reference:                                             Deployment/php-apache
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 50%
Min replicas:                                          1
Max replicas:                                          10
Conditions:
  Type            Status  Reason                   Message
  ----            ------  ------                   -------
  AbleToScale     True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
  Type     Reason                        Age                 From                       Message
  ----     ------                        ----                ----                       -------
  Warning  FailedComputeMetricsReplicas  43m (x13 over 49m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       4m (x91 over 49m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

metrics-server CrashLoopBackOff on EKS v1.10

I'm using a 3-node AWS EKS cluster. What I did:

  1. I copied the files from https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B into a folder on my local machine
  2. I installed the metrics-server to Kubernetes with kubectl apply -f metrics-server/
  3. I noticed metrics-server was in CrashLoopBackOff status:
$ kubectl get pods -n=kube-system
NAME                                    READY     STATUS             RESTARTS   AGE
aws-node-gnwpm                          1/1       Running            0          12m
aws-node-hggb6                          1/1       Running            1          1d
aws-node-qld6b                          1/1       Running            1          1d
kube-dns-64b69465b4-n8rcl               3/3       Running            0          1d
kube-proxy-4b5bp                        1/1       Running            0          1d
kube-proxy-7xp9l                        1/1       Running            0          1d
kube-proxy-ww4jl                        1/1       Running            0          1d
metrics-server-6fbfb84cdd-tprkc         0/1       CrashLoopBackOff   6          1h

Here are the metrics-server logs:

$ kubectl logs -n=kube-system metrics-server-6fbfb84cdd-tprkc
I0706 15:12:24.030768       1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0706 15:12:24.030819       1 heapster.go:72] Metrics Server version v0.2.1
I0706 15:12:24.031114       1 configs.go:61] Using Kubernetes client with master "https://172.20.0.1:443" and version 
I0706 15:12:24.031134       1 configs.go:62] Using kubelet port 10255
I0706 15:12:24.031960       1 heapster.go:128] Starting with Metric Sink
I0706 15:12:24.405616       1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0706 15:12:24.792046       1 authentication.go:222] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
F0706 15:12:24.792071       1 heapster.go:97] Could not create the API server: configmaps "extension-apiserver-authentication" not found

It says to run kubectl create rolebinding -n kube-system ROLE_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA, but I'm not exactly sure what to put for "ROLE_NAME" and "YOUR_NS:YOUR_SA". I tried kubectl create rolebinding -n kube-system system:metrics-server --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:metrics-server but had no luck.
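For what it's worth, the usual substitution is the metrics-server service account plus any unused binding name, something like:

    kubectl create rolebinding metrics-server-auth-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:metrics-server

Note, though, that the fatal log line says the extension-apiserver-authentication configmap itself does not exist, which no rolebinding can fix: that configmap is created by the kube-apiserver only when its requestheader/aggregation flags are configured, so on a managed control plane this likely needs a fix on the provider's side.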

RBAC Deny when requesting metrics

Hello,

I have a 3 node cluster on Virtual Machines (Kubernetes Version 1.9.0)

I added the following flags to the Kube API Server to enable aggregation

--requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/var/lib/kubernetes/kube-apiserver.pem \
  --enable-aggregator-routing=true \
  --proxy-client-key-file=/var/lib/kubernetes/kube-apiserver-key.pem

Since this is a non-production setup, I am using the same CA and certs that I use for the kube-apiserver.

Then I deployed the manifests located at https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy

When running, I see the following.

On the metrics-server pod:

Rahul@rahul-mbp ~/dev/2018-sandbox/k8s-on-vagrant/metrics-server (master) $ kubectl logs metrics-server-bb9ffc6b8-n8pt5 -n=kube-system
I0208 02:38:27.998984       1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:''
I0208 02:38:27.999186       1 heapster.go:72] Metrics Server version v0.2.1
I0208 02:38:27.999374       1 configs.go:61] Using Kubernetes client with master "https://10.32.0.1:443" and version
I0208 02:38:27.999429       1 configs.go:62] Using kubelet port 10255
I0208 02:38:28.000372       1 heapster.go:128] Starting with Metric Sink
I0208 02:38:28.107149       1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0208 02:38:28.384409       1 heapster.go:101] Starting Heapster API server...
[restful] 2018/02/08 02:38:28 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/02/08 02:38:28 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0208 02:38:28.385309       1 serve.go:85] Serving securely on 0.0.0.0:443
E0208 02:49:13.068371       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""
E0208 02:49:13.068739       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""
E0208 02:49:13.068953       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""
E0208 02:49:13.069191       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""
E0208 02:49:13.069387       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=37, ErrCode=NO_ERROR, debug=""
E0208 02:49:13.069673       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1603&timeoutSeconds=323&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:49:13.069716       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1603&timeoutSeconds=592&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:49:13.069835       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1603&timeoutSeconds=482&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:49:13.069878       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to watch *v1.Namespace: Get https://10.32.0.1:443/api/v1/namespaces?resourceVersion=1392&timeoutSeconds=347&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:49:13.069913       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to watch *v1.Pod: Get https://10.32.0.1:443/api/v1/pods?resourceVersion=1469&timeoutSeconds=478&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:49:16.917255       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope
E0208 02:49:16.928352       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list pods at the cluster scope
E0208 02:49:16.928398       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:49:16.928433       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:49:16.928456       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:50:36.827934       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:50:36.831242       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:50:36.833029       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:50:36.833845       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:50:36.834837       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:50:36.835548       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1621&timeoutSeconds=426&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:50:36.835749       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1621&timeoutSeconds=536&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:50:36.835862       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to watch *v1.Pod: Get https://10.32.0.1:443/api/v1/pods?resourceVersion=1603&timeoutSeconds=384&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:50:36.835994       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1621&timeoutSeconds=376&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:50:36.836131       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to watch *v1.Namespace: Get https://10.32.0.1:443/api/v1/namespaces?resourceVersion=1603&timeoutSeconds=459&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:50:40.546884       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:50:40.547699       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:50:40.547731       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:50:40.547759       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope
E0208 02:52:11.018945       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
E0208 02:52:11.019294       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
E0208 02:52:11.019906       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
E0208 02:52:11.020206       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
E0208 02:52:11.020512       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""
E0208 02:52:11.020821       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=522&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:11.020873       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=579&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:11.020909       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to watch *v1.Namespace: Get https://10.32.0.1:443/api/v1/namespaces?resourceVersion=1621&timeoutSeconds=372&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:11.024620       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to watch *v1.Pod: Get https://10.32.0.1:443/api/v1/pods?resourceVersion=1621&timeoutSeconds=540&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:11.024702       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=393&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:14.954742       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:52:14.955042       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list pods at the cluster scope
E0208 02:52:14.955075       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:52:14.955181       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope
E0208 02:52:14.965734       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope
E0208 02:52:19.873651       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:52:19.874168       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:52:19.874504       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:52:19.876888       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:52:19.877350       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=29, ErrCode=NO_ERROR, debug=""
E0208 02:52:19.877753       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to watch *v1.Pod: Get https://10.32.0.1:443/api/v1/pods?resourceVersion=1646&timeoutSeconds=447&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:19.877809       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=327&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:19.877844       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=576&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:19.877880       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to watch *v1.Namespace: Get https://10.32.0.1:443/api/v1/namespaces?resourceVersion=1646&timeoutSeconds=593&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:19.877920       1 reflector.go:315] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to watch *v1.Node: Get https://10.32.0.1:443/api/v1/nodes?resourceVersion=1646&timeoutSeconds=578&watch=true: dial tcp 10.32.0.1:443: getsockopt: connection refused
E0208 02:52:23.954419       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found, clusterrole.rbac.authorization.k8s.io "system:metrics-server" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
E0208 02:52:23.954480       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/heapster.go:254: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list pods at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found, clusterrole.rbac.authorization.k8s.io "system:metrics-server" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
E0208 02:52:23.958895       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:metrics-server" not found, clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
E0208 02:52:23.960312       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/processors/namespace_based_enricher.go:85: Failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list namespaces at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found, clusterrole.rbac.authorization.k8s.io "system:metrics-server" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
E0208 02:52:23.968730       1 reflector.go:205] github.com/kubernetes-incubator/metrics-server/metrics/util/util.go:52: Failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:metrics-server" cannot list nodes at the cluster scope: [clusterrole.rbac.authorization.k8s.io "system:auth-delegator" not found, clusterrole.rbac.authorization.k8s.io "system:metrics-server" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
W0208 02:55:26.283956       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]
W0208 02:55:35.160885       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]
W0208 02:55:44.656033       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]
W0208 02:56:26.260001       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]
W0208 02:57:26.320622       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]
W0208 02:58:26.315669       1 x509.go:168] x509: subject with cn=kubernetes is not in the allowed list: [aggregator]

On the kube-apiserver:

Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.868936   11124 rbac.go:116] RBAC DENY: user "system:serviceaccount:kube-system:metrics-server" groups ["system:serviceaccounts" "system:serviceaccounts:kube-system" "system:authenticated"] cannot "list" resource "pods" cluster-wide
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.868984   11124 wrap.go:42] GET /api/v1/pods?resourceVersion=0: (26.023079ms) 403 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.869161   11124 rbac.go:116] RBAC DENY: user "system:serviceaccount:kube-system:metrics-server" groups ["system:serviceaccounts" "system:serviceaccounts:kube-system" "system:authenticated"] cannot "list" resource "namespaces" cluster-wide
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.868892   11124 wrap.go:42] GET /api/v1/nodes?resourceVersion=0: (4.397189ms) 403 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.869215   11124 wrap.go:42] GET /api/v1/namespaces?resourceVersion=0: (25.664566ms) 403 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.873930   11124 rbac.go:116] RBAC DENY: user "system:serviceaccount:kube-system:metrics-server" groups ["system:serviceaccounts" "system:serviceaccounts:kube-system" "system:authenticated"] cannot "list" resource "nodes" cluster-wide
Feb 08 03:49:09 k8s-master kube-apiserver[11124]: I0208 03:49:09.873996   11124 wrap.go:42] GET /api/v1/nodes?resourceVersion=0: (29.451749ms) 403 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:10 k8s-master kube-apiserver[11124]: I0208 03:49:10.874144   11124 wrap.go:42] GET /api/v1/namespaces?resourceVersion=0: (758.615µs) 200 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:10 k8s-master kube-apiserver[11124]: I0208 03:49:10.875424   11124 wrap.go:42] GET /api/v1/nodes?resourceVersion=0: (606.465µs) 200 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:10 k8s-master kube-apiserver[11124]: I0208 03:49:10.876138   11124 wrap.go:42] GET /api/v1/nodes?resourceVersion=0: (252.071µs) 200 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:10 k8s-master kube-apiserver[11124]: I0208 03:49:10.877562   11124 wrap.go:42] GET /api/v1/nodes?resourceVersion=0: (230.239µs) 200 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]
Feb 08 03:49:10 k8s-master kube-apiserver[11124]: I0208 03:49:10.878229   11124 wrap.go:42] GET /api/v1/pods?resourceVersion=0: (1.375966ms) 200 [[metrics-server/v0.0.0 (linux/amd64) kubernetes/$Format] 172.178.205.102:49548]

kubectl just responds with:
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "kubernetes" cannot list nodes.metrics.k8s.io at the cluster scope.

I don't know where the user "kubernetes" is picked up from; my admin cert has CN=admin, and the certificate is signed by an issuer with CN=kubernetes.

I thought the deployment manifests of metrics-server would address its RBAC requirements. What other permissions does the metrics server need?
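Two hedged observations on the logs above. First, the warning x509: subject with cn=kubernetes is not in the allowed list: [aggregator] says the client certificate the apiserver proxies with has CN=kubernetes while only the name "aggregator" is allowed; that CN is also where the user "kubernetes" comes from, since a client certificate's CN becomes the username. Either reissue the proxy client cert with CN=aggregator, or allow the CN already in use, e.g.:

    --requestheader-allowed-names=aggregator,kubernetes

Second, whichever user then reaches the metrics API still needs RBAC permission to list nodes.metrics.k8s.io, which is what the kubectl Forbidden error is about.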

Can't access /apis/metrics.k8s.io/ from apiserver

Hi All,

I just upgraded k8s from 1.7.5 to 1.8.1 and deployed metrics-server with the default deployment manifests in the repo. Right now I can't get metrics from /apis/metrics.k8s.io/v1beta1/nodes or /apis/metrics.k8s.io/v1beta1/pods.

#curl --cacert ca.pem --cert apiserver.pem --key apiserver-key.pem  https://10.58.137.243:6443/apis/metrics.k8s.io/v1beta1/nodes 
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "nodes.metrics.k8s.io is forbidden: User \"xxx\" cannot list nodes.metrics.k8s.io at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "group": "metrics.k8s.io",
    "kind": "nodes"
  },
  "code": 403

xxx is the node FQDN, and I use the apiserver key/cert in the command parameters.

Also, the apiserver is passed the following parameters:

      --requestheader-client-ca-file=/srv/kubernetes/ca.pem
      --requestheader-allowed-names=aggregator
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --proxy-client-cert-file=/srv/kubernetes/kubelet-cert.pem
      --proxy-client-key-file=/srv/kubernetes/kubelet-key.pem
  1. As I just upgraded from a previous cluster, I didn't create a second CA authority for metrics-server. How do I leverage the current CA to make it work?

  2. Do I need to create a second cert/key, or users signed by the current CA, to get access to the metrics-server? Is an additional cluster role needed?

  3. Can we just use the current certs/keys/users to access these metrics, since the current user can access the other APIs exposed by the apiserver?
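On question 2: the Forbidden response is ordinary RBAC, so the existing CA and client certs can work once the certificate's user is granted read access to the metrics group. A minimal sketch (the role name is illustrative):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: metrics-reader
    rules:
    - apiGroups: ["metrics.k8s.io"]
      resources: ["nodes", "pods"]
      verbs: ["get", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: metrics-reader
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: metrics-reader
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: xxx   # the CN of the client certificate used with curl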

k8s info:

 kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

docker info:

#  docker info
Containers: 171
 Running: 83
 Paused: 0
 Stopped: 88
Images: 456
Server Version: 1.12.6
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: xfs
 Dirs: 1061
 Dirperm1 Supported: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: host null bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.8.0-58-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 62.79 GiB
Name: cnpvgl56588417
ID: RBIH:KI5N:HFUQ:JBTQ:ENOS:QLZV:T3T2:TGXT:VM5H:SWVO:ARDN:W33P
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 383
 Goroutines: 360
 System Time: 2017-10-17T17:39:26.358987804+08:00
 EventsListeners: 0
Http Proxy: http://proxy.pvgl.sap.corp:8080
Https Proxy: http://proxy.pvgl.sap.corp:8080
No Proxy: registry.gcsc.sap.corp
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

Kubelet read-only port is now deprecated, so what now?

I have deployed a Kubernetes cluster v1.10.4. Within the Kubespray deployment plans, the option to enable port 10255 is nowhere to be found, and the port is deprecated.

What is the team doing to address this? I get errors whenever I try to fetch metrics data.
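For reference, one possible direction with the heapster-based metrics-server (v0.2.x) is to point the summary-API source at the kubelet's authenticated port 10250 instead of the read-only 10255. The option names below are the heapster source parameters and should be verified against your version:

    - --source=kubernetes.summary_api:''?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true

Here insecure=true skips verification of the kubelet serving certificate; drop it if your kubelet certs are signed by the cluster CA.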

Window field in PodMetrics and NodeMetrics

Currently the Window field in both PodMetrics and NodeMetrics is hard-coded to one minute. Is this the desired behavior, or should the value reflect the --metric_resolution configuration?

Relevant code can be found here and here.

apiserver panic'd on GET /apis/metrics.k8s.io/v1beta1/nodes

  • Kubernetes 1.10.2
  • Metrics server v0.2.1 (metrics-server-arm64 Docker image)

I tried metrics-server on my ARM-based home lab but am having some issues getting it working. FWIW I don't have any errors or other problems if I use Heapster; it seems to be related to the APIService. I deployed using the deploy/1.8+ manifests from the repo.

After deploying, I try to run kubectl top nodes; sometimes it works, but more often than not I get the following error.

error: Stream error http2.StreamError{StreamID:0x5, Code:0x2, Cause:error(nil)} when reading response body, may be caused by closed connection. Please retry.

Here's the traceback in the apiserver logs:

E0603 15:40:19.841397       1 runtime.go:66] Observed a panic: &errors.errorString{s:"killing connection/stream because serving request timed out and response had been started"} (killing connection/stream because serving request timed out and response had been started)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_arm.s:432
/usr/local/go/src/runtime/panic.go:491
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:217
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:101
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:51
/usr/local/go/src/net/http/server.go:1918
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:45
/usr/local/go/src/net/http/server.go:1918
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110
/usr/local/go/src/net/http/server.go:1918
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41
/usr/local/go/src/net/http/server.go:1918
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:197
/usr/local/go/src/net/http/server.go:2619
/usr/local/go/src/net/http/server.go:3164
<autogenerated>:1
/usr/local/go/src/net/http/h2_bundle.go:5462
/usr/local/go/src/net/http/h2_bundle.go:5747
/usr/local/go/src/runtime/asm_arm.s:971
E0603 15:40:19.866226       1 wrap.go:34] apiserver panic'd on GET /apis/metrics.k8s.io/v1beta1/nodes: killing connection/stream because serving request timed out and response had been started
goroutine 4117171 [running]:
runtime/debug.Stack(0x85e9ff0, 0x247c9900, 0x2d99aa5)
        /usr/local/go/src/runtime/debug/stack.go:24 +0x80
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPanicRecovery.func1.1(0x2738f10, 0x19477b78)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:34 +0x4c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x250cce84, 0x1, 0x1)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:54 +0xb8
panic(0x2738f10, 0x19477b78)
        /usr/local/go/src/runtime/panic.go:491 +0x204
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0x26ad95c0, 0x22fe6d20)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:217 +0x13c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0x203e9dc0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:101 +0x20c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:51 +0xcc
net/http.HandlerFunc.ServeHTTP(0x20548da0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:45 +0x17c
net/http.HandlerFunc.ServeHTTP(0x20548dc0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xa8
net/http.HandlerFunc.ServeHTTP(0x203e9dd0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPanicRecovery.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41 +0xd4
net/http.HandlerFunc.ServeHTTP(0x203e9de0, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0x20548de0, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:197 +0x40
net/http.serverHandler.ServeHTTP(0x22833200, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:2619 +0x74
net/http.initNPNRequest.ServeHTTP(0x1c640400, 0x22833200, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:3164 +0x60
net/http.(*initNPNRequest).ServeHTTP(0x20715d28, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        <autogenerated>:1 +0x54
net/http.(Handler).ServeHTTP-fm(0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/h2_bundle.go:5462 +0x3c
net/http.(*http2serverConn).runHandler(0x1fbc67e0, 0x1bec25b0, 0x247b7680, 0x1f873320)
        /usr/local/go/src/net/http/h2_bundle.go:5747 +0x70
created by net/http.(*http2serverConn).processHeaders
        /usr/local/go/src/net/http/h2_bundle.go:5481 +0x3a4

I0603 15:40:19.879031       1 logs.go:49] http2: panic serving 192.168.1.154:53446: killing connection/stream because serving request timed out and response had been started
goroutine 4117171 [running]:
net/http.(*http2serverConn).runHandler.func1(0x1bec25b0, 0x250ccfd8, 0x1fbc67e0)
        /usr/local/go/src/net/http/h2_bundle.go:5740 +0x13c
panic(0x2738f10, 0x19477b78)
        /usr/local/go/src/runtime/panic.go:491 +0x204
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x250cce84, 0x1, 0x1)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0xfc
panic(0x2738f10, 0x19477b78)
        /usr/local/go/src/runtime/panic.go:491 +0x204
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0x26ad95c0, 0x22fe6d20)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:217 +0x13c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0x203e9dc0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:101 +0x20c
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:51 +0xcc
net/http.HandlerFunc.ServeHTTP(0x20548da0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:45 +0x17c
net/http.HandlerFunc.ServeHTTP(0x20548dc0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xa8
net/http.HandlerFunc.ServeHTTP(0x203e9dd0, 0x85e9ff0, 0x247c9900, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPanicRecovery.func1(0x85e9ff0, 0x247c9900, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:41 +0xd4
net/http.HandlerFunc.ServeHTTP(0x203e9de0, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:1918 +0x34
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0x20548de0, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:197 +0x40
net/http.serverHandler.ServeHTTP(0x22833200, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:2619 +0x74
net/http.initNPNRequest.ServeHTTP(0x1c640400, 0x22833200, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/server.go:3164 +0x60
net/http.(*initNPNRequest).ServeHTTP(0x20715d28, 0x85ea8d0, 0x1bec25b0, 0x247b7680)
        <autogenerated>:1 +0x54
net/http.(Handler).ServeHTTP-fm(0x85ea8d0, 0x1bec25b0, 0x247b7680)
        /usr/local/go/src/net/http/h2_bundle.go:5462 +0x3c
net/http.(*http2serverConn).runHandler(0x1fbc67e0, 0x1bec25b0, 0x247b7680, 0x1f873320)
        /usr/local/go/src/net/http/h2_bundle.go:5747 +0x70
created by net/http.(*http2serverConn).processHeaders
        /usr/local/go/src/net/http/h2_bundle.go:5481 +0x3a4

Empty node metrics with Kubernetes v1.11.0

When using the metrics server with the latest version of Kubernetes, the command kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes outputs

{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[]}

where there is an empty list for the node metrics, even though this is on a cluster I know has nodes, as kubectl get nodes prints out

NAME               STATUS    ROLES     AGE       VERSION
ip-172-31-17-53    Ready     <none>    21m       v1.11.0
ip-172-31-23-195   Ready     <none>    48s       v1.11.0
ip-172-31-24-3     Ready     master    25m       v1.11.0
ip-172-31-27-90    Ready     <none>    21m       v1.11.0
ip-172-31-31-206   Ready     <none>    21m       v1.11.0

inaccessibility to registered metrics-server causes apiserver slowness and controller-manager failure

Hi, thanks for maintaining the great project 👍

Are you aware of kubernetes/kubernetes#56430 and kubernetes-retired/kube-aws#1039?

In a nutshell, the apiserver seems to become extremely slow when it is unable to communicate with the metrics-server service, which causes a continuous controller-manager failure until you remove the apiservice v1beta1.metrics.k8s.io.

Questions:

  • Would there be something we can fix on the metrics-server side?
  • Or should we just fix the apiserver so it doesn't misbehave when just one of the registered apiservices is inaccessible?
  • Would there be a reliable workaround for this?
  • Am I missing something?
    • Is this due to a misconfiguration on our (the users') side?

Thinking:

For example, kube-aws is currently unable to perform rolling updates of controller nodes due to this behavior.

My guess about what's going on is:

  • The apiserver in a newly created controller node doesn't get the iptables rule mapping the service's clusterIP to the metrics-server podIP until the controller-manager is fully up. The controller-manager requires the apiserver to be responsive to GET queries in order to fully start, but the apiserver is unable to do so without the controller-manager. Deadlock.

In kube-aws, we're going to work around it by removing the apiservice (by running kubectl delete apiservice v1beta1.metrics.k8s.io) at a very early stage of controller-node bootstrap. However, IMHO this is a fragile workaround, as we don't know exactly when the apiserver becomes that slow.

No authentication for metrics API

I have installed metrics server on a K8s 1.9 server using the manifests in the deploy folder of this repo.

I then tried to access the metrics API through my K8s API server at /apis/metrics.k8s.io/v1beta1/pods/

I get the response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods.metrics.k8s.io is forbidden: User \"system:anonymous\" cannot list pods.metrics.k8s.io at the cluster scope.",
  "reason": "Forbidden",
  "details": {
    "group": "metrics.k8s.io",
    "kind": "pods"
  },
  "code": 403
}

The cluster itself uses RBAC, and the request hits the K8s API server with an Authorization header and a JWT. This authentication works for every other API endpoint, but it does not seem to work for the metrics server.

I have tried connecting directly to the metrics server on its pod IP; there I can set the Authorization header and it authenticates me correctly.

Having looked through the apiserver codebase, it looks as though it doesn't pass the bearer token on:
https://github.com/kubernetes/kubernetes/blob/6dab46e3fbf1c799673750a0ca635d9ee515ec0d/staging/src/k8s.io/apiserver/pkg/authentication/request/bearertoken/bearertoken.go#L58-L60

I've also added these flags to the API server, but they didn't seem to help:

          - --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem
          - --requestheader-allowed-names=
          - --requestheader-extra-headers-prefix=X-Remote-Extra-
          - --requestheader-group-headers=X-Remote-Group
          - --requestheader-username-headers=X-Remote-User

So my question is: how do I authenticate to the metrics server through the main K8s API server? It doesn't seem to pass through. Has anyone got an RBAC'd setup on K8s 1.9 with examples I can compare with?
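That reading of the code is essentially right: the aggregator deliberately strips the Authorization header, authenticates the user itself, and forwards the identity as X-Remote-User/X-Remote-Group headers, proving its own identity to the backend with a proxy client certificate. A "system:anonymous" result through the apiserver therefore usually means the proxy client certificate flags are missing; a sketch (paths illustrative):

    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem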

Compatibility with Kubernetes v1.11.0

I was updating from v1.10.4 to v1.11.0 and I got this error in the kube-controller-manager pods:

1 controllermanager.go:174] error starting controllers: failed to discover resources: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: Unauthorized

I had to undeploy metrics-server to finish the update.

Is metrics-server compatible with 1.11.0?

Refactor method receiver names to obey Go conventions

Go convention

The name of a method's receiver should be a reflection of its identity; often a one or two letter abbreviation of its type suffices (such as "c" or "cl" for "Client"). Don't use generic names such as "me", "this" or "self", identifiers typical of object-oriented languages that place more emphasis on methods as opposed to functions. The name need not be as descriptive as that of a method argument, as its role is obvious and serves no documentary purpose. It can be very short as it will appear on almost every line of every method of the type; familiarity admits brevity. Be consistent, too: if you call the receiver "c" in one method, don't call it "cl" in another.

There are many methods with this or self as the receiver name, e.g.:

func (this *SinkFactory) Build(uri flags.Uri) (core.DataSink, error) {
    ...
}

func (self *KubeletClient) postRequestAndGetValue(client *http.Client, req *http.Request, value interface{}) error {
   ...
}
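For illustration, the conventional form uses a short abbreviation of the type as the receiver. A tiny self-contained sketch (the type is illustrative, not actual metrics-server code):

    package main

    import "fmt"

    // KubeletClient is an illustrative stand-in for the real type.
    type KubeletClient struct {
        host string
    }

    // Receiver "c" abbreviates the type, per the convention, instead of
    // "self" or "this".
    func (c *KubeletClient) Host() string {
        return c.host
    }

    func main() {
        c := &KubeletClient{host: "node-1"}
        fmt.Println(c.Host())
    }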

Can I create a PR to fix it?

/cc @DirectXMan12

Deploy YAML files are broken for Kubernetes 1.8.x

I was trying to install metrics-server manually by applying all YAML files under the path deploy/1.8+. But surprisingly kubectl reported an error:

error: error validating "resource-reader.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"rbac.authorization.k8s.io", Version:"v1", Kind:"ClusterRole"}; if you choose to ignore these errors, turn validation off with --validate=false

My Kubernetes version is 1.8.3. After some investigation, I changed rbac.authorization.k8s.io/v1 to rbac.authorization.k8s.io/v1beta1 and applied again, and the error was gone.

So, should we update the YAML files specifically for 1.8.x to fix this problem?
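For concreteness, the fix described above is a one-line apiVersion edit, e.g. in resource-reader.yaml (the ClusterRole name is taken from the repo's manifests; verify against your copy):

    apiVersion: rbac.authorization.k8s.io/v1beta1   # changed from rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: system:metrics-server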

unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

Hello, I installed the metrics server locally with minikube and it worked like a charm!

But when I try to install it on my servers, it doesn't seem to work. It's like my HPA doesn't see it at all.

What I did:

  1. Init my cluster
  2. Use Weave as my CNI
  3. Join a node
  4. Install the metrics server by cloning the repo and running kubectl create -f deploy/1.8+
  5. Create a deployment for nginx
  6. Create an HPA

I get this error with kubectl describe hpa:

  Warning  FailedComputeMetricsReplicas  23m (x13 over 29m)  horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       19m (x21 over 29m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedComputeMetricsReplicas  7m (x13 over 13m)   horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)
  Warning  FailedGetResourceMetric       3m (x21 over 13m)   horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)

I am using Kubernetes 1.9.3

The metrics server seems to be running okay:

I0228 21:26:58.367749       1 heapster.go:72] Metrics Server version v0.2.1
I0228 21:26:58.368133       1 configs.go:61] Using Kubernetes client with master "https://10.96.0.1:443" and version 
I0228 21:26:58.368155       1 configs.go:62] Using kubelet port 10255
I0228 21:26:58.369592       1 heapster.go:128] Starting with Metric Sink
I0228 21:26:58.900975       1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
I0228 21:26:59.394026       1 heapster.go:101] Starting Heapster API server...
[restful] 2018/02/28 21:26:59 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/02/28 21:26:59 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0228 21:26:59.395431       1 serve.go:85] Serving securely on 0.0.0.0:443
