
apiserver's Introduction

apiserver

Generic library for building a Kubernetes aggregated API server.

Purpose

This library contains code to create a Kubernetes aggregated API server, complete with delegated authentication and authorization, kubectl-compatible discovery information, an optional admission chain, and versioned types. Its first consumers are k8s.io/kubernetes, k8s.io/kube-aggregator, and github.com/kubernetes-incubator/service-catalog.

Compatibility

There are NO compatibility guarantees for this repository, yet. It is in direct support of Kubernetes, so branches will track Kubernetes and be compatible with that repo. As we more cleanly separate the layers, we will review the compatibility guarantee. We have a goal to make this easier to use in the future.

Where does it come from?

apiserver is synced from https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver. Code changes are made in that location, merged into k8s.io/kubernetes and later synced here.

Things you should NOT do

  1. Directly modify any files under pkg in this repo. Those are driven from k8s.io/kubernetes/staging/src/k8s.io/apiserver.
  2. Expect compatibility. This repo is changing quickly in direct support of Kubernetes and the API isn't yet stable enough for API guarantees.

apiserver's People

Contributors

alexzielenski, apelisse, aramase, caoshufeng, cici37, deads2k, dims, enj, hzxuzhonghu, jefftree, jiahuif, jpbetz, k8s-publish-robot, k8s-publishing-bot, liggitt, logicalhan, mbohlool, mikedanese, mikespreitzer, p0lyn0mial, pacoxu, pohly, roycaihw, smarterclayton, stevekuznetsov, sttts, tallclair, thockin, tkashem, wojtek-t


apiserver's Issues

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

An apparent bug with mixing authorization and CORS

When running kube-apiserver with both authorization and CORS, it seems that OPTIONS pre-flight requests are checked for the Authorization header and rejected. According to the CORS specification, Authorization headers are always excluded from OPTIONS pre-flight requests:

For a CORS-preflight request, request’s credentials mode is always "same-origin", i.e., it excludes credentials, but for any subsequent CORS requests it might not be. Support therefore needs to be indicated as part of the HTTP response to the CORS-preflight request as well.

This is a huge blocker for writing any authenticated browser-based UI that can make calls against kube-apiserver, from what I can tell.

Hopefully this is just something misconfigured on our end, and I'm just misunderstanding.
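For reference, a minimal sketch (plain net/http, not the actual apiserver filter chain) of the behavior being asked for: answer CORS preflight OPTIONS requests before authentication, since preflights carry no credentials:

package main

import "net/http"

func withPreflightBypass(authed http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodOptions && r.Header.Get("Access-Control-Request-Method") != "" {
			// CORS preflight: reply with the allowed methods/headers and skip auth.
			w.Header().Set("Access-Control-Allow-Origin", r.Header.Get("Origin"))
			w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE")
			w.Header().Set("Access-Control-Allow-Headers", "Authorization, Content-Type")
			w.WriteHeader(http.StatusNoContent)
			return
		}
		authed.ServeHTTP(w, r) // real requests still go through authn/authz
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })
	http.ListenAndServe(":8080", withPreflightBypass(mux))
}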

DecisionNoOpinion when denied field is missing

The webhook authorizer checks the denied field of SubjectAccessReviewStatus to decide whether access is denied. According to the SubjectAccessReviewStatus docs, the denied field is optional. The same doc also states that "If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action". That makes perfect sense when both fields are set explicitly, but when denied is missing, the bool's default value kicks in and the result is the same. This is an example response from the API server:

apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: [email protected]
  resourceAttributes:
    group: servicecatalog.k8s.io
    resource: serviceinstance
    verb: list
    namespace: stage
EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
metadata:
  creationTimestamp: null
spec:
  resourceAttributes:
    group: servicecatalog.k8s.io
    namespace: stage
    resource: serviceinstance
    verb: list
  user: [email protected]
status:
  allowed: false

The status clearly indicates that access is not allowed, but the webhook authorizer returns DecisionNoOpinion in this scenario. I'm using Kubernetes version 1.10.
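For clarity, a minimal sketch (not the real webhook authorizer code) of how the allowed/denied pair maps to a decision, showing why a response with only allowed: false yields DecisionNoOpinion rather than DecisionDeny:

package main

import "fmt"

type Decision int

const (
	DecisionNoOpinion Decision = iota
	DecisionAllow
	DecisionDeny
)

type SubjectAccessReviewStatus struct {
	Allowed bool
	Denied  bool // optional: defaults to false when omitted
}

func decide(s SubjectAccessReviewStatus) Decision {
	switch {
	case s.Allowed:
		return DecisionAllow
	case s.Denied:
		return DecisionDeny
	default:
		// allowed=false with denied unset (false) is indistinguishable from an
		// explicit "no opinion", which is the behavior reported in this issue.
		return DecisionNoOpinion
	}
}

func main() {
	fmt.Println(decide(SubjectAccessReviewStatus{Allowed: false})) // 0 = NoOpinion
}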

DestroyFunc is not exposed via rest.Storage nor called by the ApiServer

It becomes impossible to call the DestroyFunc passed into the registry.Store struct after registering this storage into the genericAPIServer.InstallAPIGroup() method.

This is either a bug in the API server, or a misunderstanding on the implementer's side about whose responsibility it is to call DestroyFunc.

As a first step, I propose adding a method to registry.Store that allows the DestroyFunc to be invoked, and a new interface to rest.Storage that exposes that new method.

Then we can discuss how API server shutdown should know to call the new method.
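For illustration, a minimal sketch of the proposed shape; the names DestroyableStorage and podStorage are hypothetical, not existing API:

package main

import "fmt"

// DestroyableStorage is the proposed interface addition alongside rest.Storage.
type DestroyableStorage interface {
	Destroy()
}

// podStorage stands in for a registry.Store-backed storage implementation
// that was handed a destroy function when its backend was constructed.
type podStorage struct {
	destroyFunc func()
}

func (s *podStorage) Destroy() {
	if s.destroyFunc != nil {
		s.destroyFunc()
	}
}

func main() {
	s := &podStorage{destroyFunc: func() { fmt.Println("closing storage backend") }}

	// On server shutdown, anything implementing the new interface gets destroyed.
	var storage interface{} = s
	if d, ok := storage.(DestroyableStorage); ok {
		d.Destroy()
	}
}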

Relates to:
kubernetes/kubernetes#50690
kubernetes/kubernetes#53617
kubernetes-retired/service-catalog#1649

Add support for domain name as advertise address

Is there any reason not to allow the API server to advertise a domain name rather than just an IP address?

This would be useful when you want to have multiple masters behind a load balancer and using an IP address is not suitable, for example with an AWS ELB.

Please correct me if I'm wrong here, but from what I can tell the bind address option is the address the API server listens on, while the advertise address is what it tells other components to use for communication.

Add PrepareForDelete to RESTGracefulDeleteStrategy

There is a desire in https://github.com/kubernetes-incubator/service-catalog to have a PrepareForDelete function in the RESTGracefulDeleteStrategy interface similar to PrepareForCreate and PrepareForUpdate functions in the RESTCreateStrategy and RESTUpdateStrategy interfaces. When ServiceInstance and ServiceBinding resources are deleted in service-catalog, the user that requested the delete is stored in the resource to be sent to the broker. Currently, service-catalog uses the CheckGracefulDelete function to get the user from the context to add it to the resource. That does not seem like an appropriate use of that function.
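For illustration, a minimal sketch of the requested extension; the interface below is the proposal, not existing API:

package main

import (
	"context"
	"fmt"
)

// Object stands in for runtime.Object in this sketch.
type Object interface{}

// RESTGracefulDeleteStrategyWithPrepare is the proposed shape, mirroring
// PrepareForCreate/PrepareForUpdate.
type RESTGracefulDeleteStrategyWithPrepare interface {
	// PrepareForDelete runs before the delete is persisted, so a strategy can
	// record request-scoped data (e.g. the deleting user) on the object.
	PrepareForDelete(ctx context.Context, obj Object)
}

// serviceInstanceStrategy is a hypothetical service-catalog strategy.
type serviceInstanceStrategy struct{}

func (serviceInstanceStrategy) PrepareForDelete(ctx context.Context, obj Object) {
	// service-catalog would copy the requesting user from ctx into the resource here.
	fmt.Println("stashing deleting user on", obj)
}

func main() {
	var s RESTGracefulDeleteStrategyWithPrepare = serviceInstanceStrategy{}
	s.PrepareForDelete(context.Background(), "ServiceInstance/foo")
}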

[BUG] Connection refused for apiserver

Background

I followed the Kubernetes guide to set up a basic K8S cluster with default parameters, except for the following two options added to kube-apiserver.yaml:

  - --insecure-bind-address=0.0.0.0
  - --insecure-port=8090

My full kube-apiserver.yaml is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.6.1_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://192.168.57.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8090
    - --advertise-address=192.168.57.130
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Now when I start the kubelet I see the following errors.

This is with systemctl status kubelet

● kubelet.service
   Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-05-14 08:54:41 UTC; 4min 31s ago
  Process: 14968 ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin (code=exited, status=0/SUCCESS)
  Process: 14956 ExecStartPre=/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid (code=exited, status=254)
  Process: 14952 ExecStartPre=/usr/bin/mkdir -p /var/log/containers (code=exited, status=0/SUCCESS)
  Process: 14943 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
 Main PID: 14972 (kubelet)
    Tasks: 16 (limit: 32768)
   Memory: 1.3G
      CPU: 40.662s
   CGroup: /system.slice/kubelet.service
           ├─14972 /kubelet --api-servers=http://127.0.0.1:8080 --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=docker --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.57.130 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
           └─15165 journalctl -k -f

May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.170585   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.171555   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.172413   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.171287   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.172360   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.173376   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.169077   14972 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.130' not found
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.171928   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.172765   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.173750   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused

Other Logs from /var/log/pods

{"log":"E0514 09:02:28.606961       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:443/api/v1/serviceaccounts?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.60733353Z"}
{"log":"E0514 09:02:28.607194       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.607413819Z"}
{"log":"E0514 09:02:28.607719       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:443/api/v1/limitranges?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.607890803Z"}
{"log":"E0514 09:02:28.609090       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:443/api/v1/resourcequotas?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.609334802Z"}
{"log":"E0514 09:02:28.617184       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.617450991Z"}
{"log":"E0514 09:02:28.628247       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:443/api/v1/namespaces?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.628501464Z"}
{"log":"[restful] 2017/05/14 09:02:28 log.go:30: [restful/swagger] listing is available at https://192.168.57.130:443/swaggerapi/\n","stream":"stderr","time":"2017-05-14T09:02:28.657301606Z"}
{"log":"[restful] 2017/05/14 09:02:28 log.go:30: [restful/swagger] https://192.168.57.130:443/swaggerui/ is mapped to folder /swagger-ui/\n","stream":"stderr","time":"2017-05-14T09:02:28.657350995Z"}
{"log":"I0514 09:02:28.863874       1 serve.go:79] Serving securely on 0.0.0.0:443\n","stream":"stderr","time":"2017-05-14T09:02:28.864169072Z"}
{"log":"I0514 09:02:28.864109       1 serve.go:94] Serving insecurely on 0.0.0.0:8090\n","stream":"stderr","time":"2017-05-14T09:02:28.864209629Z"}
{"log":"E0514 09:02:29.349333       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T09:02:29.349625692Z"}
{"log":"E0514 09:02:29.381326       1 client_ca_hook.go:58] rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T09:02:29.381658997Z"}

I also came across kubernetes/kubeadm#226. I'm not sure whether it's related. Please let me know if you need more information.

Go 1.10: x509_test.go:700: server cert: Expected error, got none

1.9.6 does not pass unit tests with Go 1.10. At least:

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\'''
--- FAIL: TestX509 (0.02s)
	x509_test.go:700: server cert: Expected error, got none
--- FAIL: TestX509Verifier (0.00s)
	x509_test.go:855: server cert disallowed: Expected error, got none
FAIL
exit status 1
FAIL	k8s.io/apiserver/pkg/authentication/request/x509	0.040s

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\'''
# k8s.io/apiserver/pkg/authentication/token/cache
./cache_test.go:80: Errorf format %v reads arg #3, but call has only 2 args
./cache_test.go:86: Errorf format %v reads arg #3, but call has only 2 args
FAIL	k8s.io/apiserver/pkg/authentication/token/cache [build failed]

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\'''
# k8s.io/apiserver/pkg/endpoints
./apiserver_test.go:1961: Fatalf format %v reads arg #2, but call has only 1 arg
./apiserver_test.go:1965: Fatalf format %#v reads arg #2, but call has only 1 arg
./apiserver_test.go:2086: Errorf format %v reads arg #2, but call has only 1 arg
./apiserver_test.go:2091: Errorf format %#v reads arg #2, but call has only 1 arg

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\'''
# k8s.io/apiserver/pkg/endpoints/filters
./audit_test.go:851: Errorf format %p has arg resp.Header.Get("Audit-ID") of wrong type string
FAIL    k8s.io/apiserver/pkg/endpoints/filters [build failed]

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  '\'''
# k8s.io/apiserver/pkg/storage/etcd3
./store_test.go:1196: Fatal call has possible formatting directive %v
FAIL	k8s.io/apiserver/pkg/storage/etcd3 [build failed]

And for 1.7.15:

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.7.15/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  '\'''
# k8s.io/apiserver/pkg/endpoints
./apiserver_test.go:1730: Errorf format %d has arg resp of wrong type *net/http.Response
./apiserver_test.go:1735: Errorf format %d has arg resp of wrong type *net/http.Response
FAIL	k8s.io/apiserver/pkg/endpoints [build failed]

Audit logs

Hello,

I have a problem with the apiserver: it records audit events for certain patches, and this makes them slow. I currently use it with default settings, so in theory it should not have to record any audit events, but somehow it does.

For the cluster I use Minikube v1.19.0 with Kubernetes v1.21.2.

That's the log:

I0716 12:17:04.198943       1 queueset.go:305] QS(workload-low): Context of request "service-accounts" &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/default/pods", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"default", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} &user.DefaultInfo{Name:"system:serviceaccount:default:default", UID:"e7d9a12e-48c2-4391-9e5a-3b77a8507c72", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:default", "system:authenticated"}, Extra:map[string][]string{"authentication.kubernetes.io/pod-name":[]string{"rtpe-controller-795ffd98c-r6vl4"}, "authentication.kubernetes.io/pod-uid":[]string{"15e8f728-3ba5-445c-88dc-a44d70a234fe"}}} is Done
I0716 12:17:04.210936       1 trace.go:205] Trace[581813521]: "Patch" url:/apis/l7mp.io/v1/namespaces/default/rules/worker-rtp-rule-30383040-fromtag3038,user-agent:kopf/1.32.1,client:172.17.0.3,accept:*/*,protocol:HTTP/1.1 (16-Jul-2021 12:17:01.799) (total time: 2411ms):
Trace[581813521]: ---"Recorded the audit event" 2374ms (12:17:00.173)
Trace[581813521]: ---"About to apply patch" 0ms (12:17:00.173)
Trace[581813521]: ---"About to check admission control" 6ms (12:17:00.180)
Trace[581813521]: ---"Object stored in database" 29ms (12:17:00.210)
Trace[581813521]: ---"Self-link added" 0ms (12:17:00.210)
Trace[581813521]: [2.41185977s] [2.41185977s] END
I0716 12:17:04.211241       1 queueset.go:732] QS(workload-low) at r=2021-07-16 12:17:04.211216353 v=31.852181904s: request &request.RequestInfo{IsResourceRequest:true, Path:"/apis/l7mp.io/v1/namespaces/default/rules/worker-rtp-rule-30383040-fromtag3038", Verb:"patch", APIPrefix:"apis", APIGroup:"l7mp.io", APIVersion:"v1", Namespace:"default", Resource:"rules", Subresource:"", Name:"worker-rtp-rule-30383040-fromtag3038", Parts:[]string{"rules", "worker-rtp-rule-30383040-fromtag3038"}} &user.DefaultInfo{Name:"system:serviceaccount:default:l7mp-account-chart-1626433631", UID:"6ffade7d-19ba-4192-b753-c9325640bbe6", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:default", "system:authenticated"}, Extra:map[string][]string{"authentication.kubernetes.io/pod-name":[]string{"l7mp-operator-5fc45f5b9c-lddsm"}, "authentication.kubernetes.io/pod-uid":[]string{"620928fa-5435-4dad-a3a8-f63b38dd5a53"}}} finished, adjusted queue 38 virtual start time to 751.633329449s due to service time 2.416107905s, queue will have 0 waiting & 12 executing
I0716 12:17:04.211314       1 apf_filter.go:160] Handle(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:true, Path:"/apis/l7mp.io/v1/namespaces/default/rules/worker-rtp-rule-30383040-fromtag3038", Verb:"patch", APIPrefix:"apis", APIGroup:"l7mp.io", APIVersion:"v1", Namespace:"default", Resource:"rules", Subresource:"", Name:"worker-rtp-rule-30383040-fromtag3038", Parts:[]string{"rules", "worker-rtp-rule-30383040-fromtag3038"}}, User: &user.DefaultInfo{Name:"system:serviceaccount:default:l7mp-account-chart-1626433631", UID:"6ffade7d-19ba-4192-b753-c9325640bbe6", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:default", "system:authenticated"}, Extra:map[string][]string{"authentication.kubernetes.io/pod-name":[]string{"l7mp-operator-5fc45f5b9c-lddsm"}, "authentication.kubernetes.io/pod-uid":[]string{"620928fa-5435-4dad-a3a8-f63b38dd5a53"}}}}) => fsName="service-accounts", distMethod=&v1beta1.FlowDistinguisherMethod{Type:"ByUser"}, plName="workload-low", isExempt=false, queued=true, Finish() => panicking=false idle=false
I0716 12:17:04.211408       1 httplog.go:89] "HTTP" verb="PATCH" URI="/apis/l7mp.io/v1/namespaces/default/rules/worker-rtp-rule-30383040-fromtag3038" latency="2.434419481s" userAgent="kopf/1.32.1" srcIP="172.17.0.3:36016" resp=200
I0716 12:17:04.212052       1 queueset.go:305] QS(workload-low): Context of request "service-accounts" &request.RequestInfo{IsResourceRequest:true, Path:"/apis/l7mp.io/v1/namespaces/default/rules/worker-rtp-rule-30383040-fromtag3038", Verb:"patch", APIPrefix:"apis", APIGroup:"l7mp.io", APIVersion:"v1", Namespace:"default", Resource:"rules", Subresource:"", Name:"worker-rtp-rule-30383040-fromtag3038", Parts:[]string{"rules", "worker-rtp-rule-30383040-fromtag3038"}} &user.DefaultInfo{Name:"system:serviceaccount:default:l7mp-account-chart-1626433631", UID:"6ffade7d-19ba-4192-b753-c9325640bbe6", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:default", "system:authenticated"}, Extra:map[string][]string{"authentication.kubernetes.io/pod-name":[]string{"l7mp-operator-5fc45f5b9c-lddsm"}, "authentication.kubernetes.io/pod-uid":[]string{"620928fa-5435-4dad-a3a8-f63b38dd5a53"}}} is Done
I0716 12:17:04.216521       1 trace.go:205] Trace[814898886]: "Create" url:/api/v1/namespaces/default/events,user-agent:kopf/1.32.1,client:172.17.0.3,accept:*/*,protocol:HTTP/1.1 (16-Jul-2021 12:17:01.796) (total time: 2419ms):
Trace[814898886]: ---"About to convert to expected version" 2376ms (12:17:00.173)
Trace[814898886]: ---"Conversion done" 0ms (12:17:00.173)
Trace[814898886]: ---"About to store object in database" 0ms (12:17:00.173)
Trace[814898886]: ---"Object stored in database" 42ms (12:17:00.216)
Trace[814898886]: [2.419520835s] [2.419520835s] END

How can I remove the time spent in "Recorded the audit event" and "About to convert to expected version"?

Ability to run a fake api server

I would like to run a fake api server that supports declaring actions in its handlers so I can test various scenarios as part of integration testing in prow. Is this possible today (I guess not)? Is this the correct repository to request such a thing?

@kubernetes/sig-api-machinery-feature-requests

Example of a basic API server

Any way I could request a very basic example of using the library to get a vanilla API server up and running with a hello world? I am working on trying to build one now, and can contribute my notes/examples if that helps.

But wondering if anyone has anything useful lying around that isn't in the repo!

Cheers

no json naming on audit.Event object

Hi, I requested audit events from k8s and got this object:

{ "kind": "Event", "apiVersion": "audit.k8s.io/v1", "level": "Metadata", "auditID": "1847e1e1-d66b-4661-b458-4dc553cd8539", "stage": "ResponseComplete", "requestURI": "/apis/storage.k8s.io/v1?timeout=32s", "verb": "get", "user": { "username": "system:serviceaccount:kube-system:generic-garbage-collector", "uid": "83093a4c-3f5f-433e-8fd4-4a2cc23eead8", "groups": [ "system:serviceaccounts", "system:serviceaccounts:kube-system", "system:authenticated" ] }, "sourceIPs": [ "192.168.49.2" ], "userAgent": "kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/af46c47/system:serviceaccount:kube-system:generic-garbage-collector", "responseStatus": { "metadata": {}, "code": 200 }, "requestReceivedTimestamp": "2021-02-18T08:28:43.237861Z", "stageTimestamp": "2021-02-18T08:28:43.238551Z", "annotations": { "authentication.k8s.io/legacy-token": "system:serviceaccount:kube-system:generic-garbage-collector", "authorization.k8s.io/decision": "allow", "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\"" } }

Yet when I marshal and unmarshal it, I get the fields in PascalCase instead of camelCase.
I saw that audit.Event (/pkg/apis/audit/types.go) has no json names.

This creates an inconsistency between data received from k8s and data produced by the Go package.

{ "kind": "Event", "apiVersion": "audit.k8s.io/v1", "Level": "Metadata", "AuditID": "1847e1e1-d66b-4661-b458-4dc553cd8539", "Stage": "ResponseComplete", "RequestURI": "/apis/storage.k8s.io/v1?timeout=32s", "Verb": "get", "User": { "username": "system:serviceaccount:kube-system:generic-garbage-collector", "uid": "83093a4c-3f5f-433e-8fd4-4a2cc23eead8", "groups": [ "system:serviceaccounts", "system:serviceaccounts:kube-system", "system:authenticated" ] }, "ImpersonatedUser": null, "SourceIPs": [ "192.168.49.2" ], "UserAgent": "kube-controller-manager/v1.20.0 (linux/amd64) kubernetes/af46c47/system:serviceaccount:kube-system:generic-garbage-collector", "ObjectRef": null, "ResponseStatus": { "metadata": {}, "code": 200 }, "RequestObject": null, "ResponseObject": null, "RequestReceivedTimestamp": "2021-02-18T08:28:43.237861Z", "StageTimestamp": "2021-02-18T08:28:43.238551Z", "Annotations": { "authentication.k8s.io/legacy-token": "system:serviceaccount:kube-system:generic-garbage-collector", "authorization.k8s.io/decision": "allow", "authorization.k8s.io/reason": "RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\"" } }

How to remove dependencies during deletion?

Hello, I am trying to use this repo in my own program, and I am confused about how to delete an object and its dependencies with the registry.(*Store).Delete() method.

I would really appreciate it if you could provide some demo code or a reference for where to find a hint. I have read the sample-server, but found it not very clear for this situation.

I want to know:

  • how to maintain dependencies in Object (such as attach an object to another as its dependency)
  • how to delete an object and its dependencies with Delete()?

I have tried this for a while, and it just does not work. The following is part of my code.

My data structs:

type Parent struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`

	Foo      int64   `json:"foo,omitempty" protobuf:"varint,2,opt,name=foo"`
	Children []Child `json:"children,omitempty" protobuf:"bytes,3,rep,name=children"`
}

type Child struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}

My deletion method:

foregroundDelete := metav1.DeletePropagationForeground
deleteOptions := &metav1.DeleteOptions{
	PropagationPolicy: &foregroundDelete,
}
_, _, err := storage.(*Storage).Delete(request.NewContext(), parent.GetName(), deleteOptions)
if err != nil {
	log.Fatal(err)
}

With deleteOptions := metav1.NewDeleteOptions(0), I could delete the parent only, but with the code above, I can delete nothing now.
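For context, a minimal sketch of how dependencies are normally expressed in Kubernetes: the child records an ownerReference to the parent, and the garbage collector (not Store.Delete itself) removes children when the parent is deleted with a Foreground/Background propagation policy. Names and UIDs here are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	child := metav1.ObjectMeta{
		Name: "child-1",
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion: "example.com/v1", // hypothetical group/version
			Kind:       "Parent",
			Name:       "parent-1",
			UID:        "1234-abcd", // must be the parent's real UID
			Controller: &controller,
		}},
	}
	fmt.Printf("child owned by: %+v\n", child.OwnerReferences[0])
}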

Complains that file does not exist... But it does

osboxes@master:/var/log/pods$ sudo tail -f fdb932ada5768a1891d839f8cf2306a9/kube-apiserver/31.log 
{"log":"      --tls-private-key-file string                             File containing the default x509 private key matching --tls-cert-file.\n","stream":"stderr","time":"2018-08-01T17:04:12.869978989Z"}
{"log":"      --tls-sni-cert-key namedCertKey                           A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: \"example.crt,example.key\" or \"foo.crt,foo.key:*.foo.com,foo.com\". (default [])\n","stream":"stderr","time":"2018-08-01T17:04:12.869981784Z"}
{"log":"      --token-auth-file string                                  If set, the file that will be used to secure the secure port of the API server via token authentication.\n","stream":"stderr","time":"2018-08-01T17:04:12.869985782Z"}
{"log":"  -v, --v Level                                                 log level for V logs\n","stream":"stderr","time":"2018-08-01T17:04:12.869988586Z"}
{"log":"      --version version[=true]                                  Print version information and quit\n","stream":"stderr","time":"2018-08-01T17:04:12.869991231Z"}
{"log":"      --vmodule moduleSpec                                      comma-separated list of pattern=N settings for file-filtered logging\n","stream":"stderr","time":"2018-08-01T17:04:12.86999605Z"}
{"log":"      --watch-cache                                             Enable watch caching in the apiserver (default true)\n","stream":"stderr","time":"2018-08-01T17:04:12.869998906Z"}
{"log":"      --watch-cache-sizes strings                               List of watch cache sizes for every resource (pods, nodes, etc.), comma separated. The individual override format: resource[.group]#size, where resource is lowercase plural (no version), group is optional, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size\n","stream":"stderr","time":"2018-08-01T17:04:12.870002021Z"}
{"log":"\n","stream":"stderr","time":"2018-08-01T17:04:12.870005608Z"}
{"log":"error: loading audit policy file: failed to read file path \"/etc/kubernetes/audit.yaml\": open /etc/kubernetes/audit.yaml: no such file or directory\n","stream":"stderr","time":"2018-08-01T17:04:12.870008063Z"}
jjjjjjjjjjj^C
osboxes@master:/var/log/pods$ ls -altr /etc/kubernetes/audit.yaml
-rwxrwxrwx 1 root root 113 Aug  1 11:34 /etc/kubernetes/audit.yaml

I've added the following line to the api-server manifest yaml config file thingamabob:
- --audit-policy-file=/etc/kubernetes/audit.yaml

1000s of warnings when the apiserver aggregator is enabled.

Fairly recently we changed our apiserver settings to enable the use of the aggregator (to enable metrics-server and other API extensions). Here are the settings applied:

    - --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --requestheader-allowed-names=aggregator
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --enable-aggregator-routing=false

Whilst everything appears to be functioning correctly, we are seeing a large number of warnings generated in our apiserver logs, e.g.:

W0920 09:00:34.491307 1 x509.go:172] x509: subject with cn=system:node:ip-10-29-11-214.us-west-2.compute.internal is not in the allowed list: [aggregator]
W0920 09:00:34.498896 1 x509.go:172] x509: subject with cn=system:node:ip-10-29-11-214.us-west-2.compute.internal is not in the allowed list: [aggregator]
W0920 09:00:34.718998 1 x509.go:172] x509: subject with cn=system:node:ip-10-29-20-126.us-west-2.compute.internal is not in the allowed list: [aggregator]
W0920 09:00:34.806978 1 x509.go:172] x509: subject with cn=system:node:ip-10-29-19-28.us-west-2.compute.internal is not in the allowed list: [aggregator]
W0920 09:00:34.815000 1 x509.go:172] x509: subject with cn=system:node:ip-10-29-19-28.us-west-2.compute.internal is not in the allowed list: [aggregator]

I debugged the issue for a while thinking that there was something broken in our configuration causing apiserver issues (we were also seeing poor performance). I eventually had a look in the apiserver code and realised that what I was seeing was benign but making a lot of noise with warnings where perhaps there should not be any.

When requestheader authentication is enabled, all regular x509 authentication requests also pass through this x509.Verifier object first, where their certs and common names are verified regardless of whether they actually contain an embedded requestheader authentication. When their common names do not match the configured allowed list, a warning is written to the apiserver logs. The union authenticator then happily continues down the chain until it reaches the regular x509 authenticator, which authenticates the request, and the request proceeds through RBAC etc.

My issue is with the large number of warnings generated during what is normal operation: they make looking at the apiserver logs painful, and I'm sure they waste CPU and I/O logging warnings for so many requests.

API server panic due to http handler timeout

Observed multiple panics in the k8s apiserver as well as in the kube metrics server, version v0.3.1. The metrics server APIs stop responding to the HPA after this.

Below are traces from k8s api server logs

/workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2065
/usr/local/go/src/runtime/asm_amd64.s:2361
E0626 00:13:15.271956       1 wrap.go:32] apiserver panic'd on GET /apis/metrics.k8s.io/v1beta1/nodes
I0626 00:13:15.272052       1 log.go:172] http2: panic serving 10.218.178.248:46060: killing connection/stream because serving request timed out and response had been started
goroutine 386036669 [running]:
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler.func1(0xc7b9b18908, 0xc4bf1bffaf, 0xc654e7b880)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2058 +0x190
panic(0x47345e0, 0xc420235060)
    /usr/local/go/src/runtime/panic.go:502 +0x229
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc4bf1bfce8, 0x1, 0x1)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x47345e0, 0xc420235060)
    /usr/local/go/src/runtime/panic.go:502 +0x229
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc845415ee0, 0xc7140f3560)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:234 +0x190
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc439aedaa0, 0x6a5df80, 0xc839457a40, 0xc7a9d4d900)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:118 +0x2c1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x6a5df80, 0xc839457a40, 0xc7a9d4d800)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0xd4
net/http.HandlerFunc.ServeHTTP(0xc423372ea0, 0x6a5df80, 0xc839457a40, 0xc7a9d4d800)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x6a5df80, 0xc839457a40, 0xc79b873000)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x181
net/http.HandlerFunc.ServeHTTP(0xc423372ed0, 0x6a5df80, 0xc839457a40, 0xc79b873000)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x6a5df80, 0xc839457a40, 0xc79b873000)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:46 +0x11a
net/http.HandlerFunc.ServeHTTP(0xc439aedac0, 0x6a530c0, 0xc7b9b18908, 0xc79b873000)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc423372f00, 0x6a530c0, 0xc7b9b18908, 0xc79b873000)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51
net/http.serverHandler.ServeHTTP(0xc42c68c270, 0x6a530c0, 0xc7b9b18908, 0xc79b873000)
    /usr/local/go/src/net/http/server.go:2697 +0xbc
net/http.initNPNRequest.ServeHTTP(0xc8885a7c00, 0xc42c68c270, 0x6a530c0, 0xc7b9b18908, 0xc79b873000)
    /usr/local/go/src/net/http/server.go:3263 +0x9a
net/http.(Handler).ServeHTTP-fm(0x6a530c0, 0xc7b9b18908, 0xc79b873000)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:241 +0x4d
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc654e7b880, 0xc7b9b18908, 0xc79b873000, 0xc839d5e9c0)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2065 +0x89
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1799 +0x46b

and

E0625 20:53:58.158321       1 wrap.go:32] apiserver panic'd on GET /api/v1/pods?limit=500&resourceVersion=0
I0625 20:53:58.158455       1 log.go:172] http2: panic serving 10.218.176.149:60430: killing connection/stream because serving request timed out and response had been started
goroutine 367905912 [running]:
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler.func1(0xc607cbccd0, 0xc73d1abfaf, 0xc42a1101c0)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2058 +0x190
panic(0x47345e0, 0xc420235060)
    /usr/local/go/src/runtime/panic.go:502 +0x229
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0xc73d1abce8, 0x1, 0x1)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x47345e0, 0xc420235060)
    /usr/local/go/src/runtime/panic.go:502 +0x229
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc5af63d780, 0xc4c1eb4870)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:234 +0x190
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc439aedaa0, 0x6a5df80, 0xc4abc62fc0, 0xc4ad418500)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:118 +0x2c1
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x6a5df80, 0xc4abc62fc0, 0xc4ad418400)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:47 +0xd4
net/http.HandlerFunc.ServeHTTP(0xc423372ea0, 0x6a5df80, 0xc4abc62fc0, 0xc4ad418400)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x6a5df80, 0xc4abc62fc0, 0xc495f36500)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x181
net/http.HandlerFunc.ServeHTTP(0xc423372ed0, 0x6a5df80, 0xc4abc62fc0, 0xc495f36500)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x6a5df80, 0xc4abc62fc0, 0xc495f36500)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:46 +0x11a
net/http.HandlerFunc.ServeHTTP(0xc439aedac0, 0x6a530c0, 0xc607cbccd0, 0xc495f36500)
    /usr/local/go/src/net/http/server.go:1947 +0x44
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc423372f00, 0x6a530c0, 0xc607cbccd0, 0xc495f36500)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189 +0x51
net/http.serverHandler.ServeHTTP(0xc42c68c270, 0x6a530c0, 0xc607cbccd0, 0xc495f36500)
    /usr/local/go/src/net/http/server.go:2697 +0xbc
net/http.initNPNRequest.ServeHTTP(0xc63fe6e380, 0xc42c68c270, 0x6a530c0, 0xc607cbccd0, 0xc495f36500)
    /usr/local/go/src/net/http/server.go:3263 +0x9a
net/http.(Handler).ServeHTTP-fm(0x6a530c0, 0xc607cbccd0, 0xc495f36500)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:241 +0x4d
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc42a1101c0, 0xc607cbccd0, 0xc495f36500, 0xc5aac283a0)
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:2065 +0x89
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders
    /workspace/anago-v1.12.8-beta.0.57+a89f8c11a5f4f1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/server.go:1799 +0x46b

Please let me know if any more information is needed.

"go get -u k8s.io/apiserver/pkg/server/routes" fails

go get -u k8s.io/apiserver/pkg/server/routes

Fails with:

# k8s.io/apiserver/pkg/server/routes
../go/src/k8s.io/apiserver/pkg/server/routes/openapi.go:38:91: cannot use c.RegisteredWebServices() (type []*"k8s.io/apiserver/vendor/github.com/emicklei/go-restful".WebService) as type []*"github.com/emicklei/go-restful".WebService in argument to handler.BuildAndRegisterOpenAPIService
../go/src/k8s.io/apiserver/pkg/server/routes/openapi.go:42:97: cannot use c.RegisteredWebServices() (type []*"k8s.io/apiserver/vendor/github.com/emicklei/go-restful".WebService) as type []*"github.com/emicklei/go-restful".WebService in argument to handler.BuildAndRegisterOpenAPIVersionedService

probably due to an out-of-date vendor.

gcr.io/google-containers/kube-apiserver:v1.18.9 not found

Hi,

hopefully I can get some support here. I am currently trying to deploy a k8s cluster via kubespray v2.14.1.
It seems that the latest k8s images are not available:

$ docker image pull gcr.io/google-containers/kube-apiserver:v1.18.9
Error response from daemon: manifest for gcr.io/google-containers/kube-apiserver:v1.18.9 not found: manifest unknown: Failed to fetch "v1.18.9" from request "/v2/google-containers/kube-apiserver/manifests/v1.18.9".

What can we do about this issue? Is there a way to request that these versions be made available on gcr, and how?

Problems trying to update go modules; seems to be referencing types from version 1.21

I can see that the egress_selector.go types have been changed on master, but go get seems to be referencing the ones in 1.21:

(base) ➜  armada-lb git:(manual-deps) ✗ go get -u=patch ./...                                                          git:(manual-deps|✚3
# k8s.io/apiserver/pkg/server/egressselector
../../go/pkg/mod/k8s.io/[email protected]/pkg/server/egressselector/egress_selector.go:158:17: g.tunnel.Dial undefined (type client.Tunnel has no field or method Dial)
../../go/pkg/mod/k8s.io/[email protected]/pkg/server/egressselector/egress_selector.go:206:49: cannot use udsName (type string) as type context.Context in argument to client.CreateSingleUseGrpcTunnel:
	string does not implement context.Context (missing Deadline method)
../../go/pkg/mod/k8s.io/[email protected]/pkg/server/egressselector/egress_selector.go:206:49: cannot use dialOption (type grpc.DialOption) as type string in argument to client.CreateSingleUseGrpcTunnel

Naming resources for Aggregated APIs

The service-catalog system has an aggregated API server that names four resource types:

  • Broker
  • ServiceClass
  • Instance
  • Binding

The fully-qualified names for these resources are all listed as *.servicecatalog.k8s.io, but one may still use the short version in kubectl. However, binding poses a problem because it conflicts with the core resource name. It is possible to simply use the fully-qualified name, but this conflict raises a bigger question: is there any guidance on naming resources in aggregated API servers, or should there be?

For example, should we tell users to choose resource names that do not conflict with core resource names (I think so)? Are there ways we can suggest for users to make their resource names more descriptive (we have chosen to prefix every name with ServiceCatalog, but I hardly think that is the optimal solution)?

What are the CONNECT and PROXY use cases?

At https://github.com/kubernetes/apiserver/blob/master/pkg/endpoints/installer.go#L82, toDiscoveryKubeVerb is initialized as below:

var toDiscoveryKubeVerb = map[string]string{
	"CONNECT":          "", // do not list in discovery.
	"DELETE":           "delete",
	"DELETECOLLECTION": "deletecollection",
	"GET":              "get",
	"LIST":             "list",
	"PATCH":            "patch",
	"POST":             "create",
	"PROXY":            "proxy",
	"PUT":              "update",
	"WATCH":            "watch",
	"WATCHLIST":        "watch",
}

I don't understand how and when to use the CONNECT and PROXY actions. Are there any documents or examples?

Update gnostic to the latest version

Context: google/gnostic#195
The gnostic upgrade from yaml.v2 to yaml.v3 breaks some clients.

Can we use apiserver with the latest github.com/googleapis/gnostic (0.5.0 or later)?
There is an incompatibility between yaml v2, used by apiserver, and yaml v3, used by gnostic since version 0.5.0.

Without this fix, it is impossible to import this library together with controller-runtime v0.8.3.
Error log:

# k8s.io/apiserver/pkg/util/openapi
../../../pkg/mod/k8s.io/[email protected]/pkg/util/openapi/proto.go:43:36: cannot use info (type "gopkg.in/yaml.v2".MapSlice) as type *"gopkg.in/yaml.v3".Node in argument to openapi_v2.NewDocument

To fix this, as suggested in google/gnostic#195, we could replace this line with the ParseDocument functions defined in openapiv2/document.go and openapiv3/document.go. That would make the calling code look like this:

document, err := openapi_v2.ParseDocument(b)

where b is a []byte of the JSON or YAML file to be parsed.
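Roughly, the calling code would become something like the sketch below (assuming the github.com/googleapis/gnostic/openapiv2 import path, which varies across gnostic versions):

package main

import (
	"fmt"
	"io/ioutil"

	openapi_v2 "github.com/googleapis/gnostic/openapiv2"
)

func main() {
	b, err := ioutil.ReadFile("swagger.json") // JSON or YAML OpenAPI v2 spec
	if err != nil {
		panic(err)
	}
	document, err := openapi_v2.ParseDocument(b)
	if err != nil {
		panic(err)
	}
	fmt.Println(document.GetInfo().GetTitle())
}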

Update Kubernetes dependencies to release kubernetes-1.16.3

Please update the Kubernetes dependencies from legacy commits to ones closer to the kubernetes-1.16.3 release, so that this library can be consumed more easily alongside other Kubernetes 1.16.3 libraries.

[WARN]	Conflict: k8s.io/client-go rev is currently kubernetes-1.16.0, but k8s.io/apiserver wants 1fbdaa4c8d90

Returning a nil TTL does not keep the existing value as the comment mentions

Hi,

As the comment mentions, when the tryUpdate UpdateFunc in GuaranteedUpdate returns a nil TTL, the TTL attached to the key should not change. But in the current code under pkg/storage/etcd3, the TTL is unset, because it resolves to 0 when ttlPtr is nil, and a zero TTL ultimately unsets the TTL.

The following test case fails with a timeout because the key is never deleted:

func TestGuaranteedUpdateWithNilTTL(t *testing.T) {
	ctx, store, cluster := testSetup(t)
	defer cluster.Terminate(t)

	input := &example.Pod{ObjectMeta: metav1.ObjectMeta{Name: "foo"}}
	key := "/somekey"

	out := &example.Pod{}
	err := store.GuaranteedUpdate(ctx, key, out, true, nil,
		func(_ runtime.Object, _ storage.ResponseMeta) (runtime.Object, *uint64, error) {
			ttl := uint64(1)
			return input, &ttl, nil
		})
	if err != nil {
		t.Fatalf("Create failed: %v", err)
	}

	err = store.GuaranteedUpdate(ctx, key, out, true, nil,
		func(_ runtime.Object, _ storage.ResponseMeta) (runtime.Object, *uint64, error) {
			input.Namespace = "update"
			return input, nil, nil
		})
	if err != nil {
		t.Fatalf("Update failed: %v", err)
	}

	w, err := store.Watch(ctx, key, out.ResourceVersion, storage.Everything)
	if err != nil {
		t.Fatalf("Watch failed: %v", err)
	}
	testCheckEventType(t, watch.Deleted, w)
}

I think this is a bug and it should be fixed.
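A minimal sketch of the fix I would expect inside GuaranteedUpdate: when the update function returns a nil TTL pointer, carry the key's current TTL forward instead of treating it as 0. The helper below is illustrative, not the real etcd3 store code:

package main

import "fmt"

// resolveTTL is an illustrative helper, not actual apiserver code.
func resolveTTL(current uint64, returned *uint64) uint64 {
	if returned == nil {
		return current // keep the existing TTL, as the GuaranteedUpdate comment promises
	}
	return *returned // explicit value; 0 means "no TTL"
}

func main() {
	fmt.Println(resolveTTL(1, nil))         // 1: TTL unchanged
	fmt.Println(resolveTTL(1, new(uint64))) // 0: TTL explicitly cleared
}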

Deprecated/Missing Kubernetes API Server Metrics

It appears that the following Kubernetes apiserver metrics were deprecated; the old list is:

apiserver_request_latencies_bucket
apiserver_requests
apiserver_request_count
apiserver_request_errors
apiserver_latency

After extensive searches through the documentation, the only deprecated metric I can find from the list is this:

apiserver_request_count = apiserver_request_total

As for the others, they do not appear anywhere I have looked in the Kubernetes code base on GitHub. I had thought they would be here somewhere, given that this is the page for the apiserver, but only the one above is listed.

The metrics are being used to determine the health of TKGI, with output that used to look like this:

# Recording rule expr
histogram_quantile ( 0.90, sum by (le, verb)( rate(apiserver_request_latencies_bucket[5m]) ) ) / 1e3 > 0
histogram_quantile ( 0.90, sum by (le, job, verb, instance)( rate(apiserver_request_latencies_bucket[5m]) ) ) / 1e3
sum by()(probe_success{provider=\"kubernetes\", component=\"apiserver\"})
sum without (instance)(kubernetes:job_verb_code_instance:apiserver_requests:rate5m)
sum by (job, verb, code, instance)(rate(apiserver_request_count[5m]))
sum without (instance)(kubernetes:job_verb_code_instance:apiserver_requests:ratio_rate5m)
kubernetes:job_verb_code_instance:apiserver_requests:rate5m / ignoring(verb, code) group_left sum by (job, instance)(kubernetes:job_verb_code_instance:apiserver_requests:rate5m)
sum by (job)(kubernetes:job_verb_code_instance:apiserver_requests:ratio_rate5m{verb=~\"GET|POST|DELETE|PATCH\", code=~\"5..\", cluster=\"$cluster\"})
histogram_quantile ( 0.90, sum by (le, job)( rate(apiserver_request_latencies_bucket{verb=~\"GET|POST|DELETE|PATCH\", cluster=\"$cluster\"}[5m]) ) ) / 1e3
kubernetes:job:apiserver_request_errors:ratio_rate5m < bool 0.01 * kubernetes::job:apiserver_latency:pctl90rate5m < bool 200
kubernetes:job:apiserver_request_errors:ratio_rate5m < bool Inf * kubernetes::job:apiserver_latency:pctl90rate5m < bool Inf

Any help on this would be greatly appreciated; we would like to know what happened to these metrics or where they are now stored.

Thanks

The traversal code in the managedfields.encodeManagedFields method can be simplified

Location of the issue

func encodeManagedFields(managed ManagedInterface) (encodedManagedFields []metav1.ManagedFieldsEntry, err error) {
	if len(managed.Fields()) == 0 {
		return nil, nil
	}
	encodedManagedFields = []metav1.ManagedFieldsEntry{}
	// The map could be ranged over with key and value directly; there is no
	// need to look the value up by key again inside the loop.
	for manager := range managed.Fields() {
		versionedSet := managed.Fields()[manager]
		v, err := encodeManagerVersionedSet(manager, versionedSet)
		if err != nil {
			return nil, fmt.Errorf("error encoding versioned set for %v: %v", manager, err)
		}
		if t, ok := managed.Times()[manager]; ok {
			v.Time = t
		}
		encodedManagedFields = append(encodedManagedFields, *v)
	}
	return sortEncodedManagedFields(encodedManagedFields)
}
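For clarity, this is how the function would look with the suggested key-value range (same behavior, assuming the surrounding package context shown above):

func encodeManagedFields(managed ManagedInterface) (encodedManagedFields []metav1.ManagedFieldsEntry, err error) {
	if len(managed.Fields()) == 0 {
		return nil, nil
	}
	encodedManagedFields = []metav1.ManagedFieldsEntry{}
	// Range with key and value so the map is not indexed again inside the loop.
	for manager, versionedSet := range managed.Fields() {
		v, err := encodeManagerVersionedSet(manager, versionedSet)
		if err != nil {
			return nil, fmt.Errorf("error encoding versioned set for %v: %v", manager, err)
		}
		if t, ok := managed.Times()[manager]; ok {
			v.Time = t
		}
		encodedManagedFields = append(encodedManagedFields, *v)
	}
	return sortEncodedManagedFields(encodedManagedFields)
}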

Clarification on watch when server (or store) is partitioned

After reading up on some corner cases with etcd client behavior (connections, streams, etc.) I came across the following option that has been introduced in etcd a while ago:

// WithRequireLeader requires client requests to only succeed
// when the cluster has a leader.
func WithRequireLeader(ctx context.Context) context.Context {...}

(Source)

From the official etcd docs on client behavior:

Client-side keepalive ping still does not reason about network partitions. Streaming request may get stuck with a partitioned node. Advanced health checking service need to be implemented to understand the cluster membership (see etcd#8673 for more detail).
Source: clientv3-grpc1.23: Balancer Limitation

I do not see this option being used in the API server when Watch() is established.

Is there code in the API server that deals with these cases (etcd/API server node partitioned) and disconnects clients (REST, SDK) appropriately? Or are we at risk of having dangling (hanging) consumer-side client watches because this option is not used in the etcd client?
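
For illustration, this is roughly what I would have expected around the watch stream (a hypothetical sketch using the etcd clientv3 package, not the actual apiserver storage code):

// Hypothetical: require a leader so a partitioned member fails the
// stream instead of leaving the consumer hanging.
func watchWithLeader(ctx context.Context, client *clientv3.Client, key string, rev int64) error {
	wctx := clientv3.WithRequireLeader(ctx)
	for wresp := range client.Watch(wctx, key, clientv3.WithPrefix(), clientv3.WithRev(rev)) {
		if err := wresp.Err(); err != nil {
			// e.g. rpctypes.ErrNoLeader once the member is cut off from the leader
			return err
		}
		// forward wresp.Events to the consumer here
	}
	return ctx.Err()
}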

Related discussions:

clientv3: clarify "WithRequireLeader" for network partition
Network-partition aware health service

cc/ @dims @jingyih @jpbetz @sttts

etcd backend sharding support

As the number of nodes in our k8s cluster increases significantly, we have seen etcd gradually become our cluster's performance bottleneck. The apiserver already supports storing different objects in different etcd clusters, but in our case we see a very large number of pod objects, so we are investigating whether it is possible to use multiple shards within a single resource type.

Some companies already shard based on the key, similar to what TiKV does. However, I want to discuss the possibility of using etcd shards based on a hash of the key.

For example, given keys k1 and k2, after computing their md5 hashes and applying a mod operation, I would put k1 on etcd shard1 and k2 on etcd shard2. This would balance the load among the etcd clusters and give higher throughput.

AFAIK, the apiserver uses only a limited number of the operations supported by etcd: Range, Txn, and Watch. Also, the Txn operations are only simple transactions doing a single Create, Update, or Delete.

With the proposed sharding, single Create/Update/Delete operations seem simple. But for Watch and Range requests, the apiserver needs to maintain a connection to each of the etcd shards. Could there be any issue with this regarding Range/Watch performance?

When the apiserver holds multiple etcd shard connections, it also needs to remember each shard's latest revision. For this, I am thinking of changing the APIObjectVersioner so that a resource version vector is supported, something like "{Shard1:Rev1,Shard2:Rev2}", so that the revision position of each shard connection is kept.

Do you see any issues with this general design? I want to get some feedback so that, if it is doable, we can make this change and contribute it back to the open-source community.
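
To make the idea concrete, here is a rough sketch of the routing and version vector we have in mind (names and types are made up for illustration; assumes crypto/md5, encoding/binary, and the etcd clientv3 package):

// shardFor picks an etcd shard for a storage key by hashing the key.
func shardFor(key string, shards []*clientv3.Client) *clientv3.Client {
	sum := md5.Sum([]byte(key))
	idx := binary.BigEndian.Uint32(sum[:4]) % uint32(len(shards))
	return shards[idx]
}

// shardedResourceVersion replaces the single etcd revision with a
// per-shard revision vector, e.g. {"shard1": 1234, "shard2": 987}.
type shardedResourceVersion map[string]int64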

Client-side backwards compatibility flags

Hello -

Acknowledging the basically "no compatibility guarantee" offered in the README, we're still wondering if you have any advice on the following situation:

  • we have a controller that embeds an apiserver
  • we'd like to upgrade to newer versions of kubernetes libraries while still supporting older cluster versions.

In this particular case, the specifics are around the flowcontrol library which went from alpha to beta in 1.19->1.20.

So there's a flag that seems to work for turning off the whole thing:
https://github.com/kubernetes/apiserver/blob/master/pkg/server/options/recommended.go#L138

However there's concern that this would spill out of just our little controller and affect the whole cluster where our apiserver is installed.
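
For context, the workaround we are experimenting with is flipping the gate only inside our own process, on the assumption that the flag above is backed by the APIPriorityAndFairness feature gate and that feature gates are evaluated per process (so this should not touch the cluster's own kube-apiserver); we have not verified this is a supported approach:

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
)

// disableFlowControl turns the gate off for this binary only.
func disableFlowControl() error {
	return utilfeature.DefaultMutableFeatureGate.SetFromMap(map[string]bool{
		"APIPriorityAndFairness": false,
	})
}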

Do you have any guidance around how to support previous versions of kubernetes runtimes from newer versions of the libraries?

Thanks!

apiserver aggregator does not change request.Host when accessing an external http service

When the apiserver uses an APIService and a service EXTERNAL-IP to access an external http service (a sample metrics service), the http request header Host is still the apiserver's Host, which causes the external http proxy service to fail to forward the request normally.

The request path looks like this: client -> apiserver -> external http proxy service -> real http service (metrics service)

I read the source code (k8s.io/client-go/transport.(*debuggingRoundTripper) # RoundTrip, net/http/request.go # WithContext, net/http2/transport.go # encodeHeaders) and found that the apiserver does not change Request.Host; as a result, the request is forwarded with the original host.
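
For illustration, the behavior I expected is roughly what the following transport wrapper would do (a hypothetical sketch to show the idea, not the actual aggregator code; uses net/http):

// hostRewritingRoundTripper sets the Host header to the target URL's host
// before forwarding, so the external proxy sees the host it expects.
type hostRewritingRoundTripper struct {
	delegate http.RoundTripper
}

func (rt *hostRewritingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context())
	r.Host = r.URL.Host
	return rt.delegate.RoundTrip(r)
}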

I want to know why Kubernetes does not use the target URL.Host when accessing external services, but instead uses the original Request.Host. Thanks.

My kubernetes cluster version is v1.10

[root@8c00516de625 ~]# kubectl get svc
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP                     PORT(S)   AGE
kubernetes      ClusterIP      10.254.0.1   <none>                          443/TCP   2d
power-metrics   ExternalName   <none>       test-ext-adaptor.dailyevn.net   <none>    2d

[root@8c00516de625 ~]# kubectl describe apiservices v1alpha1.power.metrics.sigma
Name: v1alpha1.power.metrics.sigma
Namespace:
Labels:
Annotations:
API Version: apiregistration.k8s.io/v1
Kind: APIService
Metadata:
Creation Timestamp: 2018-11-30T07:54:23Z
Resource Version: 38765
Self Link: /apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.power.metrics.sigma
UID: 2702bed7-f475-11e8-a76f-02427ea09f19
Spec:
Group: power.metrics.sigma
Group Priority Minimum: 1000
Insecure Skip TLS Verify: true
Service:
Name: power-metrics
Namespace: default
Version: v1alpha1
Version Priority: 15
Status:
Conditions:
Last Transition Time: 2018-12-03T05:51:12Z
Message: all checks passed
Reason: Passed
Status: True
Type: Available
Events:

Unreachable code in tests

Hi. I am writing a tool to detect unreachable code. I used your project to test it and found an issue:

t.Fatalf("unexpected response: %s %#v", request.URL, res)

The t.Fatalf call stops the execution of the test. As a result, the lines below it will never be executed. Probably, t.Fatalf should be replaced with t.Errorf.

[OIDC] x509: certificate signed by unknown authority

I get an error like the following when I use the OIDC settings in the apiserver.

error log
E1206 06:12:46.728701 1 oidc.go:190] oidc authenticator: failed to fetch provider discovery data: Get https://keycloak.xxxxx.com/auth/realms/k8s/.well-known/openid-configuration: x509: certificate signed by unknown authority
E1206 06:12:46.728752 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, [crypto/rsa: verification error, fetch provider config: Get https://keycloak.xxxxx.com/auth/realms/k8s/.well-known/openid-configuration: x509: certificate signed by unknown authority]]]

kube-apiserver configuration regarding oidc
- --oidc-client-id=account
- --oidc-issuer-url=https://keycloak.xxxxx.com/auth/realms/k8s
- --oidc-username-claim=email
- --oidc-groups-claim=group

I also added the Root CA certificate and the certificate for the OIDC application (Keycloak) on the host server (Master Node) using the ca-certificates package (http://manuals.gfi.com/en/kerio/connect/content/server-configuration/ssl-certificates/adding-trusted-root-certificates-to-the-server-1605.html).
I am using CentOS 7.
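
One thing I have not tried yet is pointing the apiserver at the CA bundle directly via the --oidc-ca-file flag (the path below is only an example, not my actual file):

- --oidc-ca-file=/etc/kubernetes/pki/keycloak-root-ca.pem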

Additionally, I tested connectivity to the OIDC server from the host server (Master Node) and from the kube-apiserver pod using the wget utility, and the results are as follows.

  1. on host server(Master Node)
    [root@dev ~]# wget https://keycloak.xxxxx.com
    --2017-12-06 01:33:26-- https://keycloak.xxxxx.com/
    Resolving keycloak.kloudz.xyz (keycloak.xxxxx.com)... 169.56.xx.xx
    Connecting to keycloak.kloudz.xyz (keycloak.xxxxx.com)|169.56.xx.xx|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 1087 (1.1K) [text/html]
    Saving to: ‘index.html’

  2. kube-apiserver pod
    / # wget https://keycloak.xxxxx.com
    Connecting to keycloak.kloudz.xyz (169.56.xx.xx:443)
    wget: TLS error from peer (alert code 40): handshake failure
    wget: error getting response: Connection reset by peer

As shown above, the kube-apiserver host also has the certificates for the OIDC provider.
Why can the apiserver pod not connect to the OIDC server?

Error is returned when TableConvertor is nil

According to the documentation on the Store.TableConvertor field, it can be nil:

// TableConvertor is an optional interface for transforming items or lists
// of items into tabular output. If unset, the default will be used.
TableConvertor rest.TableConvertor

And, where it's used, a nil check is done and the behavior matches that comment:

func (e *Store) ConvertToTable(ctx context.Context, object runtime.Object, tableOptions runtime.Object) (*metav1.Table, error) {
	if e.TableConvertor != nil {
		return e.TableConvertor.ConvertToTable(ctx, object, tableOptions)
	}
	return rest.NewDefaultTableConvertor(e.DefaultQualifiedResource).ConvertToTable(ctx, object, tableOptions)
}

However, when it is nil, validation in the CompleteWithOptions method returns an error, contrary to the docs and the behavior:

if e.TableConvertor == nil {
	return fmt.Errorf("store for %s must set TableConvertor; rest.NewDefaultTableConvertor(e.DefaultQualifiedResource) can be used to output just name/creation time", e.DefaultQualifiedResource.String())
}
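
As a workaround, explicitly setting the field (as the error message itself suggests) avoids the validation failure; a minimal sketch, assuming store is the *registry.Store being completed:

store.TableConvertor = rest.NewDefaultTableConvertor(store.DefaultQualifiedResource)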

Go 1.10 : apiserver_test.go:1961: Fatalf format %v reads arg #2, but call has only 1 arg

1.9.6 does not pass unit tests with Go 1.10. At least:

+ GOPATH=/builddir/build/BUILD/apiserver-kubernetes-1.9.6/_build:/usr/share/gocode
+ go test -buildmode pie -compiler gc -ldflags '-extldflags '\''-Wl,-z,relro  -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '\'''
# k8s.io/apiserver/pkg/endpoints
./apiserver_test.go:1961: Fatalf format %v reads arg #2, but call has only 1 arg
./apiserver_test.go:1965: Fatalf format %#v reads arg #2, but call has only 1 arg
./apiserver_test.go:2086: Errorf format %v reads arg #2, but call has only 1 arg
./apiserver_test.go:2091: Errorf format %#v reads arg #2, but call has only 1 arg
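
The errors come from the vet checks that go test runs by default since Go 1.10: each flagged format string references a second argument that is never passed. The fix is of this general form (the names below are illustrative, not the actual code at those lines):

// before: the format string expects two arguments but only one is passed
t.Fatalf("unexpected response: %s %#v", gotURL)
// after: pass every argument the verbs reference (or drop the extra verb)
t.Fatalf("unexpected response: %s %#v", gotURL, gotBody)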

Openapi Proto.go break

Since gnostic updated to yaml.v3, proto.go can no longer use

doc, err := openapi_v2.NewDocument(info, compiler.NewContext("$root", nil))

and will instead have to change to

doc, err := openapi_v2.ParseDocument(specBytes)

according to this issue: google/gnostic#195

301 for lumberjack.v2 dependency

When I try to build this using glide, I get the following error:

[WARN]	Unable to checkout gopkg.in/natefinch/lumberjack.v2
[ERROR]	Error looking for gopkg.in/natefinch/lumberjack.v2: Unable to get repository

Trying to get the package directly results in

go get gopkg.in/natefinch/lumberjack.v2
# cd .; git clone https://gopkg.in/natefinch/lumberjack.v2 /XXX/sample-apiserver/src/gopkg.in/natefinch/lumberjack.v2
Cloning into '/XXX/sample-apiserver/src/gopkg.in/natefinch/lumberjack.v2'...
error: RPC failed; HTTP 301 curl 22 The requested URL returned error: 301
fatal: The remote end hung up unexpectedly
package gopkg.in/natefinch/lumberjack.v2: exit status 128

MutatingWebHook Configuration Changing Name of Resource

Hey,

I've been working on a MutatingWebHookConfiguration that modifies the name of a resource (Only on CREATE, I know it's immutable beyond that), depending on the metadata. I'm happy to get into the use-case, if we feel it's pertinent to provide more context.

The problem is, when the next CREATE comes in, I want to modify the name again and UPDATE the existing resource. Is it / will it be possible to "upgrade", for lack of a better term, a CREATE to an UPDATE during this phase?
