eventrouter's Introduction

Eventrouter

This repository contains a simple event router for the Kubernetes project. The event router serves as an active watcher of Event resources in the Kubernetes system: it takes those events and pushes them to a user-specified sink. This is useful for a number of purposes, most notably long-term behavioral analysis of the workloads running on your Kubernetes cluster.

Goals

This project has several objectives, which include:

  • Persist events for a longer period of time to allow for system debugging
  • Allow operators to forward events to other system(s) for archiving/ML/introspection/etc.
  • Keep overhead relatively low
  • Make support for multiple sinks configurable

NOTE:

By default, eventrouter is configured to leverage existing EFK stacks by outputting wrapped JSON objects that are easy to index in Elasticsearch.

Non-Goals:

  • This service does not provide a queryable extension; that is the responsibility of the sink
  • This service does not serve as a storage layer; that is also the responsibility of the sink

Running Eventrouter

Standup:

$ kubectl create -f https://raw.githubusercontent.com/heptiolabs/eventrouter/master/yaml/eventrouter.yaml

Teardown:

$ kubectl delete -f https://raw.githubusercontent.com/heptiolabs/eventrouter/master/yaml/eventrouter.yaml

Inspecting the output

$ kubectl logs -f deployment/eventrouter -n kube-system 

Watch events roll through the system and, hopefully, stream into your ES cluster for mining. Hooray!

eventrouter's People

Contributors

ahmetb, alok87, blakeroberts-wk, cdornsife, fabxc, gianrubio, johnschnake, juan-lee, kevingessner, liztio, nalum, richm, rosskukulinski, thehackercat, timothysc, vigith, xigang

eventrouter's Issues

Duplicate events due to re-listing on eventrouter restart (update/crash)

Hello,

If the eventrouter controller is updated and/or crashes, I believe it will re-drive the list operation on restart and correspondingly resend all the existing events (mind you, by default Kubernetes deletes events after an hour, I believe)?

Assuming this is the case, while ideally the consumers are idempotent and can tolerate at-least-once delivery semantics, it could potentially generate a lot of traffic if there are a lot of events.

Any thoughts on adding an annotation/label to the event once it has been sent successfully, to avoid this? Of course, there's still the window of:

  • send (to whatever sink)
  • crash
  • add annotation

But that's OK; this is not meant for exactly-once delivery, but rather to handle the more generic update/crash case and avoid a lot of resends.

I could see some hesitation about mucking with the Kubernetes Event object, though.

Any thoughts ?
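
A minimal sketch of the annotation idea, assuming a recent client-go with context-aware APIs; the annotation key, function name, and package are made up for illustration:

package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// markSent patches a hypothetical "eventrouter.heptio.com/sent" annotation onto
// an Event after it has been delivered to the sink, so a re-list after a
// restart can skip already-forwarded events.
func markSent(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{
				"eventrouter.heptio.com/sent": "true",
			},
		},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	_, err = client.CoreV1().Events(namespace).Patch(ctx, name, types.MergePatchType, data, metav1.PatchOptions{})
	return err
}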

Webhook sink

Nice project!

Is any webhook sink available?

eventrouter produces duplicate events once in a while

I use glogsink and see logs like the one below every 10 minutes or so.
However, no new events are produced and no existing events have changed.
In the log below, the contents of the new event and the old event are identical; I cannot find any update.

I1220 08:50:11.664885 6 glogsink.go:42] {"verb":"UPDATED","event":{"metadata":{"name":"eventrouter-6bc87c6f9d-8glzg.1571fd5c758fb587","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/events/eventrouter-6bc87c6f9d-8glzg.1571fd5c758fb587","uid":"b8858267-0431-11e9-942f-fa163e2715bb","resourceVersion":"4615160","creationTimestamp":"2018-12-20T08:32:00Z"},"involvedObject":{"kind":"Pod","namespace":"kube-system","name":"eventrouter-6bc87c6f9d-8glzg","uid":"51b434e2-041b-11e9-94c1-fa163e65c2aa","apiVersion":"v1","resourceVersion":"4596917","fieldPath":"spec.containers{kube-eventrouter}"},"reason":"Killing","message":"Killing container with id docker://kube-eventrouter:Need to kill Pod","source":{"component":"kubelet","host":"192.168.81.105"},"firstTimestamp":"2018-12-20T08:32:00Z","lastTimestamp":"2018-12-20T08:32:00Z","count":1,"type":"Normal","eventTime":null,"reportingComponent":"","reportingInstance":""},"old_event":{"metadata":{"name":"eventrouter-6bc87c6f9d-8glzg.1571fd5c758fb587","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/events/eventrouter-6bc87c6f9d-8glzg.1571fd5c758fb587","uid":"b8858267-0431-11e9-942f-fa163e2715bb","resourceVersion":"4615160","creationTimestamp":"2018-12-20T08:32:00Z"},"involvedObject":{"kind":"Pod","namespace":"kube-system","name":"eventrouter-6bc87c6f9d-8glzg","uid":"51b434e2-041b-11e9-94c1-fa163e65c2aa","apiVersion":"v1","resourceVersion":"4596917","fieldPath":"spec.containers{kube-eventrouter}"},"reason":"Killing","message":"Killing container with id docker://kube-eventrouter:Need to kill Pod","source":{"component":"kubelet","host":"192.168.81.105"},"firstTimestamp":"2018-12-20T08:32:00Z","lastTimestamp":"2018-12-20T08:32:00Z","count":1,"type":"Normal","eventTime":null,"reportingComponent":"","reportingInstance":""}}
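
This pattern is typical of the shared informer's periodic resync, where the cached object is re-delivered unchanged and old and new carry the same resourceVersion. A minimal sketch of an update handler that filters such no-op deliveries; the handler name and the forwarding step are illustrative, not eventrouter's actual code:

package example

import v1 "k8s.io/api/core/v1"

// updateEvent drops informer resyncs: when nothing changed, the old and new
// objects share a resourceVersion, so there is nothing new to forward.
func updateEvent(oldObj, newObj interface{}) {
	oldEvt, okOld := oldObj.(*v1.Event)
	newEvt, okNew := newObj.(*v1.Event)
	if !okOld || !okNew {
		return
	}
	if oldEvt.ResourceVersion == newEvt.ResourceVersion {
		return // periodic resync, not a real change
	}
	// ... hand the (newEvt, oldEvt) pair to the configured sink here ...
}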

Easier Elasticsearch Indexing

Currently the sink output looks like this:

I0316 18:06:16.068912 7 glogsink.go:42] {"verb":"UPDATED","event":{"metadata":{"name":"curator-1521158460-pn977.151c3d919e2cc527","namespace":"es-extern",....

Is there a way to remove the `I0316 18:06:16.068912 7 glogsink.go:42]` prefix from the log line? This would make indexing in Elasticsearch much easier.
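
A minimal sketch of one way to get that: bypass glog entirely and write each wrapped event as a bare JSON line on stdout. The wrapper struct only mirrors the {"verb": ..., "event": ...} shape shown above; it is not the project's actual type:

package example

import (
	"encoding/json"
	"os"

	v1 "k8s.io/api/core/v1"
)

// wrappedEvent mirrors the JSON shape in the log line above.
type wrappedEvent struct {
	Verb  string    `json:"verb"`
	Event *v1.Event `json:"event"`
}

// writeEvent emits one event per line with no glog prefix, so log shippers can
// feed it straight into Elasticsearch.
func writeEvent(verb string, e *v1.Event) error {
	return json.NewEncoder(os.Stdout).Encode(wrappedEvent{Verb: verb, Event: e})
}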

Add JSON output format

I would like to store the events in Elasticsearch, JSON output would be nice for this.

Would it be possible to have a new release?

We've got some policies that don't allow us to use :latest, so it would be useful if there were a new release (in particular, one including #58).

While we can pin to the SHA hash, :latest moves on and images that are no longer tagged can be garbage collected by the registry.

http sink receiving invalid sequence unparsable by rfc5424

I have a fairly simple HTTP sink receiver-side implementation below. Very frequently it fails to parse the incoming sequence using the rfc5424.Message.ReadFrom method.

The error from ReadFrom method reads:

strconv.Atoi: parsing "\n807": invalid syntax

All of the source code:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/crewjam/rfc5424"
)

func handler(w http.ResponseWriter, r *http.Request) {
	log.Printf("Method: %s", r.Method)
	if r.Body != nil {
		defer r.Body.Close()
	}
	m := new(rfc5424.Message)
	for {
		n, err := m.ReadFrom(r.Body)
		if err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("READERROR: %+v", err)
		}
		log.Printf("n=%d, host=%s, app=%s msg=%q", n, m.Hostname, m.AppName, string(m.Message))
	}
}

func main() {
	log.Println("starting server!")
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

It's fairly easy to replicate. I simply use this project's manifests to redeploy (kubectl delete -f ./yaml; kubectl apply -f ./yaml), and it almost immediately triggers the error on the first message:

2017/10/25 03:22:31 starting server!
2017/10/25 03:22:45 Method: POST
2017/10/25 03:22:45 n=789, host=, app=default-scheduler msg="{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"httpsink-395313959-234dm.14f0b1777bf8e722\",\"namespace\":\"default\",\"selfLink\":\"/api/v1/namespaces/default/events/httpsink-395313959-234dm.14f0b1777bf8e722\",\"uid\":\"fdb530dc-b931-11e7-96cb-2e6a5c70d9f0\",\"resourceVersion\":\"14958\",\"creationTimestamp\":\"2017-10-25T03:10:01Z\"},\"involvedObject\":{\"kind\":\"Pod\",\"namespace\":\"default\",\"name\":\"httpsink-395313959-234dm\",\"uid\":\"fdac2d9e-b931-11e7-96cb-2e6a5c70d9f0\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14952\"},\"reason\":\"Scheduled\",\"message\":\"Successfully assigned httpsink-395313959-234dm to minikube\",\"source\":{\"component\":\"default-scheduler\"},\"firstTimestamp\":\"2017-10-25T03:10:01Z\",\"lastTimestamp\":\"2017-10-25T03:10:01Z\",\"count\":1,\"type\":\"Normal\"}}"
2017/10/25 03:22:45 READERROR: strconv.Atoi: parsing "\n807": invalid syntax

Any ideas why this might be happening?

Support for ElasticSearch Forwarding

Not unlike #2, but for ElasticSearch for archival and alerting. In the past when I've worked with a vendor like Datadog, I've asked for similar visibility into the Events stream. Now that most of the systems I've got deployed leverage ElasticSearch & X-Pack for logging & alerting, I've lost visibility into the Kubernetes event stream, especially when running in GKE.

modernize eventrouter

We internally rely on eventrouter to log events to stdout, which are then relayed to our central logging platform. Then came a requirement to publish the events to a couple of other sinks for more processing (streaming and big data). As we looked into the source code to contribute/extend it, we felt it would be great to make the following core changes too.

Please let me know whether the following changes resonate with you.

Remove Vendoring

  • Today, upgrading between Go versions brings in huge import PRs. Lots of vendor files are changed.
  • Adding new sinks also adds new modules to the vendor directory.

If we move to Go 1.13, which supports module mirroring, we can remove vendoring altogether (quite close to #80).

Multiple Sink Support

There are a few use cases where we would like to send the same event to multiple sinks simultaneously, e.g. send the events to a local file, Kafka and, say, AWS Kinesis. In the current implementation, we would have to run multiple eventrouters to send to multiple sinks simultaneously. (#79)

This brings up the problem of how to deal with writes succeeding on one sink while another fails. The simplest solution is to ask each individual sink writer to retry if it sees a failure. The writes should also be asynchronous so that a slow sink won't slow down the rest; a sketch of this is shown below.
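
A minimal sketch of that asynchronous fan-out. EventSink and its method are illustrative stand-ins for eventrouter's sink interface; the buffer size and drop-when-full policy are design choices, not requirements:

package example

import v1 "k8s.io/api/core/v1"

// EventSink stands in for the project's sink interface.
type EventSink interface {
	UpdateEvents(newEvent, oldEvent *v1.Event)
}

type eventPair struct {
	newEvent, oldEvent *v1.Event
}

// fanOutSink gives every downstream sink its own buffered channel and
// goroutine, so a slow sink cannot stall the others; retries remain the
// responsibility of each individual sink.
type fanOutSink struct {
	channels []chan eventPair
}

func newFanOutSink(sinks []EventSink) *fanOutSink {
	f := &fanOutSink{}
	for _, s := range sinks {
		ch := make(chan eventPair, 1024) // buffer absorbs short bursts
		f.channels = append(f.channels, ch)
		go func(s EventSink, ch chan eventPair) {
			for p := range ch {
				s.UpdateEvents(p.newEvent, p.oldEvent)
			}
		}(s, ch)
	}
	return f
}

func (f *fanOutSink) UpdateEvents(newEvent, oldEvent *v1.Event) {
	for _, ch := range f.channels {
		select {
		case ch <- eventPair{newEvent, oldEvent}:
		default:
			// drop for this sink rather than block the whole pipeline
		}
	}
}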

Controller structure

The official Kubernetes GitHub repository has a nice sample-controller design diagram, which employs the following components.

Workqueue

A workqueue will help us implement a queuing and retry mechanism.

Cache

Repeatedly fetching information from the API server can become expensive; this can be alleviated using a cache.

ListWatch

eventrouter uses informers.NewSharedInformerFactory, which listens to all namespaces:

	sharedInformers := informers.NewSharedInformerFactory(clientset, resync)
	eventsInformer := sharedInformers.Core().V1().Events()

We could move to cache.ListWatch and provide a ListFunc and WatchFunc to filter out namespaces; see the sketch below.
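
A minimal sketch of that direction, using client-go's NewListWatchFromClient helper to scope the informer to one namespace; the function name and wiring are illustrative:

package example

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newNamespacedEventInformer builds an Event informer restricted to a single
// namespace instead of watching every namespace via the shared factory.
func newNamespacedEventInformer(clientset kubernetes.Interface, namespace string, resync time.Duration) cache.SharedIndexInformer {
	lw := cache.NewListWatchFromClient(
		clientset.CoreV1().RESTClient(), // REST client for the core/v1 group
		"events",                        // resource to list and watch
		namespace,                       // only this namespace
		fields.Everything(),             // or a field selector for finer filtering
	)
	return cache.NewSharedIndexInformer(lw, &v1.Event{}, resync, cache.Indexers{})
}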

Improved Testing

Currently there is no code coverage for the controller in eventrouter; adding more tests will be very helpful to boost confidence during refactoring.

Provide example http sink server

I think there should be an example httpsink receiver server included within the project for demo purposes, just like the stdout or glog mode.

I might take a look at this sometime soon.

Sink documentation

Could someone add docs into README.md on how to configure sinks?
Preferably in a form that k8s noobs could use

S3Sink - should support instance-profile for bucket access

Currently the s3sink setup panics if s3SinkAccessKeyID / s3SinkSecretAccessKey are not provided:

	case "s3sink":
		accessKeyID := viper.GetString("s3SinkAccessKeyID")
		if accessKeyID == "" {
			panic("s3 sink specified but s3SinkAccessKeyID not specified")
		}

		secretAccessKey := viper.GetString("s3SinkSecretAccessKey")
		if secretAccessKey == "" {
			panic("s3 sink specified but s3SinkSecretAccessKey not specified")

Ignoring the fact that this should probably use a Secret rather than a ConfigMap to store these credentials, for my use case I need to use an instance profile to access the bucket.

This will not work with the current code, as it simply panics.

Suggested change:

  • If the key/secret are not provided, check whether an instance profile is attached
  • If an instance profile is attached and no creds are provided, use the instance profile to try to access the bucket
  • If neither is provided (no instance profile attached), panic.

Happy to send a PR for this if you are open to having eventrouter support instance profiles for access.
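
A minimal sketch of the fallback using the aws-sdk-go v1 API; the function and parameter names are illustrative. Leaving static credentials unset lets the SDK walk its default chain, which includes the node's instance profile:

package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// newS3Session uses static credentials when both keys are configured and
// otherwise falls back to the SDK's default credential chain (environment,
// shared config, EC2 instance profile) instead of panicking.
func newS3Session(region, accessKeyID, secretAccessKey string) (*session.Session, error) {
	cfg := aws.NewConfig().WithRegion(region)
	if accessKeyID != "" && secretAccessKey != "" {
		cfg = cfg.WithCredentials(credentials.NewStaticCredentials(accessKeyID, secretAccessKey, ""))
	}
	return session.NewSession(cfg)
}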

Is this project still active? No release here for the last year+.

File output sink?

Hi All,

Very cool project. One thing I'd find useful would be a file sink, so I can then follow the files in Splunk. Would this be much work to do? Also, if you've no time, I'd like to write it; how do I even start? (A rough sketch is included below.)
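
A minimal sketch of such a file sink, appending each event as a JSON line to a path a forwarder can tail; the type, method signature, and wiring into eventrouter's sink selection are assumptions for illustration:

package example

import (
	"encoding/json"
	"os"
)

// fileSink appends one JSON document per line to a file that Splunk (or any
// log forwarder) can follow.
type fileSink struct {
	f *os.File
}

func newFileSink(path string) (*fileSink, error) {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return nil, err
	}
	return &fileSink{f: f}, nil
}

// UpdateEvents writes the routed event; the argument type is kept generic here
// rather than copying the project's exact sink signature.
func (s *fileSink) UpdateEvents(event interface{}) error {
	return json.NewEncoder(s.f).Encode(event)
}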

Public Image?

As an open-source user, I would expect to be able to use the gcr.io/heptio-images/eventrouter:v0.1 image that is referenced in the yaml directory. At this time, it appears to be a private repo, which means I must build, deploy, and manage my own image.

Add some debug logs to successful httpsink requests

Currently -v=10 doesn't report anything about what the httpsink is up to as long as it doesn't fail. I think adding some logs indicating "a request was made, N messages, M bytes sent" would be nice in verbose mode.
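
A minimal sketch of the kind of verbose log line proposed, using glog's V() levels; the function and its parameters are illustrative rather than the project's actual httpsink code:

package example

import (
	"bytes"
	"net/http"

	"github.com/golang/glog"
)

// postBatch sends one batch and, at high verbosity, reports what was sent.
func postBatch(client *http.Client, endpoint string, body []byte, messages int) error {
	resp, err := client.Post(endpoint, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	glog.V(9).Infof("httpsink: POST %s -> %s, %d message(s), %d bytes sent",
		endpoint, resp.Status, messages, len(body))
	return nil
}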

RBAC rules needed

Just guessing, but I am seeing this on a fresh kubeadm cluster with v1.7.3:

I0811 16:06:20.252002      16 reflector.go:198] Starting reflector *v1.Event (30m0s) from github.com/heptio/eventrouter/vendor/k8s.io/client-go/informers/factory.go:69
I0811 16:06:20.252024      16 reflector.go:236] Listing and watching *v1.Event from github.com/heptio/eventrouter/vendor/k8s.io/client-go/informers/factory.go:69
E0811 16:06:20.283343      16 reflector.go:201] github.com/heptio/eventrouter/vendor/k8s.io/client-go/informers/factory.go:69: Failed to list *v1.Event: User "system:serviceaccount:kube-system:default" cannot list events at the cluster scope. (get events)

I didn't check further; I plan to do so later.

Why not eventer from the heapster project?

I'm wondering whether you are all aware of the eventer tool, which does the same things as eventrouter but is part of the heapster project: https://github.com/kubernetes/heapster/blob/master/events/eventer.go. It has good support for various sinks, etc. I wonder whether there could be an effort to collaborate instead of rebuilding the same functionality twice.

I don't work on eventer, but I use it heavily, and I'm just curious whether there's any reason for not using eventer besides it being an upstream project where development may be slightly slower. If the concern is speed of development, the work could likely be done in a fork; it seems unnecessary to redo a lot of the work already done in the other project.

Event container/fieldPath prometheus label

I have a need to identify metrics involving init containers and am wondering whether it makes sense to add event.InvolvedObject.FieldPath as a label on the Prometheus counter metrics.

This would allow me to match the label value against a regex (e.g. heptio_eventrouter_warnings_total{involved_object_fieldpath=~".*init.*"}).

If this makes sense, I can easily PR it.
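
A minimal sketch of what the counter could look like with the extra label; the metric name matches the query above, but the full label set eventrouter registers today is not reproduced here:

package example

import "github.com/prometheus/client_golang/prometheus"

// warningsTotal counts routed warning events, keyed by the involved object,
// including its fieldPath so init containers can be matched by regex.
var warningsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "heptio_eventrouter_warnings_total",
		Help: "Total number of warning events routed, by involved object.",
	},
	[]string{"involved_object_kind", "involved_object_name", "involved_object_namespace", "involved_object_fieldpath", "reason"},
)

func init() {
	prometheus.MustRegister(warningsTotal)
}

func recordWarning(kind, name, namespace, fieldPath, reason string) {
	warningsTotal.WithLabelValues(kind, name, namespace, fieldPath, reason).Inc()
}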

About contributing a NewRelic sink

Hello, I work at New Relic as part of the K8s monitoring team. We would like to contribute a sink that pushes data to our platform. This issue is just to start the conversation, let you know what we have in mind, and see whether our plans fit the project.

The sink will be pretty simple: it will use the New Relic SDK to push data to an infrastructure agent, meaning that we would have to import the SDK as a dependency.

We would like it to live in its own directory under the sinks. The reason is that we want to have our own README.md (if we could link it from the root README that would be awesome) with examples of how to set it up and an example manifest file for installing it.

For this to work with the infrastructure agent, we will need a pod with both the router and the integration; that's why we want to add the example manifest I mentioned before and extend the Helm chart to add this.

Cool project, and let me know what you think. Cheers.

[AWS Sink] Add s3 as a sink

What?

A solution with EFK is good but expensive.
We need to store events in S3, load them into Redshift, and view them using the Redshift DB.

Detecting patched deployments

I am fairly new to k8s land and am trying to track our patched deployments. What is the correct way to get deployment updates? I am actually using the edit command to patch deployments but could not find an exact fit among eventrouter's Prometheus metrics. I have only seen heptio_eventrouter_normal{involved_object_kind="ReplicaSet"} and heptio_eventrouter_normal{involved_object_kind="Deployment"}, but these don't seem to be an exact solution because the numbers are larger than I expect. What is the correct way to get it?

Support for Multiple Sinks

The README mentions that one of this project's goals is "Support for multiple sinks should be configurable", but looking at the code I don't see that it currently supports using multiple sinks.
It would be good to support this, for example stdout + s3sink.

The configuration JSON object should accept a list instead of a string and have each sink's config under its own key, for example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: eventrouter
  namespace: eventrouter
data:
  config.json: |-
    {
      "sink": ["stdout", "s3sink"],
      "stdout": {},
      "s3sink": {}
    }

Am I missing something? Is this currently possible somehow?
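
A minimal sketch of the first step: read "sink" as a list with viper's GetStringSlice. Constructing and combining the named sinks is exactly what this issue asks for, so that part is left out (the default sink name here is also illustrative):

package example

import "github.com/spf13/viper"

// configuredSinks returns the list of sink names from config.json; a single
// fallback default keeps the old single-sink behaviour working.
func configuredSinks() []string {
	viper.SetDefault("sink", []string{"stdout"})
	return viper.GetStringSlice("sink")
}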

Prometheus metrics doesn't report events even when 'enable-prometheus' is true

Since commit 6a815be, the HTTP /metrics endpoint reports no Kubernetes events, whether enable-prometheus is set to true or not specified at all. Events are indeed recorded, because they are logged via glog.

After a quick skim, I think the viper imported in eventrouter.go is not the same instance as the one in main.go, so the config doesn't apply. But since I don't know Go, I may be wrong.
https://github.com/heptiolabs/eventrouter/blob/6a815beeeab89b29e5c283f7c793964b0b67e05a/eventrouter.go#L25
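
A minimal sketch of the failure mode being guessed at here: values stored on viper's package-level singleton are invisible to a separately created viper.New() instance. Whether eventrouter actually mixes the two is the reporter's hypothesis, not something confirmed here:

package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// Value stored on viper's package-level singleton.
	viper.Set("enable-prometheus", true)

	// A separately created instance has its own, empty settings.
	other := viper.New()

	fmt.Println(viper.GetBool("enable-prometheus")) // true
	fmt.Println(other.GetBool("enable-prometheus")) // false: different instance
}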

Prometheus Metrics

As well as logging the event, I think it would be worthwhile to add a metrics endpoint for Prometheus.

Do you have plans to do this?

I wouldn't mind contributing to get this off the ground if it were something you'd be happy to take a PR for.

Example no longer works due to k8s API changes

See https://kubernetes.io/docs/setup/release/notes/#api-changes

The following APIs are no longer served by default (#70672, @liggitt):

  • All resources under apps/v1beta1 and apps/v1beta2 - use apps/v1 instead
  • daemonsets, deployments, replicasets resources under extensions/v1beta1 - use apps/v1 instead
  • networkpolicies resources under extensions/v1beta1 - use networking.k8s.io/v1 instead
  • podsecuritypolicies resources under extensions/v1beta1 - use policy/v1beta1 instead

I tried the example in the README.md and got:

$ kubectl create -f https://raw.githubusercontent.com/heptiolabs/eventrouter/master/yaml/eventrouter.yaml
serviceaccount/eventrouter created
clusterrole.rbac.authorization.k8s.io/eventrouter created
clusterrolebinding.rbac.authorization.k8s.io/eventrouter created
configmap/eventrouter-cm created
error: unable to recognize "https://raw.githubusercontent.com/heptiolabs/eventrouter/master/yaml/eventrouter.yaml": no matches for kind "Deployment" in version "apps/v1beta2"

This was on a v1.16.0 cluster.

JSON namespacing

Is there any way to configure the output of eventrouter so that it namespaces events, e.g. wraps every output in {"eventrouter": {<all fields in here>}}?

We manage our Elasticsearch indexes using a company-wide index template, and having the output properly namespaced (preferably in a configurable way, i.e. we'd be able to pass in the key it wraps the JSON object in) would help ensure the event-related fields don't get mistakenly used for other purposes.
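
A minimal sketch of the requested wrapping: nest the routed event under a single, configurable top-level key before encoding. The eventData argument stands in for eventrouter's existing output struct:

package example

import (
	"encoding/json"
	"os"
)

// writeNamespaced wraps the output under one configurable key, e.g.
// writeNamespaced("eventrouter", data) produces {"eventrouter": {...}}.
func writeNamespaced(key string, eventData interface{}) error {
	return json.NewEncoder(os.Stdout).Encode(map[string]interface{}{key: eventData})
}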

Wrong JSON output escaping

I noticed that fluent-bit's JSON parser can't parse some of eventrouter's log messages.

Eventrouter config

{
  "sink": "stdout"
}

Eventrouter stdout sample

{"verb":"UPDATED","event":{"metadata":{"name":"fluent-bit-txzvr.1577e0b2a83a3735","namespace":"lunar-system","selfLink":"/api/v1/namespaces/lunar-system/events/fluent-bit-txzvr.1577e0b2a83a3735","uid":"fe6eb861-157c-11e9-a342-5aae0cc8125e","resourceVersion":"23100854","creationTimestamp":"2019-01-11T08:43:40Z"},"involvedObject":{"kind":"Pod","namespace":"lunar-system","name":"fluent-bit-txzvr","uid":"7fc47232-1344-11e9-b82e-663e4d499873","apiVersion":"v1","resourceVersion":"22351098","fieldPath":"spec.containers{fluent-bit}"},"reason":"Pulling","message":"pulling image "fluent/fluent-bit:1.0.1"","source":{"component":"kubelet","host":"aks-nodepool1-30459648-3"},"firstTimestamp":"2019-01-08T12:54:14Z","lastTimestamp":"2019-01-11T08:43:40Z","count":6,"type":"Normal"},"old_event":{"metadata":{"name":"fluent-bit-txzvr.1577e0b2a83a3735","namespace":"lunar-system","selfLink":"/api/v1/namespaces/lunar-system/events/fluent-bit-txzvr.1577e0b2a83a3735","uid":"fe6eb861-157c-11e9-a342-5aae0cc8125e","resourceVersion":"23100854","creationTimestamp":"2019-01-11T08:43:40Z"},"involvedObject":{"kind":"Pod","namespace":"lunar-system","name":"fluent-bit-txzvr","uid":"7fc47232-1344-11e9-b82e-663e4d499873","apiVersion":"v1","resourceVersion":"22351098","fieldPath":"spec.containers{fluent-bit}"},"reason":"Pulling","message":"pulling image "fluent/fluent-bit:1.0.1"","source":{"component":"kubelet","host":"aks-nodepool1-30459648-3"},"firstTimestamp":"2019-01-08T12:54:14Z","lastTimestamp":"2019-01-11T08:43:40Z","count":6,"type":"Normal"}}

If I try to parse it, fluent-bit reports a parse error; note the unescaped quotes inside the message field above. (Screenshot of the parser output omitted.)

Add support for regex alerting.

At first it will just be email alerting on health, but the other end of the alert could be anything.

Regex will be more useful once I unify upstream handling of events.

constantly crashing in 1 cluster

I'm getting the following error in one of my Kubernetes clusters. It works fine in other clusters of the same version and configuration. I have no idea how to reproduce this; it just keeps happening in this one cluster. It happened when the cluster was running k8s v1.13.7 and still happens after upgrading to v1.14.6. All other clusters on the same k8s versions work fine.

Does anyone have any suggestions for how to resolve this?

Container: gcr.io/heptio-images/eventrouter:v0.2
Kubernetes: v1.14.6

E0820 17:52:52.754409       6 runtime.go:66] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:509
/usr/local/go/src/runtime/panic.go:491
/usr/local/go/src/runtime/panic.go:63
/usr/local/go/src/runtime/signal_unix.go:367
/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:152
/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:113
/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:86
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/controller.go:195
<autogenerated>:1
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/shared_informer.go:545
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/shared_informer.go:381
/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71
/usr/local/go/src/runtime/asm_amd64.s:2337
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x119a173]

goroutine 91 [running]:
github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x111
panic(0x1300660, 0x1e748a0)
	/usr/local/go/src/runtime/panic.go:491 +0x283
main.prometheusEvent(0xc420e8d200)
	/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:152 +0xf3
main.(*EventRouter).addEvent(0xc4200ca4c0, 0x147bf80, 0xc420e8d200)
	/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:113 +0x3c
main.(*EventRouter).(main.addEvent)-fm(0x147bf80, 0xc420e8d200)
	/go/src/github.com/heptiolabs/eventrouter/eventrouter.go:86 +0x3e
github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(0xc4202107b0, 0xc4202107c0, 0xc4202107d0, 0x147bf80, 0xc420e8d200)
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/controller.go:195 +0x49
github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache.(*ResourceEventHandlerFuncs).OnAdd(0xc4201a6e20, 0x147bf80, 0xc420e8d200)
	<autogenerated>:1 +0x62
github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc42041a780)
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/shared_informer.go:545 +0x272
github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache.(*processorListener).(github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache.run)-fm()
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/client-go/tools/cache/shared_informer.go:381 +0x2a
github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc4201d4ad8, 0xc420210880)
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/src/github.com/heptiolabs/eventrouter/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
