elastic / cloud-on-k8s
Elastic Cloud on Kubernetes
License: Other
For now we:
During step 1 we might double the actual node count, which is not great.
Auth between the clusters needs to be configured in this case; it should not even require the target cluster to be within the same installation. Some manual steps may then be necessary, but we can probably provide more automation if the target is within the same installation, and some (other) automation if it's in a remote installation where we have (stack-operators level) federation support available.
This might have more involved implications when it comes to accepting connections from remote clusters, possibly accessed through some form of ingress-like proxy (here we'd have to enforce the IP filtering in the proxy and allow the proxy to connect to the ES instances).
Perhaps network policies would make more sense than using the X-Pack IP filtering support directly in some cases? Or would both still make sense?
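For illustration, a minimal sketch of the network-policy variant, assuming hypothetical labels (`elasticsearch.cluster` on the ES pods, `app: ingress-proxy` on the proxy):

```go
package example

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// esIngressPolicy sketches a NetworkPolicy that only lets pods labelled as
// the ingress proxy reach the Elasticsearch pods of one cluster. All label
// keys and values here are placeholders, not our actual conventions.
func esIngressPolicy(namespace, clusterName string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      clusterName + "-es-allow-proxy",
			Namespace: namespace,
		},
		Spec: networkingv1.NetworkPolicySpec{
			// Select the ES pods of this cluster (assumed label).
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"elasticsearch.cluster": clusterName},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					// Only the proxy may connect (assumed label).
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "ingress-proxy"},
					},
				}},
			}},
		},
	}
}
```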
Integrations include a large section around configuration management, where customers want to integrate with tools such as Terraform / Ansible. As Kubernetes already has a ton of community integrations with these configuration-management tools, we can build on top of what already exists. Note that while we would gain some benefit from pre-existing configuration code, we would essentially trade those benefits for additional testing: customer tooling would become a regression surface that we would need to rewrite for Kubernetes. As mentioned above, with Kubernetes we can use functions such as the internal describe and logging functionality, but these would need to be properly combined to match the fidelity already found in ECE diagnostics today.
For now we check for green health before we consider an ES pod to be ready.
This is a problem during topology changes.
Instead we should consider the pod to be ready when it sees a master ES node (call to _cat/master).
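A minimal sketch of such a readiness check, assuming plain HTTP on the local node's port 9200:

```go
package example

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// seesMaster reports whether the local ES node sees an elected master, by
// calling _cat/master on the local node (scheme and port are placeholders).
func seesMaster() (bool, error) {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9200/_cat/master")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// _cat/master answers with an error status when no master is elected.
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	// A 200 with a non-empty master row means this node sees a master.
	return strings.TrimSpace(string(body)) != "", nil
}
```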
I tried quickly implementing UnmarshalJSON on the NodeTypesSpec struct, but it never got called by kubebuilder... Not sure why.
For now in the stack we default to false on every type, meaning we deploy a coordinator node.
The ML type defaults to false in Elasticsearch, unless X-Pack is activated, in which case it defaults to true. Not sure how to handle that.
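For reference, a sketch of the custom unmarshalling that was attempted; the struct and its fields are a hypothetical reconstruction, not the actual spec:

```go
package v1alpha1

import "encoding/json"

// NodeTypesSpec is a hypothetical reconstruction of the node-type flags;
// the real field set may differ.
type NodeTypesSpec struct {
	Master bool `json:"master"`
	Data   bool `json:"data"`
	Ingest bool `json:"ingest"`
	ML     bool `json:"ml"`
}

// UnmarshalJSON is where defaulting would live: fields omitted by the user
// keep the preset default instead of Go's zero value. As noted above, this
// hook did not appear to be invoked by the kubebuilder-generated machinery.
func (n *NodeTypesSpec) UnmarshalJSON(data []byte) error {
	type raw NodeTypesSpec // same fields, no methods: avoids recursion
	// Hypothetically, ML could be preset to true when X-Pack is active.
	defaults := raw{Master: false, Data: false, Ingest: false, ML: false}
	if err := json.Unmarshal(data, &defaults); err != nil {
		return err
	}
	*n = NodeTypesSpec(defaults)
	return nil
}
```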
Operator needs a service account with a specific set of responsibilities.
It would be great to limit this to just the concrete resources etc. we need within a namespace, but depending on the granularity we can express, we might have to accept namespaces as defining boundaries in some cases.
Design TBD, but we can start with something obvious.
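Something obvious could be a namespace-scoped Role along these lines (the resource and verb lists are illustrative placeholders, not a vetted minimal set):

```go
package example

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// operatorRole sketches a namespace-scoped Role for the operator's service
// account. The exact resources and verbs still need to be narrowed down.
func operatorRole(namespace string) *rbacv1.Role {
	return &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "stack-operator", Namespace: namespace},
		Rules: []rbacv1.PolicyRule{
			{
				APIGroups: []string{""}, // core API group
				Resources: []string{"pods", "services", "secrets", "configmaps"},
				Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
			},
			{
				APIGroups: []string{"apps"},
				Resources: []string{"deployments"},
				Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
			},
		},
	}
}
```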
At some point, we'd like to be able to support licensing ECE-style where the install contains a pool of licenses applied to clusters (basically copy the implementation / behavioral details from ECE, roughly).
In the shorter term, perhaps a dedicated resource for this could make just as much sense? E.g. the user has to update a placeholder resource (which we create as part of the cluster) with the license? Or reference a Secret containing the license?
Perhaps not something to bundle directly into our resources, but we could bake some supporting infra / labels etc. into this?
A script or something otherwise runnable could suffice; what we'd want is a final artifact of a reasonable size that encapsulates all the information (without containing secrets etc.) we'd need when remotely debugging weird or unexpected behavior on a system we can't or won't directly access.
This is something we'd like to have working on both an installation-wide basis as well as on a deployment-wide basis.
Follow up to #24
See https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
We probably want fairly short client timeouts in the controller, to avoid blocking the reconcile loop for longer than necessary; instead we should retry the request in the next iteration of the loop.
Alternative: pull the request out of the reconciliation loop and allow longer timeouts.
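Following the article above, a sketch of a client with short, explicit timeouts (the durations are placeholders, not decisions):

```go
package example

import (
	"net"
	"net/http"
	"time"
)

// newShortTimeoutClient builds the kind of client we might use from the
// reconcile loop: every phase of the request has an explicit budget.
func newShortTimeoutClient() *http.Client {
	return &http.Client{
		Timeout: 5 * time.Second, // overall per-request budget
		Transport: &http.Transport{
			DialContext: (&net.Dialer{
				Timeout: 2 * time.Second, // TCP connect
			}).DialContext,
			TLSHandshakeTimeout:   2 * time.Second,
			ResponseHeaderTimeout: 3 * time.Second, // time to first response header
		},
	}
}
```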
Create various topologies for Elasticsearch / Kibana.
We should enable capturing resource utilization and support metering, as a way to handle pluggable and auditable show-backs / charge-backs as well as potential billing.
ES instance exit codes are important to get visibility into.
Boot-loop detection.
Currently the operator generates private keys and pushes them to the pod through a Secret. This isn't optimal.
Support YAML settings in the Elasticsearch Spec, similar to what we currently have with user_settings_override.
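A minimal sketch of what applying such settings could look like, assuming a simple shallow merge of the user-provided YAML over our defaults:

```go
package example

import "gopkg.in/yaml.v2"

// mergeSettings overlays user-provided YAML settings on top of operator
// defaults. This is a shallow merge: nested sections are replaced wholesale,
// which may or may not be the semantics we actually want.
func mergeSettings(defaults, user []byte) ([]byte, error) {
	var base, overlay map[string]interface{}
	if err := yaml.Unmarshal(defaults, &base); err != nil {
		return nil, err
	}
	if err := yaml.Unmarshal(user, &overlay); err != nil {
		return nil, err
	}
	if base == nil {
		base = map[string]interface{}{}
	}
	for k, v := range overlay {
		base[k] = v // user settings win over defaults
	}
	return yaml.Marshal(base)
}
```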
It would be nice to have a concept of dry-running changes without affecting the stability or actual state of the system.
E.g. realm configuration changes: validation, connectivity checks etc. against external systems.
Once we have authenticated access to Elasticsearch/Kibana, we need to provide those secrets to the liveness and readiness probes.
Not quite clear where this would fall in terms of responsibilities. Could be as simple as Kibana for the single-operator case.
Might become clearer what we need / want once we start working on a multiple-stacks operator?
How should we expose these to our users?
We don't want to store binary data in the K8s APIs, so perhaps the ECE way of declaring URLs is acceptable? It does tie the availability of nodes to the availability of the URLs.
Perhaps a new CRD for these would make sense as well? They are potentially something we'd like shared between clusters, and they need to be inherently versioned. (Plugins more so than bundles, so perhaps we split this into a subtask for each?)
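Purely as a strawman, such a CRD's spec could look like this (all field names are invented for illustration):

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PluginSpec is a hypothetical spec for a shared, versioned plugin resource:
// instead of storing binary data in the K8s API, it references a URL (plus a
// checksum so nodes can verify what they download).
type PluginSpec struct {
	// URL the plugin artifact is fetched from; as noted above, node
	// availability becomes tied to the availability of this URL.
	URL string `json:"url"`
	// Version of the plugin, so clusters can pin what they install.
	Version string `json:"version"`
	// SHA512 checksum of the artifact (optional integrity check).
	SHA512 string `json:"sha512,omitempty"`
}

// Plugin is the hypothetical top-level resource.
type Plugin struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              PluginSpec `json:"spec,omitempty"`
}
```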
Options:
We are already mirroring the http attribute that we have in the Elasticsearch CRD in the Kibana CRD, but no behaviour is currently associated with that structure. We should therefore:
* associate behaviour with the http.tls attribute

By default we will use the file realm for our internal uses and let consumers of our deployment use the ES native realm. For larger deployments, custom realms become more important:
These resources might have to include a versioned component as well (not all of these are as important as the others):
In some cases, this would also entail installing a certificate (e.g. LDAPS) so we can support encryption.
When and how should we make the decision to restore from a snapshot?
Restores are not always the same:
Possibility to tie into cluster lifecycle state (as part of the controller state machine system?)
Queueing of one-off operations as a general feature?
We want to provide the ability for clusters to perform snapshots:
This is a "simple", hacky solution. See the discussion on #34.
See the official doc: https://kubernetes.io/docs/concepts/storage/volumes/#local
The idea behind persistent volumes is to create:
The problem here is that we need to create these PersistentVolumes, but we don't know in advance what the pod's storage requirements are going to be. That's why some persistent volumes have a "dynamic" provisioner, in the sense that the PV is created automatically by a controller to match a given PVC. There is no dynamic provisioner yet for local persistent volumes.
It is marked as beta in Kubernetes v1.10.
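For reference, this is roughly the object such a dynamic provisioner would have to create per claim: a local PV pinned to one node via node affinity (names, path and capacity are placeholders):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// localPV sketches a local PersistentVolume bound to a single node. A real
// provisioner would derive name, path, capacity and node from the PVC,
// e.g. localPV("es-data-0", "node-1", "/mnt/disks/es-data-0", resource.MustParse("10Gi")).
func localPV(name, nodeName, path string, capacity resource.Quantity) *corev1.PersistentVolume {
	fsMode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeSpec{
			Capacity:    corev1.ResourceList{corev1.ResourceStorage: capacity},
			VolumeMode:  &fsMode,
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: path},
			},
			// Local volumes must declare which node they live on.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{nodeName},
						}},
					}},
				},
			},
		},
	}
}
```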
CSI stands for Container Storage Interface (similar to CNI, the Container Networking Interface). It is a project to standardize the way vendors implement their k8s storage API.
Spec: https://github.com/container-storage-interface/spec
K8S doc: https://kubernetes-csi.github.io/docs/
A short read on how it works: https://medium.com/google-cloud/understanding-the-container-storage-interface-csi-ddbeb966a3b
A more complete read on how to implement a CSI: https://arslan.io/2018/06/21/how-to-write-a-container-storage-interface-csi-plugin/
CSI community sync agenda: https://docs.google.com/document/d/1-oiNg5V_GtS_JBAEViVBhZ3BYVFlbSz70hreyaD7c5Y/edit#heading=h.h3flg2md1zg
Components:
K8S-internal (not vendor specific, maintained by K8S team):
CSI driver - vendor specific (all these should implement the gRPC CSI standard interface), composed of 3 components:
FlexVolumes can be seen as the old, unclean version of CSI. It also allows vendors to write their own storage plugins. The plugin driver needs to be installed to a specific path on each node (/usr/libexec/kubernetes/kubelet-plugins/volume/exec/). It's basically an executable binary that needs to support a few subcommands (init, attach, detach, mount, unmount, etc.).
We need dynamically provisioned local persistent volumes, which means a controller should take care of mapping an existing PVC to a new PV of the expected size.
Also, we expect the size to act as a quota: once reached, the user should not be able to write to disk anymore. This is a strong requirement: for instance, using ext4 behind our persistent volumes would probably not guarantee this.
Links: https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume, https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume/provisioner
A static local volume provisioner, running as a DaemonSet on all nodes of the cluster. It monitors mount points on the system, and maps each of them to the creation of a PV of the corresponding size.
Mount points are discovered in the configured discovery dir (e.g. /mnt/disks).
To work with directory-based volumes instead of device-based volumes, we can simply symlink the directories we want into the discovery dir.
It does not handle any quota, but the backing FS could (e.g. XFS or LVM).
Code is open source, quite small and simple to understand.
Dynamic provisioning seems to be WIP according to this issue. There is a PR open for a design proposal and a PR open for an implementation.
Based on the design doc:
Based on this comment, the dynamic CSI provisioner is still at the level of "internal discussions".
lichuqiang seems to be pretty involved in that. Interestingly, he created a GitHub repo for a CSI driver which is mostly based on Mesosphere's csilvm.
Overall, this looks very promising and close to what we need. Patience is required :)
Link: https://github.com/mesosphere/csilvm
A CSI plugin for LVM2. It lives as a single csilvm binary that implements both the Node and Controller plugins, to be installed on every node. The names of the volume group (VG) and the physical volumes (PVs) it consists of are passed to the plugin at launch time as command-line parameters.
It was originally intended to work on Mesos, not on Kubernetes, but the CSI standard is supposed to work for both.
The code is quite clean and easy to understand :)
This issue contains some interesting comments (from July) on how the project does not exactly comply with the expected k8s interface.
I could not find any reference to someone using it as a DaemonSet on a k8s cluster.
Link: https://github.com/wavezhang/k8s-csi-lvm
Seems a bit less clean than csilvm, but it explicitly targets Kubernetes.
Not much doc, and only a few commits, but the code looks quite good.
Based on the code and examples, it can be deployed as a DaemonSet (along with the required Kubernetes CSI components and apiserver configuration).
It relies on having lvmd installed on the host, with an LVM volume group pre-created. See this bash script, which is supposed to be run on each node.
Link: https://github.com/akrog/ember-csi
Link: https://github.com/scality/metalk8s
Link: https://github.com/monostream/k8s-localflex-provisioner
Best solution seems to be: use the static local volume provisioner, with directories symlinked into a discovery dir such as /mnt/disks.
Best solution seems to rely on LVM + XFS (see the sketch after this list):
* Create one volume group from the available physical disks or partitions
* Create one logical volume per pod, with the requested size: the volume size acts as the quota
* Use thin volumes if we need to overcommit on disk space
* Then we need to choose between XFS and ext4 as the filesystem for the logical volumes:
* XFS is supposed to have better performance, but cannot be shrunk without being unmounted first
* ext4 does support hot grow/shrink
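A minimal sketch of the per-pod volume creation under those assumptions, shelling out to the LVM and XFS tools (the volume group is assumed to pre-exist; a real provisioner would also need idempotency and cleanup):

```go
package example

import (
	"fmt"
	"os/exec"
)

// createPodVolume creates one logical volume of the requested size in the
// pre-created volume group, formats it as XFS, and mounts it. The LV size
// is what enforces the quota.
func createPodVolume(vg, lv, size, mountPoint string) error {
	steps := [][]string{
		{"lvcreate", "--name", lv, "--size", size, vg}, // e.g. size "10G"
		{"mkfs.xfs", fmt.Sprintf("/dev/%s/%s", vg, lv)},
		{"mount", fmt.Sprintf("/dev/%s/%s", vg, lv), mountPoint},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}
```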
XFS is an I/O optimized file system (compared to e.g. ext4).
It supports quotas per directory (the xfs_quota command), allowing us to create a directory per user and associate a quota with it.
This is the solution we use on Elastic Cloud.
LVM allows us to gather multiple disks or partitions to form a logical Volume Group (VG) (the vgcreate command), where physical disks are abstracted away. In this volume group, multiple Logical Volumes (LV) can be created with the chosen size (the lvcreate command), and formatted with any FS we want (the mkfs command). A logical volume may span multiple physical disks.
LVM thin provisioning allows us to create a thin pool on which we can allocate multiple thin volumes of a given size. That size will appear as the volume size, but the underlying disk space will not be reserved for the volume. For instance, we could have a 10GB thin pool with 3x5GB thin volumes: each volume would see 5GB, and everything is fine as long as the underlying 10GB is not fully occupied. This allows us to overcommit on disk space.
The "quota" in LVM would simply be the size of the created logical volume.
Volume groups and logical volumes can be resized without unmounting them; however, the logical volume's FS also needs to support that. ext4 supports hot resize (grow and shrink), but XFS only supports hot grow (not shrink).
I/O can be limited per logical volume through Linux cgroups (see https://serverfault.com/questions/563129/i-o-priority-per-lvm-volume-cgroups).
It seems that we're issuing the migrate calls before the ES pod is ready.
This causes the migrate check to pollute the logs.
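One way to avoid the noise: skip the migrate call until the cluster answers at all, and requeue instead of erroring (a sketch; esReachable stands in for whatever health check we settle on):

```go
package example

import (
	"net/http"
	"time"
)

// esReachable is a placeholder check: can we reach the ES endpoint at all?
func esReachable(endpoint string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(endpoint)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return true
}

// migrateDataIfReady only issues the _cluster/settings call once the cluster
// answers, so an unready cluster just requeues instead of logging an error.
func migrateDataIfReady(endpoint string, migrate func() error) (requeue bool, err error) {
	if !esReachable(endpoint) {
		return true, nil // try again on the next reconcile iteration
	}
	return false, migrate()
}
```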
make run
go generate ./pkg/... ./cmd/...
goimports -w pkg cmd
go vet ./pkg/... ./cmd/...
go run ./cmd/manager/main.go
{"level":"info","ts":1541597261.359024,"logger":"entrypoint","caller":"manager/main.go:21","msg":"setting up client for manager"}
{"level":"info","ts":1541597261.360708,"logger":"entrypoint","caller":"manager/main.go:29","msg":"setting up manager"}
{"level":"info","ts":1541597262.9435961,"logger":"entrypoint","caller":"manager/main.go:36","msg":"Registering Components."}
{"level":"info","ts":1541597262.9436312,"logger":"entrypoint","caller":"manager/main.go:39","msg":"setting up scheme"}
{"level":"info","ts":1541597262.943677,"logger":"entrypoint","caller":"manager/main.go:46","msg":"Setting up controller"}
{"level":"info","ts":1541597262.943923,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"stack-controller","Source":"kind source: /, Kind="}
{"level":"info","ts":1541597262.9442098,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"stack-controller","Source":"kind source: /, Kind="}
{"level":"info","ts":1541597262.944353,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"stack-controller","Source":"kind source: /, Kind="}
{"level":"info","ts":1541597262.94458,"logger":"kubebuilder.controller","caller":"controller/controller.go:120","msg":"Starting EventSource","Controller":"stack-controller","Source":"kind source: /, Kind="}
{"level":"info","ts":1541597262.944698,"logger":"entrypoint","caller":"manager/main.go:52","msg":"setting up webhooks"}
{"level":"info","ts":1541597262.9447172,"logger":"entrypoint","caller":"manager/main.go:59","msg":"Starting the Cmd."}
{"level":"info","ts":1541597263.1489651,"logger":"kubebuilder.controller","caller":"controller/controller.go:134","msg":"Starting Controller","Controller":"stack-controller"}
{"level":"info","ts":1541597263.253657,"logger":"kubebuilder.controller","caller":"controller/controller.go:153","msg":"Starting workers","Controller":"stack-controller","WorkerCount":1}
{"level":"info","ts":1541597290.767325,"logger":"stack-controller","caller":"stack/stack_controller.go:274","msg":"Created Pod stack-sample-es-mzgmvh9t6f","iteration":1}
{"level":"info","ts":1541597290.8207848,"logger":"stack-controller","caller":"stack/stack_controller.go:274","msg":"Created Pod stack-sample-es-vlb8kq58pf","iteration":1}
{"level":"info","ts":1541597290.873826,"logger":"stack-controller","caller":"stack/stack_controller.go:274","msg":"Created Pod stack-sample-es-dntfxxqr6j","iteration":1}
{"level":"info","ts":1541597290.873926,"logger":"stack-controller","caller":"stack/service_control.go:26","msg":"Creating service default/stack-sample-es-discovery","iteration":1}
{"level":"info","ts":1541597290.965258,"logger":"stack-controller","caller":"stack/service_control.go:26","msg":"Creating service default/stack-sample-es-public","iteration":1}
{"level":"info","ts":1541597291.0271301,"logger":"stack-controller","caller":"stack/deployment_control.go:77","msg":"Creating Deployment default/stack-sample-kibana","iteration":1}
{"level":"info","ts":1541597291.1224709,"logger":"stack-controller","caller":"stack/service_control.go:26","msg":"Creating service default/stack-sample-kb","iteration":1}
{"level":"error","ts":1541597291.25213,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597292.255729,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597293.261707,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597294.2684531,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597295.275768,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597296.2815711,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597297.2881322,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597298.295207,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597299.301722,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597300.588762,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597303.1528802,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597308.27569,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597317.761508,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597318.768116,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1541597319.774601,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"stack-controller","Request":"default/stack-sample","error":"Error during migrate data: Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host","errorVerbose":"Put http://stack-sample-es-public:9200/_cluster/settings: dial tcp: lookup stack-sample-es-public: no such host\nError during migrate data\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).DeleteElasticsearchPods\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:327\ngithub.com/elastic/stack-operators/pkg/controller/stack.(*ReconcileStack).Reconcile\n\t/Users/marc/go/src/github.com/elastic/stack-operators/pkg/controller/stack/stack_controller.go:147\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:207\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/usr/local/Cellar/go/1.11/libexec/src/runtime/asm_amd64.s:1333","stacktrace":"github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\ngithub.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/Users/marc/go/src/github.com/elastic/stack-operators/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
Battery style? Could we port the old Found-based one to be a generic, K8s-style DaemonSet feature?
discovery.zen.minimum_master_nodes
needs to be adjusted based on the desired number of master-eligible nodes in the cluster, to make sure the cluster does not disintegrate on downscales because the originally configured value can no longer be satisfied.
This should be done via the cluster settings API, as we would otherwise have to restart all the pods to update this setting, which is currently configured through an environment variable.
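A minimal sketch of what that API call could look like, assuming plain HTTP against the in-cluster service seen in the logs above; the function name and error handling are illustrative, not the actual operator code:

```go
package stack

import (
	"bytes"
	"fmt"
	"net/http"
)

// updateMinimumMasterNodes adjusts discovery.zen.minimum_master_nodes through
// the cluster settings API so no pod restart is needed. n should be the
// quorum of master-eligible nodes: (masterEligible / 2) + 1.
// esURL is an assumption, e.g. "http://stack-sample-es-public:9200".
func updateMinimumMasterNodes(esURL string, n int) error {
	body := fmt.Sprintf(`{"persistent": {"discovery.zen.minimum_master_nodes": %d}}`, n)
	req, err := http.NewRequest(http.MethodPut, esURL+"/_cluster/settings", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("updating minimum_master_nodes: %s", resp.Status)
	}
	return nil
}
```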
Use a deployment for now to do that.
For hot-warm and similar architectures it would be great to have some built-in index curation support to match ECE.
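This is not designed yet, but to make the idea concrete: curation for hot-warm essentially boils down to updating an index's shard allocation filter so its shards relocate to warm-tagged nodes (assuming nodes carry a node.attr.data attribute). A hedged sketch, where the attribute key and function name are assumptions:

```go
package stack

import (
	"fmt"
	"net/http"
	"strings"
)

// moveIndexToWarm points an index's shard allocation filter at warm-tagged
// nodes, the usual hot-warm mechanism. esURL and the "data" attribute key
// are illustrative, not settled design.
func moveIndexToWarm(esURL, index string) error {
	body := `{"index.routing.allocation.require.data": "warm"}`
	req, err := http.NewRequest(http.MethodPut, fmt.Sprintf("%s/%s/_settings", esURL, index), strings.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("moving %s to warm: %s", index, resp.Status)
	}
	return nil
}
```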
Conceptually, one or more maps that translate a stack version to the images for our stack.
Things to decide on:
Stack versions are similar to stack packs, providing concrete stack image names, available plugins, default plugins, node type support, platform/controller/operator version requirements, etc. (a rough sketch of the map follows below).
Depends on: #36
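As a starting point for that discussion, a rough sketch of the shape such a map could take in Go; every type, field, and entry here is purely illustrative:

```go
package stack

// StackImages groups the container images and plugin sets that make up one
// stack version. The type and field names are made up for illustration.
type StackImages struct {
	Elasticsearch  string
	Kibana         string
	DefaultPlugins []string
}

// stackVersions is the conceptual map from stack version to images.
var stackVersions = map[string]StackImages{
	"6.4.2": {
		Elasticsearch:  "docker.elastic.co/elasticsearch/elasticsearch:6.4.2",
		Kibana:         "docker.elastic.co/kibana/kibana:6.4.2",
		DefaultPlugins: []string{"repository-s3"},
	},
}
```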
We'd like to have snapshots taken at specified intervals for DR and backup purposes.
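Until there is first-class support, a hedged sketch of a periodic trigger, assuming a snapshot repository named "backups" has already been registered with the cluster; a real implementation would live in the reconcile loop or a CronJob, with auth and TLS:

```go
package stack

import (
	"fmt"
	"net/http"
	"time"
)

// snapshotEvery triggers a snapshot in the pre-registered "backups"
// repository at a fixed interval via PUT /_snapshot/backups/<name>.
// The repository name and esURL are assumptions for illustration.
func snapshotEvery(esURL string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		name := fmt.Sprintf("scheduled-%d", time.Now().Unix())
		url := fmt.Sprintf("%s/_snapshot/backups/%s", esURL, name)
		req, err := http.NewRequest(http.MethodPut, url, nil)
		if err != nil {
			continue
		}
		if resp, err := http.DefaultClient.Do(req); err == nil {
			resp.Body.Close()
		}
	}
}
```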
We have a rather flat structure in the elasticsearch package that could be broken down into multiple, more narrowly scoped packages.
We should also lighten the main controller file a bit.
Similar to the existing constructor logic: fmt, vet, compile, test, build the Docker image, push.
Decouple “one-time” actions (e.g. a cluster restart) from state-change reconciliation (e.g. an ES deployment or version upgrade), but make sure both cannot run at the same time.
Figure out how to handle this kind of “one-time” request, probably with sub-resources or CRD webhooks (one annotation-based alternative is sketched below). Related: https://groups.google.com/forum/#!msg/kubernetes-sig-api-machinery/wMblxpOSoiA/T96qSXwZBQAJ and https://kubernetes.io/blog/2018/07/27/kubevirt-extending-kubernetes-with-crds-for-virtualized-workloads/
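To make the request/consume semantics concrete, here is an annotation-based alternative, not necessarily what we would pick over sub-resources or webhooks; the annotation key and helper are invented:

```go
package stack

// restartAnnotation is an invented marker a user would set (e.g. with
// kubectl annotate) to request a one-time cluster restart.
const restartAnnotation = "stack.k8s.elastic.co/restart"

// consumeOneTimeAction reports whether a one-time action was requested and
// removes the marker so it cannot fire twice. The real operator would clear
// the annotation through the API server, and would also have to hold off
// regular state-change reconciliation while the action runs.
func consumeOneTimeAction(annotations map[string]string) bool {
	if _, requested := annotations[restartAnnotation]; !requested {
		return false
	}
	delete(annotations, restartAnnotation)
	return true
}
```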
It would be great to be able to pause the stack operator from doing anything, and then start reconciling again later. This could be useful for everything from "hey, I think we're having a cascading failure situation here, and things need to calm down before we continue" to "OK, for testing purposes, let's deliberately put the system in this specific state".
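A sketch of what a pause switch could look like as the very first check in Reconcile; the annotation key is invented for illustration:

```go
package stack

// pausedAnnotation is an invented annotation key a user could set on the
// stack resource to halt all reconciliation.
const pausedAnnotation = "stack.k8s.elastic.co/paused"

// shouldReconcile is meant to be the first check in Reconcile: while the
// resource is paused the operator does nothing at all.
func shouldReconcile(annotations map[string]string) bool {
	return annotations[pausedAnnotation] != "true"
}
```

Removing the annotation would then let the next reconcile run proceed normally.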
Perhaps using https://github.com/cloudflare/cfssl
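A sketch of bootstrapping a self-signed CA in-process with cfssl rather than shelling out; the CN is illustrative, and initca.New's signature should be verified against whatever cfssl version we would vendor:

```go
package stack

import (
	"github.com/cloudflare/cfssl/csr"
	"github.com/cloudflare/cfssl/initca"
)

// newSelfSignedCA generates a self-signed CA certificate and key in PEM
// form using cfssl's initca package. The common name is an assumption.
func newSelfSignedCA() (certPEM, keyPEM []byte, err error) {
	req := &csr.CertificateRequest{CN: "stack-operator-ca"}
	certPEM, _, keyPEM, err = initca.New(req)
	return certPEM, keyPEM, err
}
```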