
elasticsearch-cloud-deploy's Introduction

Deploy Elasticsearch on the cloud easily

This repository contains a set of tools and scripts to deploy an Elasticsearch cluster on the cloud, using best-practices and state of the art tooling.

Note: This branch supports Elasticsearch 7.x only. For other Elasticsearch versions see elasticsearch-5.x and elasticsearch-6.x branches.

You need to use the latest versions of Terraform and Packer for all features to work correctly.

Features:

  • Deployment of data and master nodes as separate nodes
  • Client node with Kibana, Cerebro, Grafana and authenticated Elasticsearch access
  • DNS and load-balancing access to client nodes
  • Sealed from external access; only accessible via password-protected, external-facing client nodes
  • AWS deployment support (under terraform-aws)
  • Azure deployment (under terraform-azure)
  • Google Cloud Platform deployment (coming soon)

Usage

Clone this repo to work locally. You might want to fork it in case you need to apply some additional configurations or commit changes to the variables file.

Create images with Packer (see the packer folder in this repo), then go into the relevant terraform folder and run terraform plan. See the README files in each respective folder. A minimal end-to-end run might look like the sketch below.
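The template and variable file names here are illustrative; check each folder's README for the real ones:

    cd packer
    packer build -var-file=variables.json elasticsearch7-node.packer.json   # template name illustrative
    cd ../terraform-aws                                                     # or terraform-azure
    terraform init
    terraform plan
    terraform apply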

tfstate

Once you run terraform apply in any of the terraform folders in this repo, a file terraform.tfstate will be created. This file contains the mapping between your cloud elements and the terraform configuration. Make sure to keep this file safe.

See this guide for a discussion on tfstate management and locking between team members.
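As a sketch of one such approach (all names below are placeholders), the state can be kept out of the working tree entirely with a remote backend that also provides locking:

    terraform {
      backend "s3" {
        bucket         = "my-tfstate-bucket"                            # placeholder
        key            = "elasticsearch-cloud-deploy/terraform.tfstate"
        region         = "us-east-1"                                    # placeholder
        dynamodb_table = "terraform-locks"                              # enables state locking
      }
    }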


elasticsearch-cloud-deploy's Issues

Kibana deployed with no config?

I ran terraform apply and out of the box I get “Kibana server is not ready”. I SSHed in and checked kibana.yml, and everything is commented out.

Adding logstash module

Need help adding Logstash for data ingestion, making this an ELK stack with the same module setup.

Count error with all resources like loadbalancer_id

The current Terraform code fails on resources that use count.

I get the following errors with Terraform 0.12:


Error: Missing resource instance key

  on lb.tf line 42, in resource "azurerm_lb_backend_address_pool" "clients-lb-backend":
  42:   loadbalancer_id = "${var.associate_public_ip == true ? azurerm_lb.clients-public.id : azurerm_lb.clients.id}"

Because azurerm_lb.clients-public has "count" set, its attributes must be
accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_lb.clients-public[count.index]


Error: Missing resource instance key

  on lb.tf line 42, in resource "azurerm_lb_backend_address_pool" "clients-lb-backend":
  42:   loadbalancer_id = "${var.associate_public_ip == true ? azurerm_lb.clients-public.id : azurerm_lb.clients.id}"

Because azurerm_lb.clients has "count" set, its attributes must be accessed on
specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_lb.clients[count.index]


Error: Missing resource instance key

  on lb.tf line 53, in resource "azurerm_lb_probe" "clients-httpprobe":
  53:   loadbalancer_id = "${var.associate_public_ip == true ? azurerm_lb.clients-public.id : azurerm_lb.clients.id}"

Because azurerm_lb.clients-public has "count" set, its attributes must be
accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_lb.clients-public[count.index]


Error: Missing resource instance key

  on lb.tf line 53, in resource "azurerm_lb_probe" "clients-httpprobe":
  53:   loadbalancer_id = "${var.associate_public_ip == true ? azurerm_lb.clients-public.id : azurerm_lb.clients.id}"

Because azurerm_lb.clients has "count" set, its attributes must be accessed on
specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_lb.clients[count.index]


Error: Missing required argument

  on masters.tf line 50, in resource "azurerm_virtual_machine_scale_set" "master-nodes":
  50:     ip_configuration {

The argument "primary" is required, but no definition was found.


Error: Missing resource instance key

  on single-node.tf line 57, in resource "azurerm_virtual_machine" "single-node":
  57:   network_interface_ids = ["${azurerm_network_interface.single-node.id}"]

Because azurerm_network_interface.single-node has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    azurerm_network_interface.single-node[count.index]
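For what it's worth, the classic Terraform 0.12 workaround for a conditional between two counted resources is to splat-and-concat instead of referencing .id directly. A sketch against lb.tf, assuming exactly one of the two load balancers is ever created:

    loadbalancer_id = element(
      concat(azurerm_lb.clients-public.*.id, azurerm_lb.clients.*.id),
      0,
    )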

Invalid value "variables" for flag -var-file

When I run packer build -only=azure-rm -var-file=variables.json elasticsearch5-node.packer.json, it gives me the error above, stating open variables: The system cannot find the file specified. I am running Packer from the directory where the file is located, but it looks like it is ignoring the .json extension.

Kibana reported that ES is not working in EC2

Hi,
I've deployed the TF in my AWS environment, and when I logged in to Kibana I found that ES and Logstash are not working at all.

plugin:[email protected] - Service Unavailable
plugin:[email protected] - Elasticsearch cluster did not respond with license information

I also get multiple errors on x-pack services with the same errors on license information.

I've tested the network from the coordinating node (aka client) to the masters and to the data nodes, and verified that I can reach ports 9300 and 9200.
What else should I check to get it working?
[screenshot from 2018-07-08 19-07-30]

Required plugin discovery-ec2 not installed on single-node setup (AWS)

It seems that the discovery-ec2 plugin is required here

Which I assume is fine for the multi-node Elasticsearch cluster (I have not tried it yet); however, in the single-node deployment on AWS the instance is run with the Kibana AMI, where the plugin is not installed, causing an error when the elasticsearch service starts on the instance.

I would make a PR, but I'm not sure what the correct approach is here: should the plugin be installed on the Kibana AMI as well? Happy to implement this if that is the case; otherwise I might need some advice.

Packer fails to build the Kibana AMI

I am currently trying to build the kibana6 AMI in ap-southeast-2:
packer build -only=amazon-ebs -var-file=variables.json kibana6-node.packer.json

The build gets stuck at amazon-ebs: Executing /lib/systemd/systemd-sysv-install enable kibana and waits there indefinitely:

 amazon-ebs: 0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
    amazon-ebs: Need to get 208 MB of archives.
    amazon-ebs: After this operation, 503 MB of additional disk space will be used.
    amazon-ebs: Get:1 https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 kibana amd64 6.5.2 [208 MB]
    amazon-ebs: debconf: unable to initialize frontend: Dialog
    amazon-ebs: debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
    amazon-ebs: debconf: falling back to frontend: Readline
    amazon-ebs: debconf: unable to initialize frontend: Readline
    amazon-ebs: debconf: (This frontend requires a controlling tty.)
    amazon-ebs: Fetched 208 MB in 30s (6,819 kB/s)
    amazon-ebs: debconf: falling back to frontend: Teletype
    amazon-ebs: dpkg-preconfigure: unable to re-open stdin:
    amazon-ebs: Selecting previously unselected package kibana.
    amazon-ebs: (Reading database ... 59295 files and directories currently installed.)
    amazon-ebs: Preparing to unpack .../kibana_6.5.2_amd64.deb ...
    amazon-ebs: Unpacking kibana (6.5.2) ...
    amazon-ebs: Processing triggers for systemd (229-4ubuntu21.10) ...
    amazon-ebs: Processing triggers for ureadahead (0.100.0-19) ...
    amazon-ebs: Setting up kibana (6.5.2) ...
    amazon-ebs: Processing triggers for systemd (229-4ubuntu21.10) ...
    amazon-ebs: Processing triggers for ureadahead (0.100.0-19) ...
    amazon-ebs: Synchronizing state of kibana.service with SysV init with /lib/systemd/systemd-sysv-install...
    amazon-ebs: Executing /lib/systemd/systemd-sysv-install enable kibana

I commented out the line #bin/kibana-plugin install x-pack || true because it was returning an error message, but that is unrelated to the issue I am facing.
I tried a t2.large instance with the same result.

auth failure when xpack enabled

Testing out the repo (single node, AWS): things work fine when the variable "security_enabled" is "false". Setting it to true to enable x-pack, I'm unable to log in with the exampleuser credentials:

{"statusCode":401,"error":"Unauthorized","message":"[security_exception] unable to authenticate user [exampleuser] for REST request [/_xpack/security/_authenticate], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } }"}

For reference, I'm running Terraform v0.11.5. I'm relatively new to Elasticsearch, so forgive me if this is expected. I tried configuring x-pack security per the document below, but did not have any success.

https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html

Please let me know if I can post any additional information to help.
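One thing worth checking is whether the built-in users were ever initialized. A sketch, assuming Elasticsearch 6.x with the default deb/rpm layout, run on the Elasticsearch node:

    # interactively sets passwords for the built-in users (elastic, kibana, ...)
    sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive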

ELB cannot be attached to multiple subnets in the same AZ

Current Setup:

  • VPC in us-east-1
  • private & public subnet in each availability zone

When explicitly specifying a list of availability zones through the "availability_zone" variable, and then defining the private subnets in those zones through the "vpc_subnets" variable, deployment fails with the message below:

1 error(s) occurred:

* aws_elb.es_client_lb: 1 error(s) occurred:

* aws_elb.es_client_lb: InvalidConfigurationRequest: ELB cannot be attached to multiple subnets in the same AZ.
	status code: 409, request id:

This is likely due to the VPC subnets being pulled automatically from the VPC, with the availability_zone variable not being respected. If this is by design, it needs to be documented.

Workaround:

  • Delete all public subnets from the VPC
  • Deploy ES using terraform scripts
  • Reconfigure public subnets

unknown setting [cloud.aws.region]

Nodes won't start as long as cloud.aws.region is in elasticsearch.yml; removing this entry solves the issue, however the discovery service then won't work outside of us-east-1. Was this a breaking change between 5.x and 6.x?

[2018-02-07T02:24:42,643][INFO ][o.e.p.PluginsService     ] [ip-10-0-9-209] loaded plugin [x-pack]
[2018-02-07T02:24:44,278][ERROR][o.e.b.Bootstrap          ] Exception
java.lang.IllegalArgumentException: unknown setting [cloud.aws.region] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
        at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:312) ~[elasticsearch-6.1.3.jar:6.1.3]
        at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:276) ~[elasticsearch-6.1.3.jar:6.1.3]
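It was indeed a breaking change: the cloud.aws.* settings were removed in 6.0 in favor of plugin-specific namespaces. A sketch of the 6.x equivalents in elasticsearch.yml (the endpoint value is illustrative, and it is what makes discovery work outside us-east-1):

    discovery.zen.hosts_provider: ec2
    discovery.ec2.endpoint: ec2.eu-west-1.amazonaws.com   # illustrative region endpoint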

ec2-discovery not configured correctly in network.host

The current configuration provided in the user-data for network.host is:
network.host: ec2:privateIpv4,localhost

This causes the client node not to see the other nodes in the cluster, with the following error:
[o.e.d.z.ZenDiscovery ] [Elasticsearch-Master] not enough master nodes discovered during pinging.

Removing localhost and adding ec2.endpoint partially resolves the issue: the client node is then able to ping all the masters, however Kibana reports that it cannot access Elasticsearch on http://localhost:9200.

To solve this, on the client node I had to change it from ec2:privateIpv4 to 0.0.0.0, and everything started to work.
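A sketch of the resulting elasticsearch.yml settings, taken from the report; note that the documented spelling of the ec2 special value is underscore-delimited:

    # client node (from the report)
    network.host: 0.0.0.0

    # master/data nodes
    network.host: _ec2:privateIpv4_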

Packer hangs on building ES (grub-pc)

Packer hangs when running the build command for Elasticsearch:

    amazon-ebs: Setting up grub2-common (2.02~beta2-36ubuntu3.16) ...
    amazon-ebs: Setting up grub-pc-bin (2.02~beta2-36ubuntu3.16) ...
    amazon-ebs: Setting up grub-pc (2.02~beta2-36ubuntu3.16) ...
    amazon-ebs: debconf: unable to initialize frontend: Dialog
    amazon-ebs: debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
    amazon-ebs: debconf: falling back to frontend: Readline
    amazon-ebs: Configuring grub-pc
    amazon-ebs: -------------------
    amazon-ebs:
    amazon-ebs: A new version (/tmp/grub.ZtqN4eeYBC) of configuration file /etc/default/grub is
    amazon-ebs: available, but the version installed currently has been locally modified.
    amazon-ebs:
    amazon-ebs:   1. install the package maintainer's version
    amazon-ebs:   2. keep the local version currently installed
    amazon-ebs:   3. show the differences between the versions
    amazon-ebs:   4. show a side-by-side difference between the versions
    amazon-ebs:   5. show a 3-way difference between available versions
    amazon-ebs:   6. do a 3-way merge between available versions (experimental)
    amazon-ebs:   7. start a new shell to examine the situation
    amazon-ebs:

Workaround

  • Comment out the grub and upgrade lines in update_machine.sh
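Alternatively, the upgrade can be made fully noninteractive while keeping the locally modified grub config. A sketch for update_machine.sh:

    # keep existing config files and never prompt during upgrades
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
      -o Dpkg::Options::="--force-confdef" \
      -o Dpkg::Options::="--force-confold" \
      upgrade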


Data nodes are missing perms to /opt/elasticsearch/data

Data nodes won't start

[2018-02-07T17:22:24,081][INFO ][o.e.n.Node               ] [ip-10-0-7-212] initializing ...
[2018-02-07T17:22:24,089][ERROR][o.e.b.Bootstrap          ] Exception
java.lang.IllegalStateException: Failed to create node environment
        at org.elasticsearch.node.Node.<init>(Node.java:267) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) [elasticsearch-cli-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.0.jar:6.2.0]
Caused by: java.nio.file.AccessDeniedException: /opt/elasticsearch/data/nodes
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:?]
        at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_161]
        at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_161]
        at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_161]
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:204) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.0.jar:6.2.0]
        ... 11 more
[2018-02-07T17:22:24,097][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [ip-10-0-7-212] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: Failed to create node environment
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.0.jar:6.2.0]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.0.jar:6.2.0]
Caused by: java.lang.IllegalStateException: Failed to create node environment
        at org.elasticsearch.node.Node.<init>(Node.java:267) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.0.jar:6.2.0]
        ... 6 more
Caused by: java.nio.file.AccessDeniedException: /opt/elasticsearch/data/nodes
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384) ~[?:?]
        at java.nio.file.Files.createDirectory(Files.java:674) ~[?:1.8.0_161]
        at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781) ~[?:1.8.0_161]
        at java.nio.file.Files.createDirectories(Files.java:767) ~[?:1.8.0_161]
        at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:204) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.node.Node.<init>(Node.java:246) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:213) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.0.jar:6.2.0]
        ... 6 more

If memory serves correctly, permissions should be applied after mounting the EBS volume, at least on the first mount, since ownership defaults to root when the volume is first mounted.
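A sketch of that fix in the user-data mount step, assuming the service runs as the elasticsearch user:

    # after formatting and mounting the EBS volume
    sudo mkdir -p /opt/elasticsearch/data
    sudo chown -R elasticsearch:elasticsearch /opt/elasticsearch/data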

EC2 check does not work on some instance types

The hypervisor UUID check does not work on i3.2xlarge, so Packer fails to install the discovery-ec2 plugin.
Adding the following before the Azure check fixed it in my local environment:

elif [[ `dmidecode --string system-uuid | head -c 3` == "EC2" ]]; then
  # install AWS-specific plugins only if running on AWS
  # see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/identify_ec2_instances.html
  sudo bin/elasticsearch-plugin install --batch discovery-ec2
  sudo bin/elasticsearch-plugin install --batch repository-s3

Maybe we can add this or a better check.
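A check that sidesteps hypervisor quirks entirely is the EC2 instance identity document, which exists on every instance type. A sketch:

    # see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/identify_ec2_instances.html
    if curl -sf --connect-timeout 2 \
        http://169.254.169.254/latest/dynamic/instance-identity/document >/dev/null; then
      # install AWS-specific plugins only if running on AWS
      sudo bin/elasticsearch-plugin install --batch discovery-ec2
      sudo bin/elasticsearch-plugin install --batch repository-s3
    fi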

Single node deployment issue

This completely fails out of the box; Kibana doesn't even work. How can you release code that has Kibana checking localhost:9200 when Elasticsearch isn't even running on that VM?

Cannot get through nginx auth

When I ran the default single-node setup in AWS, I was unable to authenticate through nginx with the user/password given in the README (exampleuser/changeme).

I also tried the randomly generated password that is output at the end of terraform apply, but that did not work either.

Not being particularly familiar with nginx, it was rather annoying to run into this issue. I think this is a great repository that will be useful to a lot of people, but if it worked out of the box and the documentation were accurate it would be much more accessible :)

I had to create a new user/password to get access in the end.
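For anyone hitting the same thing, adding your own basic-auth user looks roughly like this (a sketch; the htpasswd file path is an assumption and depends on the nginx config baked into the image):

    # requires apache2-utils; the .htpasswd path is an assumption
    sudo htpasswd /etc/nginx/.htpasswd myuser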

Disable ingest exporter on disabled ingest nodes

Nodes flagged with

node.ingest: false

must also have ingest disabled in the default local exporter:

xpack.monitoring.exporters.my_local:
  type: local
  use_ingest: false

Otherwise the following error will occur:

[2018-02-07T16:54:02,572][INFO ][o.e.n.Node               ] [ip-10-0-7-247] started
[2018-02-07T16:54:07,417][WARN ][o.e.x.m.MonitoringService] [ip-10-0-7-247] monitoring execution failed
org.elasticsearch.xpack.monitoring.exporter.ExportException: Exception when closing export bulk
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1$1.<init>(ExportBulk.java:107) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$1.onFailure(ExportBulk.java:105) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:218) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound$1.onResponse(ExportBulk.java:212) ~[?:?]
        at org.elasticsearch.xpack.core.common.IteratingActionListener.onResponse(IteratingActionListener.java:108) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0(ExportBulk.java:176) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:68) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$1(LocalBulk.java:127) ~[?:?]
        at org.elasticsearch.action.ActionListener$1.onFailure(ActionListener.java:68) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.action.support.ContextPreservingActionListener.onFailure(ContextPreservingActionListener.java:50) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:91) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:173) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:405) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:482) ~[elasticsearch-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin(ClientHelper.java:73) ~[x-pack-core-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.doFlush(LocalBulk.java:120) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flush(ExportBulk.java:72) ~[?:?]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$1(ExportBulk.java:166) ~[?:?]
        at org.elasticsearch.xpack.core.common.IteratingActionListener.run(IteratingActionListener.java:93) [x-pack-core-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.doFlush(ExportBulk.java:182) [x-pack-monitoring-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.flushAndClose(ExportBulk.java:96) [x-pack-monitoring-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.ExportBulk.close(ExportBulk.java:86) [x-pack-monitoring-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.exporter.Exporters.export(Exporters.java:205) [x-pack-monitoring-6.2.0.jar:6.2.0]
        at org.elasticsearch.xpack.monitoring.MonitoringService$MonitoringExecution$1.doRun(MonitoringService.java:231) [x-pack-monitoring-6.2.0.jar:6.2.0]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.2.0.jar:6.2.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_161]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:566) [elasticsearch-6.2.0.jar:6.2.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]

Kibana Image not getting created

I am getting this error when creating the image with Packer for Azure:

3 error(s) occurred:

  • An image_publisher must be specified
  • An image_offer must be specified
  • An image_sku must be specified
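Those are the azure-arm builder's base-image fields. A sketch of plausible values for an Ubuntu 16.04 base, set in the builder section of the template or passed via -var:

    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS"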

Terraform 0.12 Compatibility, Support for Open Distro, Implement Elasticsearch Best Practices and more

Hi there,

This project is fantastic and well documented, and it was a great starter for my project. I've since added to it and will send over a pull request, but wanted to highlight the issues and changes I'd like to see:

Issues:

  • Minor changes are needed for the project to work with Terraform 0.12.
  • The Elasticsearch Packer builder needed shell tweaks to support multiple environment variables
  • The build currently installs Python 2.x, which is officially unsupported from 2020; based on usage I see no real need for it

Enhancements:

  • I've added a variable to Packer to support Open Distro for Elasticsearch; if true, it is installed along with the OSS version
  • Likewise for Kibana
  • The userdata.sh template has been enhanced to support Open Distro for Elasticsearch
  • Added a conditional variable for client nodes to act purely as Elasticsearch clients (e.g. no Kibana, nginx, etc.)
  • Elasticsearch best practice documents using instance store over EBS; I've made changes to handle this

Work in Progress:

  • Move from ELB Classic to ALB
  • Add HTTPS: self-signed certificates for client nodes, and ACM for the load balancer
  • Updates to documentation

Purpose of var.elasticsearch_volume_size

Could you please elaborate on the purpose of var.elasticsearch_volume_size when, by default, the data path ends up on nvme1n1, which is 493G:

root@ip-10-0-44-42:~# df -Ph /opt/elasticsearch/data
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme1n1    493G   70M  467G   1% /opt/elasticsearch/data

on c5.2xlarge

root@ip-10-0-44-42:~# curl -sf http://169.254.169.254/latest/meta-data/instance-type | xargs
c5.2xlarge

Red cluster after deployment

I'm trying to set up the cluster in the default single-node mode. After completion, I go to Kibana at
http://<MY-EC2-IP>:8080
and I have a red cluster with the following:

[screenshot attached]

Can you tell me what I'm missing?

Thanks a lot

Access to client nodes

Great work, and I'll support where I can.

Just wondering how you envisioned access to the private subnet. Running the production variant provides an internal DNS name such as internal-domain-es-client-lb-248273487.eu-west-2.elb.amazonaws.com. However, I did notice a kibana public security group.

I was just wondering whether you planned to use a bastion host or some other VPN method to access the cluster for administration and Kibana on port 8080.
Regards,

pmoc

Recommendations for scaling up/down and disk size changes

Hi, I'm having trouble with scaling. I can add N nodes fine, and drop one at a time and wait for shards to reallocate. But if I try to increase the disk size and add N nodes, the new nodes don't join the cluster. Is there a way to give the ASG some intelligence around scaling Elasticsearch safely, mainly when changing disk size? Or is it best to just create a new cluster for disk-size changes?

Some guidelines for scaling safely will be much appreciated. Or pointers to docs and code that explain this. Thanks.

There is no variable named "DEVICE_NAME"

Hi,

this repo has been very helpful. Thanks for that.

However, when I try to deploy it I get this error on terraform apply:

4 errors occurred:

* data.template_file.data_userdata_script: data.template_file.data_userdata_script: failed to render : <template_file>:91,41-52: Unknown variable; There is no variable named "DEVICE_NAME"., and 2 other diagnostic(s)
* data.template_file.client_userdata_script: data.template_file.client_userdata_script: failed to render : <template_file>:91,41-52: Unknown variable; There is no variable named "DEVICE_NAME"., and 2 other diagnostic(s)
* data.template_file.single_node_userdata_script: data.template_file.single_node_userdata_script: failed to render : <template_file>:91,41-52: Unknown variable; There is no variable named "DEVICE_NAME"., and 2 other diagnostic(s)
* data.template_file.master_userdata_script: data.template_file.master_userdata_script: failed to render : <template_file>:91,41-52: Unknown variable; There is no variable named "DEVICE_NAME"., and 2 other diagnostic(s)

Any clues about what might be causing it? I have changed the instance types on AWS; maybe it's related to that.
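The template apparently references ${DEVICE_NAME}, so each template_file data source has to pass it in its vars map. A sketch in 0.12 syntax (the template path and the device value are illustrative, and the right device name does depend on the instance type):

    data "template_file" "data_userdata_script" {
      template = file("${path.module}/templates/user_data.sh")   # path illustrative
      vars = {
        # Nitro instances (c5/m5/...) expose EBS volumes as NVMe devices
        DEVICE_NAME = "/dev/nvme1n1"
        # ...plus whatever else the template expects
      }
    }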

No data nodes running

I may be a bit of a n00b, but I'm facing a strange issue.

I've modified the variables to run a configuration with 3 master nodes, 3 data nodes and 1 client. plan and apply go smoothly and all items are created, but...

I have no data nodes running, despite the autoscaling group and launch configuration for data nodes being there. The client node is running, the 3 masters are running, but no data nodes are created, and I get no errors or anything like it.

I'm wondering if someone has faced this situation before?

Azure templates not working - maybe deprecated?

Hello there.

I hope you guys are doing fine. After the easy fixes, I'm stuck here. [1]

  • Terraform v0.12.8
  • "azurerm" (hashicorp/azurerm) 1.34.0...
  • "random" (hashicorp/random) 2.2.1...
  • "template" (hashicorp/template) 2.1.2...

Linux born-Surface-Book-2 5.1.15-surface-linux-surface #8 SMP Thu Jun 27 12:03:55 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux

I changed the node counts in variables.tf to 5; masters_count triggers the first error:

Call to function "format" failed: unsupported value for "%d" at 0: an integer

[1] Error:

Error: Error in function call

  on clients.tf line 14, in data "template_file" "client_userdata_script":
  14:     minimum_master_nodes    = "${format("%d", var.masters_count / 2 + 1)}"
    |----------------
    | var.masters_count is "1"

Call to function "format" failed: unsupported value for "%d" at 0: an integer
is required.


Error: Error in function call

  on datas.tf line 14, in data "template_file" "data_userdata_script":
  14:     minimum_master_nodes    = "${format("%d", var.masters_count / 2 + 1)}"
    |----------------
    | var.masters_count is "1"

Call to function "format" failed: unsupported value for "%d" at 0: an integer
is required.

  on main.tf line 1, in provider "azurerm":
   1: provider "azurerm" {

Error: Error in function call

  on masters.tf line 14, in data "template_file" "master_userdata_script":
  14:     minimum_master_nodes    = "${format("%d", var.masters_count / 2 + 1)}"
    |----------------
    | var.masters_count is "1"

Call to function "format" failed: unsupported value for "%d" at 0: an integer

Just to make it run, I changed masters_count to 6:

variable "masters_count" {
  default = "6"
}

variable "datas_count" {
  default = "5"
}

Then, just to take a look at it running:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.master_userdata_script: Refreshing state...
data.template_file.data_userdata_script: Refreshing state...
data.azurerm_image.elasticsearch: Refreshing state...
data.azurerm_image.kibana: Refreshing state...

Error: failed to render : :5,9-23: Unknown variable; There is no variable named "bootstrap_node"., and 10 other diagnostic(s)

  on datas.tf line 1, in data "template_file" "data_userdata_script":
   1: data "template_file" "data_userdata_script" {



Error: Error: Unable to list images for Resource Group "packer-elasticsearch-images"

  on images.tf line 1, in data "azurerm_image" "elasticsearch":
   1: data "azurerm_image" "elasticsearch" {



Error: Error: Unable to list images for Resource Group "packer-elasticsearch-images"

  on images.tf line 7, in data "azurerm_image" "kibana":
   7: data "azurerm_image" "kibana" {



Error: failed to render : :5,9-23: Unknown variable; There is no variable named "bootstrap_node"., and 10 other diagnostic(s)

  on masters.tf line 1, in data "template_file" "master_userdata_script":
   1: data "template_file" "master_userdata_script" {
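The underlying problem with the format error is that in Terraform 0.12 the division yields a fractional number, which %d refuses. A sketch of a fix is to floor the quotient (or declare masters_count as a number instead of a string):

    minimum_master_nodes = format("%d", floor(var.masters_count / 2) + 1)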

Unexpected operator error in install-cloud-plugin.sh

The logic that checks whether we are on an EC2 instance vs. Azure fails, which results in the plugin installations being skipped:

==> amazon-ebs: Provisioning with shell script: install-cloud-plugin.sh
    amazon-ebs: /tmp/script_2282.sh: 7: [: ec2: unexpected operator
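That message is the classic sign of a bashism running under plain /bin/sh: [ x == y ] is not POSIX. A sketch of the portable form ("$cloud" is an illustrative variable name):

    # POSIX sh has no '=='; use a single '=' inside [ ], or give the script a bash shebang
    if [ "$cloud" = "ec2" ]; then
      echo "running on EC2"
    fi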

`Error: Unsupported block type` with Terraform v0.12.6

When running terraform plan from the terraform-aws directory using Terraform v0.12.6, I get many of these errors:

Error: Unsupported block type

  on client.tf line 4, in data "template_file" "client_userdata_script":
   4:   vars {

Blocks of type "vars" are not expected here. Did you mean to define argument
"vars"? If so, use the equals sign to assign it a value.

It seems like a syntax incompatibility.

Any ideas?
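It is: in Terraform 0.12, vars became an argument rather than a nested block, exactly as the message suggests. A sketch of the fix (the template path and variable are illustrative):

    data "template_file" "client_userdata_script" {
      template = file("${path.module}/templates/user_data.sh")   # path illustrative
      vars = {   # '=' is required in 0.12; 'vars { ... }' was 0.11 syntax
        es_cluster = var.es_cluster   # illustrative variable
      }
    }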

Immediate packer build fail

Following your article, after I created a Service Principal and the Resource Group, I ran packer build for elasticsearch6-node.packer.json; however, it immediately fails:

azure-arm output will be in this color.

==> azure-arm: Running builder ...
==> azure-arm: Getting tokens using client secret
azure-arm: Creating Azure Resource Manager (ARM) client ...
==> azure-arm: ERROR: -> ResourceNotFound : The Resource 'Microsoft.Compute/images/elasticsearch6-2019-01-31T115715' under resource group 'packer-elasticsearch-images' was not found.
==> azure-arm:
==> azure-arm: resources.GroupsClient#CheckExistence: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "" error: EOF
Build 'azure-arm' errored: resources.GroupsClient#CheckExistence: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "" error: EOF

==> Some builds didn't complete successfully and had errors:
--> azure-arm: resources.GroupsClient#CheckExistence: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "" error: EOF

==> Builds finished but no artifacts were created.

required field is not set error when running terraform plan (Azure)

I get these errors when trying to create the cluster; this is the output of terraform plan:

Error: azurerm_virtual_machine_scale_set.data-nodes: "network_profile": required field is not set

Error: azurerm_virtual_machine_scale_set.master-nodes: "network_profile.0.ip_configuration.0.primary": required field is not set
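Newer azurerm providers require the primary flags to be set explicitly inside the scale set's network profile. A sketch of the expected shape (all names are illustrative):

    network_profile {
      name    = "es-net-profile"                  # illustrative
      primary = true
      ip_configuration {
        name      = "es-ip-config"                # illustrative
        primary   = true                          # the missing required argument
        subnet_id = azurerm_subnet.es-subnet.id   # illustrative
      }
    }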
