elasticsearch's People

Contributors

antonbormotov avatar daghack avatar docker-library-bot avatar ebuildy avatar gerardorochin avatar ggtools avatar hikariii avatar jarpy avatar jasontedor avatar jethr0null avatar joelwurtz avatar khezen avatar laurentgoderre avatar mausch avatar md5 avatar samhogg avatar skyred avatar tianon avatar webwurst avatar yosifkit avatar


elasticsearch's Issues

Git versions don't match tags in dockerhub

Currently this could happen:

  1. I set elasticsearch up on a docker host in my production environment using the image from dockerhub. Works fine.
  2. Someone makes a breaking change on GitHub without my knowledge and publishes it to Docker Hub, overwriting the image that I had previously pulled down, i.e. it may pass their tests but fail for my configuration.
  3. Production server restarts for some reason. Docker, noticing that the remote version with the same tag is different, pulls down the latest version with the breaking change, breaking the production server.

This happened to me recently when using the 'official' RabbitMQ image on Docker Hub, so now we keep our own version in our own Docker repo - I'd prefer not to have to do this, though.

A way around it would be to use tags that indicate the version on GitHub, either an explicit version or a commit hash. That way, when someone makes a change on GitHub, it is reflected in Docker Hub.

The tags used in Docker Hub could be something like 2.2 (elasticsearch version) - a4fe2 (Dockerfile commit hash).

What do ye think about this?
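One way to approximate this today, independent of any tagging change (a hedged sketch, assuming your deploy process can carry a digest): resolve the tag you have tested to its immutable digest and run production from that, so a re-pushed tag cannot silently change what gets pulled.

```shell
# Resolve the currently-pulled tag to its content-addressed digest,
# then deploy by digest instead of by mutable tag.
docker pull elasticsearch:2.2
docker inspect --format '{{index .RepoDigests 0}}' elasticsearch:2.2
# run by the printed digest, e.g.:
# docker run elasticsearch@sha256:<digest>
```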

FailedToResolveConfigException[Failed to resolve config path]

I am using the official elasticsearch Docker image to run my elasticsearch instance.
I am trying to create an index with some filters which require loading files (synonym filter, stopword filter, ...). None of these filters works, because I get errors such as this one:

{
   "error": "RemoteTransportException[[Hood][inet[/172.17.0.6:9300]][indices:admin/create]]; nested: IndexCreationException[[admin] failed to create index]; nested: FailedToResolveConfigException[Failed to resolve config path [/usr/share/elasticsearch/config/synonyms/sk_SK.txt], tried file path [/usr/share/elasticsearch/config/synonyms/sk_SK.txt], path file [/opt/logstash/config/usr/share/elasticsearch/config/synonyms/sk_SK.txt], and classpath]; ",
   "status": 500
}

this is my code to create the index:

POST /admin
{
   "settings": {
      "analysis": {
         "filter": {
           "synonym_filter": {
              "type": "synonym",
              "synonyms_path": "/usr/share/elasticsearch/config/synonyms/sk_SK.txt",
              "ignore_case": true
           },
           "sk_SK" : {
              "type" : "hunspell",
              "locale" : "sk_SK",
              "dedup" : true,
              "recursion_level" : 0
            },
            "nGram_filter": {
               "type": "nGram",
               "min_gram": 2,
               "max_gram": 20,
               "token_chars": [
                  "letter",
                  "digit",
                  "punctuation",
                  "symbol"
               ]
            }
         },
         "analyzer": {
            "slovencina_synonym": {
              "type": "custom",
              "tokenizer": "standard",
              "filter": [
                "lowercase",
                "synonym_filter",
                "asciifolding"
                ]
            },
            "slovencina": {
              "type": "custom",
              "tokenizer": "standard",
              "filter": [
                "lowercase",
                "asciifolding"
                ]
            },
            "nGram_analyzer": {
               "type": "custom",
               "tokenizer": "whitespace",
               "filter": [
                  "lowercase",
                  "asciifolding",
                  "nGram_filter"
               ]
            },
            "whitespace_analyzer": {
               "type": "custom",
               "tokenizer": "whitespace",
               "filter": [
                  "lowercase",
                  "asciifolding"
               ]
            }
         }
      }
   }
}

The file does, of course, exist at that path inside the Docker container.

I was googling the error and read some posts saying the problem is file/folder permissions, so I tried the following:

sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/config/*
chmod o+x /usr
chmod o+x /usr/share
chmod o+x /usr/share/elasticsearch
chmod o+x /usr/share/elasticsearch/config/

It still did not work after this.

Since this is happening inside a Docker container that uses the official elasticsearch image, I consider this a bug: it should work out of the box. I have not found any solution, so I would appreciate it if anyone could post a workaround.
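One workaround worth trying (a hedged sketch, assuming the synonyms file is mounted under the image's config directory, e.g. `-v $PWD/synonyms:/usr/share/elasticsearch/config/synonyms`): elasticsearch resolves a *relative* `synonyms_path` against its config directory, which sidesteps the absolute-path/classpath resolution that is failing here.

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "synonym_filter": {
          "type": "synonym",
          "synonyms_path": "synonyms/sk_SK.txt",
          "ignore_case": true
        }
      }
    }
  }
}
```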

Container eats too much disk

I'm running this container on Ubuntu 14.04.2 with Docker 1.5.0, and it eats a lot of space over time.

/dev/disk/by-label/cloudimg-rootfs seems to eat all free space, and then the container stops working.

Any idea what might be the cause, or where to look for more info?
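A place to start looking, as a hedged sketch (paths assume a default Docker install): Docker keeps image layers, volumes, and per-container json log files under /var/lib/docker, and oversized container logs are a frequent culprit.

```shell
# Break down disk use under Docker's storage root (run as root to see it all).
du -sh /var/lib/docker/* 2>/dev/null | sort -h
# If a container's json log has ballooned, truncating it frees the space:
#   truncate -s 0 /var/lib/docker/containers/<id>/<id>-json.log
```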

Not able to set mappings via JSON file

I'm having trouble setting up a config mapping file as explained here. I'm able to set it up on my host machine, but when applied to my Elasticsearch container (running this base image), it has no effect. I may be placing the mapping file in the wrong directory in my container.

I am setting the mapping file in /usr/share/elasticsearch/config/mappings/<index name>/<mapping>.json

Automatically set `network.publish_host` during startup

From my understanding, this Docker image cannot be used in a cluster environment, as the instances expose their internal (container) IP address rather than that of the host. So this will simply not work out of the box. I've played with various solutions, e.g. passing "-Des.network.publish_host=..." when invoking the docker run command, but it's cumbersome.

Therefore I suggest updating docker-entrypoint.sh to set the network.publish_host config setting automatically during startup. This should make things a lot easier for everybody.

Something like this should work:

#!/bin/bash

set -e

NETWORK_PUBLISH_HOST=${NETWORK_PUBLISH_HOST:-$(/sbin/ip route | awk '/default/ { print $3 }')}

# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
    set -- elasticsearch
fi

# Drop root privileges if we are running elasticsearch
if [ "$1" = 'elasticsearch' ]; then
    # Change the ownership of /usr/share/elasticsearch/data to elasticsearch
    chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
    exec gosu elasticsearch "$@" -Des.network.publish_host="$NETWORK_PUBLISH_HOST"
fi

# As argument is not related to elasticsearch,
# then assume that the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "$@"

This way it would be even possible to overwrite the host by providing a NETWORK_PUBLISH_HOST environment variable.

Any thoughts or suggestions?
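The override hinges on bash's `${VAR:-default}` expansion used in the proposed entrypoint. A minimal demonstration (the function name is just for illustration; the `ip route` fallback requires /sbin/ip inside the image):

```shell
# An explicit NETWORK_PUBLISH_HOST wins; otherwise the default-route probe runs.
detect_publish_host() {
    echo "${NETWORK_PUBLISH_HOST:-$(/sbin/ip route 2>/dev/null | awk '/default/ { print $3 }')}"
}

NETWORK_PUBLISH_HOST=192.0.2.10
detect_publish_host   # prints 192.0.2.10: the override short-circuits the probe
```

Because the fallback only runs when the variable is unset or empty, `docker run -e NETWORK_PUBLISH_HOST=... elasticsearch` would take precedence over auto-detection.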

It's currently hard to use custom config settings with the official elasticsearch images

Currently the logging config is copied into the image before it is uploaded to Docker Hub, so in order to change the configs one has to add that instruction to their own Dockerfile manually.

I propose adding an ONBUILD instruction in front of the COPY config... command so that it would be much easier to reuse the official elasticsearch images. See #34 for a suggested implementation. This is very similar to the python onbuild images, where requirements.txt is copied at build time and saves users time.
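The proposal, as a hedged sketch (the image tag and config path mirror this repo; whether upstream would publish such a variant is an open question):

```dockerfile
# Hypothetical "onbuild" variant: the COPY executes in the *downstream*
# user's build, picking up their local config directory automatically.
FROM elasticsearch:2.2
ONBUILD COPY config /usr/share/elasticsearch/config
```

A downstream Dockerfile would then shrink to a single `FROM` line plus a local config/ directory.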

Invalid line-return character in docker-entrypoint.sh prevents using a data volume

Linked to #5 and #6, it would appear that docker-entrypoint.sh contains an invalid line return that leaves the "chown" command on the same line as the preceding comment, so the command never runs and startup fails.

Test case to reproduce the problem:

  • Host on a CentOS 7 machine with a standard Docker installation
  • Install Docker:
yum install docker
service docker start
docker pull docker.io/elasticsearch
  • Create local folders for data and config:
mkdir -p /opt/elasticsearch/data
mkdir -p /opt/elasticsearch/config
docker run --name es -v /opt/elasticsearch/data:/usr/share/elasticsearch/data -v /opt/elasticsearch/config:/usr/share/elasticsearch/config elasticsearch

Note that you will get:

chown: changing ownership of ‘/usr/share/elasticsearch/data’: Permission denied

If you inspect docker-entrypoint.sh, you will see that the "chown" command is commented out:

docker exec -ti es cat /docker-entrypoint.sh

You will get this extract as part of the file:

# Drop root privileges if we are running elasticsearch
if [ "$1" = 'elasticsearch' ]; then
        # Change the ownership of /usr/share/elasticsearch/data to elasticsearch        chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
        exec gosu elasticsearch "$@"
fi

Note that the chown is commented out. If you uncomment it on a new line, everything works fine.

Please confirm whether this is a bug.
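If the root cause is a stray carriage return (a CRLF line ending gluing the chown onto the comment line, which is consistent with the extract above), stripping CRs from the script fixes it. A hedged sketch on a throwaway file:

```shell
# Reproduce a CRLF-damaged script, then strip the carriage returns.
printf '# comment\r\nchown -R es:es /data\r\n' > /tmp/entrypoint-demo.sh
sed -i 's/\r$//' /tmp/entrypoint-demo.sh   # dos2unix achieves the same
grep -c 'chown' /tmp/entrypoint-demo.sh    # the command is on its own line again
rm -f /tmp/entrypoint-demo.sh
```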

iptables issues?

Hello,

I just installed this container on 4 Red Hat Enterprise Linux 7 VMs, with a config like so:

cluster.name: Aves
node.name: "aves-01"
path.logs: /var/log/elasticsearch
network.publish_host: 192.168.88.145
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.timeout: 3s
discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.unicast.hosts: [
  "192.168.88.190:9300",
  "192.168.88.191:9300",
  "192.168.88.215:9300"
]

When I start the containers, it seems to work fine (the kopf plugin shows the cluster state as being good), but all instances constantly log network connection errors like so:

[2015-08-22 17:44:20,006][WARN ][transport.netty          ] [aves-01] exception caught on transport layer [[id: 0xfda51f31]], closing connection
java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

and tcpdump shows constant reject like so:

listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
13:58:52.095383 IP 192.168.88.145 > 172.17.0.2: ICMP host 192.168.88.145 unreachable - admin prohibited, length 68
13:58:52.095387 IP 192.168.88.145 > 172.17.0.2: ICMP host 192.168.88.145 unreachable - admin prohibited, length 68
13:58:52.095621 IP 192.168.88.145 > 172.17.0.2: ICMP host 192.168.88.145 unreachable - admin prohibited, length 68
13:58:52.095623 IP 192.168.88.145 > 172.17.0.2: ICMP host 192.168.88.145 unreachable - admin prohibited, length 68

It is as if iptables is rejecting traffic from the container to itself: 192.168.88.145 is the host IP and 172.17.0.2 the container IP. The ICMP packet details show this is for TCP port 9300.

What's going on?
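A hedged guess given the "admin prohibited" ICMP rejects: on RHEL 7, firewalld's default zone rejects traffic arriving from the docker0 bridge, which produces exactly this pattern. Trusting the bridge interface, or opening the transport port, is one workaround (both commands need root and a firewalld-managed host):

```shell
# Treat traffic from the Docker bridge as trusted...
firewall-cmd --permanent --zone=trusted --add-interface=docker0
firewall-cmd --reload
# ...or open just the elasticsearch transport port:
firewall-cmd --permanent --add-port=9300/tcp && firewall-cmd --reload
```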

Thank you so much.

Permission issues with own built Docker container with Elasticsearch 2.1

When I pull an image from public elasticsearch repo, spawning container with that pulled image is working fine for me with no permission issues.

docker pull elasticsearch
docker run -d elasticsearch

But spawning a container built from the Dockerfile available in the public repo gives me permission issues. I have the same directory structure as the public repo.

myfolder/Dockerfile
myfolder/docker-entrypoint.sh
myfolder/config/elasticsearch.yml
myfolder/config/logging.yml

https://github.com/docker-library/elasticsearch/tree/0d393d9a0a2e24fca022a89ad10c7050b2925292/2.1

Commands:-

  1. To build an image with the Dockerfile
    sudo docker build -t testuser/testelastic:v1 .

  2. Spawn container out of the built image
    sudo docker run -d --name elastic -v ./config:/config testuser/testelastic:v1

But it gives me the following error every time I try to spawn a container out of the custom-built image:
Error response from daemon: Cannot start container 8e72f3c33d054f5883b2de9e7673bc032333e633e3f43905d7d22a12ea76ad04: [8] System error: exec: "/docker-entrypoint.sh": permission denied
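This particular "permission denied" usually means docker-entrypoint.sh lost its executable bit in the build context (a hedged diagnosis; the fix below must be run in the directory containing the Dockerfile, before rebuilding):

```shell
# Restore the executable bit on the entrypoint script.
touch docker-entrypoint.sh        # stand-in here; a no-op if the script exists
chmod +x docker-entrypoint.sh
ls -l docker-entrypoint.sh        # should now show -rwx permissions
```

Then rebuild with `sudo docker build -t testuser/testelastic:v1 .` as before.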

How to install plugins on first start, or generate templates at runtime, for many projects?

I want to extend the ES container on creation.
With Postgres or MySQL, there is a folder I can use to customise the container at runtime.

Example of code:

for f in /docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)     echo "$0: running $f"; . "$f" ;;
        *.sql)    echo "$0: running $f"; "${mysql[@]}" < "$f"; echo ;;
        *.sql.gz) echo "$0: running $f"; gunzip -c "$f" | "${mysql[@]}"; echo ;;
        *)        echo "$0: ignoring $f" ;;
    esac
    echo
done

Is it possible to add this to your docker-entrypoint?
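The same convention adapted for elasticsearch could look like this (a hedged sketch: the directory name /docker-entrypoint-init.d and the file-type handling are assumptions, not part of the official image):

```shell
# Run any *.sh init scripts from a conventional directory; other file types
# could be loaded into ES once it is up (e.g. PUT *.json as templates).
initdir=./docker-entrypoint-init.d
mkdir -p "$initdir"
echo 'echo "hello from init"' > "$initdir/00-example.sh"

for f in "$initdir"/*; do
    case "$f" in
        *.sh)   echo "running $f"; . "$f" ;;
        *.json) echo "would PUT $f to elasticsearch once it answers" ;;
        *)      echo "ignoring $f" ;;
    esac
done
```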

Access Denied for shared volume on OSX

I'm attempting to create a new elasticsearch container with a shared volume on my OS X machine (VirtualBox host). It doesn't seem to be working; I'm getting the following in my logs:

[2015-09-03 17:37:12,223][INFO ][node                     ] [Vidar] version[1.7.1], pid[1], build[b88f43f/2015-07-29T09:54:16Z]
[2015-09-03 17:37:12,223][INFO ][node                     ] [Vidar] initializing ...
[2015-09-03 17:37:12,285][INFO ][plugins                  ] [Vidar] loaded [], sites []
{1.7.1}: Initialization Failed ...
- ElasticsearchIllegalStateException[Failed to created node environment]
    AccessDeniedException[/usr/share/elasticsearch/data/elasticsearch]

I am executing the following vanilla command as per the documentation page...

docker run -d -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch

Poking through the issue tracker, it looks like this has something to do with the image dropping privileges for ES inside the container, but I have no idea how to mitigate that in my container. Any help would be much appreciated.

How to initialize with some mappings or templates?

Hello,

I would like to be able to set some templates and mappings if they are not yet present in the ES data. What is the best way to achieve this?

(To me, the difficult part is waiting until elasticsearch has started and is ready.)

Thank you,
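The "wait until ready" part can be handled with a small retry helper polling the standard cluster-health endpoint (a hedged sketch: the host/port, retry count, and template file name are assumptions to adapt):

```shell
# Retry an arbitrary command up to N times, one second apart.
wait_for() {
    local tries=$1; shift
    local i
    for i in $(seq "$tries"); do
        "$@" && return 0
        sleep 1
    done
    return 1
}

# Usage against a running container, then load a template:
# wait_for 30 curl -sf 'http://localhost:9200/_cluster/health?wait_for_status=yellow' \
#   && curl -XPUT 'localhost:9200/_template/my_template' -d @my_template.json
```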

[Groovy script] java.lang.NoClassDefFoundError:

I am trying to execute a Groovy script I have stored on the server, and I get this error, which almost sounds as if some Groovy-related library is missing from the image?

{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "failed to run file script [sort_by_array_index] using lang [groovy]"
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "development_1",
        "node": "IxIgjCSmR_SW_5Jj468IOA",
        "reason": {
          "type": "script_exception",
          "reason": "failed to run file script [sort_by_array_index] using lang [groovy]",
          "caused_by": {
            "type": "bootstrap_method_error",
            "reason": "java.lang.NoClassDefFoundError: org/codehaus/groovy/runtime/wrappers/Wrapper",
            "caused_by": {
              "type": "no_class_def_found_error",
              "reason": "org/codehaus/groovy/runtime/wrappers/Wrapper",
              "caused_by": {
                "type": "class_not_found_exception",
                "reason": "org.codehaus.groovy.runtime.wrappers.Wrapper"
              }
            }
          }
        }
      }
    ]
  },
  "status": 500
}

Or is there something I need to install? Groovy is built in, AFAIK.

Dockerfile

FROM elasticsearch:2.2

RUN plugin install analysis-icu &&\
    plugin install lmenezes/elasticsearch-kopf/2.x &&\
    plugin install delete-by-query

COPY sort_by_array_index.groovy /usr/share/elasticsearch/config/scripts

sort_by_array_index.groovy

def lowestIdx = ids.size()+1;
def idx = -1;
for (v in doc[fieldName].values) {
    idx = ids.indexOf((Integer)v);
    if (idx != -1 && lowestIdx > idx) {
        lowestIdx = idx;
    }
};
return lowestIdx;

And the trigger part in search query:

"sort": [
    {
      "_script": {
        "lang": "groovy",
        "type": "number",
        "order": "asc",
        "script": {
          "file": "sort_by_array_index",
          "params": {
            "fieldName": "list",
            "ids": [1,2,3,4,5]
          }
        }
      }
    }
  ]

multicast between different virtual machines

I have extended this image, exposing port 54328 for multicast. I am running 2 different VMs, each with one container. But neither container seems to find the other.

Here is my vagrantfile config for the network

      config.vm.define "node1" do |node1|
        node1.vm.network :private_network, :ip => "172.16.255.250"
        node1.vm.network "forwarded_port", guest: 9200, host: 10000
        node1.vm.hostname = "node1"
      end
      config.vm.define "node2" do |node2|
        node2.vm.network :private_network, :ip => "172.16.255.251"
        node2.vm.network "forwarded_port", guest: 9200, host: 11000
        node2.vm.hostname = "node2"
      end

This is my Dockerfile

    FROM elasticsearch
    EXPOSE 54328

This is the command I am using to start the containers on each vm

sudo docker run -d -p 9200:9200 -p 54328:54328 -p 9300:9300 elasticsearch-multicast

Is there anything I am forgetting to do?

ES 2.0.0: Cannot `mlockall` even when `privileged` or with `--cap-add=IPC_LOCK`

With Docker 1.9.0 & elasticsearch:2.0.0:

$ docker run --rm --privileged elasticsearch:2.0.0 -Des.bootstrap.mlockall=true
[2015-11-20 09:06:08,908][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2015-11-20 09:06:08,908][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2015-11-20 09:06:08,908][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2015-11-20 09:06:08,908][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example: 
        # allow user 'elasticsearch' mlockall
        elasticsearch soft memlock unlimited
        elasticsearch hard memlock unlimited

I assume this is still recommended & supposed to work?
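A hedged reading of the warning: it points at RLIMIT_MEMLOCK (soft/hard 65536), which `--cap-add=IPC_LOCK` grants the capability for but does not raise. Passing an unlimited memlock ulimit to the container is likely the missing piece (the `--ulimit` flag has been available since Docker 1.6):

```shell
docker run --rm --cap-add=IPC_LOCK --ulimit memlock=-1:-1 \
    elasticsearch:2.0.0 -Des.bootstrap.mlockall=true
```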

[Coral] high disk watermark [90%] exceeded

We spin up a new container for a CI build and there is currently a very low volume of data being added to the elasticsearch container. Instantly, we see many of these:

[WARN ][cluster.routing.allocation.decider] [Coral] high disk watermark [90%] exceeded on [pwWXZlUxQDmHoLRVijqLWA][Coral] free: 975.6mb[5.2%], shards will be relocated away from this node

We are using:

elasticsearch:
  image: library/elasticsearch:1.7
  ports:
    - "9200:9200"

I'm a bit new to docker/compose; is there a way to allocate a bit more free space to the container to avoid this?
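For a throwaway CI node, an alternative to adding disk is relaxing elasticsearch's disk-allocation decider; the watermark settings it warns about are ordinary cluster settings that can be passed as flags. A hedged sketch extending the compose snippet above:

```yaml
elasticsearch:
  image: library/elasticsearch:1.7
  ports:
    - "9200:9200"
  # Disable the disk-threshold decider for a disposable CI node; raising the
  # watermark instead (e.g. ...disk.watermark.high=95%) is the gentler option.
  command: -Des.cluster.routing.allocation.disk.threshold_enabled=false
```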

pid failure preventing startup with ports forwarded

When starting docker elasticsearch with the following command:
run elasticsearch -p 9200:9200 -p 9300:9300 --name es -Des.node.name="TestNode"

I get the following exception:

{1.7.3}: pid Failed ...
FileNotFoundException[9300:9300 (Permission denied)]
java.io.FileNotFoundException: 9300:9300 (Permission denied)
    at java.io.FileOutputStream.open0(Native Method)
    at java.io.FileOutputStream.open(FileOutputStream.java:270)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
    at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)

If I run the following:
run elasticsearch --name es -Des.node.name="TestNode"

It starts up just fine, but the ports are obviously not exposed, which I need because the application accessing it is not running in Docker and is remote.

This is using the elasticsearch image directly from Dockerhub.
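A hedged reading of the failure: options placed after the image name are passed to elasticsearch itself, so ES tries to open a pid file literally named "9300:9300" (hence the FileNotFoundException). Docker options must come before the image name:

```shell
docker run -d -p 9200:9200 -p 9300:9300 --name es elasticsearch -Des.node.name="TestNode"
```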

Executable file not found in $PATH

Hi,

I want to start an elasticsearch:1.6 container, but when I run docker run -ti elasticsearch:1.6, I get the following error: error: exec: "elasticsearch": executable file not found in $PATH

I'm using:

Client version: 1.6.1
Client API version: 1.18
Go version (client): go1.3.3
Git commit (client): a8a31ef/1.6.1
OS/Arch (client): linux/amd64
Server version: 1.6.1
Server API version: 1.18
Go version (server): go1.3.3
Git commit (server): a8a31ef/1.6.1
OS/Arch (server): linux/amd64

Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.8.13-68.1.2.el6uek.x86_64
Operating System: Oracle Linux 6.6

It's working correctly on my MacBook using boot2docker with Docker 1.4.1.

Any idea?

CircleCI: can't start ES 2.0

Hi,
I would like to launch ES 2.0 on a CircleCI build, but I get this error:

ubuntu@box684:~$ docker run --name elasticsearch  -it --rm=true elasticsearch:2.0 --network.host=_non_loopback_
Unable to find image 'elasticsearch:2.0' locally
2.0: Pulling from library/elasticsearch
575489a51992: Pull complete 
6845b83c79fb: Pull complete 
f9fffdafe16d: Pull complete 
06059b5e7950: Pull complete 
efbfbb2501e1: Pull complete 
be2d5fd45a31: Pull complete 
598179ea500b: Pull complete 
391eba5e09bd: Pull complete 
28e1cdf46191: Pull complete 
502293e1b7a6: Pull complete 
e8c260e35308: Pull complete 
67720b0c8716: Pull complete 
31152391da21: Pull complete 
fff711d06832: Pull complete 
45c9593e1dfb: Pull complete 
69276da6d2ee: Pull complete 
ec07ee46a006: Pull complete 
f440e8cb0417: Pull complete 
7c3969cf1c4d: Pull complete 
d5a7624cb70c: Pull complete 
66fa90dca619: Pull complete 
3b9ff4307668: Pull complete 
e66c7a2d67f7: Pull complete 
db53c59953a7: Pull complete 
9288178fb5db: Pull complete 
6a815676799e: Pull complete 
ec1ba95674b4: Pull complete 
79438b7c8046: Pull complete 
library/elasticsearch:2.0: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:9247276d14203fcc5225627d546499d39cae273d5df3e2a068ff0e3866c230c7
Status: Downloaded newer image for elasticsearch:2.0
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
[2015-11-03 09:49:20,109][INFO ][node                     ] [Liz Allan] version[2.0.0], pid[1], build[de54438/2015-10-22T08:09:48Z]
[2015-11-03 09:49:20,122][INFO ][node                     ] [Liz Allan] initializing ...
[2015-11-03 09:49:20,254][INFO ][plugins                  ] [Liz Allan] loaded [], sites []
[2015-11-03 09:49:20,520][INFO ][env                      ] [Liz Allan] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/xvdh)]], net usable_space [173.6gb], net total_space [239.9gb], spins? [possibly], types [btrfs]
[2015-11-03 09:49:26,441][INFO ][node                     ] [Liz Allan] initialized
[2015-11-03 09:49:26,441][INFO ][node                     ] [Liz Allan] starting ...
Exception in thread "main" BindTransportException[Failed to bind to [9300-9400]]; nested: ChannelException[Failed to bind to: /fe80:0:0:0:42:acff:fe11:b%eth0:9400]; nested: BindException[Cannot assign requested address];
Likely root cause: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:391)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:315)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Refer to the log for complete error details.
[2015-11-03 09:49:26,810][INFO ][node                     ] [Liz Allan] stopping ...
[2015-11-03 09:49:26,820][INFO ][node                     ] [Liz Allan] stopped
[2015-11-03 09:49:26,820][INFO ][node                     ] [Liz Allan] closing ...
[2015-11-03 09:49:26,833][INFO ][node                     ] [Liz Allan] closed
Error deleting container: Error response from daemon: Cannot destroy container 7ec4735cd0b0fe5ea8dfc524e5c602dbe95c87bdc20147af122312eda3b1bf63: Driver btrfs failed to remove root filesystem 7ec4735cd0b0fe5ea8dfc524e5c602dbe95c87bdc20147af122312eda3b1bf63: Failed to destroy btrfs snapshot: operation not permitted

Any idea how to fix that?
Thanks

Cannot set up an elasticsearch:2 cluster in docker any more

Hi all,

Before the v2, I used to set up my elasticsearch cluster in docker like this:

docker run --name myes --restart=always -p 9200:9200 -p 9300:9300 -d elasticsearch
docker run --name myes2 --restart=always -p 9201:9200 -p 9301:9300 -d elasticsearch

But with v2, since I need to use the --network.host _non_loopback_ option, my elasticsearch instances are no longer able to "zen"-discover each other.

I also tried the -Des.network.bind_host=0.0.0.0 option, but without success.

My Docker version is 1.5.0, and everything runs in a Vagrant ubuntu-vivid64 box.

Any advice would be appreciated.
Arnaud

Unexpected result using relative path with `VOLUME`

This change does not work for me as expected:

-VOLUME /usr/share/elasticsearch/data
+VOLUME ./data

Inspecting resulting container shows up like this:

{
  "Name": "63cf3dbc86f43f34f3758064810c426f2711788514aaeb681d92f87b97faec39",
  "Source": "/var/lib/docker/volumes/63cf3dbc86f43f34f3758064810c426f2711788514aaeb681d92f87b97faec39/_data",
  "Destination": "data",
  "Driver": "local",
  "Mode": "",
  "RW": true,
  "Propagation": ""
}

There is no absolute path for destination. And within the container mount shows this:

/dev/vda1 on /data type ext4 (rw,relatime,data=ordered)

instead of /usr/share/elasticsearch/data.

Data volume is not actually used by ElasticSearch

Hi,

ElasticSearch defaults to putting data in /var/lib/elasticsearch/<clustername>.

While /usr/share/elasticsearch/data is set up properly in the Dockerfiles/entrypoint, there is no configuration anywhere telling ElasticSearch to actually use it. As such, following the README instructions to store your data on a host/attached volume won't actually put your data in the mapped volume.

If you're really unlucky/stupid you'll only notice this after waiting for an hour or two for an import job to run.
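If this report is accurate, pointing ES explicitly at the volume path closes the gap (a hedged sketch, either via elasticsearch.yml or a startup flag):

```yaml
# elasticsearch.yml: make ES store data where the image declares its volume.
path:
  data: /usr/share/elasticsearch/data
```

The equivalent startup flag would be `-Des.path.data=/usr/share/elasticsearch/data`.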

Search works in Docker but not in normal installed Linux

Hi,
maybe the problem doesn't belong here, but I need to try.
Elasticsearch works fine in Docker. Indexing works fine, as does searching.

But when I install elasticsearch on Ubuntu using this guide (https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html), only indexing works.
My more_like_this query no longer returns results.

Are there any configurations in the Docker image which need to be replicated on the Linux machine?

Thank you very much.

old data in data dir not showing in elasticsearch indices

I am migrating from the old /data location to /usr/share/elasticsearch/data, so I changed my mount points (-v /data:/usr/share/elasticsearch/data). I can see one index (.kibana) and was able to create another (customer) as in the example from the ES docs:

curl -XPUT 'localhost:9200/customer?pretty'

customer and .kibana now show in my data dir, but how do I get Elasticsearch to recognize the logstash indices again?

 root@fef0aed8a3c7:/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices# ls -al
total 204
drwxr-xr-x 51 elasticsearch elasticsearch 4096 May 22 16:47 .
drwxr-xr-x  4 elasticsearch elasticsearch 4096 May 20 05:05 ..
drwxr-xr-x  4 elasticsearch elasticsearch 4096 Mar 29 05:56 .kibana
drwxr-xr-x  8 elasticsearch elasticsearch 4096 May 22 16:47 customer
drwxr-xr-x  5 elasticsearch elasticsearch 4096 Mar 29 06:10 logstash-2015.03.29
drwxr-xr-x  8 elasticsearch elasticsearch 4096 Mar 29 23:54 logstash-2015.03.30

(the rest of that ls -al is truncated)

yet only the two indices show:

 curl 'localhost:9200/_cat/indices?v'
health status index    pri rep docs.count docs.deleted store.size pri.store.size 
yellow open   .kibana    1   1          1            0      2.5kb          2.5kb 
yellow open   customer   5   1          0            0       575b           575b

Run as user "elasticsearch"

The elasticsearch package that is being installed for this image creates an "elasticsearch" user that is used by the init.d script:

$ docker run --rm elasticsearch:1.4 id elasticsearch
uid=105(elasticsearch) gid=108(elasticsearch) groups=108(elasticsearch)

This user is not being utilized by the /usr/share/elasticsearch/bin/elasticsearch script, which means this image currently runs the ES service as root.

The image should be updated so that the service does not run as root. Since ES doesn't appear to have its own way to drop privileges, it seems the way to go here is gosu and a custom ENTRYPOINT.

Cant connect using java client (NoNodeAvailableException)

Hi,

I'm starting the image using Kitematic on Mac OS X. The cluster starts perfectly and I can view the cluster status through the browser. But when trying to connect to the cluster using the Java client, I get the following issue:
[info] 2015-06-20 18:05:51 WARN ExceptionFilter:41 - exception: uri:/api/packages/locations exception: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
[info] org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
[info] at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:278)
[info] at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:197)
[info] at org.elasticsearch.client.transport.support.InternalTransportIndicesAdminClient.execute(InternalTransportIndicesAdminClient.java:86)
[info] at org.elasticsearch.client.support.AbstractIndicesAdminClient.exists(AbstractIndicesAdminClient.java:163)
[info] at org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequestBuilder.doExecute(IndicesExistsRequestBuilder.java:53)
[info] at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)

Do I have to do any specific configuration to make the image work with the Java client?
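A common cause of NoNodeAvailableException in this setup (an assumption here, not confirmed from the logs) is that the transport client connects on port 9300 and then re-resolves the node's published address, which defaults to the container-internal IP and is unreachable from outside. Publishing port 9300 and forcing the publish host to the address the client can actually reach may help:

```shell
# DOCKER_HOST_IP is the address your Java client can reach,
# e.g. the VM IP shown by `docker-machine ip` on Mac.
docker run -d -p 9200:9200 -p 9300:9300 elasticsearch \
    -Des.network.publish_host=$DOCKER_HOST_IP
```

Alternatively, disabling sniffing on the TransportClient avoids the address re-resolution entirely.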

Listening only on 127.0.0.1 seems not to be possible

I'm trying to test docker on a Debian (8.2) host; while it works using:

  docker run -p 9200:9200 -p 9300:9300 elasticsearch

I'm trying to get it listening only on the local machine with:

  sudo docker run -p 9200:9200 -p 9300:9300 elasticsearch -Des.network.host=127.0.0.1
  [2016-03-08 17:31:33,606][INFO ][node                     ] [Isaiah Bradley] version[2.2.0], pid[1], build[8ff36d1/2016-01-27T13:32:39Z]
  [2016-03-08 17:31:33,607][INFO ][node                     ] [Isaiah Bradley] initializing ...
  [2016-03-08 17:31:33,856][INFO ][plugins                  ] [Isaiah Bradley] modules [lang-expression, lang-groovy], plugins [], sites []
  [2016-03-08 17:31:33,868][INFO ][env                      ] [Isaiah Bradley] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [12.1gb], net total_space [18.1gb], spins? [unknown], types [rootfs]
  [2016-03-08 17:31:33,868][INFO ][env                      ] [Isaiah Bradley] heap size [989.8mb], compressed ordinary object pointers [true]
  [2016-03-08 17:31:34,888][INFO ][node                     ] [Isaiah Bradley] initialized
  [2016-03-08 17:31:34,888][INFO ][node                     ] [Isaiah Bradley] starting ...
  [2016-03-08 17:31:34,935][INFO ][transport                ] [Isaiah Bradley] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
  [2016-03-08 17:31:34,941][INFO ][discovery                ] [Isaiah Bradley] elasticsearch/_fFI2ONATpy4dzopqtyMUA
  [2016-03-08 17:31:37,956][INFO ][cluster.service          ] [Isaiah Bradley] new_master {Isaiah Bradley}{_fFI2ONATpy4dzopqtyMUA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
  [2016-03-08 17:31:37,964][INFO ][http                     ] [Isaiah Bradley] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
  [2016-03-08 17:31:37,964][INFO ][node                     ] [Isaiah Bradley] started
  [2016-03-08 17:31:38,027][INFO ][gateway                  ] [Isaiah Bradley] recovered [0] indices into cluster_state

But, it doesn't work:

  curl http://127.0.0.1:9200
  curl: (56) Recv failure: Connection reset by peer

Is there anything I'm doing wrong?
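This is expected: with -Des.network.host=127.0.0.1 the process binds only to the container's own loopback interface, so the docker-proxy on the host can no longer reach it through the bridge network. If the goal is to expose the ports only on the host's loopback, restrict the port mapping instead and leave Elasticsearch listening on all interfaces:

```shell
# Publish 9200/9300 only on the host's 127.0.0.1; inside the container
# Elasticsearch still listens on an interface the proxy can reach.
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 elasticsearch
```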

Java version

Is there any particular reason to use Java 7 instead of Java 8? Java 7 will stop being updated in April.

Until a few days ago I was using https://registry.hub.docker.com/u/dockerfile/elasticsearch/ , which uses Java 8, but it's not tagged and now it's deprecated.

I have a plugin that runs on Java 8 so I can't use this image as it is because of this.

Access denied to mounted config volume

If I run the docker image with the -v option for config, e.g. -v "$PWD/config":/usr/share/elasticsearch/config, I get:

Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.scripts' (/usr/share/elasticsearch/config/scripts)
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/config/scripts
...
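One workaround sketch: make the host directory accessible to the container's elasticsearch user before mounting it. The uid:gid below is taken from the 1.4 image shown earlier in this page and may differ for other tags, so check first:

```shell
# Find the uid/gid this image's elasticsearch user has...
docker run --rm elasticsearch id elasticsearch

# ...then grant it access to the host directory (substitute the
# uid:gid printed above) and mount it.
chown -R 105:108 "$PWD/config"
docker run -v "$PWD/config":/usr/share/elasticsearch/config elasticsearch
```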

elasticsearch 2.0.0-beta connection reset by peer

$ docker run --rm=true -p 9200:9200 -p 9300:9300 elasticsearch:2.0.0-beta1
[2015-09-01 01:15:11,841][INFO ][org.elasticsearch.node   ] [James Howlett] version[2.0.0-beta1], pid[1], build[bfa3e47/2015-08-24T08:41:25Z]
[2015-09-01 01:15:11,843][INFO ][org.elasticsearch.node   ] [James Howlett] initializing ...
[2015-09-01 01:15:11,946][INFO ][org.elasticsearch.plugins] [James Howlett] loaded [], sites []
[2015-09-01 01:15:12,328][INFO ][org.elasticsearch.env    ] [James Howlett] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda2)]], net usable_space [144.2gb], net total_space [229gb], spins? [possibly], types [ext4]
[2015-09-01 01:15:17,263][INFO ][org.elasticsearch.node   ] [James Howlett] initialized
[2015-09-01 01:15:17,264][INFO ][org.elasticsearch.node   ] [James Howlett] starting ...
[2015-09-01 01:15:17,364][INFO ][org.elasticsearch.transport.netty] [James Howlett] Bound profile [default] to address {127.0.0.1:9300}
[2015-09-01 01:15:17,365][INFO ][org.elasticsearch.transport.netty] [James Howlett] Bound profile [default] to address {[::1]:9300}
[2015-09-01 01:15:17,366][INFO ][org.elasticsearch.transport] [James Howlett] bound_address {127.0.0.1:9300}, publish_address {127.0.0.1:9300}
[2015-09-01 01:15:17,375][INFO ][org.elasticsearch.discovery] [James Howlett] elasticsearch/UXxoYMfvQAKAj54TalfmyQ
[2015-09-01 01:15:20,441][INFO ][org.elasticsearch.cluster.service] [James Howlett] new_master {James Howlett}{UXxoYMfvQAKAj54TalfmyQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-09-01 01:15:20,499][INFO ][org.elasticsearch.http.netty] [James Howlett] Bound http to address {127.0.0.1:9200}
[2015-09-01 01:15:20,500][INFO ][org.elasticsearch.http.netty] [James Howlett] Bound http to address {[::1]:9200}
[2015-09-01 01:15:20,501][INFO ][org.elasticsearch.http   ] [James Howlett] bound_address {127.0.0.1:9200}, publish_address {127.0.0.1:9200}
[2015-09-01 01:15:20,501][INFO ][org.elasticsearch.node   ] [James Howlett] started
[2015-09-01 01:15:20,800][INFO ][org.elasticsearch.gateway] [James Howlett] recovered [0] indices into cluster_state

$ curl http://127.0.0.1:9200                                                                                                                                                                         
curl: (56) Recv failure: Connection reset by peer

Any idea? Thanks
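The log above shows the node bound only to 127.0.0.1 and [::1] inside the container (Elasticsearch 2.x binds to localhost by default), so the published ports have nothing reachable behind them and the proxied connection is reset. Telling it to listen on all interfaces should fix it:

```shell
docker run --rm -p 9200:9200 -p 9300:9300 elasticsearch:2.0.0-beta1 \
    -Des.network.host=0.0.0.0
```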

New feature request: Dockerfiles for the Power platform (SLES, RHEL, Ubuntu)

Hi,

I have written a Dockerfile for building Logstash from source and running its test cases. I have successfully built and tested the source code available on git through the Dockerfile for the PPC64LE architecture. The Dockerfile runs successfully on the following platforms:
Ubuntu 14.10
SUSE Linux 12.0
RHEL 7.1

Please suggest where (i.e. in which repository) I can contribute this Dockerfile for Logstash.

Regards,
Christina

Logs volume

I want to put my log files on a volume so that they are not in the image. As elasticsearch runs as the elasticsearch user, when I mount a volume at /usr/share/elasticsearch/logs, it cannot create the log file. When I use bash in the container, chown the logs directory to elasticsearch:elasticsearch, and then restart the container, the log files are written as I would expect.
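A way to avoid the manual in-container step (a sketch, assuming the image passes arbitrary commands through its entrypoint) is to fix ownership with a one-off container from the same image, so the uid/gid always match that image's elasticsearch user:

```shell
mkdir -p "$PWD/logs"

# One-off container that chowns the mounted directory as root...
docker run --rm -u root \
    -v "$PWD/logs":/usr/share/elasticsearch/logs \
    elasticsearch chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs

# ...then start the real container against the now-writable mount.
docker run -d -v "$PWD/logs":/usr/share/elasticsearch/logs elasticsearch
```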

Clarification in "How to use this image"

The documentation states that This image includes EXPOSE 9200 9300 (default http.port), so standard container linking will make it automatically available to the linked containers..

However, when running the first command in the doc, docker run -d elasticsearch, the container's ports are not published to the host. What I mean is that curl 127.0.0.1:9200 returns curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused.

Thus wouldn't it be better to have the command docker run -p 9200:9200 -p 9300:9300 -d elasticsearch instead of docker run -d elasticsearch in the doc as it seems the standard use case of this image? With this command, curl 127.0.0.1:9200 returns info.

Can't chown files inside container while running on boot2docker

Running into an issue where we cannot run an elasticsearch container on boot2docker on OS X while bind-mounting the data and config directories. Our previous workaround was to drop the chown and gosu lines from docker_entrypoint.sh and run as root in the container, since this was just a local dev environment. However, the latest image now checks whether elasticsearch is running as root and bails if it is. The new hacky fix is to run 'usermod -u 1000 elasticsearch' before the chown in the entrypoint. From reading previous similar issues, it appears there was a plan to use environment variables to let elasticsearch optionally run as root, but that no longer appears to be the case. Is there a better, less fragile way to get this running?

Ports not being forwarded + "Empty reply from server" from ES container

On OS X Yosemite, using Docker 1.8 (with Docker Toolbox) and the 2.x-tagged image for ES, I'm seeing some strange behavior. Let me walk through what's going on...

Have the default VM running + pulled down the ES v2 image:

→ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         
default   *        virtualbox   Running   tcp://192.168.99.100:2376

→ docker images
REPOSITORY          TAG       IMAGE ID            CREATED             VIRTUAL SIZE
elasticsearch       2         f76e483b4712        6 days ago          522.7 MB

Let's boot up the elasticsearch container and forward the 9200 and 9300 ports:

→ docker run --name es -d -p 9200:9200 -p 9300:9300 elasticsearch:2
5c68ebc7f828ca5fe2426ce03b36ecb1373715c02d14da7e0ef34ec45fde208f

→ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                            NAMES
5c68ebc7f828        elasticsearch:2     "/docker-entrypoint.s"   58 seconds ago      Up 56 seconds       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   es

Now let's curl it to see if it's online. We should see some JSON:

→ curl http://192.168.99.100:9200
curl: (7) Failed to connect to 192.168.99.100 port 9200: Connection refused

Hm... Running a port scan, we can see that :9200 and :9300 aren't listed:

Port Scanning host: 192.168.99.100
   Open TCP Port:   22        ssh
   Open TCP Port:   2376
Port Scan has completed…

And it's not that ES spun up on the non-default ports:

→ docker logs es
[2015-09-04 05:34:59,296][INFO ][org.elasticsearch.node   ] [Tzabaoth] version[2.0.0-beta1], pid[1], build[bfa3e47/2015-08-24T08:41:25Z]
...
[2015-09-04 05:35:01,438][INFO ][org.elasticsearch.transport.netty] [Tzabaoth] Bound profile [default] to address {127.0.0.1:9300}
[2015-09-04 05:35:01,440][INFO ][org.elasticsearch.transport.netty] [Tzabaoth] Bound profile [default] to address {[::1]:9300}
[2015-09-04 05:35:01,441][INFO ][org.elasticsearch.transport] [Tzabaoth] bound_address {127.0.0.1:9300}, publish_address {127.0.0.1:9300}
...
[2015-09-04 05:35:04,551][INFO ][org.elasticsearch.http.netty] [Tzabaoth] Bound http to address {127.0.0.1:9200}
[2015-09-04 05:35:04,551][INFO ][org.elasticsearch.http.netty] [Tzabaoth] Bound http to address {[::1]:9200}
[2015-09-04 05:35:04,552][INFO ][org.elasticsearch.http   ] [Tzabaoth] bound_address {127.0.0.1:9200}, publish_address {127.0.0.1:9200}

The Docker Host VM seems to think it's listening on those ports, though:

→ docker-machine ssh default 'sudo netstat -atp tcp | grep -i "listen"'
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN      970/sshd
tcp        0      0 :::2376                 :::*                    LISTEN      1032/docker
tcp        0      0 :::9200                 :::*                    LISTEN      1513/docker-proxy
tcp        0      0 :::9300                 :::*                    LISTEN      1505/docker-proxy
tcp        0      0 :::ssh                  :::*                    LISTEN      970/sshd

Strangely, curling from the VM returns an Empty reply from server:

→ docker-machine ssh default 'curl 0.0.0.0:9200'
SSH cmd error!
command: curl 0.0.0.0:9200
err    : exit status 52
output :   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (52) Empty reply from server

The weirdest part was that it worked briefly when I was fiddling with /etc/hosts and some /etc/resolver/ definitions, but I haven't been able to get it to reproduce. As a control, I did the same process with an nginx:latest image and was able to forward it without issue (to both :80 and :9200, just to be sure).

Any insight would be helpful 😄
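The container log above shows the same localhost-only binding seen in other reports on this page: ES bound only to 127.0.0.1/[::1] inside the container, so docker-proxy on the VM accepts the connection and finds nothing behind it (hence the empty reply). With the 2.x image, binding to all interfaces should make the forwarded ports work:

```shell
docker run --name es -d -p 9200:9200 -p 9300:9300 elasticsearch:2 \
    -Des.network.host=0.0.0.0

# Then reach it via the VM's address ("default" is the machine
# name from the docker-machine ls output above).
curl "http://$(docker-machine ip default):9200"
```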

Unable to access 'path.data'

Hi, I'm having this issue when trying to use a mounted data volume:

$ mkdir esdata

$ docker run -it --rm -v "$PWD/esdata":/usr/share/elasticsearch/data elasticsearch
Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data/elasticsearch)
Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/elasticsearch
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
    at java.nio.file.Files.createDirectory(Files.java:674)
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
    at java.nio.file.Files.createDirectories(Files.java:767)
    at org.elasticsearch.bootstrap.Security.ensureDirectoryExists(Security.java:250)
    at org.elasticsearch.bootstrap.Security.addPath(Security.java:227)
    at org.elasticsearch.bootstrap.Security.addFilePermissions(Security.java:206)
    at org.elasticsearch.bootstrap.Security.createPermissions(Security.java:184)
    at org.elasticsearch.bootstrap.Security.configure(Security.java:105)
    at org.elasticsearch.bootstrap.Bootstrap.setupSecurity(Bootstrap.java:196)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:167)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
elasticsearch       latest              14d91f1920dc        6 days ago          345.7 MB
nginx               latest              198a73cfd686        11 days ago         132.8 MB
centos              7                   e9fa5d3a0d0e        6 weeks ago         172.3 MB
centos              latest              e9fa5d3a0d0e        6 weeks ago         172.3 MB

$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      darwin/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   a34a1d5
 Built:        Fri Nov 20 17:56:04 UTC 2015
 OS/Arch:      linux/amd64

If I mount /tmp/esdata from my host instead, surprisingly it works:

$ mkdir /tmp/esdata

$ docker run -it --rm -v /tmp/esdata:/usr/share/elasticsearch/data elasticsearch
[2015-12-01 09:11:59,916][INFO ][node                     ] [Specialist] version[2.1.0], pid[1], build[72cd1f1/2015-11-18T22:40:03Z]
[2015-12-01 09:11:59,917][INFO ][node                     ] [Specialist] initializing ...
[2015-12-01 09:11:59,982][INFO ][plugins                  ] [Specialist] loaded [], sites []
[2015-12-01 09:12:00,001][INFO ][env                      ] [Specialist] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [184.7gb], net total_space [195.8gb], spins? [possibly], types [ext4]
[2015-12-01 09:12:02,338][INFO ][node                     ] [Specialist] initialized
[2015-12-01 09:12:02,338][INFO ][node                     ] [Specialist] starting ...
[2015-12-01 09:12:02,470][WARN ][common.network           ] [Specialist] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.2}
[2015-12-01 09:12:02,472][INFO ][transport                ] [Specialist] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2015-12-01 09:12:02,501][INFO ][discovery                ] [Specialist] elasticsearch/LsFT2a0rSp2Z1lWNMmU6dg
[2015-12-01 09:12:05,595][INFO ][cluster.service          ] [Specialist] new_master {Specialist}{LsFT2a0rSp2Z1lWNMmU6dg}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2015-12-01 09:12:05,661][WARN ][common.network           ] [Specialist] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.2}
[2015-12-01 09:12:05,662][INFO ][http                     ] [Specialist] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2015-12-01 09:12:05,663][INFO ][node                     ] [Specialist] started
[2015-12-01 09:12:05,664][INFO ][gateway                  ] [Specialist] recovered [0] indices into cluster_state
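The darwin client / linux server split in the docker version output suggests Docker Toolbox, where paths under your home directory are shared into the VM via vboxsf, which does not honor chown; that would explain why the container's elasticsearch user cannot create /usr/share/elasticsearch/data/elasticsearch, and why /tmp (which lives on the VM's own filesystem) works. A workaround sketch is to keep the data on the VM's native disk (the /mnt/sda1 path and the 105:108 uid:gid are assumptions; verify both for your setup):

```shell
# Create a directory on the VM's persistent native filesystem
# and give the container's elasticsearch user access to it...
docker-machine ssh default \
    'sudo mkdir -p /mnt/sda1/esdata && sudo chown -R 105:108 /mnt/sda1/esdata'

# ...then mount that instead of a vboxsf-shared path.
docker run -it --rm -v /mnt/sda1/esdata:/usr/share/elasticsearch/data elasticsearch
```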

Shield being installed in the wrong directory

This may be a Shield issue. After several hours trying to figure out why authentication wasn't working, I noticed there are two config directories in my elasticsearch container. The Shield data files are installed in /etc/elasticsearch, but they need to be in /usr/share/elasticsearch/config/.

To solve this I moved the shield directory to the right location. Wondering if there is a better solution…

FROM elasticsearch:2.1.1

# Add custom roles config
ADD ./roles.yml /etc/elasticsearch/shield/roles.yml

# Install Shield
RUN /usr/share/elasticsearch/bin/plugin install license
RUN /usr/share/elasticsearch/bin/plugin install shield

# Create user 
RUN /usr/share/elasticsearch/bin/shield/esusers useradd abc -r abc -p xyzxyz

# Move shield files to the right location
RUN mv /etc/elasticsearch/shield /usr/share/elasticsearch/config/shield
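An alternative to moving the files (a sketch, assuming 2.x option handling and that /etc/elasticsearch also contains a usable elasticsearch.yml) is to point the server at the config directory the plugin installer wrote to, so Shield finds its files in place. The image name below is hypothetical, standing in for the image built from the Dockerfile above:

```shell
docker run -d my-es-shield \
    elasticsearch -Des.path.conf=/etc/elasticsearch
```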

Finding elasticsearch ip and port

I have followed the tutorial for installing and verifying docker (http://docs.docker.com/windows/started/) on Windows 7 and everything seems to be fine.

When I run an elasticsearch image (docker run -d elasticsearch) I don't see any errors, but I can't find my elasticsearch IP and port. I ran an inspect (CID=$(docker run -d elasticsearch), then docker inspect $CID) and got the result below (sorry it's long; I don't have permission to attach a file). It appears that elasticsearch is running, but I can't find it at localhost:9200 or anywhere else. What am I missing?

bgow@28544pc MINGW64 ~
$ docker run -d elasticsearch
b87eb14cdec662e2e84beba754fd0003a8f792903c5db8539fb30b26b2ecbe6b

bgow@28544pc MINGW64 ~
$ CID=$(docker run -d elasticsearch);

bgow@28544pc MINGW64 ~
$ docker inspect $CID
[
{
    "Id": "104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e",
    "Created": "2015-10-25T20:55:17.861786973Z",
    "Path": "/docker-entrypoint.sh",
    "Args": [
        "elasticsearch"
    ],
    "State": {
        "Running": true,
        "Paused": false,
        "Restarting": false,
        "OOMKilled": false,
        "Dead": false,
        "Pid": 22462,
        "ExitCode": 0,
        "Error": "",
        "StartedAt": "2015-10-25T20:55:18.075436784Z",
        "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Image": "b90dc4d186db3081cf523403ccc4d8d685a82292ffe76e47e349d4925505e721",
    "NetworkSettings": {
        "Bridge": "",
        "EndpointID": "53e06553156709c73bac01c50da86b1e92f68f045a03d600490a992969671aeb",
        "Gateway": "172.17.42.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "HairpinMode": false,
        "IPAddress": "172.17.0.16",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:11:00:10",
        "NetworkID": "9df1f3cf567cd383caa25b4655a15c9a873408df2fd19a0f43fefd6edba7ea05",
        "PortMapping": null,
        "Ports": {
            "9200/tcp": null,
            "9300/tcp": null
        },
        "SandboxKey": "/var/run/docker/netns/104032a5d0cd",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null
    },
    "ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e/resolv.conf",
    "HostnamePath": "/mnt/sda1/var/lib/docker/containers/104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e/hostname",
    "HostsPath": "/mnt/sda1/var/lib/docker/containers/104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e/hosts",
    "LogPath": "/mnt/sda1/var/lib/docker/containers/104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e/104032a5d0cd727e0c7a7beb9cc2d577868dcafdcda787163481004e85de948e-json.log",
    "Name": "/ecstatic_wozniak",
    "RestartCount": 0,
    "Driver": "aufs",
    "ExecDriver": "native-0.2",
    "MountLabel": "",
    "ProcessLabel": "",
    "AppArmorProfile": "",
    "ExecIDs": null,
    "HostConfig": {
        "Binds": null,
        "ContainerIDFile": "",
        "LxcConf": [],
        "Memory": 0,
        "MemorySwap": 0,
        "CpuShares": 0,
        "CpuPeriod": 0,
        "CpusetCpus": "",
        "CpusetMems": "",
        "CpuQuota": 0,
        "BlkioWeight": 0,
        "OomKillDisable": false,
        "MemorySwappiness": -1,
        "Privileged": false,
        "PortBindings": {},
        "Links": null,
        "PublishAllPorts": false,
        "Dns": null,
        "DnsSearch": null,
        "ExtraHosts": null,
        "VolumesFrom": null,
        "Devices": [],
        "NetworkMode": "default",
        "IpcMode": "",
        "PidMode": "",
        "UTSMode": "",
        "CapAdd": null,
        "CapDrop": null,
        "GroupAdd": null,
        "RestartPolicy": {
            "Name": "no",
            "MaximumRetryCount": 0
        },
        "SecurityOpt": null,
        "ReadonlyRootfs": false,
        "Ulimits": null,
        "LogConfig": {
            "Type": "json-file",
            "Config": {}
        },
        "CgroupParent": "",
        "ConsoleSize": [
            0,
            0
        ]
    },
    "GraphDriver": {
        "Name": "aufs",
        "Data": null
    },
    "Mounts": [
        {
            "Name": "14bda31e8d3cbca4d58598b1338e1b2bfc3b739ec21a5c8bd5ad00565ac49317",
            "Source": "/mnt/sda1/var/lib/docker/volumes/14bda31e8d3cbca4d58598b1338e1b2bfc3b739ec21a5c8bd5ad00565ac49317/_data",
            "Destination": "/usr/share/elasticsearch/data",
            "Driver": "local",
            "Mode": "",
            "RW": true
        }
    ],
    "Config": {
        "Hostname": "104032a5d0cd",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": {
            "9200/tcp": {},
            "9300/tcp": {}
        },
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": [
            "PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
            "LANG=C.UTF-8",
            "JAVA_VERSION=8u66",
            "JAVA_DEBIAN_VERSION=8u66-b01-1~bpo8+1",
            "CA_CERTIFICATES_JAVA_VERSION=20140324",
            "ELASTICSEARCH_MAJOR=1.7",
            "ELASTICSEARCH_VERSION=1.7.3",
            "ELASTICSEARCH_REPO_BASE=http://packages.elasticsearch.org/elasticsearch/1.7/debian"
        ],
        "Cmd": [
            "elasticsearch"
        ],
        "Image": "elasticsearch",
        "Volumes": {
            "/usr/share/elasticsearch/data": {}
        },
        "WorkingDir": "",
        "Entrypoint": [
            "/docker-entrypoint.sh"
        ],
        "OnBuild": null,
        "Labels": {}
    }
}
]
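Two things stand out in the inspect output: the Ports entries are null (no -p flag, so nothing is published to the host), and on Windows with Docker Toolbox the containers run inside a VM, so localhost on Windows is the wrong address anyway. A sketch, assuming the Toolbox machine is named "default":

```shell
# Publish the ports this time...
docker run -d -p 9200:9200 -p 9300:9300 elasticsearch

# ...and talk to the VM's address, not localhost.
docker-machine ip default
curl "http://$(docker-machine ip default):9200"
```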

Plugins support

Hi! You did it again :) Another repo I don't need to maintain by myself anymore!

However, with my docker image I allowed the plugins folder to be mounted in the container, in the very same way the data folder can be mounted in a container using this official docker image.

Would this be the best way to add elasticsearch plugins? Or what is your recommended way of adding plugins to the container?
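Mounting a host directory over the plugins path should work the same way as the data mount; this is a sketch based on the 1.x image layout, with the same ownership caveats as any bind mount:

```shell
# Pre-populate ./plugins on the host, then mount it over the image's
# plugin directory; Elasticsearch picks plugins up at startup.
docker run -d -v "$PWD/plugins":/usr/share/elasticsearch/plugins elasticsearch
```

Baking plugins into a derived image with a RUN bin/plugin step at build time is another common approach and avoids host-permission issues entirely.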

Cannot pull elasticsearch

Trying to pull elasticsearch, but getting an HTTP 400 error:

Pulling elasticsearch (elasticsearch:latest)...
Pulling repository docker.io/library/elasticsearch
d3aa5ff744e2: Error pulling image (latest) from docker.io/library/elasticsearch, HTTP code 400ps://registry-1.docker.io/v1/, HTTP code 400
8c00acfb0175: Download complete
8b49fe88b40b: Download complete
3bdf542c6cd7: Download complete
f25aff3c52d8: Download complete
1ae6b34191f6: Download complete
52d86395a92b: Download complete
ac33986dcda9: Download complete
7c66bfc43ad9: Download complete
bf5d4aae4686: Download complete
6707c13cc6f0: Download complete
81f1a5272622: Download complete
fd1daccb2022: Download complete
b4789d59ed14: Download complete
7b2de77edf80: Download complete
f5b215df68c4: Download complete
1b1bf86e647f: Download complete
d25acb534819: Download complete
be00ef0a0c4e: Error pulling dependent layers
Error pulling image (latest) from docker.io/library/elasticsearch, HTTP code 400

I also got this error for mongo for a few days, but today I was able to pull mongo.

Cannot change the permissions of the data folder when using boot2docker

In docker_entrypoint.sh, this image executes chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data to change the ownership of the mounted data folder.

But when I use boot2docker to run this image, I get the following error:

[2015-05-14 22:02:10,344][INFO ][node ] [Her] version[1.5.2], pid[40], build[62ff986/2015-04-27T09:21:06Z]
[2015-05-14 22:02:10,345][INFO ][node ] [Her] initializing ...
[2015-05-14 22:02:10,356][INFO ][plugins ] [Her] loaded [marvel], sites [marvel]
{1.5.2}: Initialization Failed ...

  • ElasticsearchIllegalStateException[Failed to created node environment]
    AccessDeniedException[/usr/share/elasticsearch/data/elasticsearch]

This is because boot2docker uses vboxfs to share files between the host Mac and the VM that actually runs docker, and the VM cannot change the permissions or ownership of folders on the host Mac.

One workaround is to create the data folder inside the VM, and mount that folder on the container.

I guess using a data container might also work, but I haven't tried it; even if it works, it might still have performance issues.
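The VM-side workaround above can be sketched as follows (the /mnt/sda1 path is an assumption about boot2docker's persistent disk; adjust as needed):

```shell
# Create the data directory inside the VM's own filesystem,
# where chown works, instead of on a vboxfs share.
boot2docker ssh 'sudo mkdir -p /mnt/sda1/esdata'

# Mount the VM-local directory; the entrypoint's chown now succeeds.
docker run -d -v /mnt/sda1/esdata:/usr/share/elasticsearch/data elasticsearch
```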

Showing "failed recovery" after upgrading elasticsearch from 2.0 to 2.2

For some reasons I needed to upgrade my ELK docker images to the latest versions, and the elasticsearch logs now show many errors. Could anyone tell me the reason? Thanks a lot.

sandal-es | [2016-03-21 01:59:08,491][WARN ][indices.cluster ] [Arc] [[.kibana][0]] marking and sending shard failed due to [failed recovery]
sandal-es | [.kibana][[.kibana][0]] IndexShardRecoveryException[failed recovery]; nested: IllegalStateException[Checkpoint file translog-3.ckp already exists but has corrupted content expected: Checkpoint{offset=46155, numOps=31, translogFileGeneration= 3} but got: Checkpoint{offset=43, numOps=0, translogFileGeneration= 3}];
sandal-es | at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:179)
sandal-es | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
sandal-es | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
sandal-es | at java.lang.Thread.run(Thread.java:745)
sandal-es | Caused by: java.lang.IllegalStateException: Checkpoint file translog-3.ckp already exists but has corrupted content expected: Checkpoint{offset=46155, numOps=31, translogFileGeneration= 3} but got: Checkpoint{offset=43, numOps=0, translogFileGeneration= 3}
sandal-es | at org.elasticsearch.index.translog.Translog.recoverFromFiles(Translog.java:339)
sandal-es | at org.elasticsearch.index.translog.Translog.(Translog.java:179)
sandal-es | at org.elasticsearch.index.engine.InternalEngine.openTranslog(InternalEngine.java:208)
sandal-es | at org.elasticsearch.index.engine.InternalEngine.(InternalEngine.java:151)
sandal-es | at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
sandal-es | at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1510)
sandal-es | at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1494)
sandal-es | at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:969)
sandal-es | at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:941)
sandal-es | at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
sandal-es | at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
sandal-es | at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
sandal-es | ... 3 more
