
elasticsearch-action-updatebyquery's Introduction

ElasticSearch Update By Query Plugin

The update by query API allows all documents that match a query to be updated with a script. This feature is experimental.

This plugin is an adaptation of elasticsearch/elasticsearch#2230, see @martijnvg's branch.

Upgrade notes

Installation

Simply run the following from the root of your ElasticSearch v0.20.2+ installation:

bin/plugin -install com.yakaz.elasticsearch.plugins/elasticsearch-action-updatebyquery/2.6.0

This will download the plugin from the Central Maven Repository.

For older versions of ElasticSearch, you can still use the longer form:

bin/plugin -url http://oss.sonatype.org/content/repositories/releases/com/yakaz/elasticsearch/plugins/elasticsearch-action-updatebyquery/1.0.0/elasticsearch-action-updatebyquery-1.0.0.zip install elasticsearch-action-updatebyquery

In order to declare this plugin as a dependency, add the following to your pom.xml:

<dependency>
    <groupId>com.yakaz.elasticsearch.plugins</groupId>
    <artifactId>elasticsearch-action-updatebyquery</artifactId>
    <version>2.5.1</version>
</dependency>

Version matrix:

┌───────────────────────────────┬────────────────────────┐
│ Update By Query Action Plugin │ ElasticSearch          │
├───────────────────────────────┼────────────────────────┤
│                               │ 2.0.0-beta1            │
├───────────────────────────────┼────────────────────────┤
│ 2.6.0                         │ 1.6.0 ─► (1.7.4)       │
├───────────────────────────────┼────────────────────────┤
│ 2.5.x                         │ 1.5.0 ─► (1.5.2)       │
├───────────────────────────────┼────────────────────────┤
│ 2.4.0                         │ 1.4.0 ─► (1.4.5)       │
├───────────────────────────────┼────────────────────────┤
│ 2.3.0                         │ 1.4.0.Beta1            │
├───────────────────────────────┼────────────────────────┤
│ 2.2.0                         │ 1.3.0 ─► (1.3.8)       │
├───────────────────────────────┼────────────────────────┤
│ 2.1.1                         │ 1.2.3                  │
├───────────────────────────────┼────────────────────────┤
│ 2.1.0                         │ 1.2.0 ─► 1.2.2         │
├───────────────────────────────┼────────────────────────┤
│ 2.0.x                         │ 1.1.0 ─► 1.1.2         │
├───────────────────────────────┼────────────────────────┤
│ 1.6.x                         │ 1.0.0 ─► 1.0.2         │
├───────────────────────────────┼────────────────────────┤
│ 1.5.x                         │ 1.0.0.Beta1            │
├───────────────────────────────┼────────────────────────┤
│ 1.4.x                         │ 0.90.10 ─► (0.90.13)   │
├───────────────────────────────┼────────────────────────┤
│ 1.4.0                         │ 0.90.6 ─► 0.90.9       │
├───────────────────────────────┼────────────────────────┤
│ 1.3.x                         │ 0.90.4 ─► 0.90.5       │
├───────────────────────────────┼────────────────────────┤
│ 1.2.x                         │ 0.90.3                 │
├───────────────────────────────┼────────────────────────┤
│ 1.1.x                         │ 0.90.0.beta1 ─► 0.90.2 │
├───────────────────────────────┼────────────────────────┤
│ 1.0.x                         │ 0.20.0 ─► 0.20.4       │
└───────────────────────────────┴────────────────────────┘

Description

The update by query API allows all documents that match a query to be updated with a script. This feature is experimental.

The update by query works a bit differently than the delete by query. The update by query API translates the documents that match the query into bulk index / delete requests. Once the bulk limit has been reached, the bulk requests created so far are executed. After those bulk requests have completed, the next batch of requests is prepared and executed. This continues until all documents that matched the query have been processed.
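The batching behavior described above can be sketched as follows. This is a simplified illustration, not the plugin's actual code: matched documents are translated into bulk operations, and a flush happens every time the bulk size limit is reached (the function and parameter names here are hypothetical).

```python
def update_by_query(matched_docs, apply_script, execute_bulk, bulk_size=1000):
    """Sketch of batched processing: translate each matched doc into a bulk
    operation, execute the batch whenever bulk_size is reached, and flush
    any remaining partial batch at the end. Returns the number of docs
    processed."""
    batch, updated = [], 0
    for doc in matched_docs:
        batch.append(apply_script(doc))  # translate match into an index request
        if len(batch) >= bulk_size:      # bulk limit reached: execute this batch
            execute_bulk(batch)
            updated += len(batch)
            batch = []
    if batch:                            # flush the final partial batch
        execute_bulk(batch)
        updated += len(batch)
    return updated
```

With the default bulk size of 1000, a query matching 2500 documents would be processed as two full batches followed by one batch of 500.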

Example usage

Note: The following example uses dynamic scripts, which are disabled by default since ElasticSearch 1.2.0. To enable them, add script.disable_dynamic: false to your ElasticSearch configuration.

Index an example document:

curl -XPUT 'localhost:9200/twitter/tweet/1' -d '
{
    "text" : {
        "message" : "you know for search"
    },
    "likes": 0
}'

Execute the following update by query command:

curl -XPOST 'localhost:9200/twitter/_update_by_query' -d '
{
    "query" : {
        "term" : {
            "message" : "you"
        }
    },
    "script" : "ctx._source.likes += 1"
}'

This will yield the following response:

{
  "ok" : true,
  "took" : 9,
  "total" : 1,
  "updated" : 1,
  "indices" : [ {
    "twitter" : { }
  } ]
}

By default, no bulk item responses are included in the response. When bulk item responses are included, they are grouped by index and shard. This is controlled by the response option.

Options

Request body options:

  • query: The query that the documents must match to be updated.
  • script: The inline script source.
  • script_file: The file script name.
  • script_id: The indexed script id.
  • lang: The script language.
  • params: The script parameters.
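Putting the body options above together, a request body might look like the following. It is built here in Python so the structure is explicit; only the field names come from this README, while the query, script, and parameter values are made up for illustration.

```python
import json

# Hypothetical update-by-query request body combining the documented
# body options: a query, an inline script, and script parameters.
body = {
    "query": {"term": {"message": "you"}},
    "script": "ctx._source.likes += count",  # inline script source
    "params": {"count": 1},                  # script parameters
}
payload = json.dumps(body)
print(payload)
```

The resulting JSON string is what you would POST to the _update_by_query endpoint.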

Query string options:

  • consistency: The write consistency of the index/delete operation.
  • response: What bulk response items to include into the update by query response. This can be set to the following: none, failed and all. Defaults to none. Warning: all can result in out of memory errors when the query results in many hits.
  • routing: Sets the routing that will be used to route the document to the relevant shard.
  • timeout: Timeout waiting for a shard to become available.

Configuration options:

  • action.updatebyquery.bulk_size: The number of documents per update bulk. Defaults to 1000.
  • threadpool.bulk.queue_size: This plugin files bulk requests to perform the actual updates. You may decide to increase this value from its default of 50 if you are experiencing errors like: EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1]
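For example, an elasticsearch.yml tuned along the lines described above might contain the following (the values are illustrative only):

```yaml
# elasticsearch.yml -- illustrative values only
action.updatebyquery.bulk_size: 500   # smaller update bulks (default: 1000)
threadpool.bulk.queue_size: 100       # raised from the default of 50
```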

Context variables

NOTE: v2.0.0 of this plugin dropped support for additional context variables in favor of a unified code path with the Update API. Pull request elasticsearch/elasticsearch#5724 aims at restoring them (except _uid). As such, the context variables available through the update by query feature are the same as those available in Update API scripts.

Input variables

Just like in the Update API, the script has access to the following variables (as of Elasticsearch 1.1.0):

  • ctx
    • _source

And as of Elasticsearch 1.5.0:

  • ctx
    • _source
    • _index
    • _type
    • _id
    • _version
    • _routing
    • _parent
    • _timestamp
    • _ttl

Output variables

Just like in the Update API, you may update the following variables (as of Elasticsearch 1.1.0):

  • ctx
    • _timestamp
    • _ttl

elasticsearch-action-updatebyquery's People

Contributors

apeteri, baxford, k4200, martijnvg, neilmunro, ofavre, tegansnyder


elasticsearch-action-updatebyquery's Issues

java.lang.NullPointerException on update by query

I'm getting a java.lang.NullPointerException after executing an update by query. I suspect this happens on an empty query result. Maybe.

Error Log

[2013-07-01 20:41:58,530][DEBUG][action.updatebyquery     ] [server-01] error while executing bulk operations for an update by query action, sending partial response...
java.lang.NullPointerException
    at org.elasticsearch.action.updatebyquery.IndexUpdateByQueryResponse.shardResponses(IndexUpdateByQueryResponse.java:66)
    at org.elasticsearch.action.updatebyquery.IndexUpdateByQueryResponse.<init>(IndexUpdateByQueryResponse.java:48)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.finalizeAction(TransportUpdateByQueryAction.java:391)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleResponse(TransportUpdateByQueryAction.java:373)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onResponse(TransportUpdateByQueryAction.java:315)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onResponse(TransportUpdateByQueryAction.java:312)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.finalizeBulkActions(TransportShardUpdateByQueryAction.java:325)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:267)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:220)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performReplicas(TransportShardReplicationOperationAction.java:607)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,533][DEBUG][action.updatebyquery     ] [server-01] [data_default][0] error while executing update by query shard request
java.lang.IndexOutOfBoundsException: index 2
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleResponse(TransportUpdateByQueryAction.java:371)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onResponse(TransportUpdateByQueryAction.java:315)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onResponse(TransportUpdateByQueryAction.java:312)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.finalizeBulkActions(TransportShardUpdateByQueryAction.java:325)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:287)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:280)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:220)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performReplicas(TransportShardReplicationOperationAction.java:607)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,535][DEBUG][action.bulk              ] [server-01] [data_default][0], node[6oEZYV2rQYC4dEs64YHYQg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.bulk.PublicBulkShardRequest@5ac003ad]
java.lang.IndexOutOfBoundsException: index 3
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleException(TransportUpdateByQueryAction.java:380)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onFailure(TransportUpdateByQueryAction.java:319)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:289)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:280)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:220)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performReplicas(TransportShardReplicationOperationAction.java:607)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,537][DEBUG][action.updatebyquery     ] [server-01] error while executing bulk operations for an update by query action, sending partial response...
java.lang.IndexOutOfBoundsException: index 3
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleException(TransportUpdateByQueryAction.java:380)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onFailure(TransportUpdateByQueryAction.java:319)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:289)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:280)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onResponse(TransportShardUpdateByQueryAction.java:220)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performReplicas(TransportShardReplicationOperationAction.java:607)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:533)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,539][DEBUG][action.updatebyquery     ] [server-01] [data_default][0] error while executing update by query shard request
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:224)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:111)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:253)
    at org.elasticsearch.index.engine.robin.RobinEngine$RobinSearcher.release(RobinEngine.java:1453)
    at org.elasticsearch.search.internal.SearchContext.release(SearchContext.java:218)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.finalizeBulkActions(TransportShardUpdateByQueryAction.java:315)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:287)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:550)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,541][DEBUG][action.updatebyquery     ] [server-01] error while executing bulk operations for an update by query action, sending partial response...
java.lang.IndexOutOfBoundsException: index 4
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleException(TransportUpdateByQueryAction.java:380)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$2.onFailure(TransportUpdateByQueryAction.java:319)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:289)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:550)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:434)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.start(TransportShardReplicationOperationAction.java:343)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:107)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction.doExecute(TransportShardReplicationOperationAction.java:61)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)
[2013-07-01 20:41:58,542][DEBUG][action.updatebyquery     ] [server-01] [data_default][0] error while executing update by query shard request
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:224)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:111)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:253)
    at org.elasticsearch.index.engine.robin.RobinEngine$RobinSearcher.release(RobinEngine.java:1453)
    at org.elasticsearch.search.internal.SearchContext.release(SearchContext.java:218)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.finalizeBulkActions(TransportShardUpdateByQueryAction.java:315)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.onFailure(TransportShardUpdateByQueryAction.java:287)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$BatchedShardUpdateByQueryExecutor.executeBulkIndex(TransportShardUpdateByQueryAction.java:310)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:164)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
    at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:722)

Allow to put some limit on number of updated items

Hi Yakaz,

Is it possible to limit the items to be updated in some way? Let us say I have 10k documents matching my query but want to stop the update by query once 1k is updated.

Looking at the code, this is done at the shard level, so let's say we want to update 100 per shard, or one bulk batch per shard.

How hard would it be to implement it?

Maybe I can submit a pull request for it if you give me some hints on where this should be done and how feasible it is.

Thanks,
Igor

No such method error on ES 1.7.0

I'm getting this error repeatedly on ES 1.7.0:

{
  "error": "NoSuchMethodError[org.elasticsearch.rest.RestRequest.contentUnsafe()Z]",
  "status": 500
}

Some settings in my config file are:
script.plugin: on
script.inline: on
script.indexed: on
script.update: on

Elasticsearch 1.2.0

Hi,

Any plans to support Elasticsearch 1.2.0? It doesn't look like it's compatible.

NoClassDefFoundError[org/elasticsearch/rest/XContentThrowableRestResponse]; nested: ClassNotFoundException[org.elasticsearch.rest.XContentThrowableRestResponse];

NoSuchMethod Mac OS install

Dears,

Using the plugin, I get this error when launching ES:

{1.4.0}: Initialization Failed ...

  1) NoSuchMethodError[org.elasticsearch.action.support.TransportAction.<init>(Lorg/elasticsearch/common/settings/Settings;Ljava/lang/String;Lorg/elasticsearch/threadpool/ThreadPool;)V]
  2) NoSuchMethodError[org.elasticsearch.rest.BaseRestHandler.<init>(Lorg/elasticsearch/common/settings/Settings;Lorg/elasticsearch/client/Client;)V]

ES works well when I remove the plugin.

Could you please help?

Thanks

1.1.0 Doesn't work with ES 0.90.3

Exception in thread "elasticsearch[node][bulk][T#1]" java.lang.NoSuchMethodError: org.elasticsearch.search.internal.SearchContext.<init>(JLorg/elasticsearch/search/internal/ShardSearchRequest;Lorg/elasticsearch/search/SearchShardTarget;Lorg/elasticsearch/index/engine/Engine$Searcher;Lorg/elasticsearch/index/service/IndexService;Lorg/elasticsearch/index/shard/service/IndexShard;Lorg/elasticsearch/script/ScriptService;)V
at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.doExecuteInternal(TransportShardUpdateByQueryAction.java:139)
at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction.access$000(TransportShardUpdateByQueryAction.java:79)
at org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1.run(TransportShardUpdateByQueryAction.java:127)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)

ES 1.0.0 support

Is this going to work on ES 1.0.0, as ES 1.0.0 is now released?

Update object

PUT authors/book/1
{
    "name": "Revelation Space1",
    "genre": "scifi1",
    "publisher": "penguin1",
    "author": [
        {"author_id": 1001, "author_name": "mahadoang"},
        {"author_id": 1002, "author_name": "bugping"}
    ]
}

PUT authors/book/2
{
    "name": "Revelation Space2",
    "genre": "scifi2",
    "publisher": "penguin2",
    "author": [
        {"author_id": 1001, "author_name": "mahadoang"},
        {"author_id": 1003, "author_name": "bugping3"}
    ]
}

PUT authors/book/3
{
    "name": "Revelation Space3",
    "genre": "scifi3",
    "publisher": "penguin3",
    "author": [
        {"author_id": 1001, "author_name": "mahadoang"}
    ]
}

GET authors/book/_search
{
    "query": {
        "query_string": {
            "default_field": "author.author_id",
            "query": 1001
        }
    }
}

I want to update author.author_name from "mahadoang" to "somechai" where author.author_id = 1001.

// update
POST authors/_update_by_query
{
    "query": {
        "query_string": {
            "default_field": "author.author_id",
            "query": 1001
        }
    },
    "script": "ctx._source.author.author_name = author_name",
    "params": {
        "author_name": "somechai"
    }
}

It does not update anything. Please help.
Thank you.
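For what it's worth, `author` here is an array, so assigning to `author.author_name` cannot target the matching element; the script has to iterate the array. A Python sketch of the per-document change the update script needs to express (a hypothetical helper, mirroring what a loop over `ctx._source.author` in the script language would do):

```python
def rename_author(source, author_id, new_name):
    """Rename every entry in the author array whose author_id
    matches; this is the mutation the update script must express."""
    for author in source.get("author", []):
        if author.get("author_id") == author_id:
            author["author_name"] = new_name
    return source

doc = {
    "name": "Revelation Space1",
    "author": [
        {"author_id": 1001, "author_name": "mahadoang"},
        {"author_id": 1002, "author_name": "bugping"},
    ],
}
rename_author(doc, 1001, "somechai")
```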

SearchContext not released when no docs to update

The search context is not released when there aren't any documents to update.

Full details about this issue and its resolution can be found in the ES issue 5189, which should have been filed here, but it wasn't immediately apparent that the culprit was in this plugin.

I'm creating an issue in this repo for consistency's sake.

v2.0.0 not compatible with ES 1.3

ES v1.3 just came out and I wanted to give version 2.0.0 of this plugin a spin. During startup, I see this:

{1.3.0}: Initialization Failed ...
- VerifyError[class org.elasticsearch.rest.action.updatebyquery.RestUpdateByQueryAction overrides final method handleRequest.(Lorg/elasticsearch/rest/RestRequest;Lorg/elasticsearch/rest/RestChannel;)V]

It looks like RestUpdateByQueryAction used to implement RestHandler.handleRequest(RestRequest, RestChannel) but the latter is now implemented by the final method BaseRestHandler.handleRequest(RestRequest, RestChannel) and it delegates to BaseRestHandler.handleRequest(RestRequest, RestChannel, Client).

So RestUpdateByQueryAction would need a small twist to override the correct method.

Update query lock up

Let me first say this plugin (1.3.0 + ES 0.90.5) is really wonderful; it saved my project. However, I've noticed it locks up my requests and eventually locks up the entire Elasticsearch node.

Here is my update query

{
    "query": {
        "bool": {
            "must": {
                "term": {"m": "abc"}
            },
            "must_not": {
                "term": {"category": "cde"}
            }
        }
    },
    "script": "ctx._source.category=\"cde\""
}

It updates all my entries with m=abc and category!=cde to category=cde.

Some runs are okay, but some just hang and never return from the call. Below are some errors from elasticsearch.log:

----------------
[2013-10-25 22:52:49,907][DEBUG][action.updatebyquery     ] [Kragoff, Ivan] [logstash-2013.10.24][0] error while executing update by query shard request
java.lang.IndexOutOfBoundsException: index 3
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:117)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:98)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.finalizeAction(TransportUpdateByQueryAction.java:395)
.....
----------------
[2013-10-25 22:57:36,261][DEBUG][action.updatebyquery     ] [Kragoff, Ivan] error while executing bulk operations for an update by query action, sending partial response...
java.lang.IndexOutOfBoundsException: index 3
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:117)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:98)
--------------------
[2013-10-25 22:57:36,262][DEBUG][action.updatebyquery     ] [Kragoff, Ivan] [logstash-2013.10.24][0] error while executing update by query shard request
org.apache.lucene.store.AlreadyClosedException: this IndexReader is closed
    at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:224)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:111)
    at org.apache.lucene.search.SearcherManager.decRef(SearcherManager.java:58)
    at org.apache.lucene.search.ReferenceManager.release(ReferenceManager.java:253)
--------------------
[2013-10-25 22:57:36,264][DEBUG][action.bulk              ] [Kragoff, Ivan] [logstash-2013.10.24][0], node[KOnniIYJTrun0Yk7N6UIvQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.bulk.PublicBulkShardRequest@2c9104c4]
java.lang.IndexOutOfBoundsException: index 1
    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleException(TransportUpdateByQueryAction.java:380)

I can reproduce this quite easily. Let me know if you need more information. Thanks.

Support for min_score and ctx._score is badly needed

For example:

  1. I have a "bool" query with several query_strings with different boosts; these act as topic markers in full-text articles.
  2. I have collected score statistics, and I see that 75% accuracy is reached at a score of 0.003.
  3. So in the search API I can write "min_score": 0.003 and take only the top 25% of the base result set.
  4. Then I want to use _update_by_query to tag THIS 25% subset, e.g. "ctx._source.tag += 'mytopic'".
  5. But I cannot: min_score is not supported among its parameters, so I will always update the whole set.
  6. It could be accomplished with Groovy, "if (ctx._score >= 0.003) ...", but it seems the ctx variable has no "_score" or "score" property (or anything similar).
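The desired filtering, sketched client-side in Python (a hypothetical helper; the feature request is for the plugin to do this server-side via a min_score parameter or by exposing ctx._score):

```python
def ids_above_min_score(hits, min_score):
    """Keep only the hits whose relevance score passes the
    threshold, mimicking a server-side min_score filter."""
    return [hit["_id"] for hit in hits if hit["_score"] >= min_score]

hits = [
    {"_id": "a", "_score": 0.010},
    {"_id": "b", "_score": 0.001},
    {"_id": "c", "_score": 0.003},
]
selected = ids_above_min_score(hits, 0.003)  # only "a" and "c" pass
```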

UpdateByQueryRequestBuilder.request() doesn't build source

I get: Validation Failed: 1: Source is missing

When building a request the following way:

new UpdateByQueryRequestBuilder(es)
  .setIndices(...)
  .setTypes(...)
  .setQuery(...)
  .setScript(...)             
  .setScriptParams(...)
  .request()

I think you should define the request() method the way SearchRequestBuilder does:

public SearchRequest request() {
  if (sourceBuilder != null) {
    request.source(sourceBuilder());
  }
  return request;
}

UpdateByQueryResponse throwing timeout

Hi All,

I am using elasticsearch-action-updatebyquery.

Reference: https://github.com/yakaz/elasticsearch-action-updatebyquery

The following API call bulk-updates segment IDs on the matched documents.

Example: segmentId = 50 needs to be updated on more than 20 million documents.

Map<String, Object> scriptParams = new HashMap<>();
scriptParams.put("segmentexist", segId);
scriptParams.put("pgsegmentobject", pgSegmentIds);

UpdateByQueryClient updateByQueryClient = new UpdateByQueryClientWrapper(client);

UpdateByQueryResponse response = updateByQueryClient.prepareUpdateByQuery()
        .setIndices(props.getProperty("index"))
        .setTypes(props.getProperty("type"))
        .setTimeout(TimeValue.timeValueHours(24))
        .setIncludeBulkResponses(BulkResponseOption.ALL)
        .setScript("if (ctx._source.containsKey(\"pgSegmentIds\")) { if (ctx._source.pgSegmentIds.contains(segmentexist)) { ctx.op = \"none\" } else { ctx._source.pgSegmentIds += pgsegmentobject } } else { ctx._source.pgSegmentIds = pgsegmentobject }")
        .setScriptParams(scriptParams)
        .setQuery(query)
        .execute()
        .actionGet();

It fails during the update. I see the following exception.

2015-09-12 05:58:10 INFO transport:123 - [Moon Knight] failed to get local cluster state for [#transport#-1][ip-10-186-199-195][inet[localhost/10.31.48.47:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/10.31.48.47:9300]][cluster/state] request_id [416] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2015-09-12 05:58:10 INFO transport:123 - [Moon Knight] failed to get local cluster state for [PGMonetize-ES04][bqljhciDQ4-Tr2dRAcbWtw][ip-10-31-48-47][inet[/10.31.48.47:9300]]{master=true}, disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [PGMonetize-ES04][inet[/10.31.48.47:9300]][cluster/state] request_id [423] timed out after [5001ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

I have the following setup:

  1. 5 nodes, with 5 shards.
  2. script.disable_dynamic: false
  3. action.updatebyquery.bulk_size: 2500

I still get the above exception. Please help.

How can I solve this issue, and how can I improve performance (e.g. updating 20+ million records in under 10 minutes)?
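One plugin-independent mitigation for update sets this large is to split the work into bounded batches (for example by ranging over an id or timestamp field), so no single request holds a search context for hours. A minimal Python sketch of the batching logic, under the assumption that the matching ids can be enumerated:

```python
def batches(ids, size):
    """Yield fixed-size slices of a large id list; each slice
    would back one bounded update or bulk request."""
    for start in range(0, len(ids), size):
        yield ids[start:start + size]

# 10 ids in batches of 4 -> three requests instead of one huge one
chunk_sizes = [len(chunk) for chunk in batches(list(range(10)), 4)]
```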

Problem with Elasticsearch 1.6

Hi,

I tried to run:

POST /myindex/mydoc/_update_by_query
{
    "query": {
        "match_all": {}
    },
    "script": "ctx._source.somevalue = 100"
}

I got:

{
    "error": "NoSuchMethodError[org.elasticsearch.rest.RestRequest.contentUnsafe()Z]",
    "status": 500
}

Question: How to read a script from a file?

Hi,

Thanks for this useful plugin. I'm testing it for our project, and it works as expected with a dynamic script. Now I'd like to have it load a script from a file under the scripts directory, but I can't seem to get it to work.

The documentation says: "script: The script name or source."

I followed the doc and tried the following, but it didn't work:

curl -XPOST 'localhost:9200/foo/bar/_update_by_query' -d '
{
    "query": {"match_all" : { }},
    "script": "baz.groovy",
    "lang": "groovy",
    "params": {
        "param1": 3,
        "param2": 4
    }
}'

Thanks in advance.

The number of updated documents does not equal the total number of documents matching the query

Thank you for creating this useful plugin.
I use the query below to update documents that match:

POST http://host:9200/webdata/doc/_update_by_query?search_type=scan
{
    "query": {
        "match_phrase": {
            "token": "my phrase query"
        }
    },
    "script": "ctx._source.bn=[\"bds\"]"
}

but the result I receive is very odd:

{
  "indices": [
    {
      "webdata": {}
    }
  ],
  "ok": true,
  "took": 19118,
  "total": 92908,
  "updated": 10995
}

As you can see from the returned JSON, the number of updated documents is much smaller than the total number of documents matching the query. At first I thought some shards in the Elasticsearch cluster had problems, but when I checked, everything was OK.
I use Elasticsearch v1.7.1 and action-updatebyquery v2.6.0.
Thanks for your help.

Expose more `Update API` features

Currently, only query and script-related fields are parsed and forwarded to the update requests.

We could add:

  • retryOnConflict
  • doc

Providing partial document-based updates may help with performance, by avoiding the costly setup and running of a scripting environment for each document.
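For reference, a partial-document update is essentially a recursive merge of the supplied `doc` into `_source`; a simplified Python sketch of that semantics (ignoring arrays, null handling, and other ES specifics):

```python
def merge_doc(source, doc):
    """Recursively merge a partial doc into _source, the way the
    Update API's `doc` option does (simplified)."""
    for key, value in doc.items():
        if isinstance(value, dict) and isinstance(source.get(key), dict):
            merge_doc(source[key], value)  # descend into nested objects
        else:
            source[key] = value  # scalars and new keys overwrite/insert
    return source

src = {"a": 1, "nested": {"x": 1, "y": 2}}
merge_doc(src, {"b": 2, "nested": {"y": 3}})
```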

Use of wrong parent key when updating child-type (v1.4 on ES 0.90.9)?

I tried updating a whole type that is a child of another type (say, <parentType>). When doing so, the parent read from the results and then used for the index request is <parentType>#<parentId> instead of just <parentId> (this happens in TransportShardUpdateByQueryAction.createRequest). Needless to say, this results in duplicate/wrong data. Did I miss something, or is this an actual problem?
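If that diagnosis is right, the value being read back is the internal `_parent` field, which encodes `<parentType>#<parentId>`, and the id part would need to be extracted before building the index request. A hypothetical one-liner illustrating the needed split:

```python
def parent_id(stored_parent):
    """Strip the 'parentType#' prefix from the stored _parent
    value, keeping only the parent document id."""
    return stored_parent.split("#", 1)[-1]
```

Values without a `#` separator pass through unchanged, so a plain id stays a plain id.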

elasticsearch-action-updatebyquery updated 0 doc

The command I used is:

curl -XPOST '128.192.138.122:9200/patrol_v1cpu-2015-05-11/_update_by_query' -d '
{
    "query": {
        "match_all": {}
    },
    "script": "ctx._source[\"@timestamp\"] = new Date(ctx._source[\"@timestamp\"]).format(\"yyyy-MM-dd HH:mm:ssZZ\").replaceAll(\" \", \"T\").replaceAll(\"0800\", \"08:00\")"
}'

The result is:

{"ok":true,"took":77561,"total":633232,"updated":0,"indices":[{"patrol_v1cpu-2015-05-11":{}}]}

There is no error info in the log.

Part of the mapping of this index is below (sorry for the format). Could the dynamic settings be the cause? I tried the command on another index that has no dynamic settings, and it worked fine.
action

Help! Does jdbc-importer support ES templates?

My ES version is 1.4.4; the jdbc-importer version is 1.7.3.
My problem is: I do not want to define "index_settings" or "type_mapping" in the JDBC importer definition file; I want to define them in an ES template. Does jdbc-importer 1.7.3 support that?

IOOBE in `TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse()`

I got a strange error while running the tests multiple times with the same seed.

$ mvn test -Dtests.seed=385239340D701F6F
[INFO] Scanning for projects...
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building elasticsearch-action-updatebyquery 2.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (enforce-maven) @ elasticsearch-action-updatebyquery ---
[INFO] 
[INFO] --- maven-resources-plugin:2.3:resources (default-resources) @ elasticsearch-action-updatebyquery ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ elasticsearch-action-updatebyquery ---
[INFO] Compiling 1 source file to /data/Yakaz/elasticsearch-action-updatebyquery/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.3:testResources (default-testResources) @ elasticsearch-action-updatebyquery ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ elasticsearch-action-updatebyquery ---
[INFO] Compiling 1 source file to /data/Yakaz/elasticsearch-action-updatebyquery/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ elasticsearch-action-updatebyquery ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- junit4-maven-plugin:2.0.15:junit4 (tests) @ elasticsearch-action-updatebyquery ---
[INFO] <JUnit4> says 你好! Master seed: 385239340D701F6F
Executing 1 suite with 1 JVM.

Started J0 PID(16324@TeKa-Laptop).
HEARTBEAT J0 PID(16324@TeKa-Laptop): 2014-08-07T11:20:57, stalled for 22.1s at: UpdateByQueryTests.testUpdateByQuery_multipleIndices
Suite: org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests
  1> [2014-08-07 11:20:35,254][INFO ][test                     ] Setup TestCluster [shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]] with seed [385239340D701F6F] using [2] nodes
  1> [2014-08-07 11:20:35,264][INFO ][test                     ] Test testUpdateByQuery_multipleIndices(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) started
  1> [2014-08-07 11:20:35,599][INFO ][node                     ] [node_0] version[1.1.0], pid[16324], build[2181e11/2014-03-25T15:59:51Z]
  1> [2014-08-07 11:20:35,600][INFO ][node                     ] [node_0] initializing ...
  1> [2014-08-07 11:20:35,611][INFO ][plugins                  ] [node_0] loaded [action-updatebyquery], sites []
  1> [2014-08-07 11:20:38,809][INFO ][node                     ] [node_0] initialized
  1> [2014-08-07 11:20:38,811][INFO ][node                     ] [node_0] starting ...
  1> [2014-08-07 11:20:38,937][INFO ][transport                ] [node_0] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.71:9300]}
  1> [2014-08-07 11:20:42,002][INFO ][cluster.service          ] [node_0] new_master [node_0][b7OAgitdSey5YEN3M6KRLg][TeKa-Laptop][inet[/192.168.1.71:9300]], reason: zen-disco-join (elected_as_master)
  1> [2014-08-07 11:20:42,109][INFO ][discovery                ] [node_0] shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/b7OAgitdSey5YEN3M6KRLg
  1> [2014-08-07 11:20:42,185][INFO ][http                     ] [node_0] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.1.71:9200]}
  1> [2014-08-07 11:20:42,192][INFO ][gateway                  ] [node_0] recovered [0] indices into cluster_state
  1> [2014-08-07 11:20:42,199][INFO ][node                     ] [node_0] started
  1> [2014-08-07 11:20:42,199][INFO ][test                     ] Start Shared Node [node_0] not shared
  1> [2014-08-07 11:20:42,221][INFO ][node                     ] [node_1] version[1.1.0], pid[16324], build[2181e11/2014-03-25T15:59:51Z]
  1> [2014-08-07 11:20:42,222][INFO ][node                     ] [node_1] initializing ...
  1> [2014-08-07 11:20:42,227][INFO ][plugins                  ] [node_1] loaded [action-updatebyquery], sites []
  1> [2014-08-07 11:20:42,952][INFO ][node                     ] [node_1] initialized
  1> [2014-08-07 11:20:42,955][INFO ][node                     ] [node_1] starting ...
  1> [2014-08-07 11:20:42,996][INFO ][transport                ] [node_1] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/192.168.1.71:9301]}
  1> [2014-08-07 11:20:46,151][INFO ][cluster.service          ] [node_0] added {[node_1][n09X2sAeS7mdt0qzIQV2GA][TeKa-Laptop][inet[/192.168.1.71:9301]],}, reason: zen-disco-receive(join from node[[node_1][n09X2sAeS7mdt0qzIQV2GA][TeKa-Laptop][inet[/192.168.1.71:9301]]])
  1> [2014-08-07 11:20:46,171][INFO ][cluster.service          ] [node_1] detected_master [node_0][b7OAgitdSey5YEN3M6KRLg][TeKa-Laptop][inet[/192.168.1.71:9300]], added {[node_0][b7OAgitdSey5YEN3M6KRLg][TeKa-Laptop][inet[/192.168.1.71:9300]],}, reason: zen-disco-receive(from master [[node_0][b7OAgitdSey5YEN3M6KRLg][TeKa-Laptop][inet[/192.168.1.71:9300]]])
  1> [2014-08-07 11:20:46,261][INFO ][discovery                ] [node_1] shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/n09X2sAeS7mdt0qzIQV2GA
  1> [2014-08-07 11:20:46,283][INFO ][http                     ] [node_1] bound_address {inet[/0:0:0:0:0:0:0:0:9201]}, publish_address {inet[/192.168.1.71:9201]}
  1> [2014-08-07 11:20:46,284][INFO ][node                     ] [node_1] started
  1> [2014-08-07 11:20:46,285][INFO ][test                     ] Start Shared Node [node_1] not shared
  1> [2014-08-07 11:20:46,405][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_multipleIndices]: before test
  1> [2014-08-07 11:20:46,406][INFO ][test.integration.updatebyquery] --> creating index test1
  1> [2014-08-07 11:20:46,963][INFO ][cluster.metadata         ] [node_0] [test1] creating index, cause [api], shards [3]/[1], mappings [subtype1, type1]
  1> [2014-08-07 11:20:47,518][INFO ][test.integration.updatebyquery] --> creating index test2
  1> [2014-08-07 11:20:47,784][INFO ][cluster.metadata         ] [node_0] [test2] creating index, cause [api], shards [4]/[0], mappings [subtype1, type1]
  1> [2014-08-07 11:20:48,336][INFO ][cluster.metadata         ] [node_0] [test0] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:48,817][INFO ][cluster.metadata         ] [node_0] [test0] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:49,406][INFO ][cluster.metadata         ] [node_0] [test1] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:49,750][INFO ][cluster.metadata         ] [node_0] [test2] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:50,082][INFO ][cluster.metadata         ] [node_0] [test3] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:50,571][INFO ][cluster.metadata         ] [node_0] [test3] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:50,799][INFO ][cluster.metadata         ] [node_0] [test4] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:51,430][INFO ][cluster.metadata         ] [node_0] [test4] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:51,606][INFO ][cluster.metadata         ] [node_0] [test5] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:52,267][INFO ][cluster.metadata         ] [node_0] [test5] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:52,536][INFO ][cluster.metadata         ] [node_0] [test6] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:52,944][INFO ][cluster.metadata         ] [node_0] [test6] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:53,462][INFO ][cluster.metadata         ] [node_0] [test7] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:53,905][INFO ][cluster.metadata         ] [node_0] [test7] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:54,086][INFO ][cluster.metadata         ] [node_0] [test8] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:54,473][INFO ][cluster.metadata         ] [node_0] [test8] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:54,718][INFO ][cluster.metadata         ] [node_0] [test9] creating index, cause [auto(index api)], shards [10]/[0], mappings []
  1> [2014-08-07 11:20:55,627][INFO ][cluster.metadata         ] [node_0] [test9] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:20:56,856][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_multipleIndices]: cleaning up after test
  1> [2014-08-07 11:20:56,866][INFO ][cluster.metadata         ] [node_0] [test0] deleting index
  1> [2014-08-07 11:20:56,891][ERROR][action.updatebyquery     ] [node_1] [test1][0] error while executing update by query shard request
  1> org.elasticsearch.transport.RemoteTransportException: index 10
  1> Caused by: org.elasticsearch.transport.ResponseHandlerFailureTransportException: index 10
  1> Caused by: java.lang.IndexOutOfBoundsException: index 10
  1>    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
  1>    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:117)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$MultipleIndexUpdateByQueryActionListener.onResponse(TransportUpdateByQueryAction.java:98)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.finalizeAction(TransportUpdateByQueryAction.java:395)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleResponse(TransportUpdateByQueryAction.java:373)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$3.handleResponse(TransportUpdateByQueryAction.java:340)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$3.handleResponse(TransportUpdateByQueryAction.java:333)
  1>    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:154)
  1>    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
  1>    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
  1>    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  1>    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  1>    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  1>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  1>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  1>    at java.lang.Thread.run(Thread.java:745)
  1> [2014-08-07 11:20:56,895][ERROR][transport.netty          ] [node_1] failed to handle exception response [org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$3@27dbd56a]
  1> java.lang.IndexOutOfBoundsException: index 3
  1>    at java.util.concurrent.atomic.AtomicReferenceArray.checkedByteOffset(AtomicReferenceArray.java:78)
  1>    at java.util.concurrent.atomic.AtomicReferenceArray.set(AtomicReferenceArray.java:139)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$ShardResponseListener.handleException(TransportUpdateByQueryAction.java:380)
  1>    at org.elasticsearch.action.updatebyquery.TransportUpdateByQueryAction$UpdateByQueryIndexOperationAction$3.handleException(TransportUpdateByQueryAction.java:344)
  1>    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:181)
  1>    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:159)
  1>    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
  1>    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
  1>    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
  1>    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  1>    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  1>    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  1>    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  1>    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  1>    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  1>    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  1>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  1>    at java.lang.Thread.run(Thread.java:745)
  1> [2014-08-07 11:20:58,031][INFO ][cluster.metadata         ] [node_0] [test8] deleting index
  1> [2014-08-07 11:20:58,631][INFO ][cluster.metadata         ] [node_0] [test2] deleting index
  1> [2014-08-07 11:20:58,861][INFO ][cluster.metadata         ] [node_0] [test6] deleting index
  1> [2014-08-07 11:20:59,286][INFO ][cluster.metadata         ] [node_0] [test1] deleting index
  1> [2014-08-07 11:20:59,573][INFO ][cluster.metadata         ] [node_0] [test3] deleting index
  1> [2014-08-07 11:21:00,320][INFO ][cluster.metadata         ] [node_0] [test4] deleting index
  1> [2014-08-07 11:21:00,713][INFO ][cluster.metadata         ] [node_0] [test5] deleting index
  1> [2014-08-07 11:21:01,114][INFO ][cluster.metadata         ] [node_0] [test7] deleting index
  1> [2014-08-07 11:21:01,525][INFO ][cluster.metadata         ] [node_0] [test9] deleting index
  1> [2014-08-07 11:21:01,905][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_multipleIndices]: cleaned up after test
  1> [2014-08-07 11:21:01,905][INFO ][test                     ] Wipe data directory for all nodes locations: [data/d3/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/0, data/d1/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/0, data/d3/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/1, data/d1/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/1, data/d0/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/1, data/d0/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/0, data/d2/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/0, data/d2/shared-TeKa-Laptop-CHILD_VM=[0]-CLUSTER_SEED=[4058369109940772719]-HASH=[4DCCE58B3BA]/nodes/1]
  1> [2014-08-07 11:21:01,917][ERROR][test                     ] FAILURE  : testUpdateByQuery_multipleIndices(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests)
  1> REPRODUCE WITH  : mvn test -Dtests.seed=385239340D701F6F -Dtests.class=org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests -Dtests.method=testUpdateByQuery_multipleIndices -Dtests.prefix=tests -Dfile.encoding=UTF-8 -Duser.timezone=Europe/Paris -Des.logger.level=INFO
  1> Throwable:
  1> java.lang.AssertionError: 
  1> Expected: "test1"
  1>      got: "test2"
  1> 
  1>     __randomizedtesting.SeedInfo.seed([385239340D701F6F:2AF542E7CD9C08AE]:0)
  1>     [...org.junit.*]
  1>     org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests.testUpdateByQuery_multipleIndices(UpdateByQueryTests.java:222)
  1>     [...sun.*, com.carrotsearch.randomizedtesting.*, java.lang.reflect.*]
  1>     org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  1>     org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
  1>     org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  1>     [...com.carrotsearch.randomizedtesting.*]
  1>     org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  1>     org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
  1>     org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  1>     [...com.carrotsearch.randomizedtesting.*]
  1>     org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  1>     org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  1>     [...com.carrotsearch.randomizedtesting.*]
  1>     org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
  1>     org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  1>     org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
  1>     org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
  1>     [...com.carrotsearch.randomizedtesting.*]
  1>     java.lang.Thread.run(Thread.java:745)
  1> 
  1> [2014-08-07 11:21:01,938][INFO ][test                     ] Test testUpdateByQuery_multipleIndices(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) finished
FAILURE 26.7s | UpdateByQueryTests.testUpdateByQuery_multipleIndices <<<
   > Throwable #1: java.lang.AssertionError: 
   > Expected: "test1"
   >      got: "test2"
   > 
   >    at __randomizedtesting.SeedInfo.seed([385239340D701F6F:2AF542E7CD9C08AE]:0)
   >    at org.junit.Assert.assertThat(Assert.java:780)
   >    at org.junit.Assert.assertThat(Assert.java:738)
   >    at org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests.testUpdateByQuery_multipleIndices(UpdateByQueryTests.java:222)
   >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   >    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   >    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   >    at java.lang.reflect.Method.invoke(Method.java:606)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
   >    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   >    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
   >    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   >    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   >    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   >    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
   >    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   >    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   >    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
   >    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
   >    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
   >    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
   >    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   >    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   >    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   >    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   >    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   >    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   >    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   >    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   >    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
   >    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
   >    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   >    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
   >    at java.lang.Thread.run(Thread.java:745)
  1> [2014-08-07 11:21:01,940][INFO ][test                     ] Test testUpdateByQuery_usingAliases(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) started
  1> [2014-08-07 11:21:01,951][INFO ][plugins                  ] [transport_client_node_0] loaded [action-updatebyquery], sites []
  1> [2014-08-07 11:21:02,119][INFO ][plugins                  ] [transport_client_node_1] loaded [action-updatebyquery], sites []
  1> [2014-08-07 11:21:02,284][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_usingAliases]: before test
  1> [2014-08-07 11:21:02,318][INFO ][cluster.metadata         ] [node_0] [test] creating index, cause [api], shards [5]/[1], mappings []
  1> [2014-08-07 11:21:02,711][INFO ][cluster.metadata         ] [node_0] [test] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:21:02,963][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_usingAliases]: cleaning up after test
  1> [2014-08-07 11:21:02,967][INFO ][cluster.metadata         ] [node_0] [test] deleting index
  1> [2014-08-07 11:21:03,305][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_usingAliases]: cleaned up after test
  1> [2014-08-07 11:21:03,387][INFO ][test                     ] Test testUpdateByQuery_usingAliases(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) finished
  1> [2014-08-07 11:21:03,388][INFO ][test                     ] Test testUpdateByQuery_noMatches(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) started
  1> [2014-08-07 11:21:03,400][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_noMatches]: before test
  1> [2014-08-07 11:21:03,401][INFO ][test.integration.updatebyquery] --> creating index test
  1> [2014-08-07 11:21:03,439][INFO ][cluster.metadata         ] [node_0] [test] creating index, cause [api], shards [3]/[1], mappings [subtype1, type1]
  1> [2014-08-07 11:21:03,714][INFO ][cluster.metadata         ] [node_0] [test] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:21:03,768][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_noMatches]: cleaning up after test
  1> [2014-08-07 11:21:03,770][INFO ][cluster.metadata         ] [node_0] [test] deleting index
  1> [2014-08-07 11:21:03,923][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery_noMatches]: cleaned up after test
  1> [2014-08-07 11:21:03,926][INFO ][test                     ] Test testUpdateByQuery_noMatches(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) finished
  1> [2014-08-07 11:21:03,927][INFO ][test                     ] Test testUpdateByQuery(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) started
  1> [2014-08-07 11:21:03,934][INFO ][plugins                  ] [transport_client_node_1] loaded [action-updatebyquery], sites []
  1> [2014-08-07 11:21:04,066][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery]: before test
  1> [2014-08-07 11:21:04,067][INFO ][test.integration.updatebyquery] --> creating index test
  1> [2014-08-07 11:21:04,098][INFO ][cluster.metadata         ] [node_0] [test] creating index, cause [api], shards [2]/[0], mappings [subtype1, type1]
  1> [2014-08-07 11:21:04,423][INFO ][cluster.metadata         ] [node_0] [test] update_mapping [type1] (dynamic)
  1> [2014-08-07 11:21:04,587][INFO ][cluster.metadata         ] [node_0] [test] update_mapping [type2] (dynamic)
  1> [2014-08-07 11:21:04,817][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery]: cleaning up after test
  1> [2014-08-07 11:21:04,819][INFO ][cluster.metadata         ] [node_0] [test] deleting index
  1> [2014-08-07 11:21:04,946][INFO ][test.integration.updatebyquery] [UpdateByQueryTests#testUpdateByQuery]: cleaned up after test
  1> [2014-08-07 11:21:04,966][INFO ][test                     ] Test testUpdateByQuery(org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests) finished
Completed in 30.86s, 4 tests, 1 failure <<< FAILURES!


Tests with failures:
  - org.elasticsearch.test.integration.updatebyquery.UpdateByQueryTests.testUpdateByQuery_multipleIndices


[INFO] JVM J0:     0.73 ..    32.31 =    31.58s
[INFO] Execution time total: 32 seconds
[INFO] Tests summary: 1 suite, 4 tests, 1 failure
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 41.445s
[INFO] Finished at: Thu Aug 07 11:21:05 CEST 2014
[INFO] Final Memory: 30M/177M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal com.carrotsearch.randomizedtesting:junit4-maven-plugin:2.0.15:junit4 (tests) on project elasticsearch-action-updatebyquery: Execution tests of goal com.carrotsearch.randomizedtesting:junit4-maven-plugin:2.0.15:junit4 failed: /data/Yakaz/elasticsearch-action-updatebyquery/target/junit4-ant-6078308574069738634.xml:16: There were test failures: 1 suite, 4 tests, 1 failure -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

I was unable to reproduce using the command suggested in the output.

I was working on another bug uncovered by / hinted at by randomization, in which I only modified the number of shards in the settings of a client().admin().indices().prepareCreate() call. It should not interfere anyway.
I was on commit 349d105.

Not updating any documents

Hi,

I am trying to update several documents at the same time using the update by query plugin.

The problem seems to be with the script, as the query itself returns the correct result with no issues. ES seems to be able to execute the script, as there are no exceptions in the logs; yet nothing gets updated.

So here is the document I am indexing:

curl -XPUT 'localhost:9200/users/files/1' -d '{
"path" : "path/to/file",
"size": 200
}'

Now here is my update by query request to change the path field from 'path/to/file' to 'another/path/to/file':

curl -XPOST 'localhost:9200/users/files/_update_by_query' -d '
{
"query": {
"bool": {
"must": [
{
"term": {
"path": "path/to/file"
}
}]
}
},
"script": "def str = ctx_source.path;\ndef str2 = str.replaceAll(\"path/to/file\", \"another/path/to/file\");\nctx._source.path = str2;"
}'

And this is what I get:
{
"ok":true,
"took":516,
"total":75,
"updated":0,
"indices":[
{
"new_index":{}
}]
}

So the query matched 75 documents but did not update any.
Does anyone know how I can make it work?

Here is the script in pretty form:

def str = ctx_source.path;
def str2 = str.replaceAll("path/to/file", "another/path/to/file");
ctx._source.path = str2;
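Note that the first line reads ctx_source while the last line writes ctx._source; assuming the document is exposed to the script only as ctx._source (as it is in the other examples on this page), a corrected version of the script would presumably be:

```
def str = ctx._source.path;
def str2 = str.replaceAll("path/to/file", "another/path/to/file");
ctx._source.path = str2;
```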

Not working on ES 2.0

I tried installing the plugin on a cluster with elasticsearch 2.0. Elasticsearch fails to start and throws this exception:

Exception in thread "main" java.lang.IllegalStateException: Unable to initialize plugins
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/plugins/action-updatebyquery/plugin-descriptor.properties
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:317)
        at java.nio.file.Files.newByteChannel(Files.java:363)
        at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:380)
        at java.nio.file.Files.newInputStream(Files.java:108)
        at org.elasticsearch.plugins.PluginInfo.readFromProperties(PluginInfo.java:86)
        at org.elasticsearch.plugins.PluginsService.getPluginBundles(PluginsService.java:306)
        at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:112)
        at org.elasticsearch.node.Node.<init>(Node.java:144)
        at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:145)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:170)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:270)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)

So when should we expect this to be fixed? And is there a workaround for this issue?

Problem related to usage

I am using ES-1.7.0.
I wanted to update certain documents based on a query.

I installed the plugin on all the nodes.
Now,

  1. Doing a POST request on this URL -> http://host:9200/abc-2015-07-25/_update_by_query
    with the body:
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {"exists": {
              "field": "a"
            }},
            {"exists" : {
              "field": "b"
            }}
          ]
        }
      }
    }
  },
  "script":"ctx._source.uid = ctx._source.a + ctx._source.b"
}

I always get this response:

{
  "error": "RemoteTransportException[[es-node-4][inet[/host:9300]][indices:data/write/index]]; nested: InvalidTypeNameException[mapping type name [_update_by_query] can't start with '_']; ",
  "status": 400
}

Seems to eat up a lot of memory

I tried this cool plugin on my test machine with 32GB RAM. After I allocated 12GB of memory to the JVM and restarted ES, I submitted an update_by_query request which updates approximately 81,000 documents among 20,000,000 total. After several minutes I saw an OOM exception thrown in the ES console. I have to run the same request two or three times to complete all the updates for those 81,000 documents. Do you have any idea why this happens? Could it be a bug? The documents affected by this request definitely amount to less than 1GB, of course.

By the way, updating 21,321 documents (among those 20,000,000) takes 14.4 minutes, which is a little longer than expected. Do you have any suggestions to speed up this process? I'd really appreciate it if you could help me with this.

Elasticsearch [1.5.2] update_by_query plugin doesn't work as expected

I'm fairly new to Elasticsearch so apologies if I am missing something elementary. I am trying to update a field within documents in my index.

POST test_index/_update_by_query
{
  "query": {
    "match": {
      "label": {
        "query": " checked",
        "type": "phrase"
      }
    }
  },
  "script": "ctx._source.status = 'ok'"
}

It looks like it's finding the documents, it just doesn't update them. Here is the output in Marvel:

{
   "ok": true,
   "took": 7531,
   "total": 230954,
   "updated": 0,
   "indices": [
      {
         "test_index": {}
      }
    ]
}

I've enabled dynamic_scripting as well. Any help would be most appreciated. Thanks -

EsRejectedExecutionException received during update by query operation

Occasionally, my update by query requests are failing with the following output:

{
  "ok": true,
  "took": 69,
  "total": 0,
  "updated": 0,
  "indices": [
    {
      "logs_2007": {
        "0": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@29f475cb]"
        },
        "3": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@4f4b0c16]"
        },
        "4": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@41f7d64d]"
        }
      }
    },
    {
      "logs_2008": {}
    },
    {
      "logs_2009": {
        "2": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@37077cdd]"
        }
      }
    },
    {
      "logs_2010": {
        "2": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@72b29869]"
        },
        "3": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@4d7ba774]"
        },
        "4": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@755fd15]"
        }
      }
    },
    {
      "logs_2011": {
        "0": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@36d8e218]"
        }
      }
    },
    {
      "logs_2012": {
        "3": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@5b1dc281]"
        },
        "4": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@69f89cd0]"
        }
      }
    },
    {
      "logs_2013": {}
    },
    {
      "logs_2014": {
        "3": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@85635b4]"
        },
        "4": {
          "error": "EsRejectedExecutionException[rejected execution (queue capacity 50) on org.elasticsearch.action.updatebyquery.TransportShardUpdateByQueryAction$1@270a4455]"
        }
      }
    }
  ]
}

What I've been doing so far is simply to increase the queue_size of the bulk queue, knowing it's neither ideal nor a good idea, since it will only "hide" a problem that is bound to resurface later.

This morning I came across the latest ES blog post on performance considerations during indexing, where Michael mentions that when EsRejectedExecutionExceptions are getting thrown, it usually means that the client is sending too many concurrent requests at the same time, which makes sense now that I read it.
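That observation points at the usual client-side remedy: back off and retry when a response reports rejected executions. A generic sketch of this, in Python with illustrative helper names (retry_rejected and has_rejections are not part of the plugin, and the shard-error layout matches the responses quoted above):

```python
import time

def retry_rejected(send_request, is_rejected, max_retries=5, base_delay=0.5):
    """Send a request, backing off exponentially while the response
    still reports rejected executions."""
    response = send_request()
    for attempt in range(max_retries):
        if not is_rejected(response):
            break
        # exponential backoff: 0.5s, 1s, 2s, ... before re-sending
        time.sleep(base_delay * (2 ** attempt))
        response = send_request()
    return response

def has_rejections(response):
    """A response is 'rejected' if any per-shard error mentions
    EsRejectedExecutionException (layout as in the output above)."""
    return any(
        "EsRejectedExecutionException" in shard.get("error", "")
        for index in response.get("indices", [])
        for shards in index.values()
        for shard in shards.values()
    )
```

A caller would pass its own request function, e.g. retry_rejected(lambda: do_update_by_query(), has_rejections).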

The README file mentions that the action.updatebyquery.bulk_size option can be set in the elasticsearch configuration file. However, it would be nice to also mention that the default setting for this option is 1000, and that if someone starts seeing EsRejectedExecutionException in the response, the way to proceed is to set that option to at most the queue_size (defaults to 50) of the bulk queue of their ES install.

There's also a typo to fix. The equal sign below should be a colon since the elasticsearch config file is YAML:
action.updatebyquery.bulk_size=2500 should read action.updatebyquery.bulk_size: 2500
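For clarity, here is the corrected setting as it would appear in elasticsearch.yml (keeping the 2500 value from the example above):

```yaml
# elasticsearch.yml uses YAML mapping syntax: a colon followed by a space
action.updatebyquery.bulk_size: 2500
```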

not working with 1.0.7 version of Elasticsearch

Hi team,
First of all, thank you for this tool and thank you for the support!

I'm working on a project that is based on Elasticsearch 1.0.7, and it's not possible to upgrade it to get the new functionality of Elasticsearch.

I want to use this tool to update many documents. I tried to install the old version by running plugin/elasticsearch-action-updatebyquery/1.0.0/elasticsearch-action-updatebyquery-1.0.0.zip" install "elasticsearch-action-updatebyquery" from the bin folder of elasticsearch, but it shows me the following messages:

-> Installing elasticsearch-action-updatebyquery... Trying http://oss.sonatype.org/content/repositories/releases/com/yakaz/elasticsearch/plugins/elasticsearch-action-updatebyquery/1.0.0/elasticsearch-action-updatebyquery-1.0.0.zip... Failed: UnknownHostException[oss.sonatype.org] Trying https://github.com/null/elasticsearch-action-updatebyquery/archive/master.zip... Failed to install elasticsearch-action-updatebyquery, reason: failed to download out of all possible locations..., use --verbose to get detailed information

I downloaded it and tried to install the zip file by renaming it to "plugin.zip" and running the command "plugin install file:///plugin.zip" as mentioned in the Elasticsearch documentation, but it shows me that the command is unknown.

I tried to unzip it and put the JAR file in the lib folder, but Elasticsearch does not run correctly.

Can you provide me some help please?
Thanks and best regards,
Mohammed

OutOfMemoryError vs "ok": true

Please do not return "ok": true when it is actually not OK:

{"indices": [{"aaa": {"1": {"error": "OutOfMemoryError[Java heap space]"}, "0": {"error": "OutOfMemoryError[Java heap space]"}, "3": {"error": "RemoteTransportException[[bbb][inet[/ccc]][updateByQuery/shard]]; nested: OutOfMemoryError[Java heap space]; "}, "2": {"error": "RemoteTransportException[[ddd][inet[/eee]][updateByQuery/shard]]; nested: OutOfMemoryError[Java heap space]; "}, "4": {"error": "OutOfMemoryError[Java heap space]"}}}],
"updated": 0, "total": 0, "ok": true, "took": 86009}

Please notice "ok": true in the end.

Using updatebyquery/1.1.0 for elasticsearch/0.90.1;
sorry, I have no time to test on the most recent versions.

Compatible version with Elasticsearch 1.3.4

Hi Yakaz

Do you have an expected release date for the elasticsearch-action-updatebyquery that will be compatible with ES 1.3.4 (latest stable to date)?

I will provide errors on running v2.2.2 on ES 1.3.4 shortly.

I look forward to hearing from you soon.

Thanks
Jared

Response doesn't seem to work

The response query string option doesn't seem to work.

Using it in the sample code below:

curl -XPOST 'localhost:9200/twitter/tweet/_update_by_query' -d '                
{
    "query" : {
        "term" : {
            "message" : "you"
        },
        "response": "all"                   
    },                   
    "script" : "ctx._source.likes += 1"
}'

This yields the error: {"error":"ElasticSearchException[Script is required]"}

parent Id, Uid Fix in 1.3.0

Hi,

Thanks for the reply.

I came across this parent Id, Uid issue, and I saw you have fixed it in the
latest version.

Is it possible for you to fix it in 1.3 also? We are using ES 0.90.5.

Also, it would be great if you could write a similar copy-by-query plugin,
which could do in-place copies.

Regards,
VB

Compatibility issue of 1.4.0 with elasticsearch-0.90.11

Using the updatebyquery plugin v1.4.0 with a fresh ES 0.90.11 install, I'm getting the stack trace below.

The carrot2 plugin experienced the same issue, see here:
carrot2/elasticsearch-carrot2#4

It looks like recompiling the plugin with the latest source did it. Any chance to republish the 1.4.0 release?

Thanks much

[2014-02-18 14:37:44,356][WARN ][http.netty               ] [Tess-One] Caught exception while handling client http traffic, closing connection [id: 0x57954f68, /127.0.0.1:56211 => /127.0.0.1:9200]
java.lang.IncompatibleClassChangeError: Found class org.elasticsearch.rest.RestRequest, but interface was expected
    at org.elasticsearch.rest.action.updatebyquery.RestUpdateByQueryAction.handleRequest(RestUpdateByQueryAction.java:63)
    at org.elasticsearch.rest.RestController.executeHandler(RestController.java:159)
    at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:142)
    at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:121)
    at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:83)
    at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:291)
    at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:43)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
    at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
