
logstash-filter-elasticsearch's Introduction

Logstash Plugin


This is a plugin for Logstash.

It is fully free and fully open source. The license is Apache 2.0, meaning you are free to use it however you want.

Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will first be converted into asciidoc and then into html. All plugin documentation is placed in one central location.

Need Help?

Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

Developing

1. Plugin Development and Testing

Code

  • To get started, you'll need JRuby with the Bundler gem installed.

  • Create a new plugin or clone an existing one from the GitHub logstash-plugins organization. We also provide example plugins.

  • Install dependencies

bundle install

Test

  • Update your dependencies
bundle install
  • Run tests
bundle exec rspec

2. Running your unpublished Plugin in Logstash

2.1 Run in a local Logstash clone

  • Edit Logstash Gemfile and add the local plugin path, for example:
gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
  • Install plugin
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Run Logstash with your plugin
bin/logstash -e 'filter {awesome {}}'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.

2.2 Run in an installed Logstash

You can use the same method as in 2.1 to run your plugin in an installed Logstash by editing its Gemfile and pointing the :path to your local plugin development directory, or you can build the gem and install it using:

  • Build your plugin gem
gem build logstash-filter-awesome.gemspec
  • Install the plugin from the Logstash home
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Start Logstash and proceed to test the plugin

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

logstash-filter-elasticsearch's People

Contributors

andrewvc, andsel, bperian, colinsurprenant, dadoonet, davidecavestro, dedemorton, djschny, edmocosta, ekho, electrical, eohtake, fbaligand, jakelandis, jehuty0shift, jordansissel, jsvd, kaisecheng, karenzone, kares, lucabelluccini, markwalkom, ph, robbavey, robin13, suyograo, suzuki, synfk, yaauie, ycombinator


logstash-filter-elasticsearch's Issues

Specifying `ssl => true` leads to exception "URI::InvalidURIError: bad URI(is not URI?)"

When specifying ssl => true (with or without ca_cert param), the URL string is not interpolated correctly. Here's the error raised:

[2018-06-20T14:18:43,541][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"cert", :query=>"@timestamp:*", :event=>#<LogStash::Event:0x7ad689eb>, :error=>#<URI::InvalidURIError: bad URI(is not URI?): https://{:host=>"https://example.us-central1.gcp.cloud.es.io:9243/", :scheme=>"https"}:https>}

Notice the URL string appears to contain the full params hash (hash.to_s): https://{:host=>"https://example.us-central1.gcp.cloud.es.io:9243/", :scheme=>"https"} instead of templating the fields in a string template (e.g. "%{scheme}://%{host}" % { scheme: 'https', host: '...' })

This particular trace was generated by the following configuration (subdomain of the cloud instance changed):

bin/logstash --log.level debug -e "input { generator { count =>  3 } }
filter { elasticsearch {
  user => elastic password => '$ES_PWD' hosts => ['https://example.us-central1.gcp.cloud.es.io:9243/']
  index => 'cert' query => '@timestamp:*' sort => '_id' fields => { 'sequence' => 'last_sequence' }
  ssl => true
} }
output { stdout {} elasticsearch {
  user => elastic password => '$ES_PWD' hosts => ['https://example.us-central1.gcp.cloud.es.io:9243/']
  index => 'cert'
} }"

Known workaround

Specify all hosts with their https:// prefix and do not specify the ssl attribute.
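A minimal sketch of that workaround, reusing the reproduction config above (the host and password are placeholders carried over from it):

filter {
  elasticsearch {
    user => elastic
    password => '$ES_PWD'
    # the https:// scheme in the URL carries the TLS intent; the ssl option is omitted entirely
    hosts => ['https://example.us-central1.gcp.cloud.es.io:9243/']
    index => 'cert'
    query => '@timestamp:*'
    sort => '_id'
    fields => { 'sequence' => 'last_sequence' }
  }
}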

Full stack trace

#<URI::InvalidURIError: bad URI(is not URI?): https://{:host=>"https://example.us-central1.gcp.cloud.es.io:9243/", :scheme=>"https"}:https>

 "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/rfc3986_parser.rb:67:in `split'",
 "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/rfc3986_parser.rb:73:in `parse'",
 "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/common.rb:227:in `parse'",
 "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/common.rb:714:in `URI'",
 "org/jruby/RubyMethod.java:127:in `call'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/utils.rb:258:in `URI'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:309:in `url_prefix='",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:77:in `initialize'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/http/faraday.rb:38:in `__build_connection'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:138:in `block in __build_connections'",
 "org/jruby/RubyArray.java:2486:in `map'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:130:in `__build_connections'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/transport/base.rb:40:in `initialize'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport/client.rb:114:in `initialize'",
 "/Users/mat/work/elastic/logstash/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.4/lib/elasticsearch/transport.rb:26:in `new'",
 "/Users/mat/work/elastic/plugins/logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch/client.rb:28:in `initialize'",
 "/Users/mat/work/elastic/plugins/logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch.rb:145:in `new_client'",
 "/Users/mat/work/elastic/plugins/logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch.rb:149:in `block in get_client'",
 "/Users/mat/work/elastic/plugins/logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch.rb:149:in `get_client'",
 "/Users/mat/work/elastic/plugins/logstash-filter-elasticsearch/lib/logstash/filters/elasticsearch.rb:93:in `filter'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/filters/base.rb:145:in `do_filter'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/filters/base.rb:164:in `block in multi_filter'",
 "org/jruby/RubyArray.java:1734:in `each'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/filters/base.rb:161:in `multi_filter'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/filter_delegator.rb:44:in `multi_filter'",
 "(eval):46:in `block in filter_func'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/pipeline.rb:443:in `filter_batch'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/pipeline.rb:422:in `worker_loop'",
 "/Users/mat/work/elastic/logstash/logstash-core/lib/logstash/pipeline.rb:384:in `block in start_workers'"

geo_point data type lost

Hi

I have an index, let's call it "addresses" which has a field called "location" of the type "geo_point".

I use this index to "enhance" a second index by adding the "location" field to matching events using the logstash-filter-elasticsearch.

Unfortunately the datatype gets lost when copying the data, and I end up with two new string fields, "location.lat" and "location.lon".

Even when I define in the mapping for both indexes that the field "location" is of type "geo_point", I get the same results.

I haven't tested whether this problem also arises when copying data within the same index.

Is this maybe a bug or am I doing something wrong here?

Thanks

Duration => nil

Hi!
I want to calculate the delta between 2 events, using ruby to calculate the delta time.
This is my config:
if [status] == "out" {
  elasticsearch {
    hosts => ["localhost"]
    query => 'status:in AND username:"%{username}"'
    fields => ["@timestamp","start"]
  }
  ruby {
    init => "require 'time'"
    code => "event['duration_hrs'] = (event['@timestamp'] - event['start']) / 3600 rescue nil"
    add_tag => ["duration_hrs"]
  }
}
But result duration => nil
===> "@Version" => "1",
"@timestamp" => "2016-06-01T16:06:46.816Z",
"path" => "/opt/file1.log",
"host" => "archlinux",
"start" => "2016-06-01T16:06:23.769Z",
"username" => "linh",
"status" => "out",
"duration_hrs" => nil,
"tags" => [
[0] "duration_hrs"

If I use:
ruby {
  init => "require 'time'"
  code => "event['duration_hrs'] = Time.parse(event['@timestamp']).to_s - Time.parse(event['start']).to_s"
  add_tag => ["duration_hrs"]
}
Whether I use Time.parse(...).to_s or .to_f, I get a ruby exception:
==>
Ruby exception occurred: undefined method `gsub!' for "2016-06-01T16:04:42.483Z":LogStash::Timestamp {:level=>:error}
"@Version" => "1",
"@timestamp" => "2016-06-01T16:04:42.483Z",
"path" => "/opt/file1.log",
"host" => "archlinux",
"start" => "2016-06-01T16:04:19.417Z",
"username" => "linh",
"status" => "out",
"tags" => [
[0] "_rubyexception"
]
I don't understand what's wrong, can you help me?
Thanks
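For what it's worth, the gsub! error above happens because Time.parse expects a String while event['@timestamp'] is a LogStash::Timestamp object. A hedged sketch of the calculation, converting both values to strings before parsing (assumes the legacy event['...'] API used above):

ruby {
  init => "require 'time'"
  # to_s turns both the Timestamp object and the string field into parseable ISO8601 strings;
  # subtracting two Time objects yields seconds, so divide by 3600 for hours
  code => "event['duration_hrs'] = (Time.parse(event['@timestamp'].to_s) - Time.parse(event['start'].to_s)) / 3600 rescue nil"
  add_tag => ["duration_hrs"]
}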

Version 2.1.1 does not respect SSL settings

  • Version: 2.1.1 on LS 2.1.3
  • Operating System: Ubuntu 16.04
  • Config File (if you have sensitive info, please remove it): see below
  • Sample Data: -
  • Steps to Reproduce:

config:

        elasticsearch {
            hosts => ["nuc01:9200"]
            index => "pfsense-*"
            ssl => true
            ca_file => "/path_to/CA.pem"
            query => "ip_address:%{[source_ip]} AND @timestamp:[%{[@timestamp]}||\-1h TO %{[@timestamp]}||-0s]"
            fields => [ ["sid", "sid"] ]
            add_tag => [ "enriched" ]
            user => "XXX"
            password => "XXX"
        }

This results in:

Failed to query elasticsearch for previous event {:index=>"pfsense-*", :query=>"ip_address:%{[source_ip]} AND @timestamp:[2016-08-19T16:57:04.238Z||\\-1h TO 2016-08-19T16:57:04.238Z||-0s]", :event=>#<LogStash::Event:0x50f2b06b @metadata_accessors=#<LogStash::Util::Accessors:0x29b78be3 @store={}, @lut={}>, @cancelled=false, @data={"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, @metadata={}, @accessors=#<LogStash::Util::Accessors:0x332e39b2 @store={"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, @lut={"type"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "type"], "host"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "host"], "[type]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "type"], "[source_ip]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "source_ip"], "[@timestamp]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T16:57:04.238Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "@timestamp"]}>>, :error=>#<Faraday::ConnectionFailed>, :level=>:warn}

and on ES in:

[2016-08-19 18:57:04,286][WARN ][shield.transport.netty   ] [nuc01] received plaintext http traffic on a https channel, closing connection [id: 0xc1477410, /10.0.0.32:41610 => /10.0.0.10:9200]

It looks like ssl => true is not respected

Changing the config to:

hosts => ["https://nuc01:9200"]

the CA cert seems not to be used, resulting in:

LS:

Failed to query elasticsearch for previous event {:index=>"pfsense-*", :query=>"ip_address:%{[source_ip]} AND @timestamp:[2016-08-19T17:39:36.412Z||\\-1h TO 2016-08-19T17:39:36.412Z||-0s]", :event=>#<LogStash::Event:0x70e76c39 @metadata_accessors=#<LogStash::Util::Accessors:0x7cdfc184 @store={}, @lut={}>, @cancelled=false, @data={"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, @metadata={}, @accessors=#<LogStash::Util::Accessors:0x7bbc8a49 @store={"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, @lut={"type"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "type"], "host"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "host"], "[type]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "type"], "[source_ip]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "source_ip"], "[@timestamp]"=>[{"message"=>"10.0.0.10", "@version"=>"1", "@timestamp"=>"2016-08-19T17:39:36.412Z", "type"=>"paloaltoenriched", "host"=>"es-jre"}, "@timestamp"]}>>, :error=>#<Faraday::SSLError>, :level=>:warn}

and in ES:

[2016-08-19 19:39:37,252][WARN ][shield.transport.netty   ] [nuc01] Caught exception while handling client http traffic, closing connection [id: 0x05627715, /10.0.0.32:41868 => /10.0.0.10:9200]
javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
...

Brackets missing in the plugin documentation

Hi,

I think there are brackets missing in the plugin documentation:

if [type] == "end" {
elasticsearch {
hosts => ["es-server"]
query => "type:start AND operation:%{[opid]}"
fields => [ ["@timestamp", "started"] ]
}

Cheers!
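For comparison, a balanced version of that example would close both blocks:

if [type] == "end" {
  elasticsearch {
    hosts => ["es-server"]
    query => "type:start AND operation:%{[opid]}"
    fields => [ ["@timestamp", "started"] ]
  }
}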

Logstash - elasticsearch filter not working.

Logstash version -> 5.6
O.S -> EL 6
Config file:

input {
  elasticsearch {
                        "hosts" => "xyz.us.xyz.com:9200"
                        "index" => "test-m-docs"
                        query => '{ "query": {"match": { "ddocname" :{"query":"CNT1882742","type":"phrase"} }},"_source":{"include":["ddoctitle*"]} }'
                }
  }


output {
       stdout { codec => json_lines }
    stdout { codec => rubydebug }
  }

This is working fine as expected with the correct results:

{"ddoctitle":"VSNL_R12Upgrade_TECH_UPG_Resource_Mix-DAA1_V1.12.xls","@version":"1","@timestamp":"2018-02-23T05:23:37.421Z"}
{
     "ddoctitle" => "VSNL_R12Upgrade_TECH_UPG_Resource_Mix-DAA1_V1.12.xls",
      "@version" => "1",
    "@timestamp" => 2018-02-23T05:23:37.421Z
}

But when the same logic and the same index are used in another logstash config as a lookup, it fails with the parse errors below.

Another logstash config, where the "test-m-docs" index is used as lookup.

 if [docname] {
      elasticsearch {
                        "hosts" => "xyz.us.xyz.com:9200"
                        "index" => "test-m-docs"
                        query => '{ "query": {"match": { "ddocname" :{"query":"%{[docname]}","type":"phrase"} }},"_source":{"include":["ddoctitle*"]} }'
                        #query => '{ "query": {"match": {"ddocname" : "%{[docname]}" }} }'
                        fields => { "ddoctitle" => "doctitle" }
                    }
                }

Error:

[2018-02-23T07:00:07,616][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"test-m-docs", :query=>"{ \"query\": {\"match\": { \"ddocname\" :{\"query\":\"CNT2463955\",\"type\":\"phrase\"} }},\"_source\":{\"include\":[\"ddoctitle*\"]} }", :event=>2018-02-23T06:24:20.351Z XYZBCServer 100.100.100.27 - - [06/Feb/2018:02:22:00 -0600] "GET /XXX/cnt2463955.png HTTP/1.1" 200 4186 "https://XYZ.us.xyz.com/index.htm" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0" "10.174.90.62" "image/png" , :error=>#<Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":{"root_cause":[{"type":"query_shard_exception","reason":"Failed to parse query [{ \"query\": {\"match\": { \"ddocname\" :{\"query\":\"CNT2463955\",\"type\":\"phrase\"} }},\"_source\":{\"include\":[\"ddoctitle*\"]} }]","index_uuid":"6iaZgOd7RO63LM0g5lcm5g","index":"test-m-docs"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"test-m-docs","node":"ubu7iS9YQ3aUHB-_KWUqpw","reason":{"type":"query_shard_exception","reason":"Failed to parse query [{ \"query\": {\"match\": { \"ddocname\" :{\"query\":\"CNT2463955\",\"type\":\"phrase\"} }},\"_source\":{\"include\":[\"ddoctitle*\"]} }]","index_uuid":"6iaZgOd7RO63LM0g5lcm5g","index":"test-m-docs","caused_by":{"type":"parse_exception","reason":"Cannot parse '{ \"query\": {\"match\": { \"ddocname\" :{\"query\":\"CNT2463955\",\"type\":\"phrase\"} }},\"_source\":{\"include\":[\"ddoctitle*\"]} }': Encountered \" <RANGE_GOOP> \"{\\\"match\\\": \"\" at line 1, column 11.\nWas expecting:\n    \"TO\" ...\n    ","caused_by":{"type":"parse_exception","reason":"Encountered \" <RANGE_GOOP> \"{\\\"match\\\": \"\" at line 1, column 11.\nWas expecting:\n    \"TO\" ...\n    "}}}}]},"status":400}>}

Can we refer to another index as a lookup for the logstash events?
If so, why am I not able to get the same result as I was getting in the working configuration mentioned above?
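One likely explanation, sketched here rather than confirmed: the filter's query option is run through Elasticsearch's query_string parser (the input plugin accepts a full DSL body, which is why the first config works), so a JSON body passed as query trips the Lucene parser and produces the RANGE_GOOP error. Moving the JSON into a query_template file should avoid that; a sketch, where the file path is an assumption:

/etc/logstash/templates/docs_lookup.json (hypothetical path):

{
  "query": {
    "match": { "ddocname": { "query": "%{[docname]}", "type": "phrase" } }
  },
  "_source": { "include": ["ddoctitle*"] }
}

and the filter block:

if [docname] {
  elasticsearch {
    hosts => "xyz.us.xyz.com:9200"
    index => "test-m-docs"
    query_template => "/etc/logstash/templates/docs_lookup.json"
    fields => { "ddoctitle" => "doctitle" }
  }
}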

Add support for search_type parameter

Let's say I want to run the following query in elasticsearch:

GET .bano/_search?search_type=dfs_query_then_fetch
{
  "size": 1,
  "query": {
     "match": {
       "foo": "bar"
     }
  }
}

I can't specify the search_type parameter in this elasticsearch plugin.
It would be nice to add support for it.

Backslashes in the query string not supported?

  • Version: ES/Logstash 5.0.2, plugin: 3.1.0
  • Operating System: CentOS 7
  • Config File (the query I used is in the comment):
input {
        elasticsearch {
                hosts => "10.x.y.z:9200"
                index => "data_for_mapping_test"
                query => '
{
  "query": {
    "term": {
      "_id": {
        "value": "1"
      }
    }
  }
}
'
        }
}

filter {
     elasticsearch {
              hosts => ["10.x.y.z:9200"]
              index => "mappings"
              query_template => "/etc/logstash/conf.d/mapping_test.dsl"
              # query => '{ "query": { "term": { "AssetType": "%{AssetType}"} } }'
              fields => {"AssetTypeGrouping" => "AssetTypeGroupingMapped"}
              enable_sort => false
     }
}

output {
    elasticsearch {
        hosts => ["10.x.y.z:9200"]
        index => "mappings_test"
    }
}
  • Sample Data:
    My input data contains a field like this: "AssetType":"\\Demo\\Something"
    My mappings index contains documents like this:
{
        "AssetType": "\\Demo\\Something",
        "AssetTypeGrouping": "marketing assets"
}
  • Steps to Reproduce: Run the pipe ;]
  • Error:

:error=>#<LogStash::Json::ParserError: Unrecognized character escape 'D' [...]

NOTE: It worked as expected when I added the following step to the query parsing in the plugin code:

query_tmp = event.sprintf(@query_dsl).gsub!('\\', '\\\\\\')

So it seems that the query parser doesn't understand backslashes in the query string.
Is this the solution, or am I doing something completely stupid?

Need help in writing grok pattern

I want to send log data into elasticsearch to visualize the data through kibana.

Please find the sample log data below.
02/08/2019 19:55:05 Reply from IP: bytes=32 time=677ms TTL=111
02/08/2019 19:55:06 Request timed out.
02/08/2019 19:55:26 Reply from IP: Destination host unreachable.

Please help me write 3 grok filter patterns to send data into elasticsearch using logstash.
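This is more of a grok question than a filter issue, but here is a rough sketch of patterns for those three lines (the field names are placeholders, and the literal "IP" is assumed to stand for a real address in the actual logs):

filter {
  grok {
    match => { "message" => [
      "%{DATESTAMP:timestamp} Reply from %{IPORHOST:target}: bytes=%{INT:bytes} time=%{INT:rtt_ms}ms TTL=%{INT:ttl}",
      "%{DATESTAMP:timestamp} Reply from %{IPORHOST:target}: Destination host unreachable.",
      "%{DATESTAMP:timestamp} Request timed out."
    ] }
  }
}

grok tries the patterns in order and stops at the first match, so one filter block covers all three line shapes.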

SSL is not working with the filter

I'm trying to use the elastic cloud as my filter source. I have a config like:

filter {
  elasticsearch {
    hosts => ["https://FOUND_SERVER.us-east-1.aws.found.io:9243"]
    ssl => true
    user => "elastic"
    password => "changeme"
  }
}

This doesn't work. I also tried uninstalling the elasticsearch input plugin, which was pinning the elasticsearch-ruby client library to 1.0, so that installing the elasticsearch filter would install the 5.0 version of the ruby client library, but it didn't change anything.

Using http with ssl => false and port 9200 works fine for accessing the cluster, so it does appear to be some sort of bug in the filter code.

error=>#<NoMethodError: undefined method `start_with?' for nil:NilClass>

Hello,
While using this filter to search for previous events with logstash version 2.2.3 and elasticsearch version 2.3.0, I am getting the error below:
error NoMethodError: undefined method `start_with?' for nil:NilClass level warn

I am using it like below -
elasticsearch {
hosts => ["localhost:9205"]
}

This same plugin works with logstash version 2.2.2.

Any fix / workaround for this?

Thanks.

Make query_template option dynamic

Maybe not such a good idea, but in my use case I'd like to be able to do different kinds of elasticsearch lookups depending on the input event values.

Ideally, I'd like to write something like:

filter {
	if [foo] {
	  mutate { add_field => { "template" => "search-by-foo.json" } }
	} else {
	  mutate { add_field => { "template" => "search-by-bar.json" } }		
	}

	elasticsearch {
	  query_template => "%{[template]}"
	  remove_field => ["template"]
	}
}

This is not supported yet, as the elasticsearch filter checks at pipeline creation time that the %{[template]} file is available.

Would it be possible to support this?

Of course, I have a workaround, but it makes me duplicate the elasticsearch filter:

filter {
	if [foo] {
		elasticsearch {
		  query_template => "search-by-foo.json"
		}
	} else {
		elasticsearch {
		  query_template => "search-by-bar.json"
		}
	}
}

It looks like a nice workaround, but when the elasticsearch filter configuration is more complex (more options), that's "a lot" of lines of code to duplicate. Here I also have only 2 types of searches, but there could be more than that as well.

cron support

Add cron support for elasticsearch input.

I propose adding a new parameter :schedule
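For illustration only, the proposed parameter would presumably mirror the cron-style schedule option of the elasticsearch input (the same syntax appears in a config later in this tracker), something like:

input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "lookup"
    query => '{ "query": { "match_all": {} } }'
    # hypothetical here: re-run the query every 10 minutes, cron syntax
    schedule => "*/10 * * * *"
  }
}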

Accessing aggregation result

Somewhat similar to #57.
It would be great if the results of an aggregation (see example query below) could be put into fields. Currently only "hits" are accessible.

{
  "size": 0,
  "query": {
    "match": {
      "countryCode": "%{countryCode}"
    }
  },
  "aggs": {
    "lat": {
      "avg": {
        "field": "location.lat"
      }
    },
    "lon": {
      "avg": {
        "field": "location.lon"
      }
    }
  }
}

Multiple field mappings not supported?

  • Version: Logstash 5.0, not sure how to check the plugin version
  • Operating System: CentOS 7
  • Config File (if you have sensitive info, please remove it): details below
  • Sample Data: taken from here

Hello,
I tried to set up the plugin to handle multiple field mappings and it fails, mostly returning a #<TypeError: can't convert nil into String> error.

I tested a couple of versions of config file. For debugging purposes I introduced some extra logging in the loop that parses the fields array:

@fields.each do |old_key, new_key|
        @logger.warn("Keys:", :old_key => old_key, :new_key => new_key) # MY CODE HERE ;]

Here are the fragments of the config file that I tried along with log lines they produce:

  1. fields => [{"date" => "new_date"}, {"title" => "new_title"}]

[2016-11-02T16:20:44,381][WARN ][logstash.filters.elasticsearch] Keys: {:old_key=>{"date"=>"new_date"}, :new_key=>nil}

  2. fields => ["date", "title"]

[2016-11-02T16:24:01,173][WARN ][logstash.filters.elasticsearch] Keys: {:old_key=>"date", :new_key=>nil}

  3. fields => [{"date" => "new_date", "title" => "new_title"}]
    returns an error:

[ERROR][logstash.agent ] fetched an invalid config

To prove that querying ES works, I can assure that my case works for a single field config like this: fields => {"date" => "new_date"}, basically a replica of the config provided in the plugin docs.
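For reference, the variant that should at least parse as a single hash with several pairs is the plain hash form (Logstash config hashes take whitespace-separated pairs, no commas); whether the plugin version in question then copies both fields is exactly what this issue is about. A sketch:

fields => {
  "date"  => "new_date"
  "title" => "new_title"
}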

use event value for index name

Hi,

I'm trying to supply an event variable/field in the index name, such as:
elasticsearch {
  hosts => ["localhost:9200"]
  index => "%{[@metadata][myindex]}"
  query => "..."
  fields => "..."
}

But this fails with:
[logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"%{[@metadata][myindex]}", :query=>... "index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"%{[@metadata][myindex]}....

I tried different quoting and syntax for referencing the myindex field, but it never got resolved to its value. And it's overkill to search _all indices when the correct index is already known in the logstash script based on the input data.
Is there some way to achieve this?

This is with logstash 5.2.1 / logstash-filter-elasticsearch 3.1.3 on Linux

AWS ES Support

I saw this does not have aws es support (sigv4), so I wrote up a feature branch and have been using it for a short time now. How can I go about getting a branch pushed up here (access restricted repo) for testing and review?

Quotes on query fields are not working

In the example provided, the field "operation" is matched against the field "opid" on the new event. But if opid is, for example, a URL, and as such contains ":" or "/", it fails.

   elasticsearch {
      hosts => ["es-server"]
      query => "type:start AND operation:%{[opid]}"
      fields => ["@timestamp", "started"]
   }

This could be fixed by quoting the whole "opid".

      query => "type:start AND operation:\"%{[opid]}\""

But this or other attempts to quote the search string are not working.

message=>"Failed to query elasticsearch for previous event", :query=>"type:rss AND link:\\\"https://www.domain.com/path/\\\""

Please advise.
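One way to sidestep the query_string escaping entirely, offered as a sketch rather than a confirmed fix, is to move the lookup into a query_template that uses a match_phrase query, so the value is never run through the Lucene query parser (values containing quotes or backslashes would still need JSON escaping, as the backslash issue above shows):

lookup_by_opid.json (hypothetical file):

{
  "query": {
    "match_phrase": { "operation": "%{[opid]}" }
  }
}

elasticsearch {
  hosts => ["es-server"]
  query_template => "lookup_by_opid.json"
  fields => ["@timestamp", "started"]
}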

version 3.0.2 SSL Error

It seems the version 3.0.2 client.rb has the same typos as version 2.1.1: ssh should be ssl, and host should be hosts in client.rb.

Before I updated my own client.rb, there was an SSLError. After I fixed the typos in my own copy of the filter, the SSLError is gone, but I get a ConnectionFailed when connecting to an SSL-enabled cluster.

input {
  file
  {
    path => "/tmp/input.txt"
    start_position => "beginning"
    sincedb_path => "sincedb"
  }
}

filter
{
     csv {
        separator => ","
        columns => ["name", "gender", "msg"]
    }

    elasticsearch
    {   
        ssl => "true"
        ca_file =>  "/tmp/ca.pem"
        hosts => ["https://node01:9200", "https://node02:9200"]
        user => "readonly"
        password => "password"
        index => "staffindex"
        enable_sort => false
        query => " _type: person and SID: %{name} "
        fields => { "fullName" => "sourceUserName" "managerSid" => "sourceUserManagerSID" }
    }
}

output {
   stdout {
      codec => rubydebug
   }
}
Error:
Failed to query elasticsearch for previous event, :error=>#<Faraday::ConnectionFailed>, :level=>:warn}

Please help to look into this issue.

Thanks,

set @query_dsl parameter directly, without using query_template file

I need to set the @query_dsl parameter directly, without using a query_template file,
like this:

elasticsearch {    
		hosts => "localhost:9200"
		index => ".ml-anomalies-*"
	        query_dsl => '{
                 "size": 20, 
		  "query": {
		    "bool": {
		      "filter": [
		        {
		          "range": {
		            "record_score": {
		              "gte": %{[my_field]}
		            }
		          }
		        },
		        .
		        .
		        .
		        .
		      ]
		    }
		  },
		  "sort": [
		    {
		      "record_score": {
		        "order": "desc"
		      }
		    }
		  ],
		  "_source": [
		    "job_id",
		    .
		    .
		    .
		  ]
                }'

query error

I use it like this:

       grok {
          match => ["message","\s*\"(?<clientip>(.*?)?)\"\s+(.*)\s+\"(?<username>(.*?)?)\"\s+\"(?<usertype>(\S+)?)\""]
        }

elasticsearch {
  hosts => ["172.17.32.3:9200/sex_dic/sex/"]
  query => "username:%{username}"
  sort => "_id:desc"
}

I can not get the result I want...

I want to search elasticsearch to deduplicate my data, to prevent writing repeated data.

So how can I do that?
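If I read that config correctly, the index and type belong in the filter's index option rather than in the host URL, and quoting the username guards against special characters; a sketch of a corrected block:

elasticsearch {
  hosts => ["172.17.32.3:9200"]
  index => "sex_dic"
  # the type from the old URL path moves into the query itself
  query => '_type:sex AND username:"%{username}"'
  sort => "_id:desc"
}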

Filter always returns true, disregarding actual results found

I was testing the filter on 6.2 and it was always tagging or trying to add fields even if no match was found. Also, this was sometimes causing "cannot convert nil to string" errors in the logs, because even though the filter was returning true there was no event attached as a result.

Suggested fix: #92

I tested the suggested change and it works properly

Elasticsearch filter not working due to 1 sec indices refresh?

I am trying to use the elasticsearch filter, but with no success.
I want to search logs to find past events and add fields to the actual event.
I think this is due to the 1 sec refresh of indices, so when past events are too close to the actual event they are not yet available.

I tried to:
1. change ES output options in logstash: flush_size => 1 and idle_flush_time => 1
2. patch the ES filter gem with the action client.indices.stats refresh: true before the search - but that's not working either!

The one working (ugly) solution is adding a sleep(20) before the search action in the ES filter.

What am I doing/thinking wrong?
Is there another solution for searching past events in logstash?

A few documentation fixes

Working to fix a few documentation issues for a doc release:

  • Get #88 merged (text in code blocks)
  • #64 be a bit more explicit about how to use query_template
  • Give clearer example of copying multiple fields to avoid misunderstandings like #52 and #63
  • Fix the "HTTP setting" link in the "Compatibility Notes" section. Will be resolved by elastic/logstash-docs#535, affecting all 3 plugins.

Note: If my next feature takes too long to get to, we may have to do a documentation-only release. This will require a version bump to be picked up and deployed.

Provide option to disable SSL hostname verification

Currently, there is no option to disable hostname verification when connecting to a cluster with SSL enabled.

https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html#_synopsis_119

While insecure (and not something that should be used in production), it would be nice to allow a configuration option to disable hostname verification for testing/debug purposes. This is similar to logstash-output-elasticsearch, which exposes such an option.

Enable count (or other search types)

Currently, we can only get the results of a query as returned in the hits array. It would be convenient to at least be able to specify just a count, or possibly other search types.

Add option to copy entire _source field from previous document

Specifying the individual fields to copy from the previous document to the new one can be painful if there are a lot of them. This would add the ability to say something like this:

fields => ["_source","previous_document"]

Which would copy the entire contents of the previous document to the field "previous_document".

Caching of search results in memory

Is caching the search results in memory within the scope of this plugin? I'm thinking about implementing something simple and making a pull request, but I'll just make a custom plugin if it's not something that would likely be accepted.

The use case is as follows:

  • I have a large volume of event data going through logstash into ES
  • I have a much smaller set of immutable configuration-like records stored in ES that the events reference by ID
  • I would like to augment the events with the referenced records without needlessly overloading ES with searches, when most of the data would fit in memory

I think this could be accomplished by adding a simple LRU cache and two optional configuration values: the size of the cache in entries, and an identifier that uniquely represents the search. Without these parameters, the plugin would behave as usual and just hit ES every time.
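To make the proposal concrete, a purely hypothetical sketch of what the two proposed options might look like (the option names are invented here, and the index/field names are placeholders):

elasticsearch {
  hosts  => ["localhost:9200"]
  index  => "config-records"
  query  => 'record_id:%{[record_id]}'
  fields => { "payload" => "record_payload" }
  # hypothetical options from this proposal, not implemented:
  # cache_size => 10000             # LRU capacity, in entries
  # cache_key  => "%{[record_id]}"  # identifier that uniquely represents the search
}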

Correlation is not working in logstash filter elasticsearch

Hi,
I'm using a new version of logstash (5.2.0) and I'm facing a correlation issue with it. Kindly find the details of my issue below.

Actually I'm trying to move some fields from index 1 to index 2.
I have already processed the index 1 data and written the elasticsearch condition in index 2 to fetch the value from that index. Please find the elasticsearch query below:

elasticsearch {
  query_template => "/template.json"
}

template.json

{
    "query": {
      "query_string": {
       "query": "type:gvsh_reqold AND dlr_dlrCode:%{dlr_dlrCode} AND getVehicle_VIN:%{vehicle_vehicleID}"
      }
    },
   "fields": ["vehicle_SSC","vehicle_SSC"]
 }

Please help me resolve the issue, and kindly let me know whether my code is right or wrong.

Sometimes I get this error:
[2017-02-21T05:02:29,854][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"", :query=>{"query"=>{"query_string"=>{"query"=>"type:gvsh_reqold AND dlr_dlrCode:04136 AND getVehicle_VIN:JTDKN3DU9F0470982"}}, "fields"=>["vehicle_SSC", "vehicle_SSC"]}, :event=>2017-02-21T13:02:19.159Z

Refactor the filter Elasticsearch to use our manticore client in the elasticsearch output

Currently the elasticsearch filter uses the elasticsearch-ruby client. In the past we encountered some thread safety issues with this gem, so we decided to create our own small wrapper that calls Elasticsearch using the Manticore library in the Elasticsearch output.

We should evaluate and/or implement the same logic in the filter.

Goals:

  • reuse code from the elasticsearch output
  • consolidate plugin options.
  • Slowly migrate to a Universal plugin model.

Feature request: support PKI based authentication to ES

The logstash-output-elasticsearch plugin supports authenticating to ES via PKI, that is, having the option to specify a TLS keystore (client-side cert) to use when connecting.

It would be great if this plugin did too. Could the code for SSL be taken straight from the output plugin?

Regards,
Nick

fail to query elasticsearch for previous event

Hi all,
I'm using logstash 2.3 and elasticsearch 2.3.
I want to retrieve events from elasticsearch matching status:in and the username, but I get an error.
This is my log file: dur.log
2016-05-13 14:15:00 usera in
2016-05-13 14:16:00 usera out
This is my config: test.conf
input {
  file {
    path => "/opt/dur.log"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{USERNAME:username} %{WORD:status}" }
  }
  if [status] == "in" {
    mutate {
      rename => { "timestamp" => "started" }
    }
    date {
      match => ["started", "yyyy-MM-dd HH:mm:ss"]
      target => "[started]"
    }
  }

  if [status] == "out" {
    mutate {
      rename => { "timestamp" => "ended" }
    }
    date {
      match => ["ended", "yyyy-MM-dd HH:mm:ss"]
      target => "[ended]"
    }
  }

  if [status] == "out" {
    elasticsearch {
      hosts => ["localhost:9200"]
      query => 'status:in AND username:"%{[username]}"'
      fields => ["started", "ended"]
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dur-%{+YYYY.MM.dd}"
  }
}

Failed to query elasticsearch for previous event {:query=>"status:in AND username:"usera"", :event=>#<LogStash::Event:0x120f088f @metadata_accessors=#<LogStash::Util::Accessors:0x179c88db
Can you help me? Thanks

"index" param appears to be ignored

  • Version: LS 6.3.x branch, ES 6.3.0
  • Operating System: Mac OS
  • Config File (if you have sensitive info, please remove it):
  • Sample Data: (generated)
  • Steps to Reproduce:
bin/logstash --log.level debug -e "input { generator { count =>  3 } }
filter { elasticsearch {
  user => elastic password => '$ES_PWD' hosts => ['https://example.us-central1.gcp.cloud.es.io:9243/']
  index => 'cert' query => '*' fields => { 'sequence' => 'last_sequence' }
} }
output { stdout {} elasticsearch {
  user => elastic password => '$ES_PWD' hosts => ['https://example.us-central1.gcp.cloud.es.io:9243/']
  index => 'cert'
} }"

When executing this pipeline, dummy data is inserted into the "cert" index. This is an otherwise empty ES instance. It only has one other index, ".kibana".

The filter should search only in the "cert" index, according to index => 'cert'. However I'm getting an error to the effect that the .kibana index doesn't have a @timestamp field to sort on.

[2018-06-20T14:00:31,579][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"cert", :query=>"*", :event=>#<LogStash::Event:0x530c4ab1>, :error=>#<RuntimeError: Elasticsearch query error: [{"shard"=>0, "index"=>".kibana", "node"=>"xtxlP5pNS_2vmUKeXylZ5A", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"I1jLWOTUStuiVew5Ew0AVg", "index"=>".kibana"}}]>}

Support %{name} in fields

Dynamic field names could be supported with:

event[event.sprintf(new)] = results['hits']['hits'][0]['_source'][event.sprintf(old)]

elasticsearch hosts in ipv6 format not supported

When using ipv6, I get an error when starting logstash:

[2019-03-04T09:18:57,606][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Elasticsearch id=>"elasticsearch_input_index_task", hosts=>["[fd95:ff55:7fb8:f1e5:f816:3eff:feed:ce5b]:9201"], index=>".kibana", query=>"{\"query\":{\"match_all\":{}}}", size=>5000, scroll=>"1m", add_field=>{"[@metadata][enrichment]"=>"true", "[@metadata][type]"=>"task_enrichment"}, enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_3360181f-c73d-41df-a4c6-4e284aba296e", enable_metric=>true, charset=>"UTF-8">, docinfo=>false, docinfo_target=>"@metadata", docinfo_fields=>["_index", "_type", "_id"], ssl=>false>", :error=>"bad URI(is not URI?): http://[fd95:0", :thread=>"#<Thread:0x27dbe229 run>"}
[2019-03-04T09:18:57,720][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<URI::InvalidURIError: bad URI(is not URI?): http://[fd95:0>, :backtrace=>["uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/rfc3986_parser.rb:67:in split'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/rfc3986_parser.rb:73:in parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/common.rb:227:in parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/uri/common.rb:714:in URI'", "org/jruby/RubyMethod.java:127:in call'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/utils.rb:258:in URI'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:309:in url_prefix='", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/faraday-0.9.2/lib/faraday/connection.rb:77:in initialize'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/faraday.rb:38:in __build_connection'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:138:in block in __build_connections'", "org/jruby/RubyArray.java:2486:in map'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:130:in __build_connections'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:40:in initialize'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:114:in initialize'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport.rb:26:in new'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-elasticsearch-4.2.1/lib/logstash/inputs/elasticsearch.rb:177:in register'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:340:in register_plugin'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:351:in block in register_plugins'", "org/jruby/RubyArray.java:1734:in each'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:351:in register_plugins'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:498:in start_inputs'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:392:in start_workers'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:288:in run'", "C:/Users/seko0716/Desktop/otrk_local/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:248:in block in start'"], :thread=>"#<Thread:0x27dbe229 run>"}

  • Version: 6.3.2
  • Operating System: linux/windows
  • Config File (if you have sensitive info, please remove it):

input {
  elasticsearch {
    id => "elasticsearch_input_index_task"
    hosts => ["[fd95:ff55:7fb8:f1e5:f816:3eff:feed:ce5b]:9201"]
    index => "task"
    query => '....'
    size => 5000
    scroll => "1m"
    schedule => "*/10 * * * *"
    add_field => {
      "[@metadata][enrichment]" => "true"
      "[@metadata][type]" => "task_enrichment"
    }
  }
}

output {
  if ([@metadata][type] == "task_enrichment") {
    elasticsearch {
      id => "output_elasticsearch_enrichment_task"
      hosts => ["[fd95:ff55:7fb8:f1e5:f816:3eff:feed:ce5b]:9201"]
      index => "task"
      document_id => "%{id}"
      action => "update"
      doc_as_upsert => true
      retry_on_conflict => 6
      retry_max_interval => 320
      retry_initial_interval => 5
    }
  }
}

I have fixed it in a pull request.

Logstash crashed due to typo in warning logger

Using the logstash-filter-elasticsearch filter I had logstash crash with the message

{:timestamp=>"2016-06-23T12:15:03.473000+0000", :message=>"Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash.", "exception"=>#<ArgumentError: Cabin::Channel#warn(message, data={}) takes at most 2 arguments>, "backtrace"=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/cabin-0.8.1/lib/cabin/mixins/logger.rb:62:in `warn'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-elasticsearch-2.1.0/lib/logstash/filters/elasticsearch.rb:100:in `filter'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-elasticsearch-2.1.0/lib/logstash/filters/elasticsearch.rb:90:in `filter'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/filters/base.rb:151:in `multi_filter'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/filters/base.rb:148:in `multi_filter'", "(eval):331:in `cond_func_10'", "org/jruby/RubyArray.java:1613:in `each'", "(eval):328:in `cond_func_10'", "(eval):177:in `filter_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:267:in `filter_batch'", "org/jruby/RubyArray.java:1613:in `each'", "org/jruby/RubyEnumerable.java:852:in `inject'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:265:in `filter_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:223:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :level=>:error}

I think this is due to a typo in elasticsearch.rb line 101 which reads

@logger.warn("Failed to query elasticsearch for previous event", :index, @index, :query => query_str, :event => event, :error => e)

but should be

@logger.warn("Failed to query elasticsearch for previous event", :index => @index, :query => query_str, :event => event, :error => e)

Do not log the full event if previous event is not found

Hello,

I am using the elasticsearch filter to search for previous events.
In my use case it happens often that there is no previous event and it is normal.

But with the current logger.warn instruction logging the full event object, the log files are becoming very big very quickly.

My suggestion would be to remove the event from the logger.warn instruction, or maybe log it at info level instead of warning.

Cannot get Connection from Pool

  • Version: 5.5.1

  • Operating System: OSX

  • Steps to Reproduce:
    This issue manifests itself when the plugin is under high load with more than 1 LS worker configured. Assuming each document results in an ES query (these typically execute in a few ms), the plugin will periodically log:

[2017-09-13T14:39:37,075][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"threats", :query=>{"query"=>{"percolate"=>{"field"=>"query", "document_type"=>"threat", "document"=>{"entity"=>["MacBook-Pro.local", "fe80::12c3:7bff:fede:dc18", "fe80::83f:d39:1b5d:a2d4"]}}}, "size"=>10}, :event=>2017-03-30T22:51:21.628Z MacBook-Pro.local %{message}, :error=>#<Elasticsearch::Transport::Transport::Error: Cannot get new connection from pool.>}

With a single worker this issue does not occur. I will assemble a reproducible config and test case.

Setting a custom port using host:port is not working

Currently the documentation is not clear on how to set a custom port, as there is no port => setting available.
Setting hosts =>["localhost:9202"] results in the scheme https does not accept registry part: localhost:9202:9200 (or bad hostname?).

Using hosts =>["http://localhost:9202"] works.

Add support for multi search

Elasticsearch provides an _msearch endpoint to perform multiple searches in a single round trip.

Since the filters have both a filter(event) and a multi_filter(events) API, we could ensure that each batch of events is searched in one operation.

This doesn't replace caching, but it will likely increase this plugin's performance by close to an order of magnitude, like we saw with the http output's json_batch.

Code would fail when there are more than one index

I created a patch a long time ago so you can specify the index, with the default set to "logstash-*".
It was nearly a year ago, but IIRC the query would fail if there was more than one index in the ES server?

I also added a field to save the document id so the matched entry can be overwritten in the output, which was my use case for updating the entry.

Let me know if I'm missing anything.

ref:
elastic/logstash#1017
