logstash-plugins / logstash-output-elasticsearch
Home Page: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
License: Apache License 2.0
The template that ships with this plugin will add a raw field for each event field. However, in most cases, the not_analyzed version of "message" is of no use and only takes up disk space.
Could we not generate a message.raw? Is there a use for it?
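For illustration only, a minimal ES 1.x mapping fragment that maps message as a single analyzed string, with no not_analyzed raw sub-field, might look like this (the structure is a sketch, not the template that actually ships with the plugin):

```json
{
  "mappings": {
    "_default_": {
      "properties": {
        "message": { "type": "string", "index": "analyzed" }
      }
    }
  }
}
```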
Hi guys. I am testing the latest 1.5.0 RC2. I have a couple of Rabbit inputs with four threads each, and the output uses the latest elasticsearch plugin. There is one very annoying difference between it and the deprecated elasticsearch_http plugin: any POST /_bulk response containing one or more records with a status of, for example, 400 or 429 will generate warn-level log data corresponding to the original bulk request. In my case that can amount to dozens of megabytes per second. A configuration option controlling this behaviour would be most welcome (the Rabbit input has such an option, by the way).
Thanks in advance.
Regards, Andrei
P.S.: transferred from elastic/logstash#2847
related to elastic/logstash#2374
If the Elasticsearch cluster goes down while using the node protocol, Logstash will eventually crash. This should be handled more gracefully.
We also need tests to verify this.
migrated from https://logstash.jira.com/browse/LOGSTASH-2275
This is with Logstash 1.4.2, Elasticsearch 1.4.0.
I have a very simple config to index a small batch of 1000 json events:
input {
stdin {
codec => json_lines {}
}
}
filter {
date {
match => [ "stbReportTime", "ISO8601"]
}
}
output {
# stdout { codec => rubydebug }
stdout { codec => dots }
elasticsearch {
protocol => http
host => "localhost"
index => "testindex"
workers => 2
}
}
After running with:
bin/logstash -f logstash.conf < data.json
the index 'testindex' does not get created and there are no events indexed in Elasticsearch.
When I add some template definitions, then it seems to create the index but only index some of the events (usually about 483; that number seems to be consistent):
input {
stdin {
codec => json_lines {}
}
}
filter {
date {
match => [ "stbReportTime", "ISO8601"]
}
}
output {
# stdout { codec => rubydebug }
stdout { codec => dots }
elasticsearch {
protocol => http
host => "localhost"
index => "testindex"
template => "index_template.json"
template_name => "testindex"
template_overwrite => true
workers => 2
}
}
After removing the workers => 2 parameter, all 1000 events are indexed properly.
This could be the same issue as referenced in this JIRA ticket: https://logstash.jira.com/browse/LOGSTASH-1535.
The relevant Logstash config:
# host is filled with an array of valid, statically defined hosts
elasticsearch {
action => create
host => [ ... ]
index => "test-%{+YYYY.MM.dd}"
protocol => "http"
}
The unexpected and suspicious log output for unused, deprecated elasticsearch settings:
{:timestamp=>"2015-03-10T15:02:39.680000-0700", :message=>"You are using a deprecated config setting \"type\" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. You can achieve this same behavior with the new conditionals, like: `if [type] == \"sometype\" { elasticsearch { ... } }`. If you have any questions about this, please visit the #logstash channel on freenode irc.", :name=>"type", :plugin=><LogStash::Outputs::ElasticSearch --->, :level=>:warn}
{:timestamp=>"2015-03-10T15:02:39.682000-0700", :message=>"You are using a deprecated config setting \"tags\" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. You can achieve similar behavior with the new conditionals, like: `if \"sometag\" in [tags] { elasticsearch { ... } }` If you have any questions about this, please visit the #logstash channel on freenode irc.", :name=>"tags", :plugin=><LogStash::Outputs::ElasticSearch --->, :level=>:warn}
{:timestamp=>"2015-03-10T15:02:39.683000-0700", :message=>"You are using a deprecated config setting \"exclude_tags\" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. You can achieve similar behavior with the new conditionals, like: `if !(\"sometag\" in [tags]) { elasticsearch { ... } }` If you have any questions about this, please visit the #logstash channel on freenode irc.", :name=>"exclude_tags", :plugin=><LogStash::Outputs::ElasticSearch --->, :level=>:warn}
{:timestamp=>"2015-03-10T15:02:39.683000-0700", :message=>"You are using a deprecated config setting \"max_inflight_requests\" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. If you have any questions about this, please visit the #logstash channel on freenode irc.", :name=>"max_inflight_requests", :plugin=><LogStash::Outputs::ElasticSearch --->, :level=>:warn}
(the same four warnings then repeat several more times with later timestamps)
This was brought over from elastic/logstash#2865
I'm working on contributing client TLS authentication and am having some issues testing this repo. I'm currently attempting to install it as a plugin on top of a 1.5 install, and doing so seems to bury the files down in vendor/plugins. Upon attempting to run the spec, I get errors for a missing "test_utils" require. Are there any docs describing the proper flow? If not, could I bother someone to talk me through the process?
After every change to this plugin, we should have a baseline of performance metrics to ensure
that the plugin does not grow inefficient.
Similar to #16, the node client does not support "create" actions.
Create is the same as an index operation except that it should require the ID (otherwise it is implicitly a create operation anyway!).
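Assuming the ID is carried in an event field, the output's document_id option with a sprintf reference could supply it; the field name request_id below is hypothetical:

```
elasticsearch {
  action => "create"
  document_id => "%{request_id}"
}
```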
I've gotten reports that protocol => http fails when pointed at an Amazon ELB. We should test and verify that it works. We should also verify that protocol => transport works in these situations.
In particular, it's worth verifying behavior behind misconfigured (sometimes by default) load balancers that close idle connections or disable HTTP 1.1 keep-alive.
References:
Parent ticket: elastic/logstash#2378
A user has reported that Logstash's elasticsearch_http output was using stale DNS results when connecting to Elasticsearch: during runtime, DNS had changed to move a cluster node, and Logstash didn't re-resolve the name to find the new address.
Logstash 1.4.2 has the config option host:
http://logstash.net/docs/1.4.2/outputs/elasticsearch#host
This was changed in logstash-plugins/logstash-input-elasticsearch@add0954, which is not backward compatible.
We need to go back to using host (even if it supports multiple hosts now).
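For reference, the old host option accepted either a single host or an array; a sketch with hypothetical hostnames:

```
elasticsearch {
  host => ["es1.example.com", "es2.example.com"]
}
```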
Hello, I am trying to run some tests on Logstash 1.5. When I try to install logstash-output-elasticsearch on Logstash 1.5, I get the following error: http://pastebin.com/mpBeEYHL
(Apologies for the poor quality of this report; I'm unfamiliar with the details of the working behaviour)
With previous usage of the elasticsearch_http output in LS 1.4 and earlier, my ES indices were automatically created. Using the elasticsearch output in LS 1.5.0-beta1, on flush I get IndexMissingException[[logstash-2015.01.01] missing] and no index is created for me.
(Related to #35)
I updated 2 plugins, logstash-input-elasticsearch and logstash-output-elasticsearch, so I'm not sure if this error happened after the input initialized or before the output even started:
{:timestamp=>"2015-02-12T00:19:07.854000+0000", :message=>"Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see http://logstash.net/docs/1.5.0.beta1/plugin-milestones", :level=>:warn}
The error reported is:
elasticsearch must set a milestone. For more information about plugin milestones, see http://logstash.net/docs/1.5.0.beta1/plugin-milestones
Hi,
I am facing strange behavior in logstash 1.5.0.RC2.
Every second I get one of these messages:
{:timestamp=>"2015-03-24T16:21:49.644000-0700", :message=>"failed action with response of , dropping action: ["index", {:_id=>nil, :_index=>"logs-2015.03.24.23", :_type=>"hourly"}, #<LogStash::Event:0x6ce115d8 @metadata={"retry_count"=>0}, @accessors=#<LogStash::Util::Accessors:0x2ebd1571 @store={"@timestamp"
This behavior started when I updated from 1.4 to 1.5 RC2.
The problem is that I don't get any response code, and I don't see any errors in Elasticsearch itself.
For more context, I am using the following configuration:
elasticsearch {
protocol => "http"
host => [ array of hosts]
workers => 3
flush_size => 10
index => "logs-%{+YYYY.MM.dd.HH}"
}
And I am running Elasticsearch 1.4.4.
I was redirected here from my original issue, which I filed in the wrong place on the main elasticsearch project.
I was also told there that the issue may already have been addressed, but I didn't find it here, so I am moving the issue here:
I am using elasticsearch-1.4.4-1 (standalone, not clustered) and logstash-1.4.2-1_2c0f5a1 reading data from a Redis server.
For a while everything works like a charm, but eventually Logstash stops being able to write to ES. The internal Logstash queues fill up and Logstash stops reading from Redis.
When this happens ES is fine, and I work around the problem by restarting Logstash.
In the logs I can see lots of messages like:
{:timestamp=>"2015-02-24T03:50:10.034000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>67, :exception=>#<NameError: no method 'type' for arguments (org.jruby.RubyFixnum) on Java::OrgElasticsearchActionIndex::IndexRequest>, :backtrace=>["/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:225:in `build_request'", "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:205:in `bulk'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:204:in `bulk'", "/opt/logstash/lib/logstash/outputs/elasticsearch.rb:315:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1521:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}
After restarting logstash everything works like a charm again.
I read here that this happens when the ES version embedded in Logstash is not the same as that of the standalone ES. In my case that would be true, but I disabled the embedded ES.
If this is true, there is no way to make it work with the latest ES version, as (if I am not wrong) I am already using the latest Logstash version available.
I was thinking of using the "http" protocol instead of "elasticsearch", as the documentation states that it should work with any ES version, but I also read this in the "elasticsearch" protocol documentation:
This output lets you store logs in Elasticsearch and is the most recommended output for Logstash. If you plan on using the Kibana web interface, you’ll need to use this output.
As I am also using Kibana (3 and 4), I am not sure whether I can use the "http" protocol anyway.
So, first of all, I would like to know if this is a bug, or if it is true that you can only safely use the "elasticsearch" protocol when the standalone ES version is the same as the embedded one.
If this is normal and not an issue, would it work using the "http" protocol?
My logstash output:
output {
#stdout { codec => rubydebug }
elasticsearch {
embedded => false
host => "127.0.0.1"
protocol => transport
}
}
Grouping scattered reports
This should be communicated in the documentation, which currently implies there is no danger or hazard with this approach.
If existing hosts become unresponsive, we should drop them and attempt to discover new valid hosts that later join the cluster.
I've spent the better part of a day trying different combinations, and it appears that with the logstash-1.4.2 distribution the only combination that actually works is:
{
host => "localhost"
cluster => "elasticsearch"
}
I've tried the following combinations (network fully open, no firewall issues here):
Fails for every combination: no nodes are found. According to the debug logs, it continually attempts to connect to localhost and never even tries to connect to the host I specify. The logs show that my configuration is being processed properly:
{:timestamp=>"2015-02-27T22:13:13.821000+0000", :message=>"config LogStash::Outputs::ElasticSearch/@host = \"elasticsearch\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"105"}
Eventually it fails with a "no master found" error after 30s.
Even if I specify protocol => "http", this behavior remains the same, which is bizarre: why would it be trying to discover peers if it's using the REST API? It almost seems like it's trying to start an embedded node by default and refuses to actually connect to anything else when that fails. I tried explicitly setting embedded => false to test that theory, but the behavior remains the same.
It sees and attempts to connect to my local node, but I get "skipping node" log messages because it claims the node's "logstash" cluster name doesn't match.
It connects to the local node without complaining about cluster mismatch, advances to the next stage, and then the client complains about a cluster name mismatch. Well, yes, that's true, but why did it even get past the "cluster mismatch" stage from the previous attempt?
This is the only working combination I found after 6 hours of experimenting.
My best guess is that for whatever reason it always starts an embedded node with default configuration, and I can't figure out how to turn that off. If that node fails to connect to anything (bad cluster name, nothing running on localhost), it bails out and doesn't even try to connect to the host I'm telling it to.
What a frustrating experience for what should be a very basic setup. Either something is seriously wrong with this plugin or the documentation is in need of a major update.
bin/plugin install logstash-output-elasticsearch
validating logstash-output-elasticsearch >= 0
Valid logstash plugin. Continuing...
removing existing plugin before installation
Successfully uninstalled logstash-output-elasticsearch-0.1.6
Successfully installed 'logstash-output-elasticsearch' with version '0.1.6'
When run with basic elasticsearch config:
LoadError: no such file to load -- manticore
require at org/jruby/RubyKernel.java:1065
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/jruby/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/jruby/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
(root) at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/plugins/jruby/1.9/gems/elasticsearch-transport-1.0.6/lib/elasticsearch/transport/transport/http/manticore.rb:1
require at org/jruby/RubyKernel.java:1065
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/jruby/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/jruby/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
(root) at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.6/lib/logstash/outputs/elasticsearch/protocol.rb:1
initialize at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.6/lib/logstash/outputs/elasticsearch/protocol.rb:57
each at org/jruby/RubyArray.java:1613
register at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.6/lib/logstash/outputs/elasticsearch.rb:301
each at org/jruby/RubyArray.java:1613
register at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.6/lib/logstash/outputs/elasticsearch.rb:294
start_outputs at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/lib/logstash/pipeline.rb:158
run at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/lib/logstash/pipeline.rb:79
run at /Users/kurt/elk-workshop/logstash-1.5.0.beta1/lib/logstash/runner.rb:166
call at org/jruby/RubyProc.java:271
We have a really nicely written guide to writing new plugins for Logstash 1.5, but it doesn't cover how to write tests. Writing tests is something we see as really important as a project, and we should encourage people to write them, but explaining testing isn't easy.
Since we are refactoring our test suite and writing new helpers to help developers test against the pipeline, we should take some time to write documentation or best practices so everyone can improve their code, and so we can support people in adding tests to their pull requests.
My LS configuration is:
input {
stdin{}
}
output {
elasticsearch {
cluster => "logstash"
index => "rabbit-logstash-%{+YYYY.MM.dd}"
}
}
In the debug output I can see that the host is not set:
New Elasticsearch output {:cluster=>"logstash", :host=>nil, :port=>"9300-9305", :embedded=>false, :protocol=>"node", :level=>:info, :file=>"logstash/outputs/elasticsearch.rb", :line=>"319"}
Then I get this exception when events reach LS:
Failed to flush outgoing items {:outgoing_count=>1, :exception=>#<NoMethodError: undefined method `[]' for nil:NilClass>, :backtrace=>["/home/alejandro/devel/logstash-official/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.9-java/lib/logstash/outputs/elasticsearch.rb:380:in `flush'", "/home/alejandro/devel/logstash-official/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.9-java/lib/logstash/outputs/elasticsearch.rb:378:in `flush'", "/home/alejandro/devel/logstash-official/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.9-java/lib/logstash/outputs/elasticsearch.rb:375:in `flush'", "/home/alejandro/devel/logstash-official/vendor/bundle/jruby/1.9/gems/stud-0.0.18/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/home/alejandro/devel/logstash-official/vendor/bundle/jruby/1.9/gems/stud-0.0.18/lib/stud/buffer.rb:216:in `buffer_flush'", "/home/alejandro/devel/logstash-official/vendor/bundle/jruby/1.9/gems/stud-0.0.18/lib/stud/buffer.rb:159:in `buffer_receive'", "/home/alejandro/devel/logstash-official/vendor/plugins/jruby/1.9/gems/logstash-output-elasticsearch-0.1.9-java/lib/logstash/outputs/elasticsearch.rb:372:in `receive'", "/home/alejandro/devel/logstash-official/lib/logstash/outputs/base.rb:86:in `handle'", "(eval):28:in `initialize'", "org/jruby/RubyProc.java:271:in `call'", "/home/alejandro/devel/logstash-official/lib/logstash/pipeline.rb:272:in `output'", "/home/alejandro/devel/logstash-official/lib/logstash/pipeline.rb:231:in `outputworker'", "/home/alejandro/devel/logstash-official/lib/logstash/pipeline.rb:160:in `start_outputs'"], :level=>:warn, :file=>"stud/buffer.rb", :line=>"231"}
The exception is thrown here:
@logger.debug? and @logger.debug "Sending bulk of actions to client[#{@client_idx}]: #{@host[@client_idx]}"
It seems that @host is nil, so this line throws the exception; @host should have been initialized.
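The missing guard could be sketched in plain Ruby (hypothetical method and names, not the plugin's actual code): before formatting the debug message, fall back to a placeholder when the host list was never initialized.

```ruby
# Return a printable target host for logging, guarding against a nil or
# empty host list instead of raising NoMethodError on nil.
def describe_target(hosts, client_idx)
  return "unknown host (host list not initialized)" if hosts.nil? || hosts.empty?
  # Wrap the index so any client index maps onto a valid host entry.
  hosts[client_idx % hosts.size]
end
```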
update for better logging here: https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch.rb#L415-L420
addresses comment provided here: #2 (comment)
This is being filed mostly for documentation purposes, until some upstream patches spring into existence.
We recently enabled TLS mutual auth on our Kibana/ES proxy, with much success. Unfortunately, we can't point Logstash at it with this configuration, so it is still pointed at a non-proxied ES. In attempting to add the functionality to support this, I realized that the elasticsearch-ruby library's Manticore adapter does not expose any auth-related SSL settings, and I ultimately found some issues in Manticore's SSL configuration details that make this not very straightforward. Once a fix is pushed into either Manticore or elasticsearch-ruby, we should be able to implement this with relative ease.
logstash-output-elasticsearch/lib/logstash/outputs/elasticsearch.rb
Lines 446 to 464 in 6860bd6
I believe that not to be the case. After changing line 449 (for debugging reasons) to:
tmp = @current_client.bulk(actions)
@logger.debug tmp
my log looked like:
{:timestamp=>"2015-01-06T13:38:32.614000+0000", :message=>"Sending bulk of actions to client[0]: lorem.ipsum", :level=>:debug, :file=>"logstash/outputs/elasticsearch.rb", :line=>"443"}
{:timestamp=>"2015-01-06T13:38:32.892000+0000", :level=>:debug, "took"=>1, "errors"=>true, "items"=>[{"index"=>{"_index"=>"logstash-2015.01.01", "_type"=>"supervisor", "_id"=>nil, "status"=>404, "error"=>"IndexMissingException[[logstash-2015.01.01] missing]"}}, {"index"=>{"_index"=>"logstash-2015.01.01", "_type"=>"supervisor", "_id"=>nil, "status"=>404, "error"=>"IndexMissingException[[logstash-2015.01.01] missing]"}}], :file=>"logstash/outputs/elasticsearch.rb", :line=>"445"}
{:timestamp=>"2015-01-06T13:38:32.893000+0000", :message=>"Shifting current elasticsearch client", :level=>:debug, :file=>"logstash/outputs/elasticsearch.rb", :line=>"451"}
Note the lack of "Got error to send bulk of actions to elasticsearch server" - from a user perspective, until I made my source modification, the entirety of my writes were erroring silently.
(This was with the version from LS 1.5.0-beta1)
(It is possible my definition of "partially fail" / "failing flush" differs from that of $author - I guess the HTTP connection was absolutely fine...)
"NodeClient can't join to a cluster if unicast hosts contains only client node (client nodes are filtered out in ZenDiscovery#findMaster()). You need to set "discovery.zen.master_election.filter_client" to false to override the behavior. "
Therefore we should document that the list of hosts should include master nodes, and that it will fail if only client nodes are listed.
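Per the quoted workaround, the override would go in elasticsearch.yml on the nodes involved in master election (a sketch for ES 1.x Zen discovery, as described above):

```
discovery.zen.master_election.filter_client: false
```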
Unable to run bundle install successfully with current master.
# master/e4590a2315b23fb9e94638534b62e49a779532d1
> bundle install
WARN: Unresolved specs during Gem::Specification.reset:
ffi (>= 0)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
Fetching gem metadata from https://rubygems.org/..........
Fetching version metadata from https://rubygems.org/...
Fetching dependency metadata from https://rubygems.org/..
Resolving dependencies...
Using rake 10.4.2
Using addressable 2.3.8
Using thread_safe 0.3.5
Using descendants_tracker 0.0.4
Using ice_nine 0.11.1
Using axiom-types 0.1.1
Using backports 3.6.4
Using cabin 0.7.1
Using clamp 0.6.4
Using coderay 1.1.0
Using coercible 1.0.0
Using concurrent-ruby 0.8.0
Using diff-lcs 1.2.5
Using multi_json 1.11.0
Using elasticsearch-api 1.0.7
Using multipart-post 2.0.0
Using faraday 0.9.1
Using elasticsearch-transport 1.0.7
Using elasticsearch 1.0.8
Using equalizer 0.0.11
Using ffi 1.9.8
Using minitar 0.5.4
Using file-dependencies 0.1.6
Using filesize 0.0.4
Using http_parser.rb 0.6.0
Using ftw 0.0.42
Using gem_publisher 1.5.0
Using i18n 0.6.9
Using insist 1.0.0
Using jar-dependencies 0.1.7
Using jrjackson 0.2.8
Using jruby-httpclient 1.1.1
Using virtus 1.0.5
Using maven-tools 1.0.7
Using mime-types 2.4.3
Using method_source 0.8.2
Using slop 3.6.0
Using spoon 0.0.4
Using pry 0.10.1
Using rack 1.6.0
Using ruby-maven-libs 3.1.1
Using ruby-maven 3.1.1.0.8
Using rack-protection 1.5.3
Using tilt 2.0.1
Using sinatra 1.4.6
Using stud 0.0.19
Using polyglot 0.3.5
Using treetop 1.4.15
Using logstash-core 1.5.0.rc2
Using logstash-codec-plain 0.1.5
Using rspec-core 2.14.8
Using rspec-expectations 2.14.5
Using rspec-mocks 2.14.6
Using rspec 2.14.1
Using logstash-devutils 0.0.10
Using logstash-input-generator 0.1.3
Using manticore 0.3.6
deprecated. use instead jars/installer
jar dependencies for logstash-output-elasticsearch-0.1.19-java.gemspec . . .
org.elasticsearch:elasticsearch:1.4.0
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR] The project rubygems:logstash-output-elasticsearch:0.1.19 (/usr/local/Cellar/jruby/1.7.19/libexec/lib/ruby/gems/shared/gems/jar-dependencies-0.2.0/lib/jars/jar_pom.rb) has 2 errors
[ERROR] Unresolveable build extension: Plugin de.saumya.mojo:gem-extension:1.0.7 or one of its dependencies could not be resolved: Failed to read artifact descriptor for de.saumya.mojo:gem-extension:jar:1.0.7: Could not transfer artifact de.saumya.mojo:gem-extension:pom:1.0.7 from/to ccad-uber (https://acropolis/nexus/content/groups/ccad-uber/): peer not authenticated -> [Help 2]
[ERROR] Unknown packaging: gem
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Errno::ENOENT: No such file or directory - /Users/mhughes/OpenSource/logstash-output-elasticsearch/deps.lst
An error occurred while installing logstash-output-elasticsearch (0.1.19), and
Bundler cannot continue.
Make sure that `gem install logstash-output-elasticsearch -v '0.1.19'` succeeds
before bundling.
I have a field in my log file which contains a timestamp. Logstash automatically treats the field as a string, but I want the field typed as a date.
How do I go about this?
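The usual tool for this is a date filter; a minimal sketch, assuming the timestamp lives in a field named logtime (a hypothetical name here) and is ISO8601-formatted:

```
filter {
  date {
    match => [ "logtime", "ISO8601" ]  # parses the field into @timestamp
  }
}
```

Note that the filter alone only populates @timestamp; if the goal is a separate date-typed field in Elasticsearch, the index mapping or template also has to declare that field as a date.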
Using logstash-1.5.0RC2.rpm:
output {
elasticsearch {
host => 'myip:9200'
protocol => 'http'
workers =>2
flush_size => 20000
}
}
Got "failed action with response of , dropping action: ****".
I inspected the action details: there is no apparent error, just a valid event hash. The same configuration runs fine under 1.5.0.beta1.
Hi,
I am using the Logstash 1.4 file input, and we are hitting an open-files limit on a Linux server (RHEL).
Please add a configuration option so that the input only monitors recently updated files.
Nothing appears in Elasticsearch while this is running. The index does not already exist there either, nor is there any associated template for it. Running Logstash 1.5RC2.
While trying to debug a different issue, I changed the action to create_unless_exists
(json.conf):
input {
stdin {
codec => json { }
}
}
output {
elasticsearch {
action => "create_unless_exists"
host => ["localhost"]
index => "test-%{+YYYY.MM.dd}"
protocol => "http"
}
}
Input (file.json):
{"_id":"ab66c2e6-e641-457c-aa49-7ad87ba680eb","other":"value"}
Running it:
cat file.json | bin/logstash -f json.conf
This endlessly spews the resulting output and CTRL+C can't even get out of it.
Got error to send bulk of actions to elasticsearch server at localhost : [400] {"error":"ActionRequestValidationException[Validation Failed: 1: no requests added;]","status":400} {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>#<Elasticsearch::Transport::Transport::Errors::BadRequest: [400] {"error":"ActionRequestValidationException[Validation Failed: 1: no requests added;]","status":400}>, :backtrace=>["/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:132:in `__raise_transport_error'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/base.rb:224:in `perform_request'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:33:in `perform_request'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/client.rb:115:in `perform_request'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.7/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:82:in `bulk'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:413:in `submit'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:412:in `submit'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:438:in `flush'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:436:in `flush'", 
"/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216:in `buffer_flush'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:472:in `teardown'", "org/jruby/RubyArray.java:1613:in `each'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/lib/logstash/pipeline.rb:239:in `outputworker'", "org/jruby/RubyArray.java:1613:in `each'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/lib/logstash/pipeline.rb:238:in `outputworker'", "/Users/pickypg/Downloads/logstash-1.5.0.rc2/lib/logstash/pipeline.rb:163:in `start_outputs'"], :level=>:warn}
Hi,
if you start from a fresh master branch and run the bundle command, it hangs while fetching/installing manticore 0.3.4:
Installing sinatra 1.4.5
Installing stud 0.0.19
Installing polyglot 0.3.5
Installing treetop 1.4.15
Using logstash 1.5.0.beta1 from git://github.com/elasticsearch/logstash.git (at 1.5)
Installing logstash-codec-plain 0.1.4
Installing logstash-input-generator 0.1.2
Installing manticore 0.3.4
This makes it impossible to build the master branch.
I have a proxy in front of my es cluster. I'm proxying requests to http://nginx.example.local/elasticsearch through to one of my elasticsearch nodes (eg http://es1.example.local:9200/). The plugin doesn't appear to allow output to a non-root url, since it expects a host name instead of a url.
Currently (logstash-1.4.2), the following configuration sends requests to nginx.example.local/elasticsearch:80, rather than nginx.example.local:80/elasticsearch:
output {
elasticsearch {
protocol => "http"
host => "nginx.example.local/elasticsearch"
port => "80"
}
}
I'd like to be able to configure a path:
output {
elasticsearch {
protocol => "http"
host => "nginx.example.local"
path => "/elasticsearch"
port => "80"
}
}
Moved from https://logstash.jira.com/browse/LOGSTASH-470
ElasticSearch has the ability to automatically prune older messages if a TTL value is provided by the message or set as default on the index itself.
So for example you could create a single index and tell it store events for 30 days, after 30 days ElasticSearch would start removing the old entries from the index. This could also be used with the daily style indexes that LogStash automatically creates. Although this would not delete the index itself it would empty it out.
Also it may not be a bad idea to allow for filters to also be able to tweak this value. That way I could save my access log entries for a week but my error logs for a month.
An interesting comment from MixMuffins:
The thing about using the TTL is that it creates a lot of excess overhead in elasticsearch if you're dealing with a lot of indexes. Have you tried considering an automated script to remove entries after a certain period of time, and having that run as a cronjob? It'd be more efficient and wouldn't sacrifice speed on your elasticsearch indexing.
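The cron-based alternative boils down to deleting date-stamped indices that are older than the retention window. A sketch of computing which daily index names are past retention (the logstash- prefix and date format are assumptions matching the default daily indices):

```ruby
require 'date'

# Sketch: list daily "logstash-YYYY.MM.dd" index names older than
# `keep_days`, suitable for feeding to a DELETE call from cron.
def expired_indices(index_names, keep_days, today = Date.today)
  cutoff = today - keep_days
  index_names.select do |name|
    m = name.match(/\Alogstash-(\d{4})\.(\d{2})\.(\d{2})\z/)
    m && Date.new(m[1].to_i, m[2].to_i, m[3].to_i) < cutoff
  end
end
```

Dropping whole indices this way avoids the per-document TTL overhead the comment mentions, since an index delete is a single cheap operation.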
Hi,
The Elasticsearch bulk API has an update action for partially updating documents, but the Elasticsearch output only supports the index and delete actions. Looking through the code, I saw another action called create_unless_exists. Is create_unless_exists used for partial updates or something else? If it is not, is there a plan to add update as a valid action?
Thanks
Currently, the node client does not handle update requests.
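For reference, a bulk update action on the wire is just a metadata line plus a "doc" payload line in the newline-delimited bulk body. A sketch of what the output would have to emit (the index/type/id values here are illustrative):

```ruby
require 'json'

# Sketch: build one bulk "update" entry as two NDJSON lines,
# a metadata line followed by the partial-document payload.
def bulk_update_line(index, type, id, partial_doc)
  meta = { "update" => { "_index" => index, "_type" => type, "_id" => id } }
  [meta.to_json, { "doc" => partial_doc }.to_json].join("\n") + "\n"
end
```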
When running Logstash to import a log file and output to Elasticsearch using protocol=http and port=9200, I get:
LoadError: load error: manticore -- java.security.KeyStoreException: problem accessing trust storejava.security.cert.CertificateException: Unable to initialize, java.io.IOException: Invalid number of padding bits
require at org/jruby/RubyKernel.java:1071
require at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
(root) at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.7/lib/elasticsearch/transport/transport/http/manticore.rb:1
require at org/jruby/RubyKernel.java:1071
require at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/polyglot-0.3.5/lib/polyglot.rb:65
(root) at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:1
initialize at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch/protocol.rb:58
map at org/jruby/RubyArray.java:2412
register at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:323
each at org/jruby/RubyArray.java:1613
register at /opt/logstash-1.5.0.rc2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.1.18-java/lib/logstash/outputs/elasticsearch.rb:319
start_outputs at /opt/logstash-1.5.0.rc2/lib/logstash/pipeline.rb:161
run at /opt/logstash-1.5.0.rc2/lib/logstash/pipeline.rb:79
run at /opt/logstash-1.5.0.rc2/lib/logstash/runner.rb:124
call at org/jruby/RubyProc.java:271
run at /opt/logstash-1.5.0.rc2/lib/logstash/runner.rb:129
call at org/jruby/RubyProc.java:271
execute at /opt/logstash-1.5.0.rc2/lib/logstash/agent.rb:147
Running Logstash 1.5RC2.
Input file (file.json):
{"type":"test","_id":"ab66c2e6-e641-457c-aa49-7ad87ba680eb","other":"value"}
Logstash conf (json.conf):
input {
stdin {
codec => json { }
}
}
output {
elasticsearch {
action => "create"
host => ["localhost"]
index => "test-%{+YYYY.MM.dd}"
protocol => "http"
}
stdout {
codec => rubydebug
}
}
Command:
$ cat file.json | bin/logstash -f json.conf
Output from Elasticsearch 1.4.4 (emphasis on the Caused by):
[2015-03-19 23:34:10,203][DEBUG][action.bulk ] [Meggan] [test-2015.03.20][0] failed to execute bulk item (index) index {[test-2015.03.20][test][AUw1PswAB7p1lmy3zZZY], source[{"type":"test","_id":"ab66c2e6-e641-457c-aa49-7ad87ba680eb","other":"value","@version":"1","@timestamp":"2015-03-20T03:34:09.164Z","host":"Chriss-MBP-2"}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [_id]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:416)
at org.elasticsearch.index.mapper.internal.IdFieldMapper.parse(IdFieldMapper.java:295)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:709)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:500)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:542)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:491)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:392)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:444)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:150)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:512)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.mapper.MapperParsingException: Provided id [AUw1PswAB7p1lmy3zZZY] does not match the content one [ab66c2e6-e641-457c-aa49-7ad87ba680eb]
at org.elasticsearch.index.mapper.internal.IdFieldMapper.parseCreateField(IdFieldMapper.java:310)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:406)
... 13 more
Output on the command line for Logstash:
{
"type" => "test",
"_id" => "ab66c2e6-e641-457c-aa49-7ad87ba680eb",
"other" => "value",
"@version" => "1",
"@timestamp" => "2015-03-20T03:34:09.164Z",
"host" => "Chriss-MBP-2"
}
failed action with response of , dropping action: ["create", {:_id=>nil, :_index=>"test-2015.03.20", :_type=>"test"}, #<LogStash::Event:0x575fcc7d @metadata_accessors=#<LogStash::Util::Accessors:0x14adb7d3 @store={"retry_count"=>0}, @lut={}>, @cancelled=false, @logger=#<Cabin::Channel:0x2baec2f8 @metrics=#<Cabin::Metrics:0x2bbcfd4 @metrics_lock=#<Mutex:0x7bd1e37f>, @metrics={}, @channel=#<Cabin::Channel:0x2baec2f8 ...>>, @subscriber_lock=#<Mutex:0x7b4100e6>, @level=:warn, @subscribers={15592=>#<Cabin::Outputs::IO:0x7abc0a0b @io=#<IO:fd 1>, @lock=#<Mutex:0xe02d438>>}, @data={}>, @data={"type"=>"test", "_id"=>"ab66c2e6-e641-457c-aa49-7ad87ba680eb", "other"=>"value", "@version"=>"1", "@timestamp"=>"2015-03-20T03:34:09.164Z", "host"=>"Chriss-MBP-2"}, @metadata={"retry_count"=>0}, @accessors=#<LogStash::Util::Accessors:0x76babee8 @store={"type"=>"test", "_id"=>"ab66c2e6-e641-457c-aa49-7ad87ba680eb", "other"=>"value", "@version"=>"1", "@timestamp"=>"2015-03-20T03:34:09.164Z", "host"=>"Chriss-MBP-2"}, @lut={"host"=>[{"type"=>"test", "_id"=>"ab66c2e6-e641-457c-aa49-7ad87ba680eb", "other"=>"value", "@version"=>"1", "@timestamp"=>"2015-03-20T03:34:09.164Z", "host"=>"Chriss-MBP-2"}, "host"], "type"=>[{"type"=>"test", "_id"=>"ab66c2e6-e641-457c-aa49-7ad87ba680eb", "other"=>"value", "@version"=>"1", "@timestamp"=>"2015-03-20T03:34:09.164Z", "host"=>"Chriss-MBP-2"}, "type"]}>>] {:level=>:warn}
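A plausible workaround, judging from the "Provided id ... does not match the content one" error: _id is reserved Elasticsearch metadata and cannot also appear in the document source. Renaming the field and passing it through the output's document_id option sidesteps the clash; the rename target docid is a hypothetical field name:

```
filter {
  mutate {
    rename => { "_id" => "docid" }  # keep the value out of the reserved _id field
  }
}
output {
  elasticsearch {
    action => "create"
    host => ["localhost"]
    index => "test-%{+YYYY.MM.dd}"
    protocol => "http"
    document_id => "%{docid}"  # use the original id as the ES document id
  }
}
```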
Running the tests as specified in the documentation produces this error without the tests running at all. I followed the process described in the README file.
➜ logstash-output-elasticsearch ✗ bundle exec rspec --tag elasticsearch
Using Accessor#strict_set for specs
Run options:
include {:elasticsearch=>true}
exclude {:redis=>true, :socket=>true, :performance=>true, :elasticsearch_secure=>true, :broken=>true, :export_cypher=>true}
Jan 28, 2015 1:49:18 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] version[1.4.0], pid[78132], build[bc94bd8/2014-11-05T14:26:12Z]
Jan 28, 2015 1:49:18 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] initializing ...
Jan 28, 2015 1:49:18 PM org.elasticsearch.plugins.PluginsService <init>
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] loaded [], sites []
Jan 28, 2015 1:49:23 PM org.elasticsearch.node.internal.InternalNode <init>
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] initialized
Jan 28, 2015 1:49:23 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] starting ...
Jan 28, 2015 1:49:23 PM org.elasticsearch.transport.TransportService doStart
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/192.168.2.112:9301]}
Jan 28, 2015 1:49:23 PM org.elasticsearch.discovery.DiscoveryService doStart
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] elasticsearch/x1NmjcPsRXukShUeRv-Hkg
Jan 28, 2015 1:49:53 PM org.elasticsearch.discovery.DiscoveryService waitForInitialState
WARNING: [logstash-Peres-MacBook-Pro.local-78132-9186] waited for 30s and no initial state was set by the discovery
Jan 28, 2015 1:49:53 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-Peres-MacBook-Pro.local-78132-9186] started
Failed to install template: waited for [30s] {:level=>:error}
Then, if no flag is given, the registration test fails, complaining that no milestone is set.
1) outputs/elasticsearch should register
Failure/Error: output = LogStash::Plugin.lookup("output", "elasticsearch").new("embedded" => "false", "protocol" => "transport", "manage_template" => "false")
RuntimeError:
elasticsearch must set a milestone. For more information about plugin milestones, see http://logstash.net/docs/1.5.0.dev/plugin-milestones
# ./spec/outputs/elasticsearch_spec.rb:10:in `(root)'
This was tested using the 1.5 branch. Running against the master branch produces no complaint about milestones, but the tagged test hangs with a similar message:
INFO: [logstash-Peres-MacBook-Pro.local-78698-8642] elasticsearch/y79rgoLuT5mbEYBT9JVCyA
Jan 28, 2015 2:09:28 PM org.elasticsearch.discovery.DiscoveryService waitForInitialState
WARNING: [logstash-Peres-MacBook-Pro.local-78698-8642] waited for 30s and no initial state was set by the discovery
Jan 28, 2015 2:09:28 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-Peres-MacBook-Pro.local-78698-8642] started
Would it be feasible to add support for automatically updating an index template on startup, instead of just a per-index template?
i.e. http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
The documentation for this plugin says:
If you have dynamic templating (e.g. creating indices based on field names) then you should set manage_template to false and use the REST API to upload your templates manually.
It's pretty awkward and error-prone to manage these yourself though. I'd be happy to work on this and submit a pull request if this is within the scope of this plugin.
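Until then, uploading a template "manually" as the docs suggest is a single PUT to /_template/&lt;name&gt;. A sketch of building that request with net/http (the template name and body here are illustrative):

```ruby
require 'json'
require 'net/http'

# Sketch: construct the PUT request that installs an index template
# at /_template/<name>; sending it is left to the caller's Net::HTTP
# session against the cluster.
def template_request(name, template_body)
  req = Net::HTTP::Put.new("/_template/#{name}",
                           "Content-Type" => "application/json")
  req.body = template_body.to_json
  req
end
```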
Raised in Logstash github.
Kibana 4 has this nasty habit of requiring that all nodes in a cluster be at Elasticsearch 1.4.4. Because of this, we can't connect to the cluster using the node protocol. Is it as easy as replacing the jar in the library tree? How can we customize a build of this plugin to use a newer Elasticsearch library?
thanks
Configuration:
Logstash - logstash-1.5.0.rc2-1.noarch
Elasticsearch - elasticsearch-1.4.4-1.noarch
Output Plugin - logstash-output-elasticsearch (0.1.18)
Depending on tags, I dump data into different indices, and I also specify my own template. If another elasticsearch section appears below the one being used, its "template" is completely ignored and data is uploaded with no mapping.
For instance, I am dumping data to logs index.
What works
output {
if "logs" in [tags] {
elasticsearch {
cluster => ""
host => [ "localhost" ]
index => "logs-%{+YYYY.MM}"
protocol => "http"
template => "/logs-index-template.json"
template_overwrite => true
flush_size => 2500
}
}
}
What also works
output {
if "state" in [tags] {
elasticsearch {
cluster => ""
host => [ "localhost" ]
index => "state-%{+YYYY.MM}"
protocol => "http"
template => "/state-index-template.json"
template_overwrite => true
flush_size => 2500
}
}
if "logs" in [tags] {
elasticsearch {
cluster => ""
host => [ "localhost" ]
index => "logs-%{+YYYY.MM}"
protocol => "http"
template => "/logs-index-template.json"
template_overwrite => true
flush_size => 2500
}
}
}
What doesn't work
output {
if "logs" in [tags] {
elasticsearch {
cluster => ""
host => [ "localhost" ]
index => "logs-%{+YYYY.MM}"
protocol => "http"
template => "/logs-index-template.json"
template_overwrite => true
flush_size => 2500
}
}
if "state" in [tags] {
elasticsearch {
cluster => ""
host => [ "localhost" ]
index => "state-%{+YYYY.MM}"
protocol => "http"
template => "/state-index-template.json"
template_overwrite => true
flush_size => 2500
}
}
}
Any assistance is appreciated. Thank you!
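One thing worth checking (an assumption, not a confirmed diagnosis): both outputs install their template under the plugin's default template name, so whichever output registers last can overwrite the other's template. Giving each output a distinct template_name may avoid the collision:

```
output {
  if "logs" in [tags] {
    elasticsearch {
      host => [ "localhost" ]
      index => "logs-%{+YYYY.MM}"
      protocol => "http"
      template => "/logs-index-template.json"
      template_name => "logs"    # distinct name per output
      template_overwrite => true
    }
  }
  if "state" in [tags] {
    elasticsearch {
      host => [ "localhost" ]
      index => "state-%{+YYYY.MM}"
      protocol => "http"
      template => "/state-index-template.json"
      template_name => "state"   # distinct name per output
      template_overwrite => true
    }
  }
}
```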
migrated from https://logstash.jira.com/browse/LOGSTASH-2107
with this json, reading in from file, and outputting to elasticsearch:
{"message":"test","@version":"1","@timestamp":"2014-04-08T14:12:00.000+02:00","type":["cdn_access_log","ys-ssvod"],"tags":["cdn"],"host":"Casper-VAIO","path":"c:/Temp/Java/logstash/we_accesslog_apache","message_id":"mpJdW+Q8Q0xYv1Ha0bBNmB5LTps=","bytes-sent":32475,"object-size":32031,"bytes-recvd":356,"method":"GET","status":200,"time-recvd":"08/Apr/2014:14:12:00+0200","time-to-serve":1875}
(note that type was mistakenly set to an array by a developer here)
However, instead of Logstash completely halting its reading of data, I'd prefer that it log the offending line and continue. Instead, it logs this:
{:timestamp=>"2014-04-09T10:02:42.456000+0200", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>#<NameError: no method 'type' for arguments (org.jruby.RubyArray) on Java::OrgElasticsearchActionIndex::IndexRequest>, :backtrace=>["/usr/share/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:225:in `build_request'", "/usr/share/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:205:in `bulk'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:204:in `bulk'", "/usr/share/logstash/lib/logstash/outputs/elasticsearch.rb:331:in `flush'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1521:in `loop'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}
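A defensive sketch (an illustration, not the plugin's actual code) of coercing a non-string type before building the index request, so one malformed event degrades gracefully instead of wedging the whole flush:

```ruby
# Sketch: normalize the event "type" so an array value (as in the
# failing document above) is reduced to a usable string instead of
# raising inside build_request.
def safe_type(event_type, default = "logs")
  case event_type
  when String then event_type
  when Array  then event_type.first.to_s  # pick the first entry
  else default
  end
end
```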
from: elastic/logstash#1706
elasticsearch_http plugin exposed this setting, might be useful to keep it in the new plugin.
The Elasticsearch transport client allows adding a sniffing config to discover the machines in the cluster. In the client, you can set client.transport.sniff to true.
Logstash should expose this setting.
For more details see http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html#transport-client