logstash-input-redis's Introduction

Logstash Plugin

This is a plugin for Logstash.

It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.

Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code are first converted into asciidoc and then into HTML. All plugin documentation is placed in one central location.

Need Help?

Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

Developing

1. Plugin Development and Testing

Code

  • To get started, you'll need JRuby with the Bundler gem installed.

  • Create a new plugin or clone an existing one from the GitHub logstash-plugins organization. We also provide example plugins.

  • Install dependencies

bundle install

Test

  • Update your dependencies
bundle install
  • Run tests
bundle exec rspec

2. Running your unpublished Plugin in Logstash

2.1 Run in a local Logstash clone

  • Edit Logstash Gemfile and add the local plugin path, for example:
gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
  • Install plugin
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Run Logstash with your plugin
bin/logstash -e 'filter {awesome {}}'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.

2.2 Run in an installed Logstash

You can use the same method as in 2.1 to run your plugin in an installed Logstash by editing its Gemfile and pointing the :path to your local plugin development directory, or you can build the gem and install it as follows:

  • Build your plugin gem
gem build logstash-filter-awesome.gemspec
  • Install the plugin from the Logstash home
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Start Logstash and proceed to test the plugin

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

logstash-input-redis's People

Contributors

andrewvc, brennentsmith, candlerb, colinsurprenant, dedemorton, driskell, electrical, guyboertje, houtmanj, jakelandis, jordansissel, jsvd, kaisecheng, karenzone, kares, matt-lane, ph, robbavey, shaharmor, suyograo, wiibaa, ycombinator, yaauie

logstash-input-redis's Issues

v1.0.1 "Couldn't find any input plugin named 'redis'"

Logstash 1.5.3. After upgrading logstash-input-redis from 1.0.0 to 1.0.1, this error occurred:

{:timestamp=>"2015-07-27T09:03:31.372000+0900", :message=>"The error reported is: \n Couldn't find any input plugin named 'redis'. Are you sure this is correct? Trying to load the redis input plugin resulted in this error: NameError"}

The cause may be this change:

+module Logstash module Inputs class Redis < LogStash::Inputs::Threadable
+# class LogStash::Inputs::Redis < LogStash::Inputs::Threadable

Setting to use LMOVE command

I want non-destructive behavior, like Kafka: the data is consumed by Logstash, then pushed to another Redis destination.

The plugin currently uses only the Redis BLPOP command; I propose a new setting to use the BLMOVE command instead, as sketched below.
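
For illustration, a minimal sketch of the proposed consumption loop, assuming a Redis server >= 6.2 and a redis-rb version that exposes blmove (key names and the process handler are illustrative):

require "redis"

redis = Redis.new(host: "127.0.0.1", port: 6379)

loop do
  # BLMOVE atomically pops from "events" and pushes onto "events:backup",
  # so the element survives in Redis until a downstream consumer removes it.
  event = redis.blmove("events", "events:backup", "LEFT", "RIGHT", timeout: 5)
  process(event) unless event.nil? # hypothetical handler; nil means timeout
end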

Implement ECS-Compatibility Mode

This is a stub issue, and needs to be fleshed out with details specific to
this plugin.


As a part of the effort to make plugins able to run in an ECS-Compatible manner
by default in an upcoming release of Logstash, this plugin needs to either
implement an ECS-Compatibility mode or certify that it does not implicitly use
fields that conflict with ECS.

Reliable queue delivery: Redis, Kafka, ... ?

Hi

In our ELK deployment we're looking for 0.00% event loss.
Currently we run rsyslog with RELP to send logs to a Redis queue/list, which is read by Logstash and inserted into Elasticsearch.

Are there any plans to implement Redis RPOPLPUSH & LREM for reliable queue delivery?
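
For reference, the RPOPLPUSH/LREM reliable-queue pattern being asked about looks roughly like this in redis-rb (key names and the indexing step are illustrative, not the plugin's actual code):

require "redis"

redis = Redis.new

loop do
  # Atomically move one element onto a processing list; it stays in Redis
  # until we acknowledge it with LREM after a successful insert.
  msg = redis.rpoplpush("events", "events:processing")
  if msg.nil?
    sleep 0.1 # queue empty, back off briefly
    next
  end
  index_into_elasticsearch(msg)           # hypothetical indexing step
  redis.lrem("events:processing", 1, msg) # acknowledge: drop the safety copy
end

A crashed consumer leaves its unacknowledged messages on the processing list, where they can be re-queued instead of being lost.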

Are the efforts regarding reliability placed on kafka plugin (as a reliable platform) instead?

As it seems Elastic was (is?) working on reliable delivery in elastic/logstash#3693, should we consider another approach?

Regards

Add teardown_interval to close connections after a certain amount of time has passed

The shippers establish a TCP connection to Redis and never drop it. If you have a load balancer, the distribution of load depends entirely on the load balancer's initial triaging (in other words, once a running shipper connects to a Redis instance, it stays connected to that instance). At times the overwhelming majority end up connected to a single Redis instance.

Adding a teardown interval and closing the connection after several minutes would release old connections and force a reconnect, which is very useful when load balancing across instances.
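
A minimal sketch of the idea, assuming redis-rb 4.x (the teardown_interval value and process handler are illustrative):

require "redis"

teardown_interval = 300 # seconds
redis = Redis.new(host: "redis-lb.example.com")
connected_at = Time.now

loop do
  # Drop and re-open the connection periodically so the load balancer
  # gets a chance to re-triage this shipper onto another Redis instance.
  if Time.now - connected_at > teardown_interval
    redis.close
    redis = Redis.new(host: "redis-lb.example.com")
    connected_at = Time.now
  end
  _key, msg = redis.blpop("logstash", timeout: 1)
  process(msg) if msg # hypothetical handler
end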

All threads are taking the same message at the same time

Hello, I have Logstash 5.6.1 running with the Redis input plugin. Config:

redis {
    codec => elog {
        elog_definitions => "elog-structure.yml"
    }
    data_type => "channel"
    key => "tst_key"
    batch_count => 125
    host => "XXXXXXXXXXX"
    port => 6379
    threads => 12
}

We are running our own Java application to publish messages to Redis. Now the problem is with the threads. When you publish one message on Redis, all 12 threads take the same message at the same time (I checked with tcpdump that 12 identical packets arrive at Logstash at essentially the same time, and verified the publish count with redis-cli monitor and tcpdump; with 1 thread in the config everything looks fine). Or maybe it is intended to work like this, and we should be using something like RPOP and the list data_type? I am not an expert on Redis...
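
For what it's worth, Redis pub/sub semantics would explain this: with data_type => "channel" each input thread runs its own SUBSCRIBE, and Redis delivers every published message to every subscriber, while a list consumed with BLPOP hands each element to exactly one consumer. A redis-rb sketch of the contrast (key names are illustrative):

require "redis"

# Channel (pub/sub): every subscriber receives every published message.
subscriber = Thread.new do
  Redis.new.subscribe("tst_key") do |on|
    on.message { |_channel, msg| puts "subscriber saw: #{msg}" }
  end
end

# List: BLPOP hands each element to exactly one of the blocked consumers.
consumer = Thread.new do
  redis = Redis.new
  loop do
    _key, msg = redis.blpop("tst_list", timeout: 5)
    puts "list consumer took: #{msg}" if msg
  end
end

[subscriber, consumer].each(&:join)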

Logstash can't get data from multiple Redis instances

I configured a Logstash input that reads from two Redis instances. However, Logstash only gets data from one of them. I don't know whether or not this is a bug.

My logstash version is 1.5.0. Thanks

input {
  redis {
    host => "10.0.0.50"
    type => "logs"
    data_type => "list"
    key => "LOG"
    codec => "json"
  }
  redis {
    host => "10.0.3.202"
    type => "logs"
    data_type => "list"
    key => "LOG"
    codec => "json"
  }
}

filter {
  date {
    match => [ "timestamp", "UNIX_MS" ]
  }
}

logstash-input-redis: accept an array for the host setting

While the logstash-output-redis plugin's host setting accepts an array of hosts, this is currently not the case for logstash-input-redis.

Having the possibility to define an array of Redis hosts from which to read data would be particularly interesting in the case of a master-slave Redis setup (for load-balancing and/or fail-over).
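
A sketch of what fail-over across an array of hosts could look like, loosely mirroring the output plugin's handling (host names and the connect helper are illustrative):

require "redis"

HOSTS = ["redis-master.example.com:6379", "redis-slave.example.com:6379"]

# Rotate through the configured hosts until one accepts the connection.
def connect(attempt)
  host, port = HOSTS[attempt % HOSTS.size].split(":")
  Redis.new(host: host, port: (port || 6379).to_i)
end

attempt = 0
begin
  redis = connect(attempt)
  redis.ping # raises if the host is unreachable
rescue Redis::CannotConnectError
  attempt += 1 # fail over to the next host in the list
  retry
end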

input-redis does not know the ssl flag, although it is mentioned in the documentation

  • Version: logstash 7.2.0
  • Operating System: docker container / centos as host
  • Config File (if you have sensitive info, please remove it):

I am using Redis behind stunnel to get TLS encryption for Redis. Filebeat is running fine and ships its data successfully to the Redis database.

Now I want to establish a secured connection from Logstash to Redis.
I updated Logstash to version 7.2.0, where the docs offer a flag for ssl:

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html#plugins-inputs-redis-ssl

Sounds like it is what I need.

So my input looks like this:

input
{
	redis
	{
		data_type => "list"
		db   => "${REDIS_DB}"
		host => "${REDIS_HOST}"
		port => "${REDIS_PORT}"
		ssl  => "${REDIS_SSL}"
		key => "timer"
	}
}

When I check the environment variables at the container level, they look correct to me:

[root@poc-logstash-5ddf9b77db-756d6 logstash]# echo $REDIS_SSL
true

Here are the logfiles. It looks like the ssl flag is unknown to Logstash, even though it is documented...

kubectl logs poc-logstash-5ddf9b77db-228n7
2019/07/12 09:01:22 Setting 'xpack.monitoring.elasticsearch.hosts' from environment.
2019/07/12 09:01:22 Setting 'xpack.monitoring.enabled' from environment.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-07-12T09:01:42,207][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-07-12T09:01:42,228][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-07-12T09:01:42,763][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-12T09:01:42,795][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"d5a7bc34-522e-4afb-bfa9-170640a783ba", :path=>"/usr/share/logstash/data/uuid"}
[2019-07-12T09:01:44,429][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://poc-es-master:9200/]}}
[2019-07-12T09:01:44,705][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://poc-es-master:9200/"}
[2019-07-12T09:01:44,758][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2019-07-12T09:01:44,761][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-12T09:01:44,893][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-07-12T09:01:44,894][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-07-12T09:02:12,925][ERROR][logstash.inputs.redis    ] Unknown setting 'ssl' for redis
[2019-07-12T09:02:12,925][ERROR][logstash.inputs.redis    ] Unknown setting 'ssl' for redis

proxy configuration and ssl configuration

Referring to https://discuss.elastic.co/t/redis-input-plugin-proxy-and-ssl/111195

I was testing Azure Redis with Logstash and Beats. Azure Redis comes with only SSL enabled by default, so SSL has to be disabled explicitly. Moreover, in some scenarios there is a proxy to pass through in order to access Redis.

With Beats there are settings to configure proxy and SSL, but in Logstash there are none. I had to turn off SSL and move Logstash outside the proxy to get it working. Though this was Azure Redis, the situation is common to certain Redis implementations.

Shall we add similar settings for Logstash to cover proxy and SSL? If this is already present or I'm missing something, please reply to the thread on the Elastic forum.

logstash-input-redis can lose data

In redis-cli monitor I see these calls:

1547818504.509313 [0 lua] "lrange" "filebeat" "0" "124"
1547818504.509476 [0 lua] "ltrim" "filebeat" "125" "-1"

Do I understand correctly that data added between unix timestamps 1547818504.509313 and 1547818504.509476 will just be deleted?
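
For context, both commands appear under [0 lua] because the plugin batches through a server-side script (note the evalsha frames in this plugin's stack traces), and Redis runs a Lua script atomically, so nothing can be added to the list between the LRANGE and the LTRIM. A redis-rb sketch of the script's shape (not the plugin's exact script):

require "redis"

redis = Redis.new

BATCH_SCRIPT = <<~LUA
  local batchsize = tonumber(ARGV[1])
  local result = redis.call('lrange', KEYS[1], 0, batchsize)
  redis.call('ltrim', KEYS[1], batchsize + 1, -1)
  return result
LUA

# Fetch up to 125 items (indices 0..124) and trim them off in one atomic step.
events = redis.eval(BATCH_SCRIPT, keys: ["filebeat"], argv: [124])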

Test failing on Travis and Jenkins too

https://travis-ci.org/logstash-plugins/logstash-input-redis/builds/11697475

The command "bundle exec rspec spec --order rand" exited with 0.
13.72s$ bundle exec rspec spec --order rand -t redis
Using Accessor#strict_set for specs
Run options:
  include {:redis=>true}
  exclude {:socket=>true, :performance=>true, :couchdb=>true, :elasticsearch=>true, :elasticsearch_secure=>true, :export_cypher=>true, :integration=>true, :windows=>true}
F...
Failures:
  1) LogStash::Inputs::Redis for the subscribe data_types runtime for pattern_channel data_type real redis calling the run method, adds events to the queue
     Failure/Error: expect(accumulator.size).to eq(2)

       expected: 2
            got: 0

       (compared using ==)
     # ./spec/inputs/redis_spec.rb:295:in `(root)'
     # ./vendor/bundle/jruby/1.9/gems/rspec-wait-0.0.8/lib/rspec/wait.rb:46:in `(root)'
Finished in 5.76 seconds (files took 1.83 seconds to load)
4 examples, 1 failure
Failed examples:
rspec ./spec/inputs/redis_spec.rb:287 # LogStash::Inputs::Redis for the subscribe data_types runtime for pattern_channel data_type real redis calling the run method, adds events to the queue
Randomized with seed 938

Any plans to support "lpop" command with polling?

For various reasons (we reimplemented Redis with a minimal set of supported commands), we cannot use "blpop" or subscribe; only "lpop" is allowed.

So I'm wondering whether there are any plans to support lpop. If not, can I submit a PR?

Thx! :-)
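
A minimal sketch of what an lpop polling mode could look like, assuming redis-rb (the key name and poll interval are illustrative):

require "redis"

redis = Redis.new
loop do
  msg = redis.lpop("logstash")
  if msg
    enqueue_event(msg) # hypothetical: decode and push onto the pipeline queue
  else
    sleep 1 # list empty: poll again after a short delay
  end
end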

Logstash doesn't exit after Ctrl+C on Ubuntu 12.10

(This issue was originally filed by @leiding at elastic/logstash#2143)


Step 1: start Logstash
./bin/logstash agent -f index.conf
Using milestone 2 input plugin 'redis'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones {:level=>:warn}

Step 2: try to exit with Ctrl+C, but the cursor never returns to the shell.

^CInterrupt received. Shutting down the pipeline. {:level=>:warn}

cat index.conf

input {
  redis {
    host => "10.111.96.175"
    data_type => "list"
    port => "6379"
    key => "logstash"
    type => "redis-input"
  }
}

output {
  elasticsearch {
    host => "10.111.96.175"
    port => "9300"
  }
}

logstash can't read data from redis sometimes

It works most of the time, but sometimes it stops working, and the reason is that Logstash can't read data from Redis anymore. My Logstash version is 1.4.2.
input {
  redis {
    host => localhost
    data_type => "list"
    key => "paylist"
    type => "pay"
  }
  redis {
    host => localhost
    data_type => "list"
    key => "compensationlist"
    type => "compensation"
  }
  redis {
    host => localhost
    data_type => "list"
    key => "orderlist"
    type => "order"
  }
  redis {
    host => localhost
    data_type => "list"
    key => "list4log"
    type => "log"
  }
}

remove redis input list_listener custom LUA

(This issue was originally filed by @colinsurprenant at elastic/logstash#1780)


the custom Lua script used for batching items from a list could be replaced with:

redis.pipelined do
  redis.multi do
    redis.lrange(...)
    redis.ltrim(...)
  end
end

So basically: lrange fetches a batch of items without deleting them, and ltrim then deletes them; both are wrapped in a multi to make this atomic, and in a pipeline to avoid multiple network round trips.

Use of wildcards in redis input keys

Yo. I can't use wildcards in the key field if data_type is list.

Is this by design, or can it be implemented?

input { redis { host => "redis" port => "6379" data_type => "list" key => "logstash-*" } }

Add pattern_channel channel name matching

I'd really like to see if we can incorporate the channel name for the Redis pattern_channel data type. It looks like right now the channel is just being thrown away in the response when receiving messages under the pattern channel listener. This would be pretty easy to do if we can just add the name of the channel to the queue event propagated here.

Any thoughts on this? I know Redis is a widely used broker for Logstash messages, so I'm not sure what kind of performance impact this would have, but it would make generic channel subscriptions, with the particular channel name attached in the forwarded output, much simpler.
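
For illustration, redis-rb already hands the channel name to the pmessage callback, so the change amounts to copying it onto the event instead of dropping it (the decode step, field name, and queue are illustrative):

require "redis"

Redis.new.psubscribe("logstash-chan-*") do |on|
  on.pmessage do |_pattern, channel, message|
    event = decode(message) # hypothetical codec step
    # Keep the concrete channel so downstream outputs can route on it.
    event.set("[@metadata][redis_channel]", channel)
    queue << event # hypothetical pipeline queue
  end
end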

Be consistent with how logstash-output-redis handles host and port

As a logstash administrator, I want my configuration of logstash-input-redis to be consistent with logstash-output-redis as much as possible.

Problem

When I use the following output { } configuration for logstash-output-redis everything works:

  redis {
    port => "12345"
    host => "redis.example.com:12345"
    key => "logstash"
    data_type => "list"
  }

But the same configuration used in the input { } section for logstash-input-redis does not work. The debug output shows the following:

Registering Redis {:identity=>"redis://@redis.example.com:12345:6379/0 list:logstash", :level=>:info, :file=>"logstash/inputs/redis.rb", :line=>"117", :method=>"register"}
Pipeline started {:level=>:info, :file=>"logstash/pipeline.rb", :line=>"87", :method=>"run"}
Logstash startup completed
A plugin had an unrecoverable error. Will restart this plugin.
  Plugin: <LogStash::Inputs::Redis host=>"redis.example.com:12345", key=>"logstash", data_type=>"list", debug=>false, codec=><LogStash::Codecs::JSON charset=>"UTF-8">, threads=>1, name=>"default", port=>6379, db=>0, timeout=>5, batch_count=>1>
  Error: initialize: name or service not known
  Exception: SocketError
  Stack: org/jruby/ext/socket/RubyTCPSocket.java:129:in `initialize'

Notice that it registers with redis://@redis.example.com:12345:6379/0, effectively treating the port as part of the host name.

The following code works in both places:

  redis {
    port => "12345"
    host => "redis.example.com"
    key => "logstash"
    data_type => "list"
  }

Solution

The output plugin has the following line:

@current_host, @current_port = @host[@host_idx].split(':')

Something similar can be done with the input plugin.

Both the output and the input plugins need to be tweaked, because the error and debug messages do not consistently represent the port variable. For instance, if I do not set the port variable, the output plugin says @port = 6379 even though it uses port 12345 to connect:

config LogStash::Outputs::Redis/@host = ["redis.example.com:123245"] {:level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"112", :method=>"config_init"}
config LogStash::Outputs::Redis/@port = 6379 {:level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"112", :method=>"config_init"}
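
A sketch of the suggested normalization for the input plugin, mirroring the output plugin's split (the defaulting logic here is illustrative, not the actual fix):

# Accept either "redis.example.com" or "redis.example.com:12345" in `host`;
# a port embedded in the host wins over the separate :port setting.
host, host_port = @host.split(":")
port = (host_port || @port || 6379).to_i
redis = Redis.new(host: host, port: port, db: @db, timeout: @timeout)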

commands_map broken after switch to redis ~4

redis-rb introduced a breaking change in version 4.0, where redis.client became a call to the CLIENT command on the Redis server. This breaks commands_map, which sets values on redis.client.command_map, and probably also the whole subscribe_stop function.

An easy fix is to just replace redis.client with redis._client, as I've done here: path-network@fb7e374

According to the redis-rb changelog for 4.0.1 the proper way to replace client usage is to use a new redis.connection function instead.
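
A version-tolerant sketch of the workaround described above, assuming command_map behaves as this issue describes (redis-rb < 4 exposes client, >= 4 exposes _client):

# Pick whichever accessor this redis-rb version provides, then apply renames.
raw_client = redis.respond_to?(:_client) ? redis._client : redis.client
commands_map.each do |name, renamed|
  raw_client.command_map[name.to_sym] = renamed.to_sym
end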

logstash 1.4.2 - Redis input timeout / missing data

(This issue was originally filed by @lmorfitt at elastic/logstash#3226)


Hello,

I'm experiencing some issues with Logstash 1.4.2 and the Redis input losing data. I'm wondering if anyone else has seen this issue.

If Redis has no data added to the queue for about an hour, I see a "Redis timeout, restarting plugin" error. The data is then all pulled from the list in one go, and out of 10 messages only 2 will arrive in Elasticsearch.

To try and track down if there was a timeout condition I added a test message to Redis once an hour.

1431330002.916917 "rpush" "data" "{########################TEST############################}"
1431330953.161801 "blpop" "data" "0"
1431330953.821474 "blpop" "data" "0"
1431330959.191131 "blpop" "data" "0"
1431330959.867708 "blpop" "data" "0"
1431333602.272291 "rpush" "data" "{########################TEST############################}"
1431334560.613347 "blpop" "data" "0"
1431334560.613396 "blpop" "data" "0"
1431334566.661423 "blpop" "data" "0"
1431334566.668555 "blpop" "data" "0"

Test message added
1431333602 - GMT: Mon, 11 May 2015 08:40:02 GMT

Logstash restarts redis plugin and picks up message.
1431334560 - GMT: Mon, 11 May 2015 08:56:00 GMT

Logstash server

{:timestamp=>"2015-05-11T07:55:58.829000+0000", :message=>"Failed to get event from Redis", :name=>"default", :exception=>#<Redis::TimeoutError: Connection timed out>, :backtrace=>[
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:222:in `io'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:220:in `io'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:228:in `read'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:96:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:201:in `process'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:309:in `ensure_connected'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:191:in `process'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:270:in `logging'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:190:in `process'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:96:in `call'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:179:in `call_with_timeout'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:244:in `with_socket_timeout'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:242:in `with_socket_timeout'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis/client.rb:178:in `call_with_timeout'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis.rb:1038:in `_bpop'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis.rb:37:in `synchronize'",
"file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:211:in `mon_synchronize'",
"file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/1.9/monitor.rb:210:in `mon_synchronize'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis.rb:37:in `synchronize'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis.rb:1035:in `_bpop'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/redis-3.0.7/lib/redis.rb:1064:in `blpop'",
"/opt/logstash/lib/logstash/inputs/redis.rb:148:in `list_listener'",
"/opt/logstash/lib/logstash/inputs/redis.rb:229:in `listener_loop'",
"/opt/logstash/lib/logstash/inputs/redis.rb:245:in `run'",
"/opt/logstash/lib/logstash/pipeline.rb:163:in `inputworker'",
"/opt/logstash/lib/logstash/pipeline.rb:157:in `start_input'"], :level=>:warn}
{:timestamp=>"2015-05-11T07:55:58.833000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::Redis host=>\"10.49.240.121\", key=>\"data\", data_type=>\"list\", name=>\"default\">\n  Error: Connection timed out", :level=>:error}

I was still seeing lost data, so I increased the frequency of the test message to every 15 minutes. This appears to have stopped the issue. Not sure of the root cause at this time; any ideas?

thanks
Luke

Redis streams support

With Redis 5.0 an interesting new datatype was introduced: Streams, "which models a log data structure" (https://redis.io/topics/streams-intro).
This datatype seems perfect for Logstash.

Currently the redis input plugin supports reading from the following data_types:
list, channel, pattern_channel

This issue is a feature request for supporting stream as a new data_type.

The problem of when to send XACK can be solved the way other queue plugins (sqs, kafka) do it: by acknowledging directly or in batches, until elastic/logstash#8514 provides a better way to do this.
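
A consumer-group sketch of what stream support could look like, assuming a redis-rb version with the stream commands (stream, group, and consumer names are illustrative):

require "redis"

redis = Redis.new
begin
  # Create the consumer group once; "$" means start at new entries only.
  redis.xgroup(:create, "logs", "logstash", "$", mkstream: true)
rescue Redis::CommandError
  # BUSYGROUP: the group already exists, which is fine.
end

loop do
  entries = redis.xreadgroup("logstash", "consumer-1", "logs", ">", count: 125, block: 5000)
  (entries["logs"] || []).each do |id, fields|
    enqueue_event(fields)              # hypothetical decode/enqueue step
    redis.xack("logs", "logstash", id) # or batch the XACKs, as noted above
  end
end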

Support Redis cluster in input and output plugins

(This issue was originally filed by @liuxingyishi at elastic/logstash#2979)


When I set an output to a node in a Redis cluster (Redis 3.0.0), it cannot redirect to other nodes in the cluster automatically. Ports 7000 to 7005 are used by the Redis cluster.
logstash-simple.conf:

input { stdin { } }

output {
  redis {
    host => ["localhost:7000"]
    data_type => "list"
    key => "key_count"
  }
  stdout { codec => rubydebug }
}
Logstash logs: exception=>#<Redis::CommandError: MOVED 15454 127.0.0.1:7002>
I suppose it may be caused by Logstash (in the Redis client role) not being started in cluster mode.

Add support for renamed redis commands

New feature request. Pull request to follow.
Redis supports renaming commands, and the Ruby Redis client also supports configuring renamed commands. Please expose this support through the Logstash input plugin.

Why is the redis input event value recorded twice?

When I switched to using Redis as an MQ, I found a problem where the event value is recorded twice.

Problems with log

Indexing debug log:

{
    "message" => "<6>May 26 17:11:43 test-yum-10-59 kernel: device eth0 entered promiscuous mode",
    "@version" => "1",
    "@timestamp" => "2016-05-26T09:11:43.000Z",
    "host" => "172.26.10.59",
    "port" => 17599,
    "type" => "syslog",
    "syslog_timestamp" => [
        [0] "May 26 17:11:43",
        [1] "May 26 17:11:43"
    ],
    "syslog_hostname" => [
        [0] "test-yum-10-59",
        [1] "test-yum-10-59"
    ],
    "syslog_program" => [
        [0] "kernel",
        [1] "kernel"
    ],
    "syslog_message" => [
        [0] "device eth0 entered promiscuous mode",
        [1] "device eth0 entered promiscuous mode"
    ],
    "received_at" => [
        [0] "2016-05-26T09:11:43.843Z",
        [1] "2016-05-26T09:11:43.000Z"
    ],
    "received_from" => [
        [0] "172.26.10.59",
        [1] "172.26.10.59"
    ...

But my shipping log is:
{
    "message" => "<6>May 26 17:11:43 test-yum-10-59 kernel: device eth0 entered promiscuous mode",
    "@version" => "1",
    "@timestamp" => "2016-05-26T09:11:43.000Z",
    "host" => "172.26.10.59",
    "port" => 17599,
    "type" => "syslog",
    "syslog_timestamp" => "May 26 17:11:43",
    "syslog_hostname" => "test-yum-10-59",
    "syslog_program" => "kernel",
    "syslog_message" => "device eth0 entered promiscuous mode",
    "received_at" => "2016-05-26T09:11:43.843Z",
    "received_from" => "172.26.10.59",
    "syslog_severity_code" => 5,
    "syslog_facility_code" => 1,
    "syslog_facility" => "user-level",
    "syslog_severity" => "notice"
}

Then I subscribed to the Redis channel, and the messages seem to be OK!

172.26.10.74:6379> SUBSCRIBE logstash-chan-2016.05.26
Reading messages... (press Ctrl-C to quit)

1) "subscribe"
2) "logstash-chan-2016.05.26"
3) (integer) 1
1) "message"
2) "logstash-chan-2016.05.26"
3) "{\"message\":\"<6>May 26 17:11:43 test-yum-10-59 kernel: device eth0 entered promiscuous mode\",\"@version\":\"1\",\"@timestamp\":\"2016-05-26T09:11:43.000Z\",\"host\":\"172.26.10.59\",\"port\":17599,\"type\":\"syslog\",\"syslog_timestamp\":\"May 26 17:11:43\",\"syslog_hostname\":\"test-yum-10-59\",\"syslog_program\":\"kernel\",\"syslog_message\":\"device eth0 entered promiscuous mode\",\"received_at\":\"2016-05-26T09:11:43.843Z\",\"received_from\":\"172.26.10.59\",\"syslog_severity_code\":5,\"syslog_facility_code\":1,\"syslog_facility\":\"user-level\",\"syslog_severity\":\"notice\"}"

Is this a configuration error on my part, or some other problem?

Config

shipping

logstash-syslog.conf:

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  redis {
    host => "172.26.10.74"
    data_type => "channel"
    key => "logstash-chan-%{+yyyy.MM.dd}"
  }
  # elasticsearch { hosts => ["172.26.10.74:9200"] }
  stdout { codec => rubydebug }
}

indexing

redis-chan-input.conf:

input {
  redis {
    data_type => "pattern_channel"
    key => "logstash-chan-*"
    host => "172.26.10.74"
    port => "6379"
    threads => 5
  }
}

output {
  elasticsearch { hosts => ["172.26.10.74:9200"] }
  stdout { codec => rubydebug }
}

Package version

logstash-2.3.2-1.noarch
elasticsearch-2.3.3-1.noarch
kibana-4.5.1-1.x86_64

handle generic (unwrapped) errors from Redis

such as IOError; otherwise these will propagate out and crash the pipeline:

[ERROR][logstash.javapipeline    ][mq][f669cc10d01329b13e936073a827724110f2c178c852cebfd2d8d3f80a58141c] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:loggermq
  Plugin: <LogStash::Inputs::Redis host=>"some.host", data_type=>"list", id=>"redis", key=>"foo", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_6227b920-d3b8-4380-a469-ee7d9949a208", enable_metric=>true, charset=>"UTF-8">, threads=>1, port=>6379, ssl=>false, db=>0, timeout=>5, batch_count=>125>
  Error: closed stream
  Exception: IOError
  Stack: org/jruby/RubyIO.java:1366:in `write_nonblock'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/connection/ruby.rb:76:in `block in write'
org/jruby/RubyKernel.java:1442:in `loop'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/connection/ruby.rb:75:in `write'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/connection/ruby.rb:374:in `write'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:289:in `block in write'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:268:in `io'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:287:in `write'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:245:in `block in process'
org/jruby/RubyArray.java:1809:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:239:in `block in process'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:389:in `ensure_connected'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:238:in `block in process'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:344:in `logging'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:237:in `process'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis/client.rb:131:in `call'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis.rb:2585:in `block in _eval'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis.rb:69:in `block in synchronize'
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/monitor.rb:235:in `mon_synchronize'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis.rb:69:in `synchronize'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis.rb:2584:in `_eval'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/redis-4.2.5/lib/redis.rb:2636:in `evalsha'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.5.1/lib/logstash/inputs/redis.rb:222:in `list_batch_listener'
org/jruby/RubyMethod.java:123:in `call'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.5.1/lib/logstash/inputs/redis.rb:208:in `list_runner'
org/jruby/RubyMethod.java:119:in `call'
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-redis-3.5.1/lib/logstash/inputs/redis.rb:104:in `run'
:1:in `'
/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:396:in `block in start_input'

Filebeat with Redis and Logstash

Hi
We are using Filebeat -> Redis cache -> Logstash -> file and Elasticsearch.

We need to understand: the Beats output writes to a list; is that a sorted list? And does logstash-input-redis support reading data as a sorted list?
