
LogStashLogger


LogStashLogger extends Ruby's Logger class to log directly to Logstash. It supports writing to various outputs in logstash JSON format. This is an improvement over writing to a file or syslog since Logstash can receive the structured data directly.

Features

  • Can write directly to a logstash listener over a UDP or TCP/SSL connection.
  • Can write to a file, Redis, Kafka, Kinesis, Firehose, a unix socket, syslog, stdout, or stderr.
  • Logger can take a string message, a hash, a LogStash::Event, an object, or a JSON string as input.
  • Events are automatically populated with message, timestamp, host, and severity.
  • Writes in logstash JSON format, but supports other formats as well.
  • Can write to multiple outputs.
  • Log messages are buffered and automatically re-sent if there is a connection problem.
  • Easily integrates with Rails via configuration.

Installation

Add this line to your application's Gemfile:

gem 'logstash-logger'

And then execute:

$ bundle

Or install it yourself as:

$ gem install logstash-logger

Usage Examples

require 'logstash-logger'

# Defaults to UDP on 0.0.0.0
logger = LogStashLogger.new(port: 5228)

# Specify host and type (UDP or TCP) explicitly
udp_logger = LogStashLogger.new(type: :udp, host: 'localhost', port: 5228)
tcp_logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5229)

# Other types of loggers
file_logger = LogStashLogger.new(type: :file, path: 'log/development.log', sync: true)
unix_logger = LogStashLogger.new(type: :unix, path: '/tmp/sock')
syslog_logger = LogStashLogger.new(type: :syslog)
redis_logger = LogStashLogger.new(type: :redis)
kafka_logger = LogStashLogger.new(type: :kafka)
stdout_logger = LogStashLogger.new(type: :stdout)
stderr_logger = LogStashLogger.new(type: :stderr)
io_logger = LogStashLogger.new(type: :io, io: io)

# Use a different formatter
cee_logger = LogStashLogger.new(
  type: :tcp,
  host: 'logsene-receiver-syslog.sematext.com',
  port: 514,
  formatter: :cee_syslog
)

custom_formatted_logger = LogStashLogger.new(
  type: :redis,
  formatter: MyCustomFormatter
)

lambda_formatted_logger = LogStashLogger.new(
  type: :stdout,
  formatter: ->(severity, time, progname, msg) { "[#{progname}] #{msg}" }
)

ruby_default_formatter_logger = LogStashLogger.new(
  type: :file,
  path: 'log/development.log',
  formatter: ::Logger::Formatter
)

# Send messages to multiple outputs. Each output will have the same format.
# Syslog cannot be an output because it requires a separate logger.
multi_delegating_logger = LogStashLogger.new(
  type: :multi_delegator,
  outputs: [
    { type: :file, path: 'log/development.log' },
    { type: :udp, host: 'localhost', port: 5228 }
  ])

# Balance messages between several outputs.
# Works the same as multi delegator, but randomly chooses an output to send each message.
balancer_logger = LogStashLogger.new(
  type: :balancer,
  outputs: [
    { type: :udp, host: 'host1', port: 5228 },
    { type: :udp, host: 'host2', port: 5228 }
  ])

# Send messages to multiple loggers.
# Use this if you need to send different formats to different outputs.
# If you need to log to syslog, you must use this.
multi_logger = LogStashLogger.new(
  type: :multi_logger,
  outputs: [
    { type: :file, path: 'log/development.log', formatter: ::Logger::Formatter },
    { type: :tcp, host: 'localhost', port: 5228, formatter: :json }
  ])

# The following messages are written to UDP port 5228:

logger.info 'test'
# {"message":"test","@timestamp":"2014-05-22T09:37:19.204-07:00","@version":"1","severity":"INFO","host":"[hostname]"}

logger.error '{"message": "error"}'
# {"message":"error","@timestamp":"2014-05-22T10:10:55.877-07:00","@version":"1","severity":"ERROR","host":"[hostname]"}

logger.debug message: 'test', foo: 'bar'
# {"message":"test","foo":"bar","@timestamp":"2014-05-22T09:43:24.004-07:00","@version":"1","severity":"DEBUG","host":"[hostname]"}

logger.warn LogStash::Event.new(message: 'test', foo: 'bar')
# {"message":"test","foo":"bar","@timestamp":"2014-05-22T16:44:37.364Z","@version":"1","severity":"WARN","host":"[hostname]"}

# Tagged logging
logger.tagged('foo') { logger.fatal('bar') }
# {"message":"bar","@timestamp":"2014-05-26T20:35:14.685-07:00","@version":"1","severity":"FATAL","host":"[hostname]","tags":["foo"]}

URI Configuration

You can use a URI to configure your logstash logger instead of a hash. This is useful in environments such as Heroku where you may want to read configuration values from the environment. The URI scheme is type://host:port/path?key=value. Some sample URI configurations are given below.

udp://localhost:5228
tcp://localhost:5229
unix:///tmp/socket
file:///path/to/file
redis://localhost:6379
kafka://localhost:9092
stdout:/
stderr:/

Pass the URI into your logstash logger like so:

# Read the URI from an environment variable
logger = LogStashLogger.new(uri: ENV['LOGSTASH_URI'])

Logstash Listener Configuration

In order for logstash to correctly receive and parse the events, you will need to configure and run a listener that uses the json_lines codec. For example, to receive events over UDP on port 5228:

input {
  udp {
    host => "0.0.0.0"
    port => 5228
    codec => json_lines
  }
}

File and Redis inputs should use the json codec instead. For more information read the Logstash docs.

See the samples directory for more configuration samples.

SSL

If you are using TCP, you can add an SSL certificate to the options hash on initialization.

LogStashLogger.new(type: :tcp, port: 5228, ssl_certificate: "/path/to/certificate.crt")

The SSL certificate and key can be generated using

openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash.key -out logstash.crt

You can also enable SSL without a certificate:

LogStashLogger.new(type: :tcp, port: 5228, ssl_enable: true)

Specify an SSL context to have more control over the behavior. For example, set the verify mode:

ctx = OpenSSL::SSL::SSLContext.new
ctx.set_params(verify_mode: OpenSSL::SSL::VERIFY_NONE)
LogStashLogger.new(type: :tcp, port: 5228, ssl_context: ctx)
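
For example, to verify the server against a specific CA bundle rather than disabling verification entirely (the CA bundle path below is illustrative):

```ruby
require 'openssl'

ctx = OpenSSL::SSL::SSLContext.new
ctx.set_params(verify_mode: OpenSSL::SSL::VERIFY_PEER)
ctx.ca_file = '/path/to/ca_bundle.crt' # illustrative path
```

Pass the resulting context via the ssl_context option in the same way as above.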

The following Logstash configuration is required for SSL:

input {
  tcp {
    host => "0.0.0.0"
    port => 5228
    codec => json_lines
    ssl_enable => true
    ssl_cert => "/path/to/certificate.crt"
    ssl_key => "/path/to/key.key"
  }
}

Hostname Verification

Hostname verification is enabled by default. Without further configuration, the hostname supplied to :host will be used to verify the server's certificate identity.

If you don't pass an :ssl_context or pass a falsey value to the :verify_hostname option, hostname verification will not occur.

Examples

Verify the hostname from the :host option

ctx = OpenSSL::SSL::SSLContext.new
ctx.cert = OpenSSL::X509::Certificate.new(File.read('/path/to/cert.pem'))
ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER

LogStashLogger.new \
  type: :tcp,
  host: 'logstash.example.com',
  port: 5228,
  ssl_context: ctx

Verify a hostname different from the :host option

LogStashLogger.new \
  type: :tcp,
  host: '1.2.3.4',
  port: 5228,
  ssl_context: ctx,
  verify_hostname: 'server.example.com'

Explicitly disable hostname verification

LogStashLogger.new \
  type: :tcp,
  host: '1.2.3.4',
  port: 5228,
  ssl_context: ctx,
  verify_hostname: false

Custom Log Fields

By default, LogStashLogger logs a JSON object with the format below.

{
  "message":"Some Message",
  "@timestamp":"2015-01-29T10:43:32.196-05:00",
  "@version":"1",
  "severity":"INFO",
  "host":"hostname"
}

Some applications may need to attach additional metadata to each message. The LogStash::Event can be manipulated directly by specifying a customize_event block in the LogStashLogger configuration.

config = LogStashLogger.configure do |config|
  config.customize_event do |event|
    event["other_field"] = "some_other_value"
  end
end

This configuration would result in the following output.

{
    "message": "Some Message",
    "@timestamp": "2015-01-29T10:43:32.196-05:00",
    "@version": "1",
    "severity": "INFO",
    "host": "hostname",
    "other_field": "some_other_value"
}

This block has full access to the event, so you can remove fields, modify existing fields, etc. For example, to remove the default timestamp:

config = LogStashLogger.configure do |config|
  config.customize_event do |event|
    event.remove('@timestamp')
  end
end

You can also customize events on a per-logger basis by passing a callable object (lambda or proc) to the customize_event option when creating a logger:

LogStashLogger.new(customize_event: ->(event){ event['other_field'] = 'other_field' })

Buffering / Automatic Retries

For devices that establish a connection to a remote service, log messages are buffered internally and flushed in a background thread. If there is a connection problem, the messages are held in the buffer and automatically resent until it is successful. Outputs that support batch writing (Redis and Kafka) will write log messages in bulk from the buffer. This functionality is implemented using a fork of Stud::Buffer. You can configure its behavior by passing the following options to LogStashLogger:

  • :buffer_max_items - Max number of items to buffer before flushing. Defaults to 50.
  • :buffer_max_interval - Max number of seconds to wait between flushes. Defaults to 5.
  • :drop_messages_on_flush_error - Drop messages when there is a flush error. Defaults to false.
  • :drop_messages_on_full_buffer - Drop messages when the buffer is full. Defaults to true.
  • :sync - Flush buffer every time a message is received (blocking). Defaults to false.
  • :buffer_flush_at_exit - Flush messages when exiting the program. Defaults to true.
  • :buffer_logger - Logger to write buffer debug/error messages to. Defaults to none.

You can turn buffering off by setting sync = true.

Please be aware of the following caveats to this behavior:

  • It's possible for duplicate log messages to be sent when retrying. For outputs like Redis and Kafka that write in batches, the whole batch could get re-sent. If this is a problem, you can add a UUID field to each event to uniquely identify it. You can either do this in a customize_event block, or by using logstash's UUID filter.
  • It's still possible to lose log messages. Ruby won't detect a TCP/UDP connection problem immediately. In my testing, it took Ruby about 4 seconds to notice the receiving end was down and start raising exceptions. Since logstash listeners over TCP/UDP do not acknowledge received messages, it's not possible to know which log messages to re-send.
  • When sync is turned off, Ruby may buffer data internally before writing to the IO device. This is why you may not see messages written immediately to a UDP or TCP socket, even though LogStashLogger's buffer is periodically flushed.

Full Buffer

By default, messages are discarded when the buffer gets full. This can happen if the output source is down for too long or log messages are being received too quickly. If your application suddenly terminates (for example, by SIGKILL or a power outage), the whole buffer will be lost.

You can make message loss less likely by increasing buffer_max_items (so that more events can be held in the buffer), and decreasing buffer_max_interval (to wait less time between flushes). This will increase memory pressure on your application as log messages accumulate in the buffer, so make sure you have allocated enough memory to your process.

If you don't want to lose messages when the buffer gets full, you can set drop_messages_on_full_buffer = false. Note that if the buffer gets full, any incoming log message will block, which could be undesirable.

Sync Mode

All logger outputs support a sync setting. This is analogous to the "sync mode" setting on Ruby IO objects. When set to true, output is immediately flushed and is not buffered internally. Normally, for devices that connect to a remote service, buffering is a good thing because it improves performance and reduces the likelihood of errors affecting the program. For these devices, sync defaults to false, and it is recommended to leave the default value. You may want to turn sync mode on for testing, for example if you want to see log messages immediately after they are written.

It is recommended to turn sync mode on for file and Unix socket outputs. This ensures that log messages from different threads or processes are written correctly on separate lines.

See #44 for more details.

Error Handling

If an exception occurs while writing a message to the device, the exception is logged using an internal logger. By default, this logs to $stderr. You can change the error logger by setting LogStashLogger.configuration.default_error_logger, or by passing your own logger object in the :error_logger configuration key when instantiating a LogStashLogger.

Logger Silencing

LogStashLogger provides support for Rails-style logger silencing. The implementation was extracted from Rails, but has no dependencies, so it can be used outside of a Rails app. The interface is the same as in Rails:

logger.silence(temporary_level) do
  ...
end

Custom Logger Class

By default, LogStashLogger creates a logger that extends Ruby's built in Logger class. If you require a different logger implementation, you can use a different class by passing in the class with the logger_class option.

Note that for syslog, the Syslog::Logger class is required and cannot be changed.

Rails Integration

Supports Rails 4.2 and 5.x.

By default, every Rails log message will be written to logstash in LogStash::Event JSON format.

For minimal, more-structured logstash events, try one of the following gems:

Currently these gems output a JSON string, which LogStashLogger then parses. Future versions of these gems could potentially have deeper integration with LogStashLogger (e.g. by directly writing LogStash::Event objects).

Rails Configuration

Add the following to your config/environments/production.rb:

Common Options

# Optional, Rails sets the default to :info
config.log_level = :debug

# Optional, Rails 4 defaults to true in development and false in production
config.autoflush_log = true

# Optional, use a URI to configure. Useful on Heroku
config.logstash.uri = ENV['LOGSTASH_URI']

# Optional. Defaults to :json_lines. If there are multiple outputs,
# they will all share the same formatter.
config.logstash.formatter = :json_lines

# Optional, the logger to log writing errors to. Defaults to logging to $stderr
config.logstash.error_logger = Logger.new($stderr)

# Optional, max number of items to buffer before flushing. Defaults to 50
config.logstash.buffer_max_items = 50

# Optional, max number of seconds to wait between flushes. Defaults to 5
config.logstash.buffer_max_interval = 5

# Optional, drop message when a connection error occurs. Defaults to false
config.logstash.drop_messages_on_flush_error = false

# Optional, drop messages when the buffer is full. Defaults to true
config.logstash.drop_messages_on_full_buffer = true

UDP

# Optional, defaults to '0.0.0.0'
config.logstash.host = 'localhost'

# Optional, defaults to :udp.
config.logstash.type = :udp

# Required, the port to connect to
config.logstash.port = 5228

TCP

# Optional, defaults to '0.0.0.0'
config.logstash.host = 'localhost'

# Required, the port to connect to
config.logstash.port = 5228

# Required
config.logstash.type = :tcp

# Optional, enables SSL
config.logstash.ssl_enable = true

Unix Socket

# Required
config.logstash.type = :unix

# Required
config.logstash.path = '/tmp/sock'

Syslog

If you're on Ruby 1.9, add Syslog::Logger v2 to your Gemfile:

gem 'SyslogLogger', '2.0'

If you're on Ruby 2+, Syslog::Logger is already built into the standard library.

# Required
config.logstash.type = :syslog

# Optional. Defaults to 'ruby'
config.logstash.program_name = 'MyApp'

# Optional default facility level. Only works in Ruby 2+
config.logstash.facility = Syslog::LOG_LOCAL0

Redis

Add the redis gem to your Gemfile:

gem 'redis'
# Required
config.logstash.type = :redis

# Optional, will default to the 'logstash' list
config.logstash.list = 'logstash'

# All other options are passed in to the Redis client
# Supported options include host, port, path, password, url
# Example:

# Optional, Redis will default to localhost
config.logstash.host = 'localhost'

# Optional, Redis will default to port 6379
config.logstash.port = 6379

Kafka

Add the poseidon gem to your Gemfile:

gem 'poseidon'
# Required
config.logstash.type = :kafka

# Optional, will default to the 'logstash' topic
config.logstash.path = 'logstash'

# Optional, will default to the 'logstash-logger' producer
config.logstash.producer = 'logstash-logger'

# Optional, will default to localhost:9092 host/port
config.logstash.hosts = ['localhost:9092']

# Optional, will default to 1s backoff
config.logstash.backoff = 1

Kinesis

Add the aws-sdk gem to your Gemfile:

# aws-sdk >= 3.0
gem 'aws-sdk-kinesis'

# aws-sdk < 3.0
gem 'aws-sdk'
# Required
config.logstash.type = :kinesis

# Optional, will default to the 'logstash' stream
config.logstash.stream = 'my-stream-name'

# Optional, will default to 'us-east-1'
config.logstash.aws_region = 'us-west-2'

# Optional, will default to the AWS_ACCESS_KEY_ID environment variable
config.logstash.aws_access_key_id = 'ASKASKHLD12341'

# Optional, will default to the AWS_SECRET_ACCESS_KEY environment variable
config.logstash.aws_secret_access_key = 'ASKASKHLD1234123412341234'

Firehose

Add the aws-sdk gem to your Gemfile:

# aws-sdk >= 3.0
gem 'aws-sdk-firehose'

# aws-sdk < 3.0
gem 'aws-sdk'
# Required
config.logstash.type = :firehose

# Optional, will default to the 'logstash' delivery stream
config.logstash.stream = 'my-stream-name'

# Optional, will default to AWS default region config chain
config.logstash.aws_region = 'us-west-2'

# Optional, will default to AWS default credential provider chain
config.logstash.aws_access_key_id = 'ASKASKHLD12341'

# Optional, will default to AWS default credential provider chain
config.logstash.aws_secret_access_key = 'ASKASKHLD1234123412341234'

File

# Required
config.logstash.type = :file

# Optional, defaults to Rails log path
config.logstash.path = 'log/production.log'

IO

# Required
config.logstash.type = :io

# Required
config.logstash.io = io

Multi Delegator

# Required
config.logstash.type = :multi_delegator

# Required
config.logstash.outputs = [
  {
    type: :file,
    path: 'log/production.log'
  },
  {
    type: :udp,
    port: 5228,
    host: 'localhost'
  }
]

Multi Logger

# Required
config.logstash.type = :multi_logger

# Required. Each logger may have its own formatter.
config.logstash.outputs = [
  {
    type: :file,
    path: 'log/production.log',
    formatter: ::Logger::Formatter
  },
  {
    type: :udp,
    port: 5228,
    host: 'localhost'
  }
]

Logging HTTP request data

In web applications, you can log data from HTTP requests (such as headers) using the RequestStore middleware. The following example assumes Rails.

# in Gemfile
gem 'request_store'
# in application.rb
LogStashLogger.configure do |config|
  config.customize_event do |event|
    event["session_id"] = RequestStore.store[:load_balancer_session_id]
  end
end
# in app/controllers/application_controller.rb
before_filter :track_load_balancer_session_id

def track_load_balancer_session_id
  RequestStore.store[:load_balancer_session_id] = request.headers["X-LOADBALANCER-SESSIONID"]
end

Cleaning up resources when forking

If your application forks (as is common with many web servers), you will need to clean up resources on your LogStashLogger instances. The instance method #reset is available for this purpose. Here is a sample configuration for several common web servers used with Rails:

Passenger:

::PhusionPassenger.on_event(:starting_worker_process) do |forked|
  Rails.logger.reset
end

Puma:

# In config/puma.rb
on_worker_boot do
  Rails.logger.reset
end

Unicorn:

# In config/unicorn.rb
after_fork do |server, worker|
  Rails.logger.reset
end

Ruby Compatibility

Verified to work with:

  • MRI Ruby 2.2 - 2.5
  • JRuby 9.x
  • Rubinius

Ruby versions < 2.2 are EOL'ed and no longer supported.

What type of logger should I use?

It depends on your specific needs, but most applications should use the default (UDP). Here are the advantages and disadvantages of each type:

  • UDP is faster than TCP because it's asynchronous (fire-and-forget). However, this means that log messages could get dropped. This is okay for many applications.
  • TCP verifies that every message has been received via two-way communication. It also supports SSL for secure transmission of log messages over a network. This could slow your app down to a crawl if the TCP listener is under heavy load.
  • A file is simple to use, but you will have to worry about log rotation and running out of disk space.
  • Writing to a Unix socket is faster than writing to a TCP or UDP port, but only works locally.
  • Writing to Redis is good for distributed setups that generate tons of logs. However, you will have another moving part and have to worry about Redis running out of memory.
  • Writing to stdout is only recommended for debugging purposes.

For a more detailed discussion of UDP vs TCP, I recommend reading this article: UDP vs. TCP

Troubleshooting

Logstash never receives any logs

If you are using a device backed by a Ruby IO object (such as a file, UDP socket, or TCP socket), please be aware that Ruby keeps its own internal buffer. Despite the fact that LogStashLogger buffers messages and flushes them periodically, the data written to the IO object can be buffered by Ruby internally indefinitely, and may not even write until the program terminates. If this bothers you or you need to see log messages immediately, your only recourse is to set the sync: true option.

JSON::GeneratorError

Your application is probably attempting to log data that is not encoded in a valid way. When this happens, Ruby's standard JSON library will raise an exception. You may be able to overcome this by swapping out a different JSON encoder such as Oj. Use the oj_mimic_json gem to use Oj for JSON generation.

No logs getting sent on Heroku

Heroku recommends installing the rails_12factor gem so that logs get sent to STDOUT. Unfortunately, this overrides LogStashLogger, preventing logs from being sent to their configured destination. The solution is to remove rails_12factor from your Gemfile.

Logging eventually stops in production

This is most likely not a problem with LogStashLogger, but rather a different gem changing the log level of Rails.logger. This is especially likely if you're using a threaded server such as Puma, since gems often change the log level of Rails.logger in a non thread-safe way. See #17 for more information.

Sometimes two lines of JSON log messages get sent as one message

If you're using UDP output and writing to a logstash listener, you are most likely encountering a bug in the UDP implementation of the logstash listener. There is no known fix at this time. See #43 for more information.

Errno::EMSGSIZE - Message too long

A known drawback of using TCP or UDP is the 65535-byte limit on total message size. To work around this issue, you can truncate the message by setting a max message size:

LogStashLogger.configure do |config|
  config.max_message_size = 2000
end

This truncates only the message field of the LogStash::Event, so make sure you set the max message size significantly less than 65535 bytes to leave room for the other fields.

Breaking changes

Version 0.25+

Rails 3.2, MRI Ruby < 2.2, and JRuby 1.7 are no longer supported, since they have been EOL'ed. If you are on an older version of Ruby, you will need to use 0.24 or below.

Version 0.5+

  • The source event key has been replaced with host to better match the latest logstash.
  • The (host, port, type) constructor has been deprecated in favor of an options hash constructor.

Version 0.4+

LogStash::Event uses the v1 format starting with logstash version 1.2. If you're using the v1 format, you'll need to install LogStashLogger version 0.4+. This is not backwards compatible with the old LogStash::Event v1.1.5, which uses the v0 format.

Version 0.3+

Earlier versions of this gem (<= 0.2.1) only implemented a TCP connection. Newer versions (>= 0.3) also implement UDP and use it as the new default. If you are using the default constructor and still require TCP, you should add an additional argument:

# Now defaults to UDP instead of TCP
logger = LogStashLogger.new('localhost', 5228)
# Explicitly specify TCP instead of UDP
logger = LogStashLogger.new('localhost', 5228, :tcp)

Contributors

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request


logstash-logger's Issues

How to Satisfy this activerecord-session_store Silencer Module Requirement?

Hello,

We are using logstash-logger in Rails 4.2.5 with MRI Ruby 2.2.3p173

We use activerecord-session_store (1.0.0 but saw same issue with previous version), which indicates this in the readme:

Please note that you will need to manually include the silencer module to your custom logger if you are using a logger other than Logger and Syslog::Logger and their subclasses:

MyLogger.send :include, ActiveRecord::SessionStore::Extension::LoggerSilencer
This silencer is being used to silence the logger and not leaking private information into the log, and it is required for security reason.

We configure LogStashLogger like so in /config/environments/environment-name.rb:

  config.logger = LogStashLogger.new(
  type: :multi_delegator,
  outputs: [
    { type: :file, path: "log/#{Rails.env}.log" },
    { host: ENV["ELK_LOG_HOST"], port: ENV["ELK_LOG_PORT"], ssl_certificate: ENV["ELK_LOG_CERT"]}
  ])

We have tried many permutations of the LogStashLogger.send :include command but all of them fail during deployment. Here are the commands we have tried:

LogStashLogger.send :include, ActiveRecord::SessionStore::Extension::LoggerSilencer

config.logger.send :include, ActiveRecord::SessionStore::Extension::LoggerSilencer

Logging.logger.send :include, ActiveRecord::SessionStore::Extension::LoggerSilencer

Rails.logger.send :include, ActiveRecord::SessionStore::Extension::LoggerSilencer

The top line (which seems the most correct) gives us this error:
Error ID: 2fd1ffbb
Error details saved to: /tmp/passenger-error-22Gh4T.html
Message from application: undefined method `level' for module `LogStashLogger' (NameError)
/var/app/current/vendor/bundle/ruby/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:16:in `alias_method'

Has anyone successfully included ActiveRecord::SessionStore::Extension::LoggerSilencer into LogStashLogger?

Messages not sending while using rails runner

Hello,

The simple method below sends the message while in rails console, but does not send while using rails runner.

  def logstash_test
    logger = LogStashLogger.new(type: :tcp, host: 'localhost', port: 5231, error_logger: Logger.new(STDOUT))
    logger.info(test: Time.now)
    p logger
  end

When printing the logger instance via rails runner, it seems that the message is pending:

@buffer_state={:pending_items=>{nil=>["{\"test\":\"2016-08-09T12:58:43.879-04:00\",\"@timestamp\":\"2016-08-09T12:58:43.879-04:00\",\"@version\":\"1\",\"severity\":\"INFO\",\"host\":\"localhost\"}\n"]}`...

This may simply be an issue with rails runner / spring, but I thought I'd post it here. Thanks!

Logging to kafka

Kafka input/output is moving into Logstash core with an upcoming release.
Do you have any plans to implement logging to Kafka in the same way logging to Redis is available now?

crashes when invalid UTF-8 is passed

It's not a logstash-logger bug per se...

See this:

[3] pry(main)> LogStashLogger.new(type: :file, path: '/tmp/foo.json').info({"a\xC3" => {"pez" => [ 'gatito' ]}})
JSON::GeneratorError: partial character in source, but hit end

I've 'fixed' it by scrubbing the input with String#scrub:

  module LogStashEventExtension
    module Initializer
      def initialize(data={})
        data = _scrub_obj(data)
        super(data)
      end

      def _scrub_obj(o)
        ret = o.class.new
        if o.is_a?(Hash)
          o.each{|k, v|
            v = _scrub(v)
            k = _scrub(k)
            ret[k]=v
          }
        else
          o.each.with_index{| v,k|
            v = _scrub(v)
            ret[k]=v
          }
        end
        ret
      end

      def _scrub(v)
          if v.is_a?(String)
            v = v.dup.scrub
          elsif v.respond_to?(:each)
            v = _scrub_obj(v)
          end
          v
      end
    end

    def self.included(klass)
      klass.send :prepend, Initializer
    end
  end

logger will hang programs indefinitely

logger = LogStashLogger.new(
  type: :unix,
  path: '/path/to/missing/socket',
  program_name: 'ruby-example',
  formatter: :cee_syslog,
  sync: true
)

logger.info 'hello world'
p "got here"

Given the above logger, we'll never get to "got here". Instead you'll likely get:

LogStashLogger::Device::Unix - Errno::ENOENT - No such file or directory - connect(2) for "/path/to/missing/socket"

forever and ever.

I found this when integrating LogStashLogger with a Sinatra application where sync was not specified. It would still indefinitely hang the HTTP requests for some reason.

The underlying cause seems to be the retry implementation in stud. This retry logic should be configurable; at the very least, in the worst case it shouldn't retry indefinitely.

logger consumes all thread resources

I just found a problem when using the logger with Logstash connected over TCP, with Elasticsearch as Logstash's storage.

On my website, every time a user opens a page, I broadcast an event asynchronously (using the wisper and wisper-celluloid gems) to log the user access to Logstash.

Under the hood, every time you broadcast an event, wisper starts a new celluloid actor, which spawns a new thread, and in that actor thread it tries to send the log to Logstash.

I don't want to block the website request while sending the log; that's why I use wisper and celluloid.

On my server, something accidentally stopped the Elasticsearch service while Logstash was still running. Then, 30 minutes later, my server went down: all the thread resources on the server had been used up.

I tried to find which threads were not getting killed, and I found they were logger threads. This is the backtrace:

/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/monitor.rb:185:in `lock'
/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/monitor.rb:185:in `mon_enter'
/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/monitor.rb:209:in `mon_synchronize'
/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/logger.rb:559:in `write'
/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/logger.rb:378:in `add'
/home/myweb/.rvm/rubies/ruby-2.1.5/lib/ruby/2.1.0/logger.rb:434:in `info'
/home/myweb/web/app/wisper_subscribers/Logger.rb:114:in `log_user_access'

Thousands of threads are hanging at the lock method.

After I proved that the stopped Elasticsearch service was the root of the problem, I started Elasticsearch again, and every blocked thread began finishing its job. So as long as Logstash and Elasticsearch are running normally, everything is fine.
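One mitigation sketch for this failure mode (entirely illustrative; this class is not part of logstash-logger's API): put a bounded queue in front of the logger with a single worker thread, so when the backend stalls, callers drop messages instead of piling up on the logger's mutex.

```ruby
require 'logger'

# Callers push onto a SizedQueue and return immediately; only the single
# worker thread ever touches the downstream logger's mutex. When the queue
# fills up (backend down), messages are dropped rather than blocking callers.
class AsyncLogger
  def initialize(downstream, max_queue: 1000)
    @downstream = downstream
    @queue = SizedQueue.new(max_queue)
    @worker = Thread.new do
      while (msg = @queue.pop) # nil acts as the shutdown sentinel
        @downstream.info(msg)
      end
    end
  end

  # Returns false (message dropped) instead of blocking when the queue is full.
  def info(msg)
    @queue.push(msg, true) # non-blocking push
    true
  rescue ThreadError
    false
  end

  def close
    @queue.push(nil)
    @worker.join
  end
end
```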

Failed to configure LogStashLogger

Hi @dwbutler,
I have a strange issue with LogStashLogger.
I tried to configure it to add some custom fields, and my app fails to start with the following error:

/media/sf_offside/pspgateway/config/environments/development.rb:37:in `block in <top (required)>': undefined method `configure' for LogStashLogger:Module (NoMethodError)
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/railtie/configurable.rb:24:in `class_eval'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/railtie/configurable.rb:24:in `configure'
    from /media/sf_offside/pspgateway/config/environments/development.rb:2:in `<top (required)>'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/activesupport-3.2.13/lib/active_support/dependencies.rb:251:in `require'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/activesupport-3.2.13/lib/active_support/dependencies.rb:251:in `block in require'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/activesupport-3.2.13/lib/active_support/dependencies.rb:236:in `load_dependency'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/activesupport-3.2.13/lib/active_support/dependencies.rb:251:in `require'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/engine.rb:571:in `block in <class:Engine>'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/initializable.rb:30:in `instance_exec'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/initializable.rb:30:in `run'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/initializable.rb:55:in `block in run_initializers'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/initializable.rb:54:in `each'
    from /home/retgoat/.rvm/gems/ruby-2.1.2/gems/railties-3.2.13/lib/rails/initializable.rb:54:in `run_initializers'

Here is my Gemfile

gem 'logstash-logger', '~>0.8.0'

Here is the initialization

#config/environments/development.rb
require 'logstash-logger'

#some code
config.logger = LogStashLogger.new([{type: :file, path: 'log/development.log'}, {type: :udp, host: 'localhost', port: 5923}])
  LogStashLogger.configure do |config|
    config.customize_event do |event|
      event["@caller"] = caller[6] ? caller[6].split("/").last : "UNDEFINED"
    end
  end

#some code

But if I try to use a cloned logstash-logger like this:

#Gemfile
gem 'logstash-logger', :path => '../logstash-logger'

everything works like a charm!

Rails 3.2.13
Ruby 2.1.2

Could you please help with this?
Thanks!

Multiple outputs do not work

Hi!

I'm trying to use logstash-logger to send my Rails logs to a remote Logstash server and also write them to a file, as usual.

I've added to application.rb

    config.logstash = [
      {
        type: :file,
        path: 'log/production.log'
      },
      {
        type: :tcp,
        port: 5000,
        host: 'localhost'
      }
    ]

And receive

13:07:04 worker.1 | no implicit conversion of Symbol into Integer
13:07:04 worker.1 | /Users/smekhonoshin/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/logstash-logger-0.7.0/lib/logstash-logger/device.rb:31:in `[]'
13:07:04 worker.1 | /Users/smekhonoshin/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/logstash-logger-0.7.0/lib/logstash-logger/device.rb:31:in `parse_uri_config'
13:07:04 worker.1 | /Users/smekhonoshin/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/logstash-logger-0.7.0/lib/logstash-logger/railtie.rb:9:in `setup'
13:07:04 worker.1 | /Users/smekhonoshin/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/logstash-logger-0.7.0/lib/logstash-logger/railtie.rb:33:in `block in <class:Railtie>'

Does anybody know how to solve this issue?
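One workaround sketch for 0.7.0 (assembled from this README's own multi-output example; whether it fits your railtie setup is an assumption to verify): build the logger yourself and assign it to config.logger, bypassing the railtie code path that fed the array through parse_uri_config.

```ruby
# config/application.rb -- construct the multi-output logger directly
# instead of setting config.logstash:
require 'logstash-logger'

config.logger = LogStashLogger.new([
  { type: :file, path: 'log/production.log' },
  { type: :tcp,  port: 5000, host: 'localhost' }
])
```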

Rails 3.0.X integration ?

Since ActiveSupport::TaggedLogging was introduced in Rails 3.2.x (I guess), and since ActiveSupport::BufferedLogger and Logger have different constructor expectations, how do we go about Rails 3.0.x integration for logstash-logger?

use an http request header value as a custom value

we are using logstash-logger like this

config = LogStashLogger.configure do |config|
  config.customize_event do |event|
    event["other_field"] = "some_other_value"
  end
end

Is there any way to use an HTTP request header in place of "some_other_value" ?
eg. request.headers["X-LOADBALANCER-SESSIONID"]
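One common pattern (a sketch; the middleware class and the thread-local key are my own illustrative names, not logstash-logger API): capture the header into a thread-local in a Rack middleware, then read it back inside customize_event.

```ruby
# Rack middleware that stashes the header for the duration of the request.
class CaptureSessionHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    Thread.current[:lb_session_id] = env['HTTP_X_LOADBALANCER_SESSIONID']
    @app.call(env)
  ensure
    # Clear it so pooled threads don't leak the value into other requests.
    Thread.current[:lb_session_id] = nil
  end
end

# Then, in the logger configuration:
# LogStashLogger.configure do |config|
#   config.customize_event do |event|
#     event['lb_session_id'] = Thread.current[:lb_session_id]
#   end
# end
```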

redis uri is not parsed correctly

When you use a redis URI such as :uri => 'redis://localhost:6379/0', the connection string gets parsed as {:type => 'redis', :host => 'localhost', :port => 6379, :path => '/0'}, which is then passed to the redis client. Unfortunately, I think the redis client gets confused by the :path parameter, since that can specify a unix socket. It seems like for redis you are better off passing the entire URI as the :url option to the redis client. (Note that it doesn't matter whether you specify the redis db in the URI or not.)

FWIW, here's the code we are trying to use:

My::Application.configure do
  config.logstash = [{
                         :uri => 'redis://localhost:6379/0'
                     }, {
                         :type => :file
                     }]
end
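To see the mismatch concretely, here is a sketch of how a generic URI parse decomposes the connection string (reproducing the report's observation) versus the suggested alternative of forwarding the whole string as :url. Whether your logstash-logger version forwards :url to the redis client is an assumption to verify; the helper names are illustrative.

```ruby
require 'uri'

# What generic URI decomposition yields -- the '/0' db selector ends up in
# :path, which the redis client can mistake for a unix socket path:
def decomposed_redis_config(uri)
  parsed = URI.parse(uri)
  { type: 'redis', host: parsed.host, port: parsed.port, path: parsed.path }
end

# The suggested alternative: pass the string through untouched as :url.
def url_redis_config(uri)
  { type: :redis, url: uri }
end
```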

Error when Errno::ECONNREFUSED - Connection refused

Hello,
I'm trying to use the latest update (0.16.0 or 0.17.0) and I'm having problems when Logstash is down and the process (a Sinatra app) can't reach it. In general, I see the following log:
E, [2016-07-25T11:37:56.750065 #4282] ERROR -- : [LogStashLogger::Device::UDP] Errno::ECONNREFUSED - Connection refused
But sometimes, when certain log messages are written, the process stops with the following message:

/home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:248:in `unlock': Attempt to unlock a mutex which is not locked (ThreadError)
        from /home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:248:in `ensure in buffer_flush'
        from /home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:248:in `buffer_flush'
        from /home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:112:in `block (2 levels) in buffer_initialize'
        from /home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:110:in `loop'
        from /home/ubuntu/.rvm/gems/ruby-2.2.5/gems/stud-0.0.22/lib/stud/buffer.rb:110:in `block in buffer_initialize'

I'm trying to reproduce the same behaviour with a small Sinatra app, but so far I can't.

Support syslog as an output 'device'

I'd like to use logstash-logger but need to be able to send the JSON logs to the local syslogd (which then forwards to a remote logstash and does local disk-assisted queuing if the remote logstash is down).

Continue writing to log file?

Is it possible to continue writing to the regular environment.log file, in the default non-json rails log format?

we use logstash logger like this
config.logstash = [ { type: :file, path: 'log/development.log' }, { type: :file, path: '/var/log/development.log' } ]

Too many open files - socket(2)

Hello!

Yesterday I switched an application in production to Puma. This morning the application is down, and about every second or so this message gets written to the Puma error log:

LogStashLogger::Device::UDP - Errno::EMFILE - Too many open files - socket(2) - udp

Looks like this gem is leaking open sockets.

Options for Mitigating Input Downtime

We recently experienced an issue where our Redis input for Logstash went down and our app became unresponsive, a scenario outlined in the README. As noted in the README, we can bump up the values for the buffer configuration, but it doesn't seem that this will prevent the issue from recurring in the event of another significant logging-infrastructure downtime event.

There's the sync option, but, based on the documentation, I'm unclear on whether this would have prevented this issue from occurring.

It would be great if there was a way for the logger to flush the buffer if it receives a connection error. We'd much rather lose logs than take downtime. Is this something that would be possible? I'd be more than willing to work on a patch and submit a PR if you thought it was possible and worthwhile and could give a little direction.
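For reference, a configuration sketch in that direction appears later in this document (in the "Stack trace on losing logstash connection" report): bound the buffer and tell it to drop messages when a flush fails, trading log loss for availability. Whether these options exist in your installed gem version is an assumption to check; the values here are illustrative.

```ruby
logger = LogStashLogger.new(
  type: :tcp,
  host: ENV['LOGSTASH_HOST'],
  port: ENV['LOGSTASH_PORT'],
  buffer_max_items: 1000,              # cap memory growth while the input is down
  drop_messages_on_flush_error: true   # lose logs rather than block the app
)
```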

Unable to use Ruby Logger as Buffer logger

I have been trying to debug some issues with logs not being flushed properly and have been unable to set the options[:logger] when constructing a UDP logger.

One issue seems to be that you can't (from the public interface) set the logger for the Buffer to use. But even if I manually edit the source in place and specify its value as Logger.new($stderr), the method signatures of the logging methods on the standard logger don't match the call:

 @buffer_config[:logger].debug("Flushing output",
          :outgoing_count => @buffer_state[:outgoing_count],
          :time_since_last_flush => time_since_last_flush,
          :outgoing_events => @buffer_state[:outgoing_items],
          :batch_timeout => @buffer_config[:max_interval],
          :force => force,
          :final => final
        ) if @buffer_config[:logger]

AFAIK the standard Ruby logger doesn't have a method with this signature: https://ruby-doc.org/stdlib-2.2.0/libdoc/logger/rdoc/Logger.html#method-i-info

Am I missing something?
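You're right that ::Logger#debug doesn't take extra positional arguments. One workaround sketch (the shim class is my own, not part of the gem): a thin adapter that accepts the stud-style call and folds the trailing fields hash into the message string.

```ruby
require 'logger'

# Adapter: accepts stud's `debug("msg", :key => value, ...)` calling style
# and forwards a single formatted string to a standard ::Logger.
class BufferLoggerShim
  def initialize(logger)
    @logger = logger
  end

  %i[debug info warn error].each do |level|
    define_method(level) do |message, *fields|
      detail = fields.map(&:inspect).join(' ')
      line = detail.empty? ? message.to_s : "#{message} #{detail}"
      @logger.public_send(level, line)
    end
  end
end

# Usage (inside the Buffer): @buffer_config[:logger] = BufferLoggerShim.new(Logger.new($stderr))
```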

Bug: LogStashLogger::Device::UDP - Errno::EMSGSIZE - Message too long

Related to #75 #71

I have spent the whole day figuring out what was making requests hang forever on my server, and I've finally found out that it is a bug in this gem.

In fact I've been able to reproduce the error from the rails console.

The error only happens when an SQL query is very long (and thus so is the log message produced).

You get many of the following errors if the log message is long:

Bug: LogStashLogger::Device::UDP - Errno::EMSGSIZE - Message too long
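A mitigation sketch until the device handles this gracefully (the helper name and the 8 KB ceiling are assumptions; the real maximum depends on your network path): truncate oversized messages before they reach the UDP device, e.g. from a formatter or customize_event hook.

```ruby
# UDP datagrams have a hard size ceiling, so clamp the payload ourselves.
MAX_UDP_LOG_BYTES = 8 * 1024 # assumed budget; tune for your network

def truncate_for_udp(message, limit = MAX_UDP_LOG_BYTES)
  return message if message.bytesize <= limit
  # Reserve three bytes for a visible truncation marker.
  message.byteslice(0, limit - 3) + '...'
end
```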

Excess string allocation with LogStash::Event

We're seeing pretty high memory allocation due to the logstash-event gem.

I think this is resolved in newer versions of the logstash-event gem, but I see that the latest version of logstash-event on rubygems is still 1.2. The github repository has newer versions. Do you know anything more about this?

2.2.2 :010 > report = MemoryProfiler.report do
2.2.2 :011 >     Rails.logger.info("log a line")
2.2.2 :012?>   end
2.2.2 :016 > report.pretty_print
Total allocated: 4280138 bytes (88589 objects)
Total retained:  7193 bytes (123 objects)

allocated memory by gem
-----------------------------------
   2813600  logstash-event-1.2.02
    534538  redis-3.2.1
    436400  activesupport-4.2.1
    140400  logstash-logger-6a0aad203ce6
    139200  json-1.8.3
     94400  stud-0.0.20
     85600  ruby-2.2.2/lib

allocated memory by file
-----------------------------------
   2813600  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/logstash-event-1.2.02/lib/logstash/event.rb
    462538  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/redis-3.2.1/lib/redis/connection/ruby.rb
    233200  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/activesupport-4.2.1/lib/active_support/json/encoding.rb
    183200  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/activesupport-4.2.1/lib/active_support/core_ext/object/json.rb
    139200  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/json-1.8.3/lib/json/common.rb

...

allocated memory by location
-----------------------------------
   1808000  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/logstash-event-1.2.02/lib/logstash/event.rb:126
    452000  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/logstash-event-1.2.02/lib/logstash/event.rb:270
    448000  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/logstash-event-1.2.02/lib/logstash/event.rb:269
    402138  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/redis-3.2.1/lib/redis/connection/ruby.rb:49
    116800  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/json-1.8.3/lib/json/common.rb:223
    115200  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/activesupport-4.2.1/lib/active_support/core_ext/object/json.rb:159

...

allocated memory by class
-----------------------------------
   3331650  String
    340600  Array
    183200  Hash
    164760  Thread::Backtrace
     80000  ActiveSupport::JSON::Encoding::JSONGemEncoder::EscapedString
     62400  RubyVM::Env
     41600  Proc
     38400  Time
     22400  JSON::Ext::Generator::State
      7128  IO::EAGAINWaitReadable
      4000  LogStash::Event
      4000  ActiveSupport::JSON::Encoding::JSONGemEncoder

allocated objects by gem
-----------------------------------
     68900  logstash-event-1.2.02
      8400  activesupport-4.2.1
      3089  redis-3.2.1
      2900  json-1.8.3
      2000  ruby-2.2.2/lib
      1300  logstash-logger-6a0aad203ce6
      1100  stud-0.0.20
       500  card_creation/lib
       400  other

allocated objects by file
-----------------------------------
     68900  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/logstash-event-1.2.02/lib/logstash/event.rb
      4200  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/activesupport-4.2.1/lib/active_support/json/encoding.rb
      3700  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/activesupport-4.2.1/lib/active_support/core_ext/object/json.rb
      2900  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/json-1.8.3/lib/json/common.rb
      1700  /Users/.../.rvm/rubies/ruby-2.2.2/lib/ruby/2.2.0/time.rb
      1689  /Users/.../.rvm/gems/ruby-2.2.2@ccc/gems/redis-3.2.1/lib/redis/connection/ruby.rb

Continue writing to log file?

Is it possible to have Rails continue to write to the log/environment.log file, but also send the data to logstash?

Thanks!

format_message question

Hello again!

I wanted to ask a question about the background of your design.

As you may recall, you suggested that if I just logged a bunch of stuff to message, then we could json filter on that and get the fields we want. Now that might not work for me, but here's what I'm seeing:

https://github.com/dwbutler/logstash-logger/blob/master/lib/logstash-logger/logger.rb

 def format_message(severity, time, progname, message) # <-- pass { xxx } to message
    data = message
    if data.is_a?(String) && data[0] == '{'
      data = (JSON.parse(message) rescue nil) || message
    end

    event = case data
    when LogStash::Event
      data.clone
    when Hash # <---
      event_data = {
        "@tags" => [],
        "@fields" => {},
        "@timestamp" => time
      }
      LOGSTASH_EVENT_FIELDS.each do |field_name|
        if field_data = data.delete(field_name)
          event_data[field_name] = field_data
        end
      end
      event_data["@fields"].merge!(data)
      LogStash::Event.new(event_data)
    when String
      LogStash::Event.new("message" => data, "@timestamp" => time)
    end

If I send a hash to message, execution flows to "when Hash" and it gets stuffed into @fields anyway.

Actually, that is just fine with me. However, I was curious what your original intention was behind @tags and @fields.

Was there a certain convention that you were following? Or was there a program you were integrating with that used those @tags/@fields names?

Personally for me, I was thinking maybe some different naming would work better for me, but before I forge my own way, I at least wanted to ask what led you to this point so far.

Thanks!

Not working in production

Hello!
I'm trying to use logstash-logger, but I have a problem configuring it in production. In the development environment everything works great, but in production I'm not able to send my logs to the Logstash server.

My production.rb file looks like that:

Rails.application.configure do
    # Settings specified here will take precedence over those in config/application.rb.

    # Code is not reloaded between requests.
    config.cache_classes = true

    # Eager load code on boot. This eager loads most of Rails and
    # your application in memory, allowing both threaded web servers
    # and those relying on copy on write to perform better.
    # Rake tasks automatically ignore this option for performance.
    config.eager_load = true

    # Full error reports are disabled and caching is turned on.
    config.consider_all_requests_local       = false
    config.action_controller.perform_caching = true

    # Enable Rack::Cache to put a simple HTTP cache in front of your application
    # Add `rack-cache` to your Gemfile before enabling this.
    # For large-scale production use, consider using a caching reverse proxy like nginx, varnish or squid.
    # config.action_dispatch.rack_cache = true

    # Disable Rails's static asset server (Apache or nginx will already do this).
    config.serve_static_assets = false

    # Compress JavaScripts and CSS.
    config.assets.js_compressor = :uglifier
    # config.assets.css_compressor = :sass

    # Do not fallback to assets pipeline if a precompiled asset is missed.
    config.assets.compile = false

    # Generate digests for assets URLs.
    config.assets.digest = true

    # Version of your assets, change this if you want to expire all your assets.
    config.assets.version = '1.0'

    # Specifies the header that your server uses for sending files.
    # config.action_dispatch.x_sendfile_header = "X-Sendfile" # for apache
    # config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for nginx

    # Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
    # config.force_ssl = true

    # Set to :debug to see everything in the log.
    config.log_level = :info

    # Prepend all log lines with the following tags.
    # config.log_tags = [ :subdomain, :uuid ]

    # Use a different logger for distributed setups.
    # config.logger = ActiveSupport::TaggedLogging.new(SyslogLogger.new)

    # Use a different cache store in production.
    # config.cache_store = :mem_cache_store

    # Enable serving of images, stylesheets, and JavaScripts from an asset server.
    # config.action_controller.asset_host = "http://assets.example.com"

    # Precompile additional assets.
    # application.js, application.css, and all non-JS/CSS in app/assets folder are already added.
    # config.assets.precompile += %w( search.js )

    # Ignore bad email addresses and do not raise email delivery errors.
    # Set this to true and configure the email server for immediate delivery to raise delivery errors.
    # config.action_mailer.raise_delivery_errors = false

    # Enable locale fallbacks for I18n (makes lookups for any locale fall back to
    # the I18n.default_locale when a translation cannot be found).
    config.i18n.fallbacks = true

    # Send deprecation notices to registered listeners.
    config.active_support.deprecation = :notify

    # Disable automatic flushing of the log to improve performance.
    # config.autoflush_log = false

    # Use default logging formatter so that PID and timestamp are not suppressed.
    # config.log_formatter = ::Logger::Formatter.new

    # Do not dump schema after migrations.
    config.active_record.dump_schema_after_migration = false

    config.action_mailer.default_url_options = { host: "domain.com" }
    config.action_mailer.asset_host          = "http://domain.com"
    config.action_mailer.delivery_method     = :smtp
    config.action_mailer.smtp_settings = {
      authentication:       :plain,
      address:              ENV["MAILGUN_SMTP_SERVER"],
      user_name:            ENV["MAILGUN_SMTP_LOGIN"],
      password:             ENV["MAILGUN_SMTP_PASSWORD"],
      port:                 ENV["MAILGUN_SMTP_PORT"],
      domain:               "domain.com"
    }

    config.logstasher.enabled = true
    config.logstasher.log_controller_parameters = true
    config.logstasher.suppress_app_log = true

    config.logstash.host = 'domain.com'
    config.logstash.port = 5228
    config.logstash.type = :udp
  end

Do you have any idea what could be causing this issue? Any clues or tips on where to look? Thanks in advance.

Version 0.14.0 broke server startup on windows

In logger.rb, a commit added the Syslog module and made config.logstash.type = :syslog required (at least according to the README.md):

module Syslog
autoload :Logger, 'syslog/logger'
end

#Required
config.logstash.type = :syslog

For Windows server users this is a problem, since syslog is not supported natively. When attempting to start up a server, I receive the following error:

/Ruby21-x64/lib/ruby/gems/2.1.0/gems/logstash-logger-0.15.1/lib/logstash-logger/logger.rb:5:in `<top (required)>': cannot load such file -- syslog (LoadError)

If we do want support for logging to syslog, requiring the module needs to happen conditionally based on the config, rather than stating that the config must be of this type. On the project I'm on, we use config.logstash.type = :file.

And yes, for the record, we do use Linux servers, where this isn't an issue; however, some of our local environments are on Windows.
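A conditional-load sketch of the kind suggested above (the helper name is mine): attempt the require and fall back cleanly, so platforms without syslog, like Windows, can still boot with a file device.

```ruby
# Feature-detect syslog instead of requiring it unconditionally at load time.
def syslog_available?
  require 'syslog'
  true
rescue LoadError
  false
end

# device_type = syslog_available? ? :syslog : :file
```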

License missing from gemspec

RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.

via e.g.

spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']

Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.

There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.

I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!

Appendix:

If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved'-- denying others all rights to use of the code.
Here's a list of the license names I've found and their frequencies

p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata,too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.

LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe

I'm using gem logstash-logger version 0.8.0 in my Rails 4.1 app. My config:

    ...
    config.log_level = :info
    config.assets.debug = false

    config.lograge.enabled = true
    config.lograge.formatter = Lograge::Formatters::Logstash.new

    # add time to lograge
    config.lograge.custom_options = lambda do |event|
      {:deploy_name => 'my-app'}
    end

    # Optional, Rails 4 defaults to true in development and false in production
    config.autoflush_log = true

    config.logstash = [
        {
            type: :file,
            path: 'log/staging.log'
        },
        {
            type: :tcp,
            port: ENV['LS_PORT'],
            host: ENV['LS_HOST']
        }
    ]
   ...

When I deploy it all look fine but sometime the error console log shows this messages:

LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe
LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe
LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe
LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe
LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe
LogStashLogger::Device::TCP - Errno::EPIPE - Broken pipe

I suppose the problem is a lost TCP connection or an overloaded TCP port... I don't know. Does anybody have an idea about it?

Log to STDERR

The documentation says that it is possible to log to stdout. Is it also possible to configure the logger that it logs to stderr?
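Yes — the usage examples at the top of this README already include a stderr device:

```ruby
require 'logstash-logger'

stderr_logger = LogStashLogger.new(type: :stderr)
```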

ERROR -- : [LogStashLogger::Device::TCP] OpenSSL::SSL::SSLError - SSL_connect SYSCALL returned=5 errno=0 state=unknown state

my client code:

require 'logstash-logger'
require 'sys/filesystem'

logger = LogStashLogger.new(type: :tcp,
                            host: 'logstash.bla.com',
                            port: 5228,
                            ssl_certificate: "ssl/logstash.crt")

logger.error '{"message": "test"}'

my logstash.conf

input {
  tcp {
    host => "0.0.0.0"
    port => 5228
    # codec => json_lines
    ssl_enable => true
    ssl_cert => "/etc/logstash/ssl/logstash.crt"
    ssl_key => "/etc/logstash/ssl/logstash.key"
  }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
    index => "logstash-%{+YYYY.MM.dd}"
    }
}

cert and key were generated with

openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout logstash.key -out logstash.crt

However, when I try to connect from logstash-logger, I get:

ERROR -- : [LogStashLogger::Device::TCP] OpenSSL::SSL::SSLError - SSL_connect SYSCALL returned=5 errno=0 state=unknown state

logstash version: 2.3
ruby version 2.2.3

Custom Events can't access tags from Tagged Logging

Currently the call to
LogStashLogger.configuration.customize_event_block.call(event)
happens before the current_tags are added to the event. As such, when using tagged logging, the customize_event can't access the tags.

Why do we care?
In my case tagged logging (Rails) is adding a few pieces of information, such as session information, current user etc. In the customize_event call I'd like to change the tags into proper fields.

File logging doesn't support log rotation

The current file logger implementation just keeps logging to the same file. There is no way to configure the logger for rotating the logs or other forms of configuring a retention policy.
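A workaround sketch while the file device lacks rotation: rotate externally (e.g. with logrotate), or delegate to Ruby's stdlib Logger, which supports age- and size-based rotation natively. Note this is a plain ::Logger writing its default format, not a LogStashLogger device; the path is illustrative.

```ruby
require 'logger'
require 'tmpdir'

log_path = File.join(Dir.tmpdir, 'app.log')

# Daily rotation:
rotating = Logger.new(log_path, 'daily')
rotating.info('rotates daily')
rotating.close

# Or size-based: keep up to 10 files of ~1 MB each
# rotating = Logger.new(log_path, 10, 1_048_576)
```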

Stack trace on losing logstash connection when using customize_event

When the logstash service is no longer reachable for whatever reason, I get a crash in the new version. It works fine on version 0.16, but when I upgrade to version 0.19.1 I get the error pasted at the bottom of this post.

I am using Puma and my logstash configuration looks like this (it's actually a bit more complicated, but this is enough to reproduce):

logstash_outputs = [{ type: :tcp,
                      host: ENV["LOGSTASH_HOST"],
                      port: ENV["LOGSTASH_PORT"],
                      buffer_max_items: 1,
                      error_logger: Logger.new(logstash_err_path),
                      drop_messages_on_flush_error: true}]
logstash_outputs.push({ type: :file,
                        path: config.paths['log'].first })
logger = LogStashLogger.new(
  type: :multi_delegator,
  outputs: logstash_outputs
  )
config.logger = ActiveSupport::TaggedLogging.new(logger)

**
LogStashLogger.configure do |config|
        config.customize_event do |event|
          event["application"] = "MyApp"
        end
end
**

Notes:
-buffer_max_items is set so low just for this test case (to verify our server responds well if logstash goes down)
-removing the file buffer doesn't help
-works fine on logstash-logger version 0.15
-removing the bolded configuration fixes the issue, although I still get a similar error if logstash is not working on startup, but no error if it goes down thereafter

Puma caught this error:

Attempt to unlock a mutex which is not locked (ThreadError)
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/buffer.rb:280:in `unlock'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/buffer.rb:280:in `buffer_flush'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/device/connectable.rb:59:in `flush'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/device/multi_delegator.rb:28:in `block (3 levels) in delegate'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/device/multi_delegator.rb:28:in `each'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/device/multi_delegator.rb:28:in `block (2 levels) in delegate'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/logstash-logger-0.19.1/lib/logstash-logger/logger.rb:20:in `flush'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/activesupport-5.0.0.1/lib/active_support/log_subscriber.rb:70:in `flush_all!'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/railties-5.0.0.1/lib/rails/rack/logger.rb:43:in `call_app'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/railties-5.0.0.1/lib/rails/rack/logger.rb:24:in `block in call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/activesupport-5.0.0.1/lib/active_support/tagged_logging.rb:70:in `block in tagged'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/activesupport-5.0.0.1/lib/active_support/tagged_logging.rb:26:in `tagged'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/activesupport-5.0.0.1/lib/active_support/tagged_logging.rb:70:in `tagged'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/railties-5.0.0.1/lib/rails/rack/logger.rb:24:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/request_store-1.3.1/lib/request_store/middleware.rb:9:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/actionpack-5.0.0.1/lib/action_dispatch/middleware/request_id.rb:24:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/rack-2.0.1/lib/rack/method_override.rb:22:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/rack-2.0.1/lib/rack/runtime.rb:22:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/activesupport-5.0.0.1/lib/active_support/cache/strategy/local_cache_middleware.rb:28:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/actionpack-5.0.0.1/lib/action_dispatch/middleware/executor.rb:12:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/actionpack-5.0.0.1/lib/action_dispatch/middleware/static.rb:136:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/rack-2.0.1/lib/rack/sendfile.rb:111:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/railties-5.0.0.1/lib/rails/engine.rb:522:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/puma-3.6.0/lib/puma/configuration.rb:225:in `call'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/puma-3.6.0/lib/puma/server.rb:578:in `handle_request'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/puma-3.6.0/lib/puma/server.rb:415:in `process_client'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/puma-3.6.0/lib/puma/server.rb:275:in `block in run'
/home/gambit/.rvm/gems/ruby-2.3.1/gems/puma-3.6.0/lib/puma/thread_pool.rb:116:in `block in spawn_thread'

Buffer is leaking a thread every time it's full

I recently deployed an update of this gem and immediately noticed that the thread count per instance was constantly increasing over time.

Here's what it looked like: (screenshot of the steadily climbing thread count)

After looking into the code, I found the cause.

When the buffer is initialized, it calls reset_buffer to set up the buffer state. In there, it launches a timer thread containing an infinite loop.

on buffer_receive, the default behaviour when the buffer is full is to drop the messages. It does so by calling... reset_buffer, which ends up creating a new thread while the other still lives.

The result is a new thread that never exits for every full buffer.
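The shape of the fix, as a standalone sketch (a simplified stand-in, not the gem's actual buffer class): separate clearing the buffered items from starting the timer, and only spawn the timer thread if one isn't already alive.

```ruby
# Minimal model of the leak and its fix: reset_buffer clears state but
# reuses the existing timer thread instead of spawning a new one each time.
class BufferModel
  attr_reader :timer

  def initialize(interval = 0.1)
    @interval = interval
    reset_buffer
  end

  def reset_buffer
    @items = []
    # The fix: don't launch a second timer while the first is still alive.
    start_timer unless @timer&.alive?
  end

  private

  def start_timer
    @timer = Thread.new { loop { sleep @interval } }
  end
end
```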

Clarification on Logstash input configuration

Hi there

Thanks for this project, it looks to be exactly what I was looking for.

I might have missed it in the docs, but can anyone provide me with guidance on how to configure my logstash inputs/filters?

Seeing that format and message_format are both deprecated, I thought it might be as easy as this:

input {
  udp {
    debug => true
    host => "0.0.0.0"
    port => 9999
    type => "whatever"
  }
}

but Logstash doesn't break apart the message correctly... I get this in my stdout output:

{
       "message" => "{\"message\":\"gogogo\",\"@timestamp\":\"2013-10-17T12:04:56.408+11:00\",\"@version\":\"1\",\"severity\":\"INFO\"}\n",
    "@timestamp" => "2013-10-17T01:04:56.409Z",
      "@version" => "1",
          "type" => "whatever",
          "host" => "127.0.0.1"
}

Thanks!
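For anyone hitting the same thing: the output above shows the whole JSON payload landing in the message field, which suggests Logstash is not decoding the payload. The usual way to have the input decode it itself is the json codec (sketch, using the same port as above):

```
input {
  udp {
    host  => "0.0.0.0"
    port  => 9999
    codec => json
    type  => "whatever"
  }
}
```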

Error when using MultiLogger in a Rails app

I'm attempting to use a MultiLogger within a Rails app to retain file system logging while also pushing logs to Logstash. I'm seeing the following error:

undefined method `[]=' for #<Logger:0x007fb180089178>

when using the following configuration:

# Required
config.logstash.type = :multi_logger

# Required. Each logger may have its own formatter.
config.logstash.outputs = [
  {
    type: :file,
    path: "log/#{Rails.env}.log",
    formatter: Logger::Formatter
  },
  {
    type: :udp,
    port: port,
    host: host,
    formatter: :json
  }
]

Any other logger type works fine (including a MultiDelegator, though in that case I can't specify different formatters, which is a requirement). The full stack trace is linked below:

https://gist.github.com/jgmartin/96a9219367e6faabcd05

Any help is much appreciated. Thanks.

Getting config.log_tags into LogStashLogger

Hi again!

So I'm doing some digging to find out why config.log_tags = [:uuid] doesn't make it all the way to our LogStashLogger.

Here's the basic logger:
/rails/activesupport/lib/active_support/logger.rb

# Simple formatter which only displays the message.
class SimpleFormatter < ::Logger::Formatter
  # This method is invoked when a log event occurs
  def call(severity, timestamp, progname, msg)
    "#{String === msg ? msg : msg.inspect}\n"
  end
end

And now the one that does the tagging:
/rails/activesupport/lib/active_support/tagged_logging.rb

module TaggedLogging
  module Formatter # :nodoc:
    # This method is invoked when a log event occurs.
    def call(severity, timestamp, progname, msg)
      # <- Prepends tags via tags_text; prints to STDOUT, but the tags do
      #    not make it into the message for LogStashLogger.
      super(severity, timestamp, progname, "#{tags_text}#{msg}")
    end

    private

    def tags_text
      tags = current_tags
      if tags.any?
        tags.collect { |tag| "[#{tag}] " }.join # <-- build string of tags
      end
    end
.. removed lines ...


    def self.new(logger)
      # Ensure we set a default formatter so we aren't extending nil!
      logger.formatter ||= ActiveSupport::Logger::SimpleFormatter.new # <-- this is where LogStashLogger.new goes
      logger.formatter.extend Formatter
      logger.extend(self)
    end

    delegate :push_tags, :pop_tags, :clear_tags!, to: :formatter

    def tagged(*tags)
      formatter.tagged(*tags) { yield self }
    end

Example Usage

logger = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))

logger.tagged('BCX') { logger.info 'Stuff' } # Logs "[BCX] Stuff"

In class LogStashLogger < ::Logger

  def add(severity, message = nil, progname = nil, &block)
    severity ||= UNKNOWN
    if severity < @level
      return true

I realize that in ::Logger, there is a #add.
In ::Logger::Formatter there is #call

In TaggedLogging::Formatter we have def call(severity, timestamp, progname, msg). I believe this simply calls SimpleFormatter's #call with the tags prepended to the message.

In any case, my console does print out uuid tags:
[a4709b66-d442-426c-b12f-69e8f7b65ae6] User Load (0.4ms) SELECT ....
[a4709b66-d442-426c-b12f-69e8f7b65ae6] User Found

But of course my logstash doesn't see it in the hash.

It seems that TaggedLogging should have already added the necessary code in:
logger.formatter ||= ActiveSupport::Logger::SimpleFormatter.new
logger.formatter.extend Formatter

Maybe... push_tags just isn't called for us? Uhm... anyways, those are my notes for the moment. Gotta take a break, but there must be something simple somewhere that links config.log_tags to this gem.

And of course, for us it would be much better to get JSON output such as
"uuid": "whatever"
"message": "Hello world"

More so than "[whatever] Hello world"

Perhaps we have to rewrite
module TaggedLogging
module Formatter # :nodoc:

for ourselves so that it doesn't simply prepend [tags]
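One way such a rewrite could look (a rough sketch, not the gem's API: TaggedJsonFormatter and the current_tags parameter are made up for illustration) is a formatter that merges the tag stack into the event as fields instead of prepending text:

```ruby
require 'json'
require 'time'

# Hypothetical formatter: emit tags as JSON fields ("uuid": ...) rather
# than prepending "[tag] " to the message string.
module TaggedJsonFormatter
  def self.call(severity, timestamp, progname, msg, current_tags = [])
    event = {
      'message'    => msg,
      'severity'   => severity,
      '@timestamp' => timestamp.utc.iso8601(3)
    }
    # Treat the first tag as the request uuid, matching
    # config.log_tags = [:uuid] above.
    event['uuid'] = current_tags.first if current_tags.any?
    JSON.generate(event) + "\n"
  end
end
```

That would yield the desired "uuid": "whatever" / "message": "Hello world" shape rather than "[whatever] Hello world".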

Multi_logger doesn't respect specified logging formatter

When configuring a multi_logger LogStashLogger, the provided formatter is ignored and the default formatter is used. The specified formatter should be the one used to write to the output.

Example configuration

config.logger = LogStashLogger.new(
  type: :multi_logger,
  outputs: [
    { type: :file, path: "log/development.log", formatter: ::Logger::Formatter, sync: true }
  ]
)

Log file output:

{"message":"","@timestamp":"2015-10-05...","@version":"1"...}
{"message":"","@timestamp":"2015-10-05...","@version":"1"...}
{"message":"Started GET \"/\" for 192.168.59.3 at 2015-10-05 21:32:33 +0000","@timestamp":"2015-10-05...","@version":"1"...}

long event messages are being truncated

I'm processing some long (> 10 KB) log events containing a large amount of XML. Using either :udp or :tcp causes these events to be truncated at about 5 KB. I haven't tried the :file output yet, but removing logstash-logger fixes the issue.

I'll post some more detail as I have more info.
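Not a full diagnosis, but one thing worth ruling out on the receiving side: Logstash's udp input exposes a buffer_size setting (in bytes) that caps how much of each datagram is read. A sketch, assuming the README's default port:

```
input {
  udp {
    port        => 5228
    buffer_size => 65536
    codec       => json
  }
}
```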

unable to parse timestamp

I'm having an issue with the timestamp format -- everything works great on output but the elasticsearch input chokes on the timestamp:

:exception=>java.lang.IllegalArgumentException: Invalid format: "2013-03-06 15:18:01 +0000" is malformed at " 15:18:01 +0000"

Logstash 1.1.9
Elasticsearch 20.2

Thoughts? I would expect the timestamp format to parse properly with logstash-logger's default output.

Great project BTW!
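For reference, the failing value matches Ruby's default time-to-string rendering, while the @timestamp field needs ISO8601 ("T" separator, offset suffix). A quick comparison:

```ruby
require 'time'

t = Time.utc(2013, 3, 6, 15, 18, 1)

# The shape that fails to parse (space-separated date/time plus offset):
plain = t.strftime('%Y-%m-%d %H:%M:%S %z') # "2013-03-06 15:18:01 +0000"

# The ISO8601 shape that Elasticsearch/Logstash parse:
iso = t.iso8601 # "2013-03-06T15:18:01Z"
```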

Stack overflow in silenced_logging.rb after upgrading to 0.17.0

I created a minimal sample app to replicate the issue. https://github.com/adamvduke/logstash-logger-test

Going back to 0.16.0 resolves the issue, but I haven't had time to dig in to exactly what the problem might be.

There is some custom logger configuration in development.rb which replicates what we are using in production.

~/src/logstash-logger-test (master=)$ bundle exec rails s
=> Booting WEBrick
=> Rails 4.2.5.1 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
Exiting
/Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/logstash-logger-0.19.0/lib/logstash-logger/silenced_logging.rb:44:in `thread_level': stack level too deep (SystemStackError)
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/activerecord-session_store-1.0.0/lib/active_record/session_store/extension/logger_silencer.rb:31:in `level_with_threadsafety'
         ... 10067 levels...
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/railties-4.2.5.1/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
        from /Users/adamduke/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/railties-4.2.5.1/lib/rails/commands.rb:17:in `<top (required)>'
        from bin/rails:4:in `require'
        from bin/rails:4:in `<main>'

Is "sync: true" required (or should be explained) for Redis?

I love the gem, but it took me a while to figure out that I needed "sync: true" in order to send logger info to Redis. "Sync" is only mentioned once in the README, by way of an example for another device, so it took me a while to find it, and then only by adding bunches of "puts foo.inspect" lines in redis.rb. I might be missing some overall prerequisite knowledge, but should this be made clearer in the documentation? Thanks!
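For anyone landing here, the option in question looks like this (a configuration sketch modeled on the README's file-output example; behaviour may vary across gem versions):

```ruby
# sync: true flushes each message to Redis immediately instead of buffering.
# Redis connection options are assumed to default to localhost here.
config.logger = LogStashLogger.new(type: :redis, sync: true)
```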

Problem UDP as input

Hi,

I am using Logstash, but I face a problem related to the UDP input: it cannot split the lines on input, so my message consists of multiline messages rather than one message per line.

Logging Suddenly Stops on Production

So my production server is a puma cluster and it logs perfectly... until recently.

Currently, logging is successful for a number of hours, then suddenly stops.

I traced one cause back to Logstash::Event where Event#to_json was throwing errors on encoding to JSON, and usually that was the last log message. So I upgraded Logstash::Event to the latest master branch, fixed a couple logstash-event bugs, and the system is much more stable now.
[ Ref: https://github.com/pctj101/logstash/tree/event_timestamp ]

But... after a few more hours, logging still stops.

So I don't think it's a Logstash::Event#to_json problem any more.

Granted I use lograge to reduce my logs and format into json, but I don't see any problems there either.

And to be fair, I don't see any problems in logstash-logger either.

So it's not clear what to debug.

So while I'm looking into this I thought I'd ask a few simple questions:

Is there anything in general where a bug could cause some exception to be raised, and permanently disable logstash-logger or Rails.logger until the next server restart?

Any similar problems seen in the past?

Thanks!

Exception when using Rails 5.0.0

Logstash logger and rails versions:

    lograge (0.4.1)
    logstash-event (1.2.02)
    logstash-logger (0.15.1)
    rails (5.0.0)

Initialization of logstash logger (in config/application.rb):

  config.lograge.enabled = true
  config.lograge.formatter = Lograge::Formatters::Logstash.new
  config.logstash.uri = "file:///var/applications/test-app/log/logstash.log"

Fails with following exception:

2016-07-22 08:26:25 +0000: Rack app error: #<NoMethodError: super: no superclass method `silence' for #<Logger:0x00000003fc5d98>>
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/activesupport-5.0.0/lib/active_support/logger.rb:63:in `block (3 levels) in broadcast'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/activesupport-5.0.0/lib/active_support/logger_silence.rb:20:in `silence'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/activesupport-5.0.0/lib/active_support/logger.rb:61:in `block (2 levels) in broadcast'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/sprockets-rails-3.1.1/lib/sprockets/rails/quiet_assets.rb:11:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/actionpack-5.0.0/lib/action_dispatch/middleware/request_id.rb:24:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/rack-2.0.1/lib/rack/method_override.rb:22:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/rack-2.0.1/lib/rack/runtime.rb:22:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/activesupport-5.0.0/lib/active_support/cache/strategy/local_cache_middleware.rb:28:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/actionpack-5.0.0/lib/action_dispatch/middleware/executor.rb:12:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/actionpack-5.0.0/lib/action_dispatch/middleware/static.rb:136:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/rack-2.0.1/lib/rack/sendfile.rb:111:in `call'
/home/vagrant/.rvm/gems/ruby-2.3.0@global/gems/railties-5.0.0/lib/rails/engine.rb:522:in `call'

The exact same setup works fine when using Rails 4.2.6. Any ideas on why this might happen?

Failure when logging non-string objects

When passing an object that is not a String, a Hash, or a LogStash event to any log method, the logger errors with

Puma caught this error: undefined method `[]' for nil:NilClass (NoMethodError)
path/to/gems/logstash-logger-0.6.2/lib/logstash-logger/formatter.rb:34:in `build_event'
path/to/gems/logstash-logger-0.6.2/lib/logstash-logger/formatter.rb:12:in `call'

The culprit is https://github.com/dwbutler/logstash-logger/blob/master/lib/logstash-logger/formatter.rb#L24, where a default case is missing for anything that is not a String but can be converted. It would be more robust to test data.respond_to?(:to_s), so that exception objects and the like can be passed directly.
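The suggested fallback could look roughly like this (a sketch, not the gem's actual build_event code; the method name is made up for illustration):

```ruby
# Hypothetical default case for the formatter: anything that is not
# already a String or Hash is converted with #to_s when possible.
def build_event_data(data)
  case data
  when String, Hash
    data
  else
    data.respond_to?(:to_s) ? data.to_s : data.inspect
  end
end
```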

Long json logs get split into two lines (UDP)

My JSON (which can be long after adding custom fields) gets split into two lines in Logstash, so of course the JSON parsing fails.

I looked at the source, and it doesn't look like this gem does anything actively to split a long json string into multiple parts.

Is it correct to assume I need to configure logstash to recombine lines if they are too long? Or is there something that I was supposed to configure in this gem?

For the record, I tried using codec => json / json_lines in the logstash.conf, but it did not immediately help.
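As a sender-side sanity check, you can compare the serialized event against the theoretical single-datagram limit (intermediate buffers are often much smaller, so this only rules out the hard ceiling):

```ruby
require 'json'

# 65507 = 65535 (max IPv4 packet) - 20 (IP header) - 8 (UDP header)
MAX_UDP_PAYLOAD = 65_507

def fits_in_one_datagram?(event_hash)
  JSON.generate(event_hash).bytesize <= MAX_UDP_PAYLOAD
end
```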
