
python-logstash-async's Introduction

python-logstash-async


Python Logstash Async is an asynchronous Python logging handler to submit log events to a remote Logstash instance.

Unlike most other Python Logstash logging handlers, this package works asynchronously: it collects log events from Python's logging subsystem and then transmits the collected events to Logstash in a separate worker thread. This way, the main application (or thread) where the log event occurred doesn't need to wait until submission to the remote Logstash instance has succeeded.

This is especially useful for applications such as websites, web services, or any kind of request-serving API where response times matter.

For more details, configuration options, and usage examples, please see the documentation at http://python-logstash-async.readthedocs.io/en/latest/.
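A minimal usage sketch (host and port are placeholders; database_path=None disables the local sqlite cache for unsent events):

import logging
from logstash_async.handler import AsynchronousLogstashHandler

logger = logging.getLogger('my-app')
logger.setLevel(logging.INFO)
logger.addHandler(AsynchronousLogstashHandler(
    'logstash.example.com', 5959, database_path=None))

logger.info('queued here, sent by the background worker thread')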

Installation

The easiest method is to install directly from PyPI using pip:

pip install python-logstash-async

If you prefer, you can download python-logstash-async and install it directly from source:

python setup.py install

Get the Source

The source code is available at https://github.com/eht16/python-logstash-async/.

Contributing

Found a bug or got a feature request? Please report it at https://github.com/eht16/python-logstash-async/issues.

Author

Enrico Tröger <[email protected]>

python-logstash-async's People

Contributors

alibozorgkhan, andriilahuta, asmaps, bngsudheer, chickahoona, d1skort, daleobrien, eht16, ercpe, feliixx, ghyde, javawizard, jloehel, kvdveer, loganasherjones, nazarkhanov, pmazarovich, ratherbland, rmihael, sarg, shreyaskarnik, skwashd, vklochan, wmacomber, zagaria


python-logstash-async's Issues

Error: queue.Empty

Hi,

  • I am using logstash-async without a database
  • I configured the Python logging handler this way:
 handlers:
        console:
            class: logging.StreamHandler
            level: INFO
            stream: ext://sys.stdout
            formatter: simple
        logstash:
            level: DEBUG
            class: logstash_async.handler.AsynchronousLogstashHandler
            formatter: logstash_formatter
            host: 'host'
            port: <port>
            database_path: null
  • Do you have an idea how to handle this error?
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/logstash_async/worker.py", line 118, in _fetch_events
    self._fetch_event()
  File "/usr/local/lib/python3.8/site-packages/logstash_async/worker.py", line 140, in _fetch_event
    self._event = self._queue.get(block=False)
  File "/usr/local/lib/python3.8/queue.py", line 167, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/logstash_async/worker.py", line 214, in _flush_queued_events
    self._send_events(events)
  File "/usr/local/lib/python3.8/site-packages/logstash_async/worker.py", line 268, in _send_events
    self._transport.send(events, use_logging=use_logging)
  File "/usr/local/lib/python3.8/site-packages/logstash_async/transport.py", line 88, in send
    self._create_socket()
  File "/usr/local/lib/python3.8/site-packages/logstash_async/transport.py", line 166, in _create_socket
    self._sock.connect((self._host, self._port))
socket.timeout: timed out
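For context, the queue.Empty at the top of this traceback is ordinary control flow for a non-blocking queue read inside the worker thread; the actual failure is the socket.timeout at the bottom, i.e. the transport could not reach the configured host/port. A minimal illustration of the queue behaviour:

import queue

q = queue.Queue()
try:
    q.get(block=False)  # raises queue.Empty immediately when nothing is queued
except queue.Empty:
    pass  # expected when the queue is drained; not an error by itself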

Can't create two handlers with different configurations

import logging
import sys
from logstash_async.handler import AsynchronousLogstashHandler


host = 'localhost'

test_logger1 = logging.getLogger('python-logstash-logger')
test_logger1.setLevel(logging.INFO)
test_logger1.addHandler(AsynchronousLogstashHandler(
    host,
    5002,
    database_path=None,
    transport='logstash_async.transport.UdpTransport'
))

test_logger1.info('logger1')


test_logger2 = logging.getLogger('python-logstash-logger2')
test_logger2.setLevel(logging.INFO)
test_logger2.addHandler(AsynchronousLogstashHandler(
    host,
    5000,
    database_path=None,
    transport='logstash_async.transport.UdpTransport'
))


test_logger2.info('logger2')  # goes to 5002 port :(

I think the problem is here:

 class AsynchronousLogstashHandler(Handler):
     """Python logging handler for Logstash. Sends events over TCP by default.
     :param host: The host of the logstash server, required.
     :param port: The port of the logstash server, required.
     :param database_path: The path to the file containing queued events, required.
     :param transport: Callable or path to a compatible transport class.
     :param ssl_enable: Should SSL be enabled for the connection? Default is False.
     :param ssl_verify: Should the server's SSL certificate be verified?
     :param keyfile: The path to client side SSL key file (default is None).
     :param certfile: The path to client side SSL certificate file (default is None).
     :param ca_certs: The path to the file containing recognized CA certificates.
     :param enable: Flag to enable log processing (default is True, disabling
                   might be handy for local testing, etc.)
     :param event_ttl: Amount of time in seconds to wait before expiring log messages in
                       the database. (Given in seconds. Default is None, and disables this feature)
     """
 
     _worker_thread = None
...

_worker_thread is a class-level attribute and is therefore shared between different handler instances.
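A minimal illustration of the pitfall (names are illustrative): attributes assigned at class level are shared by every instance until shadowed:

class SharedStateHandler:
    _worker_thread = None  # class-level: one slot shared by all instances

a = SharedStateHandler()
b = SharedStateHandler()
SharedStateHandler._worker_thread = 'worker created for a'
print(a._worker_thread)  # 'worker created for a'
print(b._worker_thread)  # 'worker created for a' -- b sees a's worker too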

Message field is JSON-decoded when switching to BeatsTransport

First of all I would like to thank you for this package.

Here's the issue: when I use the default TcpTransport, the message field arrives in Elastic as a regular string in JSON format:

[screenshot omitted]

When I switch to BeatsTransport, the message field is parsed into the document root and I get this kind of data in Kibana (parsed fields are highlighted in blue):

[screenshot omitted]

Log level of: "An error occurred while sending events: [Errno 104] Connection reset by peer"

Hey Enrico,
first, thank you for this amazing library! Great to see people putting time and effort in open-source software.

Now my Issue:
It seems that something is not quite right with the network in our QA environment: I get a lot of "An error occurred while sending events: [Errno 104] Connection reset by peer" messages from your library (worker.py line 216). Since you took care of handling this well, it is not a big problem. The logs are put back into the queue and are sent immediately afterwards, once the connection is back.

My request:
Since the error is handled in the code, can we lower the log level of these messages? I only want errors logged if something in my application is actually broken, so I would propose changing the log level of these messages to WARN.

Reasoning:
Nothing is broken; there is just a networking hiccup that the library handles. Logged errors like these distract us from seeing when something is actually broken.

If you like, I can open a pull request to propose a solution.

Yours
István

The default value for database_path doesn't work

The default value for database_path, :memory:, currently doesn't work with this library because every connection to :memory: creates a separate in-memory database. I suggest either providing a generic filename for an on-disk database file, such as log_event_queue.sqlite3, or making the database_path argument to AsynchronousLogstashHandler required.
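A quick demonstration of the ':memory:' behaviour described above (plain stdlib sqlite3 semantics):

import sqlite3

first = sqlite3.connect(':memory:')
second = sqlite3.connect(':memory:')  # a *separate* in-memory database
first.execute('CREATE TABLE event_queue (payload TEXT)')
second.execute('SELECT * FROM event_queue')  # sqlite3.OperationalError: no such table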

Django with many workers

Using the uwsgi setting 'processes = 8', for example:

sqlite3.OperationalError: attempt to write a readonly database

This happens when more than one process wants to write to a single 'database'.

Do you have any ideas?
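One possible workaround (an assumption, not a documented feature of this library) is to give each worker process its own database file, so processes never contend for the same sqlite file:

import os

# hypothetical per-process cache path for a uwsgi/gunicorn worker
database_path = f'/var/tmp/logstash-{os.getpid()}.db'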

Too many open files

So, sometimes when logstash is down, our long-running scripts will report the normal queue.Empty + ConnectionError.

This is fine; however, when running with pytest, pytest will create an XML file at the end of the test, which fails due to "Too many open files".

Are failed socket connections somehow lingering for long periods of time?

I also noticed that after a long period of failing to connect to logstash, the sqlite3 connections started failing (I'm assuming because of too many open sockets).

error	26-Apr-2019 02:44:43	20190426 02:44:43|worke:[ERROR] - Log processing error (queue size: 1031): unable to open database file
error	26-Apr-2019 02:44:43	Traceback (most recent call last):
error	26-Apr-2019 02:44:43	  File "/home.local/build/.local/share/virtualenvs/<redacted>-rYeAsF95/lib/python3.6/site-packages/logstash_async/worker.py", line 145, in _process_event
error	26-Apr-2019 02:44:43	    self._write_event_to_database()
error	26-Apr-2019 02:44:43	  File "/home.local/build/.local/share/virtualenvs/<redacted>-rYeAsF95/lib/python3.6/site-packages/logstash_async/worker.py", line 195, in _write_event_to_database
error	26-Apr-2019 02:44:43	    self._database.add_event(self._event)
error	26-Apr-2019 02:44:43	  File "/home.local/build/.local/share/virtualenvs/<redacted>-rYeAsF95/lib/python3.6/site-packages/logstash_async/database.py", line 94, in add_event
error	26-Apr-2019 02:44:43	    connection.execute(query, (event, False))
error	26-Apr-2019 02:44:43	sqlite3.OperationalError: unable to open database file

Looking at the code, I think this could be fixed by using a connection pool rather than creating new connections every time; it would be nice if there were a way to configure that.

Gunicorn logging support

Hi,
I just came to this package because the original isn't working very well.

I want to know if I can use this package to log Gunicorn.

I created the following config file:

[loggers]
keys=root, logstash.error, logstash.access

[handlers]
keys=console , logstash

[formatters]
keys=generic, access, json

[logger_root]
level=INFO
handlers=console

[logger_logstash.error]
level=INFO
handlers=logstash
propagate=1
qualname=gunicorn.error

[logger_logstash.access]
level=INFO
handlers=logstash
propagate=0
qualname=gunicorn.access

[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )

[handler_logstash]
class=logstash_async.handler.AsynchronousLogstashHandler
formatter=json
version=1
args=('logstash',5959, 'logstash.db')

[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter

[formatter_access]
format=%(message)s
class=logging.Formatter

[formatter_json]
class=pythonjsonlogger.jsonlogger.JsonFormatter

As you can see, I only replaced the original logstash handler with this handler and added the missing database_path argument.

Can you add some documentation about this?

Thank you.

Logs are not sent at a high rate

Our system logs at more than 10,000 logs/sec, and at this rate logs are not sent to logstash. When we check the caching database file, we see that its size increases over time and grows beyond 1 GB.
What can we do about it? Is there any config for this? Is it possible that it's a logstash overload problem?

Log record "extra" fields are missing when using LogstashFormatter

When using LogstashFormatter, the log record's specific "extra" is missing from the formatted message:

from loguru import logger
from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.formatter import LogstashFormatter

logstash_handler = AsynchronousLogstashHandler(
    remote_host,
    remote_port,
    database_path,
)

logstash_formatter = LogstashFormatter(
    extra={
        'default': 'extra',
    },
)

logstash_handler.setFormatter(logstash_formatter)
logger.add(
    logstash_handler,
    level=level,
    format='{message}',
    backtrace=False,
)

logger.info('some message', extra={'some': 'extra'})

The formatted message won't include extra={'some': 'extra'} but will include 'default': 'extra'.
This is because the _get_extra_fields method doesn't add record.extra to the extra fields.
I solved it locally by overriding _get_extra_fields:

class MyFormatter(LogstashFormatter):
    def _get_extra_fields(self, record):
        extra_fields = super()._get_extra_fields(record=record)

        if record.extra:
            # dict union ("|") requires Python 3.9+
            extra_fields = record.extra | extra_fields
        return extra_fields

I thought you might want to add this to the code as well.

thanks!

Python 3 cross-compatibility and Django 1.8

Hi, I am experiencing a bug while using the DjangoLogstashFormatter with Django 1.8 and Python 3.4.

It seems this lib is not compatible with this combination, due to at least two lines:

logstash_async/formatter.py:235 extra_fields['req_uri'] = request.get_raw_uri()

but the Django WSGIRequest object has no get_raw_uri() in version 1.8 (LTS). I propose skipping this extra field on Django versions that lack the method.

logstash_async/formatter.py:236: extra_fields['req_user'] = unicode(request.user) if request.user else ''

You use unicode, which is a Python 2.7-only string type. You can either import six or use a try/except NameError for the string types to be cross-compatible.
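A minimal cross-compatible sketch of the suggested fix (text_type is an illustrative name):

try:
    text_type = unicode  # Python 2
except NameError:
    text_type = str      # Python 3

extra_fields['req_user'] = text_type(request.user) if request.user else ''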

These are the only two things I saw for now, but I will keep investigating.

It is best practice to support from the current LTS to the current stable for your app; that's why I mention 1.8:
https://www.djangoproject.com/download/#supported-versions

If you want to be Python 3 compatible and LTS compatible, I can eagerly do a PR to fix everything I found and build some unit tests to check it all.

logstash-async ignoring formatter options

I have encountered a problem: the logstash logger seems to totally ignore formatter options when I create my own formatter with custom options. To be precise, I can pass the formatter to the logger (it might be a slightly obscure way, but I was inspired by the logstash-async implementation itself), and since it prints the correct value it seems to have become part of the logger. But on the other side, in logstash I only see the default "type" => "python-logstash" in the message.

#!/usr/bin/python

import logging
import sys
from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.formatter import LogstashFormatter

host = 'localhost'
port = 5959

test_logger = logging.getLogger('test')

test_formatter = LogstashFormatter(message_type='abcd', tags=None, fqdn=False, extra_prefix='new', extra=None)
test_logger.formatter = test_formatter

test_logger.setLevel(logging.INFO)
test_logger.addHandler(AsynchronousLogstashHandler(
    host, port, database_path='logstash.db'))

print(test_logger.formatter._message_type)

test_logger.warning('Test logstash warning message.')
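For what it's worth, logger.formatter is not an attribute the logging module ever reads; formatters belong to handlers. A corrected sketch of the setup above:

handler = AsynchronousLogstashHandler(host, port, database_path='logstash.db')
handler.setFormatter(test_formatter)  # attach the formatter to the handler
test_logger.addHandler(handler)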

Lost messages after process quit

Hello,

I am running an API where I log some data to Logstash.
It seems that many messages are lost if the process quits while messages have not yet been sent to Logstash.
If I set a 30-second sleep at the end of the process, the messages arrive; otherwise they do not.

I assume that for long-running processes it works perfectly fine, but what happens if a process is created and exits soon after?

I am using following:

  • ELK 6.4.3
  • Ubuntu 18.04
  • Python 3.7.1

Could you suggest how to work around this?

Thanks
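One mitigation grounded in the library's own design: configure database_path (instead of None) so unsent events are persisted to the sqlite cache and can be delivered by a later run of the process:

handler = AsynchronousLogstashHandler(
    'logstash.example.com', 5959,       # placeholder host/port
    database_path='logstash_cache.db')  # unsent events survive process exit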

Clarify documentation for extra dictionaries

Reading the documentation, I initially thought I could simply pass an extra kwarg to AsynchronousLogstashHandler:

"You can also specify an additional extra dictionary in the logging configuration with static values like the application name, environment, etc. These values will be merged with any extra dictionary items passed in the logging call into the configured extra prefix."

But the docs must mean the logging file config. They don't mention using a LoggerAdapter, which is the other way I know of to do this. It'd be neat to have an example of this in the docs, since I think adding extras to the logger as a whole is a use case just as large as adding them to individual calls.
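For reference, a stdlib-only sketch of the LoggerAdapter approach (names are illustrative):

import logging

logger = logging.getLogger('my-app')
adapter = logging.LoggerAdapter(logger, extra={'application': 'my-app',
                                               'environment': 'staging'})
# every record logged through the adapter carries the static extra dict
adapter.info('request served')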

Thanks for your work on python-logstash-async!

Exception with simple usage

import logging

from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.transport import BeatsTransport

lgr = logging.getLogger()
lgr.addHandler(AsynchronousLogstashHandler('10.222.0.20', 5056, database_path=None, transport=BeatsTransport))
lgr.error('fnord')
2019-07-03 08:55:45: exception: An error occurred while sending events: send() missing 1 required positional argument: 'events'
Traceback (most recent call last):
  File "/Users/cameronnemo/docs/elk/py-logstash-demo/venv/lib/python3.7/site-packages/logstash_async/worker.py", line 118, in _fetch_events
    self._fetch_event()
  File "/Users/cameronnemo/docs/elk/py-logstash-demo/venv/lib/python3.7/site-packages/logstash_async/worker.py", line 140, in _fetch_event
    self._event = self._queue.get(block=False)
  File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/queue.py", line 167, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/cameronnemo/docs/elk/py-logstash-demo/venv/lib/python3.7/site-packages/logstash_async/worker.py", line 224, in _flush_queued_events
    self._send_events(events)
  File "/Users/cameronnemo/docs/elk/py-logstash-demo/venv/lib/python3.7/site-packages/logstash_async/worker.py", line 255, in _send_events
    self._transport.send(events, use_logging=use_logging)
TypeError: send() missing 1 required positional argument: 'events'

2019-07-03 08:55:45: error: Error on closing transport: close() missing 1 required positional argument: 'self'
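The traceback follows from passing the transport class itself: send() and close() are then looked up on the class, so Python complains about the missing self/events arguments. A corrected sketch, passing the dotted path the handler docstring allows (an instance such as BeatsTransport(host, port) should work too):

lgr.addHandler(AsynchronousLogstashHandler(
    '10.222.0.20', 5056, database_path=None,
    transport='logstash_async.transport.BeatsTransport'))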

Django settings vs manual logging: what am I missing?

Hopefully it's something obvious that I'm missing, but I am unable to get this working with Django logging settings, while sending a message programmatically does work. Maybe you can see something before I do a deep dive into the internals.

I have logging enabled so I can see when messages are received by my remote logstash server.

settings.py:

LOGGING = {
    'version': 1,
    'formatters': {
        'logstash': {
                  '()': 'logstash_async.formatter.LogstashFormatter',
                  'message_type': 'django',
              },
    },
    'handlers': {
        'logstash': {
                  'level': 'DEBUG',
                  'class': 'logstash_async.handler.AsynchronousLogstashHandler',
                  'formatter': 'logstash',
                  'transport': 'logstash_async.transport.TcpTransport',
                  'host': 'myelkserver.com',
                  'port': 1026,
                  'database_path': './logstash.db',  # same results if None
              },
    },
    'loggers': {
        'logstash': {
                  'handlers': ['logstash'],
                  'level': 'DEBUG',
                  'propagate': True, 
              },
    }
}
>>>import logging
>>>logger = logging.getLogger("logstash")
>>>logger.debug('hello')  # no message is sent to remote

The following works, which makes no sense to me. Any ideas?

from logstash_async.handler import AsynchronousLogstashHandler
from logstash_async.transport import TcpTransport
from logstash_async.formatter import DjangoLogstashFormatter

lgr = logging.getLogger()
formatter = DjangoLogstashFormatter(message_type='django')
handler = AsynchronousLogstashHandler('myelkserver.com', 1026, database_path='./logstash.db', transport=TcpTransport)
handler.setFormatter(formatter)
lgr.addHandler(handler)
lgr.debug('test123') # this is received by remote

Feature Request: Option to work in sync

First thanks for your work!

I know it sounds extremely stupid to ask for this, yet I'd like to use your lib in Psono, but it would need Google Cloud Run support.
An awkward thing about Cloud Run is that all processing happens only during a request; afterwards the CPU is throttled down completely. So asynchronous events may not be processed at all or may be extremely delayed, and Cloud Run's scaling will ultimately cause potential data loss :(

A flag telling the logger to handle the event synchronously would be amazing; it would allow people to switch between synchronous and asynchronous processing depending on their environment.
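For anyone needing this today, a minimal synchronous sketch that sidesteps the worker thread entirely (this is not part of the library; the class name and timeout are illustrative):

import logging
import socket

class SyncTcpLogstashHandler(logging.Handler):
    """Blocking handler: formats each record and sends it over a fresh
    TCP connection before returning to the caller."""

    def __init__(self, host, port, timeout=5.0):
        super().__init__()
        self._address = (host, port)
        self._timeout = timeout

    def emit(self, record):
        try:
            data = self.format(record)
            if isinstance(data, str):
                data = data.encode('utf-8')
            with socket.create_connection(self._address, self._timeout) as sock:
                sock.sendall(data + b'\n')
        except Exception:
            self.handleError(record)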

Error while using with aiohttp

Hi, I get this error several times:

base_events.py,base_events,default_exception_handler,ERROR,6,LogProcessingWorker,Task exception was never retrieved, future: <Task finished coro=<Client._misc_loop() done, defined at /usr/local/lib/python3.7/site-packages/asyncio_mqtt/client.py:309> exception=CancelledError()>, concurrent.futures._base.CancelledError

After this error, logstash stops sending data to Elastic.

How can I solve this?
Thanks

Django processes hang

I have a django project with a celery worker with logstash logging enabled.

celery -A NewsSource worker -l DEBUG

When I try to stop the process with Ctrl-C, it tries to stop gracefully, then performs a «cold shutdown» and then hangs, so the only way to kill it is kill -9 PID:

Restoring 22 unacknowledged message(s)
Batch length: 45, Batch size: 38323
^C
worker: Cold shutdown (MainProcess)
^C
worker: Cold shutdown (MainProcess)
^C
worker: Cold shutdown (MainProcess)
^C
worker: Cold shutdown (MainProcess)

Before I added logstash-async, it worked fine.

My logging config:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'logstash': {
            '()': 'logstash_async.formatter.DjangoLogstashFormatter',
            'message_type': 'python-logstash',
            'fqdn': False,
            'extra_prefix': env('LOGSTASH_ENVIRONMENT'),
            'tags': ['newssource', ],
            'extra': {
                'application': 'NewsSource',
                'project_path': 'NewsSource',
                'environment': 'production',
            }
        },
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'formatter': 'verbose',
            'filename': env('LOG_FILE'),
            'maxBytes': 16 * 1024 * 1024,
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
        },
        'logstash': {
            'enable': bool(env('LOGSTASH_ENABLED')),
            'level': env('LOGSTASH_LOG_LEVEL'),
            'class': 'logstash_async.handler.AsynchronousLogstashHandler',
            'formatter': 'logstash',
            'transport': 'logstash_async.transport.HttpTransport',
            'host': env('LOGSTASH_HOST'),
            'port': int(env('LOGSTASH_PORT')),
            'ssl_enable': False,
            'ssl_verify': False,
            'username': env('LOGSTASH_USERNAME'),
            'password': env('LOGSTASH_PASSWORD'),
            'database_path': env('LOGSTASH_DB_PATH'),
            'use_logging': bool(env('APP_DEBUG')),
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file', 'console', 'logstash', ],
            'level': 'INFO',
        },
        'django.db': {
            'level': 'INFO',
            'handlers': ['console', 'logstash', ],
        },
        'celery': {
            'handlers': ['file', 'console', 'logstash', ],
            'level': 'INFO',
            'propagate': True
        },
        '': {
            'handlers': ['file', 'console', 'logstash', ],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

If I enable the logstash handler for the django logger, the django server starts fine (both the dev server and gunicorn) but doesn't accept connections and can only be stopped with SIGKILL. If I disable this handler, it works fine again.

macOS 11.6 (arm64), Python 3.9.7, Django 3.2.8, Celery 5.1.2, logstash-async 2.3.0, ELK 7.15.1.

P.S. Logging itself works fine. The Celery worker works fine too unless I try to stop it. Same with celery beat.

Broken host mapping for ECS

With Elasticsearch 7.0, the Elastic Common Schema (ECS) was introduced. It maps the hostname to host.name instead of host. Currently logstash_async fails with:

[2019-04-29T08:04:22,562][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x27275849], :response=>{"index"=>{"_index"=>"logstash-2019.04.26-000001", "_type"=>"_doc", "_id"=>"tRAfaGoB1XS_Z1QomfCX", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

As a workaround we changed formatter.LogstashFormatter.format to:

message = {
    '@timestamp': self._format_timestamp(record.created),
    '@version': '1',
    'host.name': self._host,  # <-- FIXED: was: 'host': ...
    'level': record.levelname,
    'logsource': self._logsource,
    'message': record.getMessage(),
    'pid': record.process,
    'program': self._program_name,
    'type': self._message_type,
}

Option to ignore logstash async connection failures

Is there a way to keep a program running even if it's unable to connect to logstash?

I'm talking about a scenario where logstash was up and running, then stopped, resulting in a crash in the script because logstash was down.

I'm writing a script where logstash uptime isn't that important, but the script needs to continue running regardless.

Even if the logstash-async logger stops sending log events after disconnecting, that's OK too.

Random Timeouts

I am experiencing timeout exceptions from time to time, like:

19/02/05 13:49:53 - ERROR - An error occurred while sending events: timed out
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/worker.py", line 118, in _fetch_events
    self._fetch_event()
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/worker.py", line 140, in _fetch_event
    self._event = self._queue.get(block=False)
  File "/usr/lib/python3.7/queue.py", line 167, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/worker.py", line 224, in _flush_queued_events
    self._send_events(events)
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/worker.py", line 255, in _send_events
    self._transport.send(events, use_logging=use_logging)
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/transport.py", line 37, in send
    self._send(events)
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/transport.py", line 54, in _send
    self._send_via_socket(event)
  File "/usr/local/lib/python3.7/dist-packages/logstash_async/transport.py", line 136, in _send_via_socket
    self._sock.sendall(data_to_send)
socket.timeout: timed out

Do you have any suggestions?
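If the timeouts stem from a slow link, raising the socket timeout early (before logging starts) may help. This assumes the constants module exposes SOCKET_TIMEOUT among its configuration options; it is the same pattern as the QUEUED_EVENTS_* constants shown in a later issue on this page:

from logstash_async.constants import constants

constants.SOCKET_TIMEOUT = 30.0  # seconds; value is illustrative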

TCP/Beats transport should be queuing

While using this handler with no TCP or Beats listener running, it raises an exception when it cannot connect to the remote endpoint.

With either in-memory or sqlite queuing in place, I'd assume this lib should try to reconnect (using backoff) and deliver the log records asynchronously?

Manually send shutdown event or customize constants

Hi!

First of all, I want to say thank you for your library. It helps me a lot :)

But I have a problem here: sometimes I want to manually push all messages from the database to logstash.

Maybe the ability to customize constants (from constants.py) would help me, I don't know.

I can do a PR if you tell me which solution you think is better.
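Customizing the constants is already possible by mutating the shared constants object before logging starts; one of the Django configurations later on this page uses the same pattern:

from logstash_async.constants import constants

# flush queued events more often and in smaller batches (values illustrative)
constants.QUEUED_EVENTS_FLUSH_INTERVAL = 5
constants.QUEUED_EVENTS_FLUSH_COUNT = 100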

filebeat

After experimenting with this app it dawned on me: this is what filebeat is for. Filebeat lives on the server you're shipping logs FROM and operates on an interval, so it does not interfere with request times, etc.

So my question is: why is it preferable to store messages in a sqlite db and ship them asynchronously, as opposed to using filebeat?

Delay in sending logs to Logstash

It looks like with a lot of log events (about 1 million per day), python-logstash-async doesn't process them in time and keeps them in the queue. This leads to events being pushed to logstash with a long delay (several hours in our case).

Connection refused backtrace suppression / exception handling

What is the recommended way to catch the case when the connection to the Logstash endpoint is refused? I'd really like to be able to catch this and just locally log "CONNECTION REFUSED" rather than display 17 lines of backtrace that aren't going to help me. The connection was refused in this case because I deliberately pointed it at the wrong endpoint to trigger the errors, but the same principle applies whenever it cannot make a connection to the Logstash endpoint.

<27>Sep 23 19:10:04 python3[2805]: An error occurred while sending events: [Errno 111] Connection refused
<27>Sep 23 19:10:04 python3[2805]: Traceback (most recent call last):
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/logstash_async/worker.py", line 118, in _fetch_events
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/logstash_async/worker.py", line 140, in _fetch_event
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/queue.py", line 167, in get
<27>Sep 23 19:10:04 python3[2805]: _queue.Empty
<27>Sep 23 19:10:04 python3[2805]:
<27>Sep 23 19:10:04 python3[2805]: During handling of the above exception, another exception occurred:
<27>Sep 23 19:10:04 python3[2805]:
<27>Sep 23 19:10:04 python3[2805]: Traceback (most recent call last):
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/logstash_async/worker.py", line 214, in _flush_queued_events
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/logstash_async/worker.py", line 260, in _send_events
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/logstash_async/transport.py", line 176, in send
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/pylogbeat.py", line 85, in __enter__
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/pylogbeat.py", line 95, in connect
<27>Sep 23 19:10:04 python3[2805]:   File "/usr/lib/python3.8/site-packages/pylogbeat.py", line 104, in _create_and_connect_socket
<27>Sep 23 19:10:04 python3[2805]: ConnectionRefusedError: [Errno 111] Connection refused

After updating to 2.1.0 the log events are missing the *message* attribute

Without any change to the logger setup, the log events reaching ELK are missing the message attribute.
I just updated with pip.

Logs show the following indication in the Kibana Message field: failed to find message
Checking the index, there is also no message field in the events.

Thanks for checking.

Memory leak when logstash is not available

Hello. My team recently faced a problem with python-logstash-async. When the logstash server went into maintenance, the memory usage of one of our services started to grow rapidly.

It turns out this is because of python-logstash-async's feature of storing failed events in the sqlite database. Even though everything is written to the filesystem, the memory usage is not even comparable to the database size (memory usage is much bigger).

To demonstrate that I created this little Django project: https://github.com/xunto-examples/async_logstash_memory_leak

I run start_logstash.sh and start_no_logstash.sh to start the service, and tests.sh to start sending requests.

Missing @metadata[beat]

This is when using the beats protocol.

I followed the common output pattern in logstash:

output {
  elasticsearch {
    hosts => ["http://192.168.1.5:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}

It seems @metadata is missing, therefore @metadata[beat] is missing. This configuration in logstash results in indexes like "%{[@metadata][beat]}-2020.05.25".

I would expect the index name to be something like my_beat_name-2020.05.25 for a standard output to elasticsearch.

It would be useful to add an example logstash pipeline configuration for sending data to elasticsearch.

Big database prevents processing events

Hi, thanks for the great work on the lib!

If the database gets too big (witnessed with 30k lines of big logs, about 1k chars each), there is a timeout on the connection when sending logs to logstash over TCP. The timeout creates an exception that prevents the logs from being removed from the local database, although logstash still processes the logs it receives before the timeout!

This got us stuck in a loop of the same logs being processed by logstash over and over again! We avoided the problem by adding a 'LIMIT 100' to the query that fetches the logs from the DB; it works because it fits our data and connection. Maybe a cleaner fix would be to try with the whole database and, on a timeout, retry with a smaller portion of the DB, and so on, until the logs can finally be sent.

I don't need a fix, it's to let you and other users know of the issue.

Regards.

ImportError: cannot import name 'ParamSpec' from 'typing_extensions' on Python 3.11

Maybe Python 3.11 is not supported yet?

from logstash_async.transport import HttpTransport
from logstash_async.handler import AsynchronousLogstashHandler

the second line fails with

...
  File "C:\prj\PLX_e1ns\common\src\plato\common\log_tools.py", line 206, in add_logstash_handler_to_logger
    from logstash_async.handler import AsynchronousLogstashHandler
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\logstash_async\handler.py", line 11, in <module>
    from logstash_async.worker import LogProcessingWorker
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\logstash_async\worker.py", line 12, in <module>
    from limits import parse as parse_rate_limit
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\__init__.py", line 5, in <module>
    from . import _version, aio, storage, strategies
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\aio\__init__.py", line 1, in <module>
    from . import storage, strategies
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\aio\storage\__init__.py", line 6, in <module>
    from .base import MovingWindowSupport, Storage
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\aio\storage\base.py", line 5, in <module>
    from limits.storage.registry import StorageRegistry
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\storage\__init__.py", line 12, in <module>
    from .base import MovingWindowSupport, Storage
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\storage\base.py", line 4, in <module>
    from limits.storage.registry import StorageRegistry
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\storage\registry.py", line 5, in <module>
    from limits.typing import Dict, List, Tuple, Union
  File "C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\limits\typing.py", line 14, in <module>
    from typing_extensions import ClassVar, Counter, ParamSpec, Protocol
ImportError: cannot import name 'ParamSpec' from 'typing_extensions' (C:\VirtualEnv\python-3.11.1\.virtualenvs\e1ns_trunk\Lib\site-packages\typing_extensions.py)
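For the record, ParamSpec was added to typing_extensions well before Python 3.11 existed, so this usually points to a stale typing_extensions pinned in the environment rather than a Python 3.11 incompatibility; upgrading it (pip install -U typing-extensions) is the likely fix.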

Crash when logstash server is down (gunicorn with gevent worker)

I wanted to check the behaviour when the logstash server is down, and a problem occurred (I am using the gevent worker with gunicorn):

2023-01-05 22:03:23,859 - [9/140512473585056] - E - LogProcessingWorker - An error occurred while sending events: HTTPSConnectionPool(host='xxx.xxx.com', port=5960): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fcb8e71e920>, 'Connection to xxx.xxx.com timed out. (connect timeout=5.0)'))
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/logstash_async/worker.py", line 118, in _fetch_events
    self._fetch_event()
  File "/usr/local/lib/python3.10/site-packages/logstash_async/worker.py", line 140, in _fetch_event
    self._event = self._queue.get(block=False)
  File "/usr/local/lib/python3.10/queue.py", line 168, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
  File "/usr/local/lib/python3.10/site-packages/gevent/_socketcommon.py", line 613, in connect
    self._wait(self._write_event)
  File "src/gevent/_hub_primitives.py", line 317, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 322, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 313, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 314, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_waiter.py", line 154, in gevent._gevent_c_waiter.Waiter.get
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
TimeoutError: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
    self.sock = conn = self._new_conn()
  File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 179, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7fcb8e71e920>, 'Connection to xxx.xxx.com timed out. (connect timeout=5.0)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxx.xxx.com', port=5960): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fcb8e71e920>, 'Connection to xxx.xxx.com timed out. (connect timeout=5.0)'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/logstash_async/worker.py", line 221, in _flush_queued_events
    self._send_events(events)
  File "/usr/local/lib/python3.10/site-packages/logstash_async/worker.py", line 284, in _send_events
    self._transport.send(events, use_logging=use_logging)
  File "/usr/local/lib/python3.10/site-packages/logstash_async/transport.py", line 364, in send
    response = self.__session.post(
  File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 635, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 553, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='xxx.xxx.com', port=5960): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fcb8e71e920>, 'Connection to xxx.xxx.com timed out. (connect timeout=5.0)'))

this is the gunicorn command line I am using:

gunicorn config.wsgi:application --bind 0.0.0.0:8000 \
  --worker-class=gevent \
  --worker-connections=100 \
  --workers=4 \
  --timeout=60 \
  --graceful-timeout=30 \
  --max-requests=5000 \
  --max-requests-jitter=250 \

the config is the following:

ELK_ENABLED = env.bool('ELK_ENABLED', default=False)
if ELK_ENABLED:
    LOGSTASH_LOGGING_HANDLER_NAME = 'logstash'
    LOGGING_LOGGING_FORMATTER_NAME = 'logstash'

    from logstash_async.constants import constants

    constants.QUEUED_EVENTS_FLUSH_INTERVAL = 15
    constants.QUEUED_EVENTS_FLUSH_COUNT = 500

    LOGGING['formatters'].update(**{
        LOGGING_LOGGING_FORMATTER_NAME: {
            '()': 'logstash_async.formatter.DjangoLogstashFormatter',
            'message_type': env.str('ELK_MESSAGE_TYPE', default='Django'),
            'fqdn': False,  # Fully qualified domain name. Default value: false.
            'extra_prefix': 'extra',
        },
    })

    LOGGING['handlers'].update(**{
        LOGSTASH_LOGGING_HANDLER_NAME: {
            'level': env.str('ELK_LOGSTASH_LOGLEVEL', default='INFO'),
            'use_logging': True,
            'class': 'logstash_async.handler.AsynchronousLogstashHandler',
            'event_ttl': 5 * 60,  # 5 minutes
            'formatter': LOGGING_LOGGING_FORMATTER_NAME,
            'transport': 'logstash_async.transport.HttpTransport',
            'host': env.str('ELK_LOGSTASH_HOST', default='localhost'),
            'port': env.int('ELK_LOGSTASH_PORT', default=5959),
            'username': env.str('ELK_LOGSTASH_USERNAME', default=None),
            'password': env.str('ELK_LOGSTASH_PASSWORD', default=None),
            'max_content_length': 10 * 1024 * 1024,
            'ssl_enable': True,
            'ssl_verify': True,
            'ca_certs': None,
            'certfile': None,
            'keyfile': None,
            'database_path': f'{ROOT_DIR}/logstash.db',
        },
    })

When the server is up again, logs are not sent (I suppose because of the above crash). FYI, other services (celery and other manage.py commands) work just fine.

app versions:

  • python: 3.10.6
  • python-logstash-async==2.5.0
  • gevent==21.8.0
  • gunicorn==20.1.0
  • Django==3.2.16

Not working on django

I tried to integrate the library with a Django backend, and it seems it's not working when set up based on the documentation.

What I tried so far:

  • Verified the EC2 security group accepts connections on port tcp/5000 (logstash)
  • Monitored traffic from the local machine and the logging server to verify network traffic. Using nc I was able to verify test logs being sent to and received by our logging server.
  • Re-created a bare minimum django project to rule out a versioning issue
LOGGING = {
  'formatters': {
      'logstash': {
          '()': 'logstash_async.formatter.DjangoLogstashFormatter',
          'message_type': 'python-logstash',
          'fqdn': False, # Fully qualified domain name. Default value: false.
          'extra_prefix': 'dev', #
          'extra': {
              'environment': 'local'
          }
      },
  },
  'handlers': {
      'logstash': {
          'level': 'DEBUG',
          'class': 'logstash_async.handler.AsynchronousLogstashHandler',
          'transport': 'logstash_async.transport.TcpTransport',
          'host': 'my.remote.host', 
          'port': 5000,
          'ssl_enable': False,
          'ssl_verify': False,
          'database_path': 'memory',
      },
  },
  'loggers': {
      'django.request': {
          'handlers': ['logstash'],
          'level': 'DEBUG',
          'propagate': True,
      },
  },
}

Logstash conf:

input {
stdin {}
    tcp {
        port => 5000
        codec => json
    }
    gelf {
        type => docker
        port => 12201
    }

}

## Add your filters / logstash plugins configuration here

output {
stdout {}
        elasticsearch {
                hosts => "elasticsearch:9200"
        }
}

I also posted the question on Stackoverflow:
https://stackoverflow.com/questions/65712591/python-logstash-and-python-logstash-async-not-working-with-django

Details:

Python Version: 3.7.2
Django version: 2.1.3
Re-created django project:

Python Version: 3.7.9
Django Version: 3.1.5

OS:
elementary OS 5.1.7 Hera
Built on Ubuntu 18.04.4 LTS
Linux 5.4.0-60-generic

Memory cache with timeout

Hey! First, thanks so much for this library.

I would like to implement a backend that does not persist to disk. I'm thinking of doing this by adding a TTL on the messages along with some new config values. If I did this and submitted a PR, would you be interested in accepting it?

[request] multiple hosts support

Hello, and thank you for your well maintained fork.

One benefit of using something like filebeat is that it can fall back to a backup logstash host when the first one defined is unavailable. Do you have any recommendations on how to add this type of feature here, allowing for multiple host/port combos to be specified for the same log handler?

Is this lib fork-safe?

Hello! Can I use this log handler in a multiprocessing program on a UNIX/Linux system?

Thanks for your great work!

Not working with Django

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "logstash": {
            "()": "logstash_async.formatter.DjangoLogstashFormatter",
            "message_type": "django-logstash",
            "fqdn": False,
            "extra_prefix": None,
            "extra": {
                "application": "my-app",
                "project_path": os.getcwd(),
                "environment": "test",
            },
        },
    },
    "handlers": {
        "logstash": {
            "level": "DEBUG",
            "class": "logstash_async.handler.AsynchronousLogstashHandler",
            "formatter": "logstash",
            "transport": "logstash_async.transport.TcpTransport",
            "host": "xxx.xxx.xxx.xxx",
            "port": xxxx,
            "database_path": "{}/logstash.db".format(os.getcwd()),
        },
    },
    "loggers": {
        "django.request": {
            "handlers": ["logstash"],
            "level": "DEBUG",
            "propagate": True,
        },
    },
}

Hello, I tried your library in my Django REST Framework application following the documentation, but nothing is working.

I know that my Elastic and logstash are fully working because I'm using them with a lot of other applications.

Do you have an idea why it's not working?

Drop fields

Hello!
I would like to remove fields like "line", "logsource", "logstash_async_version" from my logs.
Is there some way to do that?
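One possible approach (a sketch, not a documented option): subclass LogstashFormatter and prune the keys after the parent builds the serialized message. This assumes format() may return bytes (see the StreamHandler issue below) and that the transport accepts str as well:

import json

from logstash_async.formatter import LogstashFormatter

class PrunedLogstashFormatter(LogstashFormatter):
    DROP_FIELDS = ('line', 'logsource', 'logstash_async_version')

    def format(self, record):
        data = super().format(record)
        if isinstance(data, bytes):
            data = data.decode('utf-8')
        message = json.loads(data)
        for field in self.DROP_FIELDS:
            message.pop(field, None)  # drop only if present at the top level
        return json.dumps(message)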

LogstashFormatter is not working with StreamHandler

I need to output logstash formatted records to stdout for further processing via external tools. When I do it like:

ch = logging.StreamHandler(stream=sys.stdout)
ch.setFormatter(LogstashFormatter())

I get errors that the write method accepts str and not bytes.

Here is an excerpt from the Formatter docstring:

    Formatter instances are used to convert a LogRecord to text.

It seems that conversion to bytes is already handled in the transport, so I suggest removing the duplicate conversion from the Formatter.
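Until that lands, a hedged workaround is to wrap the formatter so StreamHandler always receives str (assuming, per the above, that the base formatter may return bytes):

import logging
import sys

from logstash_async.formatter import LogstashFormatter

class TextLogstashFormatter(LogstashFormatter):
    def format(self, record):
        formatted = super().format(record)
        if isinstance(formatted, bytes):
            formatted = formatted.decode('utf-8')  # StreamHandler wants str
        return formatted

ch = logging.StreamHandler(stream=sys.stdout)
ch.setFormatter(TextLogstashFormatter())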

Can't send tags at 1st level of document

Hi,
I am sending logs to ELK and want to send some extra info without specifying extra fields in every logging call. I want to send the environment name with all messages but am not able to; is there a central place where I can set it?

Below is my config file.

# loggers
[loggers]
keys = root

[logger_root]
name = python-app
level = INFO
handlers = console,logstash,file
propagate = 1
qualname = root

[handlers]
keys = console,logstash,file

[handler_console]
class = StreamHandler
level = NOTSET
formatter = console
args = (sys.stdout,)

[handler_logstash]
class = logstash_async.handler.AsynchronousLogstashHandler
level = NOTSET
formatter = logstash
args = ('%(host)s', %(port)s, '%(database_path)s', '%(transport)s',%(enable)s)
transport = logstash_async.transport.UdpTransport
host = 10.10.0.19
port = 9563
enable = True
database_path =

[handler_file]
class : logging.handlers.RotatingFileHandler
level : NOTSET
formatter: file
args = ('%(filename)s','a','%(maxBytes)s','%(backupCount)s',None,0)
filename: /var/log/es-streaming-transporter/transporter.log
maxBytes: 1024
backupCount: 3

[formatters]
keys = console,logstash,file

[formatter_console]
format = [%(asctime)s][%(module)s][%(levelname)s] %(message)s

[formatter_file]
format = [%(asctime)s][%(module)s][%(levelname)s] %(message)s

[formatter_logstash]
class = logstash_async.formatter.LogstashFormatter

format = python-logstash
style = True
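For what it's worth, the LogstashFormatter constructor accepts a static extra dict that is merged into every event (the same mechanism used in the Django examples elsewhere on this page), which looks like the central place being asked for; setting extra_prefix=None places those fields at the document root:

from logstash_async.formatter import LogstashFormatter

# every event emitted through this formatter carries the environment name
formatter = LogstashFormatter(extra_prefix=None,
                              extra={'environment': 'production'})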

message_type not being set

I have a Django server. I was using python-logstash with UDP; now I have changed my script to use this package with SSL. It's working pretty well, but the message_type is not being set.

I tried everything. I can see all the values being set in the config, but in Logstash I still see python-logstash as the type.

Anything I can do to solve this?

Events stored in the DB are sent before Logstash is ready

If Logstash crashes, python-logstash-async works perfectly: the events get stored in the sqlite database. However, when the Logstash container is restarted, the connection is established and the events are sent and deleted from the database instantly, but Logstash is not yet ready to handle them. All the stored messages are then lost.

My workaround will be to add health checks before allowing traffic into the container.

ConnectionRefusedError: [Errno 111] Connection refused

Hi, I'm finally getting this working; it was a hard road, but I have it.

Unfortunately, I have a problem with it:

xprende.blog     | Traceback (most recent call last):
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/worker.py", line 118, in _fetch_events
xprende.blog     |     self._fetch_event()
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/worker.py", line 140, in _fetch_event
xprende.blog     |     self._event = self._queue.get(block=False)
xprende.blog     |   File "/usr/local/lib/python3.6/queue.py", line 161, in get
xprende.blog     |     raise Empty
xprende.blog     | queue.Empty
xprende.blog     | 
xprende.blog     | During handling of the above exception, another exception occurred:
xprende.blog     | 
xprende.blog     | Traceback (most recent call last):
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/worker.py", line 223, in _flush_queued_events
xprende.blog     |     self._send_events(events)
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/worker.py", line 253, in _send_events
xprende.blog     |     self._transport.send(events)
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/transport.py", line 28, in send
xprende.blog     |     self._create_socket()
xprende.blog     |   File "/usr/local/lib/python3.6/site-packages/logstash_async/transport.py", line 94, in _create_socket
xprende.blog     |     sock.connect((self._host, self._port))
xprende.blog     | ConnectionRefusedError: [Errno 111] Connection refused

As you can see, the socket can't connect to logstash, but I can't understand why.

Can't create several handlers with different databases

Hi, thanks for the great work on the lib!

The code below, which opens two different databases, won't work.

I don't need a fix, it's to let you and other users know of the issue.

import logging
import sys
from logstash_async.handler import AsynchronousLogstashHandler

test_logger1 = logging.getLogger('python-logstash-logger')
test_logger1.setLevel(logging.INFO)
test_logger1.addHandler(AsynchronousLogstashHandler(
    'nowhere.test.com',
    5002,
    database_path='handler_1.sqlite',
))

test_logger1.info('logger1')


test_logger2 = logging.getLogger('python-logstash-logger2')
test_logger2.setLevel(logging.INFO)
test_logger2.addHandler(AsynchronousLogstashHandler(
    'nowhere.test.com',
    5000,
    database_path='handler_2.sqlite',
))

test_logger2.info('logger2')  # goes into first database
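This is the same root cause as in "Can't create two handlers with different configurations" above: _worker_thread is a class-level attribute, so all handler instances share a single worker and therefore a single database.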
