
structlog's Issues

tmp_bind cleanup in the face of exceptions

The tmp_bind context manager clean-up logic (the code after the yield) isn't protected by a finally clause, so it doesn't get executed when an exception is raised.

>>> from structlog import wrap_logger, PrintLogger
>>> from structlog.threadlocal import tmp_bind, wrap_dict
>>> logger = wrap_logger(PrintLogger(),  context_class=wrap_dict(dict))
>>> try:
...  with tmp_bind(logger, x=5) as tmp_logger:
...   logger = logger.bind(y=3)
...   tmp_logger.msg('event')
...   raise ValueError()
... except:
...  pass
...
x=5 event='event' y=3
>>> logger.msg('event')
x=5 event='event' y=3

I would say the expected behavior is for the last logger to just emit event='event'.

If you agree, it's a simple fix that I could contribute.
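For reference, the fix could look roughly like this (a sketch, not the actual structlog source; it assumes the clean-up works by restoring a saved copy of the context):

    from contextlib import contextmanager

    @contextmanager
    def tmp_bind(logger, **tmp_values):
        saved = dict(logger._context)          # hypothetical: whatever state tmp_bind saves today
        try:
            yield logger.bind(**tmp_values)
        finally:
            # restore the previous context even if the with-block raised
            logger._context.clear()
            logger._context.update(saved)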

Add more stdlib specific processors and recommend their use

Following the discussion in #40, the processors that add the logger name and log level to the event should be added to structlog itself instead of living as mere suggestions in the docs. Their use should also be recommended in the configuration example in the docs, and the recently added positional-arguments processor from #19 and #34 should make it into the suggested configuration as well.

What needs to be done is:

  • Move the add_logger_name() and add_log_level() processors from the suggested documentation changes in #40 into the structlog.stdlib module.
  • Add tests for those processors
  • Update the recommended configuration in the docs to include these two processors as well as the positional arguments formatter (a sketch of such a configuration follows this list)
  • Rework the documentation changes in #40 so that they reflect the changes above
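A sketch of where the recommended configuration could end up once the processors have moved (illustrative only, not current docs or code):

    import structlog

    structlog.configure(
        processors=[
            structlog.stdlib.filter_by_level,
            structlog.stdlib.add_logger_name,                 # proposed: logger name on every event
            structlog.stdlib.add_log_level,                   # proposed: log level on every event
            structlog.stdlib.PositionalArgumentsFormatter(),  # from #19 / #34
            structlog.processors.format_exc_info,
            structlog.processors.JSONRenderer(),
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
    )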

SyntaxHighlighter processor

It would be really awesome if I could add

structlog.processors.SyntaxHighlighter(language_mapping={'body': 'json', 'xml': 'xml'}, file=sys.stdout)

which would try to pop body and xml, and when they exist write a syntax highlighted version of them to file.
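A rough sketch of what such a processor could look like, using Pygments (the name and behaviour are just my proposal; nothing here exists in structlog):

    import sys

    import pygments
    from pygments.formatters import TerminalFormatter
    from pygments.lexers import get_lexer_by_name


    class SyntaxHighlighter(object):
        def __init__(self, language_mapping, file=sys.stdout):
            self._mapping = language_mapping    # e.g. {'body': 'json', 'xml': 'xml'}
            self._file = file

        def __call__(self, logger, method_name, event_dict):
            for key, language in self._mapping.items():
                value = event_dict.pop(key, None)          # try to pop the key
                if value is not None:
                    self._file.write(pygments.highlight(   # write a highlighted version to file
                        value, get_lexer_by_name(language), TerminalFormatter()
                    ))
            return event_dict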

Exception when _process_event returns dict

This could be a documentation issue, or me not understanding how to use structlog, but here goes:

In BoundLoggerBase (_base.py), _process_event can return either the pair (event_dict,), {} or just event_dict. However, in the same class, _proxy_to_logger always calls _process_event like so:

            args, kw = self._process_event(method_name, event, event_kw)

I have a custom logger that needs access to the raw event_dict (to store it in a database), so I don't have a last processor to turn it into a string.

I have various workarounds I can think of (e.g. serialize the event_dict, and deserialize in the logger as needed), but I was wondering if this was a bug.
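For what it's worth, one workaround that builds on the tuple form mentioned above: keep a trivial last processor that returns (event_dict,), {}, so the wrapped logger receives the raw dict as a positional argument (a sketch; the DatabaseLogger is hypothetical):

    def pass_event_dict(logger, method_name, event_dict):
        # last processor: hand the raw dict to the wrapped logger as a positional argument
        return (event_dict,), {}


    class DatabaseLogger(object):                 # hypothetical custom logger
        def msg(self, event_dict):
            ...                                   # store event_dict in the database
        info = debug = warning = error = msg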

Allow positional arguments without configuring stdlib integration

I'm aware of #70, and totally understand the arguments against it. This is not a request for that.

In our project, we've adopted structlog with the stdlib configuration, as a drop-in replacement, to allow us to gradually move to a more structured approach. However, one pain point is that during unit testing, if we haven't configured structlog, any logging calls that have positional arguments fail because BoundLoggerBase._proxy_to_logger does not accept positional arguments.

An implementation that didn't throw an exception in this case would be useful (for example, the approach used in stdlib.BoundLogger._proxy_to_logger of putting them into a key called 'positional_args' would work great), since we could write unit tests without having to think about logging config.

Happy to raise a PR for this, but wanted to check that the approach would be acceptable before starting work.

Using class name instead of module name results in no loggings

When I do
self.logger = wrap_logger(logger=logging.getLogger(__name__)) it works just fine. However, with
self.logger = wrap_logger(logger=logging.getLogger(self.__class__.__name__)) no logs appear in the file handler!
Is there any reason for this behaviour, or am I doing something wrong?
Thanks!

AttributeError: 'PrintLogger' object has no attribute 'isEnabledFor'

Trying to set up structlog after it has already logged a message can put it into a strange state where logging a message causes exceptions.

import structlog

# First log call happens before any configuration, using the defaults.
log = structlog.get_logger()
log.msg('greeted', whom='world', more_than_a_string=[1, 2, 3])

from structlog import get_logger, configure

# Now configure stdlib integration.
configure(
    logger_factory=structlog.stdlib.LoggerFactory(),
    wrapper_class=structlog.stdlib.BoundLogger,
    # Setup logger output format.
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.processors.format_exc_info,
    ])

# The wrapped logger is still a PrintLogger (see the traceback below),
# but filter_by_level expects a stdlib logger.
log.warn("Hey, this isn't cool")

Output:

whom='world' more_than_a_string=[1, 2, 3] event='greeted'
Traceback (most recent call last):
  File "main.py", line 15, in <module>
    log.warn("Hey, this isn't cool")
  File "/home/ethan/Jobs/Mozilla/tmp/structlog-bug/.venv/lib/python2.7/site-packages/structlog/stdlib.py", line 73, in warning
    return self._proxy_to_logger('warning', event, *args, **kw)
  File "/home/ethan/Jobs/Mozilla/tmp/structlog-bug/.venv/lib/python2.7/site-packages/structlog/stdlib.py", line 119, in _proxy_to_logger
    **event_kw)
  File "/home/ethan/Jobs/Mozilla/tmp/structlog-bug/.venv/lib/python2.7/site-packages/structlog/_base.py", line 176, in _proxy_to_logger
    args, kw = self._process_event(method_name, event, event_kw)
  File "/home/ethan/Jobs/Mozilla/tmp/structlog-bug/.venv/lib/python2.7/site-packages/structlog/_base.py", line 136, in _process_event
    event_dict = proc(self._logger, method_name, event_dict)
  File "/home/ethan/Jobs/Mozilla/tmp/structlog-bug/.venv/lib/python2.7/site-packages/structlog/stdlib.py", line 327, in filter_by_level
    if logger.isEnabledFor(_NAME_TO_LEVEL[name]):
AttributeError: 'PrintLogger' object has no attribute 'isEnabledFor'

This makes it difficult to log messages during startup, because logging isn't configured yet, and if we log messages before we configure, later log messages will just crash and burn with this exception.

Allow direct configuration of loggers for testing purposes

There should be a way to configure loggers directly in test environments for finer introspection and assertions. That would, for example, allow asserting on the log output of certain modules or classes instead of wading through global state and filtering out unrelated entries.
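To illustrate, something along these lines is possible today with a throwaway capturing processor, but it has to be wired up by hand for every logger (a sketch, not existing API):

    import structlog

    captured = []

    def capture(logger, method_name, event_dict):
        captured.append(event_dict)       # keep the event around for assertions
        raise structlog.DropEvent         # and don't emit anything

    logger = structlog.wrap_logger(structlog.ReturnLogger(), processors=[capture])
    logger.info("user.created", user="alice")
    assert captured[0]["event"] == "user.created"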

structlog's exceptions are hard to use out of the box with files

This, I guess, can be argued against. But basically, we run logging to files in production, and it's impossible to read stack traces in repr() format. This is easily solved by a processor, but care needs to be taken (especially given #16) to actually print the stack trace last, and with newlines expanded.

Either way, since this is not something that breaks existing functionality, I am not 100% sure it should live in structlog. But I'm sure it'd be useful for people (like us) who are looking to migrate but have a large code base and still need to use log files (because log readers are a bit meh sometimes, and everyone knows how to use less and grep :D).
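For concreteness, the kind of processor/renderer I mean looks roughly like this (a sketch; it assumes format_exc_info has already turned exc_info into the usual "exception" string):

    import structlog

    class FileFriendlyRenderer(object):
        """Render key/value pairs, then append the traceback verbatim on its own lines."""

        def __init__(self):
            self._kv = structlog.processors.KeyValueRenderer()

        def __call__(self, logger, method_name, event_dict):
            traceback_text = event_dict.pop("exception", None)   # pull the traceback out first
            line = self._kv(logger, method_name, event_dict)
            if traceback_text is not None:
                line += "\n" + traceback_text                     # stack trace last, newlines expanded
            return line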

PositionalArgumentsFormatter does not remove empty list of args

PositionalArgumentsFormatter with remove_positional_args=True is supposed to remove the positional_args key from the event dict. However, this only happens when positional_args is truthy, which is not the case for an empty list. _proxy_to_logger takes care not to add an empty list to begin with, but you can still hit this unexpected behaviour if the key is set somewhere else; in my case, by a stdlib logging handler that funnels messages into structlog.

Add public method to access _processors in BoundLogger

My use case: I have one special processor which forwards events to listeners, independent of the log level. In order to check whether the logger is active, I need to access the first processor in the list and check if it has listeners. Currently I have to access the private member _processors.

I think a Pythonic way to solve this would be to simply make the processor list public.

Flask example broken? No handlers could be found for logger "some_module"

Following the example shown here: http://www.structlog.org/en/stable/examples.html#flask-and-thread-local-data

I get the following output when requesting http://127.0.0.1:5000/login:

$ python main.py 
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
No handlers could be found for logger "some_module"
127.0.0.1 - - [03/Aug/2016 22:32:37] "GET /login HTTP/1.1" 200 -

I expect to see some logging data like event="user logged in".


And the example code isn't runnable without slight modifications:

  1. Removing the leading . so the import reads from some_module import some_function
  2. Making def login() return a string response.

So here's my actual files:

main.py:

import uuid

import flask
import structlog

from some_module import some_function


logger = structlog.get_logger()
app = flask.Flask(__name__)


@app.route('/login', methods=['POST', 'GET'])
def some_route():
    log = logger.new(
        request_id=str(uuid.uuid4()),
    )
    # do something
    # ...
    log.info('user logged in', user='test-user')
    # gives you:
    # event='user logged in' request_id='ffcdc44f-b952-4b5f-95e6-0f1f3a9ee5fd' user='test-user'
    # ...
    some_function()
    # ...
    return 'hey'

if __name__ == "__main__":
    structlog.configure(
        processors=[
            structlog.processors.KeyValueRenderer(
                key_order=['event', 'request_id'],
            ),
        ],
        context_class=structlog.threadlocal.wrap_dict(dict),
        logger_factory=structlog.stdlib.LoggerFactory(),
    )
    app.run()

some_module.py:

from structlog import get_logger

logger = get_logger()


def some_function():
    # later then:
    logger.error('user did something', something='shot_in_foot')
    # gives you:
    # event='user did something 'request_id='ffcdc44f-b952-4b5f-95e6-0f1f3a9ee5fd' something='shot_in_foot'

I'm running with Python 2.7

$ pip freeze
You are using pip version 7.1.0, however version 8.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
click==6.6
Flask==0.11.1
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
six==1.10.0
structlog==16.1.0
Werkzeug==0.11.10
wheel==0.24.0

Is it just me or is it broken for everyone?

Wrapping django loggers

I use structlog to wrap my loggers in Django apps, but is there any way to wrap the logs coming from the django.* loggers?

Make logging message key 'event' configurable

Hello,

Nice project. Really loving it :)

Our product works with objects that represent touristic events, so I would like to use the
'event' key for the serialization of these objects. But the 'event' key is currently reserved for the logging message. Would you agree to make this

if event is not None:
event_dict["event"] = event

hard-coded key configurable, e.g. structlog.configure(event_message_key="message")?

Seems pretty easy to implement. I could work on a pull request if you like the idea.

Thank you!

Pure JSON output w/o StdLib

@hynek In #68, you mentioned that it's possible to get pure JSON output by using the logger provided by the logging module of Python's standard library. I got this working but noticed that when I set the log level to DEBUG, I also see log output from 3rd-party libraries I'm using. Is there a way to apply transformations or filter these logs through structlog? Better yet, is there a way to get pure JSON output without using the stdlib logger, so that I don't have to change the minimum log level there and the packages I use don't start logging?
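For reference, one setup that appears to sidestep the stdlib entirely: let structlog render and write the JSON itself via PrintLoggerFactory, so third-party libraries keep logging through the (separately configured) stdlib and never pass through these processors (a sketch):

    import sys

    import structlog

    structlog.configure(
        processors=[
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.format_exc_info,
            structlog.processors.JSONRenderer(),
        ],
        logger_factory=structlog.PrintLoggerFactory(file=sys.stdout),
    )

    log = structlog.get_logger()
    log.info("request.handled", status=200)   # -> a single pure JSON line on stdout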

LoggerFactory frame name guessing is spoiled

Under the following circumstances, name guessing from frames doesn't work:

  • have structlog configured with the stdlib logger and threadlocals
  • create a module scan.py where structlog is used via logger = get_logger(); logger.debug(), but which also issues a venusian.scan()
  • in a module foo.py that venusian picks up, have a logger = get_logger() call at module level

You'll end up with the frame guessing picking a frame from the venusian call, and the logger in foo.py will have that wrong logger cached. Printing logger._logger.name in foo.py confirms that.

format_exc_info should add structured data about the exception to the event_dict

The stdlib's Logger class has a .exception() method that is basically .error(..., exc_info=True). When porting an existing application that used logger.exception("oops something went wrong") in many exception handlers, all this code had to be updated to logger.error("oops...", exc_info=True).

Since the whole point of structlog.stdlib.BoundLogger is that it emulates the stdlib's Logger API, would you consider adding an .exception() method?
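For illustration, the requested method could be as small as this (a sketch, not current structlog code):

    import structlog

    class BoundLoggerWithException(structlog.stdlib.BoundLogger):
        def exception(self, event=None, *args, **kw):
            kw.setdefault("exc_info", True)        # same shortcut stdlib's Logger.exception() offers
            return self.error(event, *args, **kw)

It would then be used via wrapper_class=BoundLoggerWithException in structlog.configure().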

Greenlet dict wrapper behaves surprisingly on Travis

The addition of unrelated code led to a test failure: https://travis-ci.org/hynek/structlog/builds/11921511

=================================== FAILURES ===================================
_______________ TestThreadLocalDict.test_non_dunder_proxy_works ________________
self = <tests.test_threadlocal.TestThreadLocalDict object at 0x23a7c50>
D = <class 'structlog.threadlocal.WrappedDict-f561929d-5fca-4b25-afcf-956a40968ad0'>
    def test_non_dunder_proxy_works(self, D):
        d = D({'a': 42})
>       assert 1 == len(d)
E       assert 1 == 2
E        +  where 2 = len(<WrappedDict-f561929d-5fca-4b25-afcf-956a40968ad0({'a': 42, 'b': 23})>)
tests/test_threadlocal.py:131: AssertionError

It appears that the greenlet wrapper does not create distinct dict classes on each call but just one that shares the same data store within one greenlet.

That shouldn’t be a problem in practice, but it’s contrary to what the docs say.

I wasn’t able to trigger this bug locally. The test is commented out as of 1995a8b.

/cc @hpk42

Cannot print UTF-8 (e.g. Chinese)

What I wanted

  • I want to print UTF-8 text like '哈哈哈' in my code

What I did

  • I set up the configuration in my main()
    structlog.configure(
        processors=[
            structlog.processors.UnicodeDecoder(),
            structlog.processors.KeyValueRenderer(
                key_order=['event', 'request_id'],
            ),
        ],
        context_class=structlog.threadlocal.wrap_dict(dict),
        logger_factory=structlog.stdlib.LoggerFactory(),
    )
  • and then log:
log = structlog.get_logger(request_id=shortuuid.uuid().upper())
log.info('哈哈哈', api='/api_card_save_kyc_data/')

What I get

  • The output shows the escape codes, not the text. The code u'\u54c8\u54c8\u54c8' below is '哈哈哈' in Chinese:
event=u'\u54c8\u54c8\u54c8' request_id=u'HDRWLXJA9NWHQR3NWDSMLF' 

What I found

  • I debugged it and traced the problem to the __call__ method of the class KeyValueRenderer in structlog/processors.py
  • which “cooks” the return value like this:
        return ' '.join(k + '=' + repr(v)
                        for k, v in self._ordered_items(event_dict))
  • That repr() turns v from '哈哈哈' into u'\u54c8\u54c8\u54c8'
  • After I removed the repr() I get the output below
event=哈哈哈 request_id=W3YHQZUNQDG4XDA2CTYYRE 
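Packaged up, that change could look roughly like this (a sketch of a drop-in replacement for KeyValueRenderer, not structlog API):

    class ReadableKeyValueRenderer(object):
        """Like KeyValueRenderer, but formats values with %s instead of repr()."""

        def __call__(self, logger, method_name, event_dict):
            return u" ".join(
                u"%s=%s" % (key, value)
                for key, value in event_dict.items()
            )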

Finally

  • I have tried many ways to solve this problem without changing the original code, but failed.
  • Maybe there is a better way to handle it that I haven't found.
  • If so, please kindly tell me and I will appreciate it very much.
  • If not, could you change the code above so that UTF-8 is printed properly?
  • Thanks very much

@hynek

docs: Logging of Exceptions

Please provide a recipe in the docs for how to handle logging of exceptions.

I guess this use case is widespread and most users of structlog handle it in a custom way.

I could solve it my way, but a common recipe or even API would help other new users, too.
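For illustration, a minimal recipe built on the documented format_exc_info processor might look like this (a sketch, not taken from the docs):

    import structlog

    structlog.configure(
        processors=[
            structlog.processors.format_exc_info,    # turns exc_info into a traceback string
            structlog.processors.KeyValueRenderer(),
        ],
    )

    log = structlog.get_logger()
    try:
        1 / 0
    except ZeroDivisionError:
        log.error("division failed", exc_info=True)  # traceback ends up under the "exception" key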

Thank you for this module!

Get pure JSON output?

import structlog
import logging
import sys

def levelMapper(log, method, message):
    if (method == 'critical'):
        message['level'] = 'fatal'
    return message

def create(name, level = 'INFO'):
    # set minimum log level
    logging.basicConfig(level=getattr(logging, level))

    structlog.configure(
        processors=[
            structlog.stdlib.filter_by_level,
            structlog.stdlib.add_log_level,
            levelMapper,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.processors.JSONRenderer()
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )

    log = structlog.get_logger()
    log = log.bind(program=name)
    log.debug('hi', { 'a': 'b' })

create('test', 'DEBUG')

results in

DEBUG:__main__:{"level": "debug", "program": "test", "event": "hi"}

How do I get pure JSON output? The DEBUG:__main__ is throwing off my parsing atm.
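The DEBUG:__main__: prefix looks like it comes from logging.basicConfig()'s default format ("%(levelname)s:%(name)s:%(message)s") rather than from structlog; restricting the stdlib formatter to the bare message should leave only the JSON (a sketch against the snippet above):

    def create(name, level='INFO'):
        # only change: emit the bare message so the JSON line isn't prefixed
        logging.basicConfig(
            level=getattr(logging, level),
            format="%(message)s",
            stream=sys.stdout,
        )
        # ... rest of the structlog configuration unchanged ...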

More examples and stdlib friendly documentation

Hi,

First of all, thank you very much for the effort you have put together in-order to create this module.

I really would like to start using it for my logging tasks as it looks very promising in terms of the features provided, the scalability options, and simply the fact that it is compatible with the stdlib logging module in Python.

As I said, I am keen to start using structlog. However, I have been reading the documentation all day long and I feel a bit lost. That is probably normal with any new API, and maybe the examples are a bit Flask and Django oriented. It would be nice to have some simple examples oriented towards CLI applications.

For example, I got through the first bit, understood the context binding, etc., and managed to come up with simple log messages such as:

016-11-03 15:31.41 greeted                        more_than_a_string=[1, 2, 3] whom='world'

or

016-11-03 15:31.41 something                      foo='apple' peer_ip='1.2.3.4' user_agent='banana'

But then got stuck at the following questions:

  1. How do we format the messages? Where is this gap between the message and the contextual information coming from?

  2. How do we change the logging level? With the following code, I am only getting the ERROR, WARNING and CRITICAL levels, but not DEBUG and INFO.

    import structlog
    import logging

    log = structlog.get_logger()
    logging.basicConfig(level=logging.DEBUG)

    structlog.configure(
        processors=[
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M.%S"),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.dev.ConsoleRenderer()  # <===
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )

    # Test Levels
    log.info('This is just to let you know.')
    log.debug('This is the expected value.')
    log.error('The operation failed.')
    log.fatal('This is the end of everything!')
    log.warn('You have been warned.')
    log.critical('That is NOT good.')

With the native logging module, the following messages are a good example of what I usually try to get into my logs:

[2016-11-02 22:27:36,695] <main>        {myApp :<module>    :29} WARNING : I have to warn you about this!
[2016-11-02 22:27:36,695] <main.entity> {entity:__init__    :20} DEBUG   : Class instance call.
[2016-11-02 22:27:36,696] <main.entity> {entity:get_name    :28} INFO    : PET name is Molly
[2016-11-02 22:27:36,696] <main.entity> {entity:third_party :31} INFO    : Initial message from: __THIRD_PARTY!

Could you please help me understand the structlog way of achieving such a result? The Development section of the docs seems to address these needs, but as I said, it might be a good idea to include more examples to help us understand the API better.

Thanks.

stack_info is not processed for Python 3

Just like exc_info is processed as a special argument for log messages, so should stack_info be. stack_info should evaluate to the traceback, even without an exception (http://docs.python.org/3.3/library/logging.html#logging.Logger.debug).

An interesting question is whether this should also go in as a feature for Python 2 (which doesn't support it natively), and I'd say yes, because we already have a hackish way of doing it currently, and it won't break backwards compatibility.
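Something like the following captures the intent (a sketch, not how structlog would necessarily implement it; note that the captured stack includes the processor's own frames, which a real implementation would trim):

    import traceback

    def render_stack_info(logger, method_name, event_dict):
        # pop stack_info and attach the current call stack -- no exception required
        if event_dict.pop("stack_info", None):
            event_dict["stack"] = "".join(traceback.format_stack())
        return event_dict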

Logging levels for an application

Hi.

I couldn't find a simple example of setting a logging level properly for the whole application.
This is something which turned out to be non-trivial.

If I use wrap_logger at the top level, the loggers for other modules still use PrintLogger, and the logger that has the level defined isn't picked up by submodules.
The only way I could make it work is to pass a factory via structlog.configure().

Is there a canonical/easy way out?

Extract renderers into a separate API from processors?

First, structlog is awesome :)

Integrating structlog into a Django project, I wrote a get_logger function that formats logs with KeyValueRenderer in development and with JSONRenderer plus some extra stuff for Logstash in production.

def get_logger(name):
    processors = [format_exc_info]

    if settings.DEBUG:
        processors.append(KeyValueRenderer())
    else:
        processors.extend([logstash_processor, JSONRenderer()])

    return wrap_logger(logging.getLogger(name), processors=processors)

However, at first I didn't add the KeyValueRenderer in dev; I only saw it when looking at the default processors. Without it, I got exceptions when logging because the behaviour of BoundLoggerBase._process_event changes depending on the type returned by the last processor:

https://github.com/hynek/structlog/blob/master/structlog/_base.py#L141-L144

With only [format_exc_info] the processor returns a dict, making _process_event() return the same dict and breaking this call (unpacking error):

https://github.com/hynek/structlog/blob/master/structlog/_base.py#L170

And then after reading the docs I found this paragraph which highlights the importance of the last processor :)

In the end it's not a bug in structlog but rather me failing to read and understand.

However… do you think the processors API could be split into processors (which always process and return dicts) and a "renderer" that does something with the final structure? My get_logger would become:

def get_logger(name):
    processors = [format_exc_info]

    if settings.DEBUG:
        renderer = KeyValueRenderer()
    else:
        processors.append(logstash_processor)
        renderer = JSONRenderer()

    return wrap_logger(logging.getLogger(name), processors=processors, renderer=renderer)

I'd find the API nicer with such a separation :)

Timestamper should allow providing our own timestamp

TL;DR: it would be convenient in some rare cases to allow providing our own timestamp. Right now, I need to reimplement structlog.processors.TimeStamper to achieve this.

I'm writing a toy microservices PaaS (purely for educational purposes) and I'm currently working on request tracing for correlation of events across multiple services. The idea is that each service logs the request URL path, HTTP response status, processing time, and request ID for each HTTP request so I can understand where errors are coming from (nothing new here).

Now that I've got something working, I noticed that events show up in a rather confusing order: since I log at request completion time to get the response status code, the timestamp of the request causes an order which looks time ordered across requests, but the order is reversed for sub-requests, which complete before the enclosing/source request (the one at the edge of the network).

It's trivial for me to record the arrival time of the request, so I'd like to use that and inject it in the event data I pass to logger.info('http.access', path=..., status=..., processing_time=..., timestamp=...).

From the looks of it, adding support for this is mostly a matter of replacing a few event_dict[key] = ... statements with event_dict.setdefault(key, ...), plus some docs to explain that the caller is then responsible for formatting the timestamps themselves (OK for me, I always log ISO anyway).
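Concretely, the behaviour I'm after amounts to this (a simplified sketch, not the real TimeStamper):

    import datetime

    def timestamper(logger, method_name, event_dict):
        # only stamp the event if the caller didn't already provide a timestamp
        event_dict.setdefault("timestamp", datetime.datetime.utcnow().isoformat())
        return event_dict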

Would you be open to such a change? I'm willing to put in a PR if you're OK with this proposal :-)

`JSONRenderer(default=custom_json_fallback_handler)` results in `TypeError: dumps() got multiple values for keyword argument 'default'`

I use the JSONRenderer() default argument to handle rendering datetimes in ISO format. With 9dd4d65, JSONRenderer() provides a default keyword argument of its own, and providing mine results in the TypeError described in the title.

This could be fixed by having JSONRenderer mix the _json_fallback_handler default into self._dumps_kw.

Is there another way I should be doing this? I already use the TimeStamper processor, but I have other datetime objects that I would prefer not to render as repr. Maybe structlog could provide a functools.singledispatch callable as an alternative to __structlog__ methods?

Relicense under dual Apache2/MIT

structlog has historically been licensed under the Apache 2 license because it offers protection against certain patent-related jerk moves.

Unfortunately that also limits structlog’s usefulness because it’s incompatible with GPLv2 (yes I don’t like the GPL but I also don’t like “eye for an eye”).

Over at cryptography we’ve consulted an actual IP lawyer, and turns out a dual license fixes this: pyca/cryptography#1209

Therefore I’d like to re-license the project under a dual Apache 2/MIT license, which gives me the best of both worlds.

For that, I have to ask my contributors for permission. According to GitHub, those are the following:

Please be so kind and reply with something along the lines of “Permission granted.” to this ticket and I’ll tick you off (this sentence has been edited; @radix is in fact capable of reading :)).

Thank you!

P.S. Changes by those who don’t agree will be re-written. So your veto won’t stop the change; sorry. :)

structlog.processors.TimeStamper doesn't seem to work as documented

>>> logger = structlog.wrap_logger(logger, processors=[structlog.processors.TimeStamper()])
>>> logger.info("zxcv")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.4/site-packages/structlog/_base.py", line 176, in _proxy_to_logger
    return getattr(self._logger, method_name)(*args, **kw)
TypeError: msg() got an unexpected keyword argument 'timestamp'

Judging by the docs that should work. Not sure what is wrong.

Running on python3, structlog 15.1.0
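For what it's worth, the default processor chain ends with a renderer; when overriding processors=, keeping a renderer as the last entry appears to be what makes the wrapped logger receive a string instead of stray keyword arguments (a sketch):

    import structlog

    logger = structlog.wrap_logger(
        structlog.PrintLogger(),
        processors=[
            structlog.processors.TimeStamper(),
            structlog.processors.KeyValueRenderer(),   # render to a string as the last step
        ],
    )
    logger.info("zxcv")   # -> timestamp=... event='zxcv'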
