
loguru's Introduction



Loguru is a library which aims to bring enjoyable logging in Python.

Did you ever feel lazy about configuring a logger and used print() instead?... I did, yet logging is fundamental to every application and eases the process of debugging. Using Loguru you have no excuse not to use logging from the start; it's as simple as from loguru import logger.

This library is also intended to make Python logging less painful by adding a bunch of useful functionalities that address shortcomings of the standard loggers. Logging in your application should be second nature; Loguru tries to make it both pleasant and powerful.

Installation

pip install loguru

Features

Take the tour

Ready to use out of the box without boilerplate

The main concept of Loguru is that there is one and only one logger.

For convenience, it is pre-configured and outputs to stderr to begin with (but that's entirely configurable).

from loguru import logger

logger.debug("That's it, beautiful and simple logging!")

The logger is just an interface which dispatches log messages to configured handlers. Simple, right?

No Handler, no Formatter, no Filter: one function to rule them all

How to add a handler? How to set up logs formatting? How to filter messages? How to set level?

One answer: the add() function.

import sys

logger.add(sys.stderr, format="{time} {level} {message}", filter="my_module", level="INFO")

This function should be used to register sinks which are responsible for managing log messages contextualized with a record dict. A sink can take many forms: a simple function, a string path, a file-like object, a coroutine function or a built-in Handler.
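
For instance, a plain function works as a sink: it receives the formatted message, and the raw record dict is accessible through its .record attribute. A minimal sketch (my_sink is a hypothetical name):

from loguru import logger

def my_sink(message):
    record = message.record  # The record dict: time, level, message, extra, etc.
    print(f"{record['level'].name}: {record['message']}")

logger.add(my_sink)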

Note that you may also remove() a previously added handler by using the identifier returned while adding it. This is particularly useful if you want to supersede the default stderr handler: just call logger.remove() to make a fresh start.
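
For example, a minimal sketch superseding the default handler and later removing the new one by its identifier:

import sys
from loguru import logger

logger.remove()  # Remove the pre-configured stderr handler
handler_id = logger.add(sys.stderr, level="WARNING")
logger.remove(handler_id)  # Remove that specific handler by its id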

Easier file logging with rotation / retention / compression

If you want to send logged messages to a file, you just have to use a string path as the sink. It can be automatically timed too for convenience:

logger.add("file_{time}.log")

It is also easily configurable if you need a rotating logger, want to remove older logs, or wish to compress your files on closure.

logger.add("file_1.log", rotation="500 MB")    # Automatically rotate too big file
logger.add("file_2.log", rotation="12:00")     # New file is created each day at noon
logger.add("file_3.log", rotation="1 week")    # Once the file is too old, it's rotated

logger.add("file_X.log", retention="10 days")  # Cleanup after some time

logger.add("file_Y.log", compression="zip")    # Save some loved space

Modern string formatting using braces style

Loguru favors the much more elegant and powerful {} formatting over %; logging functions are actually equivalent to str.format().

logger.info("If you're using Python {}, prefer {feature} of course!", 3.6, feature="f-strings")

Exceptions catching within threads or main

Have you ever seen your program crashing unexpectedly without seeing anything in the log file? Did you ever notice that exceptions occurring in threads were not logged? This can be solved using the catch() decorator / context manager which ensures that any error is correctly propagated to the logger.

@logger.catch
def my_function(x, y, z):
    # An error? It's caught anyway!
    return 1 / (x + y + z)
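
catch() also works as a context manager; a minimal sketch (main() is a hypothetical entry point, and the message parameter is optional):

with logger.catch(message="Unexpected error in the main loop"):
    main()  # Any exception raised here is logged along with its traceback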

Pretty logging with colors

Loguru automatically adds colors to your logs if your terminal is compatible. You can define your favorite style by using markup tags in the sink format.

import sys

logger.add(sys.stdout, colorize=True, format="<green>{time}</green> <level>{message}</level>")

Asynchronous, Thread-safe, Multiprocess-safe

All sinks added to the logger are thread-safe by default. They are not multiprocess-safe, but you can enqueue the messages to ensure logs integrity. This same argument can also be used if you want async logging.

logger.add("somefile.log", enqueue=True)

Coroutine functions used as sinks are also supported and should be awaited with complete().
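
A minimal sketch of a coroutine sink, assuming an asyncio event loop is running when messages are logged:

import asyncio
from loguru import logger

async def sink(message):
    print(message, end="")

logger.add(sink)

async def main():
    logger.info("Logging from a coroutine sink")
    await logger.complete()  # Wait until all scheduled sink tasks have finished

asyncio.run(main())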

Fully descriptive exceptions

Logging exceptions that occur in your code is important to track bugs, but it's quite useless if you don't know why it failed. Loguru helps you identify problems by allowing the entire stack trace to be displayed, including values of variables (thanks better_exceptions for this!).

The code:

# Caution, "diagnose=True" is the default and may leak sensitive data in prod
logger.add("out.log", backtrace=True, diagnose=True)

def func(a, b):
    return a / b

def nested(c):
    try:
        func(5, c)
    except ZeroDivisionError:
        logger.exception("What?!")

nested(0)

Would result in:

2018-07-17 01:38:43.975 | ERROR    | __main__:nested:10 - What?!
Traceback (most recent call last):

  File "test.py", line 12, in <module>
    nested(0)
    └ <function nested at 0x7f5c755322f0>

> File "test.py", line 8, in nested
    func(5, c)
    │       └ 0
    └ <function func at 0x7f5c79fc2e18>

  File "test.py", line 4, in func
    return a / b
           │   └ 0
           └ 5

ZeroDivisionError: division by zero

Note that this feature won't work in the default Python REPL, because the required frame data is unavailable there.

See also: Security considerations when using Loguru.

Structured logging as needed

Want your logs to be serialized for easier parsing or to pass them around? Using the serialize argument, each log message will be converted to a JSON string before being sent to the configured sink.

logger.add(custom_sink_function, serialize=True)
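
Each serialized message is a JSON string holding both the formatted text and the record dict. A minimal sketch of a sink consuming it (json_sink is a hypothetical name):

import json

from loguru import logger

def json_sink(message):
    data = json.loads(message)
    print(data["record"]["level"]["name"], data["record"]["message"])

logger.add(json_sink, serialize=True)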

Using bind() you can contextualize your logger messages by modifying the extra record attribute.

logger.add("file.log", format="{extra[ip]} {extra[user]} {message}")
context_logger = logger.bind(ip="192.168.0.1", user="someone")
context_logger.info("Contextualize your logger easily")
context_logger.bind(user="someone_else").info("Inline binding of extra attribute")
context_logger.info("Use kwargs to add context during formatting: {user}", user="anybody")

It is possible to modify a context-local state temporarily with contextualize():

with logger.contextualize(task=task_id):
    do_something()
    logger.info("End of task")

You can also have more fine-grained control over your logs by combining bind() and filter:

logger.add("special.log", filter=lambda record: "special" in record["extra"])
logger.debug("This message is not logged to the file")
logger.bind(special=True).info("This message, though, is logged to the file!")

Finally, the patch() method allows dynamic values to be attached to the record dict of each new message:

logger.add(sys.stderr, format="{extra[utc]} {message}")
logger = logger.patch(lambda record: record["extra"].update(utc=datetime.utcnow()))

Lazy evaluation of expensive functions

Sometimes you may want to log verbose information without paying a performance penalty in production; you can use the opt() method to achieve this.

logger.opt(lazy=True).debug("If sink level <= DEBUG: {x}", x=lambda: expensive_function(2**64))

# By the way, "opt()" serves many usages
logger.opt(exception=True).info("Error stacktrace added to the log message (tuple accepted too)")
logger.opt(colors=True).info("Per message <blue>colors</blue>")
logger.opt(record=True).info("Display values from the record (eg. {record[thread]})")
logger.opt(raw=True).info("Bypass sink formatting\n")
logger.opt(depth=1).info("Use parent stack context (useful within wrapped functions)")
logger.opt(capture=False).info("Keyword arguments not added to {dest} dict", dest="extra")

Customizable levels

Loguru comes with all standard logging levels, to which trace() and success() are added. Do you need more? Just create them using the level() function.

new_level = logger.level("SNAKY", no=38, color="<yellow>", icon="🐍")

logger.log("SNAKY", "Here we go!")

Better datetime handling

The standard logging is bloated with arguments like datefmt or msecs, %(asctime)s and %(created)s, naive datetimes without timezone information, unintuitive formatting, etc. Loguru fixes it:

logger.add("file.log", format="{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}")

Suitable for scripts and libraries

Using the logger in your scripts is easy, and you can configure() it at start. To use Loguru from inside a library, remember to never call add() but use disable() instead, so that logging functions become no-ops. If a developer wishes to see your library's logs, they can enable() it again.

# For scripts
import sys

config = {
    "handlers": [
        {"sink": sys.stdout, "format": "{time} - {message}"},
        {"sink": "file.log", "serialize": True},
    ],
    "extra": {"user": "someone"}
}
logger.configure(**config)

# For libraries, should be your library's `__name__`
logger.disable("my_library")
logger.info("No matter added sinks, this message is not displayed")

# In your application, enable the logger in the library
logger.enable("my_library")
logger.info("This message however is propagated to the sinks")

For additional convenience, you can also use the loguru-config library to setup the logger directly from a configuration file.
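
A sketch, assuming loguru-config's LoguruConfig.load() entry point and a hypothetical config.yaml describing your handlers:

from loguru_config import LoguruConfig

LoguruConfig.load("config.yaml")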

Entirely compatible with standard logging

Wish to use built-in logging Handler as a Loguru sink?

import logging.handlers

handler = logging.handlers.SysLogHandler(address=('localhost', 514))
logger.add(handler)

Need to propagate Loguru messages to standard logging?

import logging

class PropagateHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        logging.getLogger(record.name).handle(record)

logger.add(PropagateHandler(), format="{message}")

Want to intercept standard logging messages toward your Loguru sinks?

import inspect
import logging

class InterceptHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        # Get corresponding Loguru level if it exists.
        level: str | int
        try:
            level = logger.level(record.levelname).name
        except ValueError:
            level = record.levelno

        # Find caller from where originated the logged message.
        frame, depth = inspect.currentframe(), 0
        while frame and (depth == 0 or frame.f_code.co_filename == logging.__file__):
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage())

logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True)

Personalizable defaults through environment variables

Don't like the default logger formatting? Would prefer another DEBUG color? No problem:

# Linux / OSX
export LOGURU_FORMAT="{time} | <lvl>{message}</lvl>"

# Windows
setx LOGURU_DEBUG_COLOR "<green>"

Convenient parser

It is often useful to extract specific information from generated logs, which is why Loguru provides a parse() method to help deal with logs and regexes.

import dateutil.parser

pattern = r"(?P<time>.*) - (?P<level>[0-9]+) - (?P<message>.*)"  # Regex with named groups
caster_dict = dict(time=dateutil.parser.parse, level=int)        # Transform matching groups

for groups in logger.parse("file.log", pattern, cast=caster_dict):
    print("Parsed:", groups)
    # {"level": 30, "message": "Log example", "time": datetime(2018, 12, 09, 11, 23, 55)}

Exhaustive notifier

Loguru can easily be combined with the great notifiers library (which must be installed separately) to receive an e-mail when your program fails unexpectedly, or to send many other kinds of notifications.

import notifiers

params = {
    "username": "[email protected]",
    "password": "abc123",
    "to": "[email protected]"
}

# Send a single notification
notifier = notifiers.get_notifier("gmail")
notifier.notify(message="The application is running!", **params)

# Be alerted on each error message
from notifiers.logging import NotificationHandler

handler = NotificationHandler("gmail", defaults=params)
logger.add(handler, level="ERROR")

10x faster than built-in logging

Although the impact of logging on performance is negligible in most cases, a zero-cost logger would allow it to be used anywhere without much concern. In an upcoming release, Loguru's critical functions will be implemented in C for maximum speed.

Documentation


loguru's Issues

How to configure logger in another module?

This might be a noob question, but suppose that in a module mod.py I import logger and do some logging.
Now how do I set the level of that logger (in mod.py) from my main.py?
Is this even the correct way to use the logger? If not, how should I use it?

Use separate option to control exception formatting

Currently, the backtrace option is overloaded to control both displaying more of the traceback as well as formatting with better_exceptions. Semantically, those two things should be separate. Having them tied together prevents users from logging just the caught frame with better_exceptions formatting (unless I'm missing something). That's actually how I want most of my exception logging to be.

Allow multiple sinks to be used

I would love if multiple sinks could be used, and not just the one. For me, I'd like to be able to print to stdout/stderr as is the default, while also sending the messages to a custom sink.

I would imagine it would be easiest to allow logger.start to accept a list of sinks that would be called in order.

enter/exit decorators

It would be nice to have a decorator that one could add to a method (in a class or bare) that logs entry and exit from a method. I suppose you could have one of each, maybe something like:

@logger.enter
@logger.exit
def themethod():
    a = 5

and the logging would then include a statement saying the method was entered and another when the method is exited.

The general idea could be something like the answer in https://stackoverflow.com/questions/23435488/how-to-do-logging-at-function-entry-inside-and-exit-in-python

I think it might follow something along the lines of the logger.catch.

One program, multi modules to their respective files.

I am fairly new to logging; I used print statements everywhere just to debug.

Now I'm trying to implement a logger for my program. I plan to pass it to each sub-module as the program runs, and I want each module to output its logs to its own file.

I know this is a simple question and will probably take little to no time to answer; I just need clarification.

For context, it's a Discord bot, and in my launcher.py file I'm looking to add a logger instance and just pass it to each cog where they can use it.

If this is not good practice and I should have each module use its own logger, then that's fine; it just seemed best to do it this way.

Also, as I'm new to logging, what is the use of multiple logger.add(...) calls? Is this to format how logger.INFO, logger.DEBUG, logger.ERROR output? As in, if I use it multiple times with multiple files, would I in essence be re-establishing where it outputs? Example:

logger.add("file1.log", format="{time} {level} {message}", filter="my_module", level="INFO")
logger.add("file2.log", format="{time} {level} {message}", filter="my_module", level="INFO")

Would these output the same message to each file?

Another way to set defaults

There needs to be another way to set defaults.
Environment variables do not work for everything on Windows.
For example:
If I do not want any color for INFO level, I cannot set environment variable LOGURU_INFO_COLOR="" because then it's disabled on Windows.
I cannot do:

_defaults.LOGURU_INFO_COLOR = ""

Since logger is already initialized.
Instead, I have to do extremely hacky stuff like:

from loguru import _defaults, logger
from loguru._logger import Level

logger.info('bold')
logger._levels['INFO'] = Level(_defaults.LOGURU_INFO_NO, "", _defaults.LOGURU_INFO_ICON)
logger.info('not bold')

Since Level is a namedtuple, I cannot even modify it in-place

Perhaps a config file or an .ini file as well would be better.

Any plan to make this work for python 2.7?

I really like your project and want to integrate it into my company's Python codebase. However, we currently use Python 2.7 since some of our main libraries depend on it.

I am wondering if it is possible to support Python 2.7?

different log format strings

Unsure if this is supported (this could be a good candidate for an example in a currently nonexistent Examples section), but I'd like to essentially log something with extra data piped into a format string.

Specifically, a UUID that identifies a particular process. Having a logger that includes it would make it super easy to find all logs pertaining to that process.

How to write logs to StringIO

I like loguru very much, but I'm using a unit test framework that generates reports, and I need to read the log data from a StringIO. Is there any way to do this?
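
Since Loguru accepts any file-like object as a sink, a StringIO can be passed directly; a minimal sketch:

import io

from loguru import logger

stream = io.StringIO()
logger.add(stream, format="{message}")
logger.info("captured for the report")
print(stream.getvalue())  # -> "captured for the report\n"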

Does colorama need to be pinned above 0.4.1?

The requirement of colorama pinned at 0.4.1 is blocking deployments to AWS Lambda, because awscli in boto3 has it pinned at or below 0.3.9. Can this be relaxed?

awscli 1.16.72 has requirement colorama<=0.3.9,>=0.2.5, but you'll have colorama 0.4.1 which is incompatible.

loguru 0.2.2 has requirement colorama>=0.4.1, but you'll have colorama 0.3.9 which is incompatible.

How to give a unified ID to related logs

Hi, I read through the readme and API reference but did not find out how to attach a unified ID to related log messages.

With a unified log_id I can parse and retrieve the related logs generated by an HTTP request or a function.

I have a producer-consumer model. When the consumer gets data and starts to work, I want to assign a unified log_id; the consumer will call many functions, and I need all the logs of that process to share the same log_id.
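
One approach grounded in the README's contextualize() example; a sketch where the log_id field and process_item() are hypothetical:

import uuid

from loguru import logger

def consume(item):
    # Every message logged inside this block carries the same log_id in "extra"
    with logger.contextualize(log_id=uuid.uuid4().hex):
        logger.info("consumer started working")
        process_item(item)  # Hypothetical; nested calls inherit the log_id too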

Question on the adequate behavior of formatting color.

Hello,

I have just found that the following code snippet,

import sys
from loguru import logger
config = {
    "handlers": [
        {"sink": sys.stderr,
         "level": "INFO",
         "format": "<Green>{time:YYYY-MM-DD HH:mm:ss.SSS}</Green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>"
         }
    ]
}
logger.configure(**config)
logger.info("Hello,")
logger.warning("World")

results in

<Green>2019-02-17 22:27:54.564</Green> | INFO     | __main__:<module>:16 - Hello,
<Green>2019-02-17 22:27:54.565</Green> | WARNING  | __main__:<module>:17 - World

It seems that tags (e.g. <Green></Green>) which are not properly parsed by the ansimarkup library are sent to the sink unchanged.

Is this the expected behavior by design,
or should we raise an exception for this? :)

In my eyes, if color tags cannot be properly converted, the user should get feedback instead of only noticing it after the logs are recorded in the sinks. :)
What do you think?

Integration with notifiers

So I'm checking out this amazing library, which happens to be exactly what I was looking for, and to my great surprise, the last segment talks about integration with notifiers, of which I'm the creator.

First off, I'm honored and flattered that you decided to include notifiers as a potential use case for your library. Thanks for that.

I'm not sure if you're aware of this, but I had the same idea in mind, so I created a custom logging handler for notifiers: https://notifiers.readthedocs.io/en/latest/Logger.html

Using it with logger.start could be even nicer with less boilerplate, which is kinda the point of this great library.

This is just a suggestion obviously, as your example is a perfectly valid use case.

Anyway, thanks again for mentioning notifiers and, of course, for this awesome library!

Change level of default handler

First off, this library is terrific, I found it via the podcast Python Bytes and I've been using it ever since.

So here is my question: I understand that the default handler for from loguru import logger goes to sys.stderr.

When I try: logger.add(sys.stderr, level="INFO"), I still get DEBUG level messages in the terminal.

My goal is to change the level of the logging to sys.stderr. I don't have any other handlers.
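
For reference, the README's suggested pattern is to remove the pre-configured handler first, otherwise it keeps emitting DEBUG messages alongside the new one; a minimal sketch:

import sys
from loguru import logger

logger.remove()                       # Drop the default DEBUG-level stderr handler
logger.add(sys.stderr, level="INFO")  # Re-add stderr restricted to INFO and above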

Re-emitting log records received from another process

I wrote a multiprocessing-like module I use for creating worker processes. Worker processes add a callable sink that serializes logging records and sends them to the parent process via a pipe. Once records are received by the parent, it should deserialize them and forward them on to handlers for formatting and final logging, but I don't see any public methods for re-emitting records to handlers. So I'm currently using private attributes to re-emit the records.

Here is an incomplete example to provide a feel for what I am currently doing:

# worker.py

import logging
from loguru import logger

def send_record(message):
    pipe.write(pickle.dumps(message))

logger.add(send_record, level=logging.DEBUG, format='')  # Final handlers will take care of formatting
# parent.py

import logging
from loguru import logger

def emit(message):
    record = message.record
    level = record['level']
    try:
        _, level_color, _ = logger._levels[level.name]
    except KeyError:
        level_color = ''
    for handler in logger._handlers.values():
        handler.emit(record, level_color, logger._ansi, logger._raw)

def recv_record():
    message = pickle.loads(pipe.read())
    emit(message)

logger.add(sys.stdout, level=logging.DEBUG)

Did I miss something in the loguru API? Are there public routines to handle this case? Is this a feature that could be added to loguru? I know the builtin logging libraries have similar support, but I would like to use loguru, if possible. Any help is appreciated.

Thanks!

CI/tests: use tox?

I think it is very useful to have a tox.ini to run tests in an isolated way.
This would also allow for using tox-travis on Travis then.

What is your opinion? I'd be happy to provide a PR for this.

Log file rotation not rotating file

I have a log configured as per the below:

logger.add("path/to/log.log",
format="{time:DD/MM/YY HH:mm:ss} - {message}",
level="CustomLogLevel",
filter=lamba record: record['level']="CustomLogLevel",
rotation="1 week")

The script which the above logger is linked to runs once per hour, for about 30 seconds. All works fine except the rotation, it's been configured since the 6th of Feb and hasn't created a new file yet, just keeps appending to the file which was initially created. Is this configured wrong on my end?

Edit:
Just read the contributions page!
using:
Python 3.7.2
Loguru 0.2.5
Windows Server 2016
Using both VS Code and terminal.

multiple logger instance

Hi, I read through the readme and API reference but did not find out how to get multiple logger instances, like the standard logging.getLogger('app.log'), logging.getLogger('db.log'), logging.getLogger('sub-process-task.log'), where each logger has its own handlers. Is there a way to do this?
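
One workaround consistent with the README's bind() and filter combination; a sketch where the "channel" field is a hypothetical convention, not an official multi-logger API:

from loguru import logger

app_logger = logger.bind(channel="app")
db_logger = logger.bind(channel="db")

logger.add("app.log", filter=lambda record: record["extra"].get("channel") == "app")
logger.add("db.log", filter=lambda record: record["extra"].get("channel") == "db")

app_logger.info("routed to app.log")
db_logger.info("routed to db.log")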

Many tests fail when run locally

63 failed, 627 passed, 1 skipped, 1 xfailed

Same results on my Windows desktop and a Linux server.

Edit: Should've mentioned, just for completeness, Python 3.7 on Windows and Python 3.6 on the Linux server.

Binding without creating a new logger

I'm wondering if we can have new APIs that can bind and unbind structured arguments in place (i.e. without creating a new logger). Maybe an add_extra and a remove_extra method? I also think it'll be useful if Logger._extra can be promoted to a public member for viewing (a getter property would suffice).

The motivation here is that I want to be able to do from loguru import logger from all my modules and get the structured arguments I've set elsewhere. I can work around this issue by keeping my own logger registry or adding values to _extra directly, but I think it would be nice if this could be officially supported.
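
As a partial workaround, configure() can already set default extra values shared by the single global logger; a sketch, with request_id as a hypothetical field:

from loguru import logger

logger.configure(extra={"request_id": "-"})  # Defaults visible to every module importing `logger`
logger.info("every record now carries extra['request_id']")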

Async Version

It would be great if we could "fire & forget" log entries using an async handler that uses a separate thread or process, with options to guarantee order, commit, performance (lazy write) and/or confirmation that the write was completed.

Pytest's caplog fixture doesn't seem to work

Summary

Pytest's caplog fixture is a critical part of testing. I'd love to move to loguru, but loguru doesn't seem to work with caplog.

I'm not sure if this is user error (perhaps it's documented somewhere? I haven't been able to find it.), if it is some design oversight/choice, or if the problem is actually on pytest's end.

Expected Result

Users should be able to use loguru as a drop-in replacement for the stdlib logging package and have tests that use the caplog fixture still work.

Actual Result

Drop-in replacement causes tests that use the caplog pytest fixture to fail.

Steps to Reproduce

Base test file

# test_demo.py
import pytest
import logging
logger = logging.getLogger()
logger.addHandler(logging.StreamHandler())
# from loguru import logger

def some_func(a, b):
    if a < 1:
        logger.warning("Oh no!")
    return a + b

def test_some_func_logs_warning(caplog):
    assert some_func(-1, 2) == 1
    assert "Oh no!" in caplog.text

if __name__ == "__main__":
    some_func(-1, 1)
    print("end")

Without Loguru:

$ python test_demo.py
Oh no!
end
(.venv) Previous Dir: /home/dthor
09:59:56 dthor@Thorium /home/dthor/temp/loguru
$ pytest
========================== test session starts ==========================
platform linux -- Python 3.6.7, pytest-4.3.0, py-1.8.0, pluggy-0.8.1
rootdir: /home/dthor/temp/loguru, inifile:
collected 1 item

test_demo.py .                                                    [100%]

======================= 1 passed in 0.03 seconds ========================

With Loguru:

Adjust test_demo.py by commenting out stdlib logging and uncommenting loguru:

...
# import logging
# logger = logging.getLogger()
# logger.addHandler(logging.StreamHandler())
from loguru import logger
...
$ python test_demo.py
2019-02-22 10:02:35.551 | WARNING  | __main__:some_func:9 - Oh no!
end
(.venv) Previous Dir: /home/dthor
10:02:35 dthor@Thorium /home/dthor/temp/loguru
$ pytest
========================== test session starts ==========================
platform linux -- Python 3.6.7, pytest-4.3.0, py-1.8.0, pluggy-0.8.1
rootdir: /home/dthor/temp/loguru, inifile:
collected 1 item

test_demo.py F                                                    [100%]

=============================== FAILURES ================================
______________________ test_some_func_logs_warning ______________________

caplog = <_pytest.logging.LogCaptureFixture object at 0x7f8e8b620438>

    def test_some_func_logs_warning(caplog):
        assert some_func(-1, 2) == 1
>       assert "Oh no!" in caplog.text
E       AssertionError: assert 'Oh no!' in ''
E        +  where '' = <_pytest.logging.LogCaptureFixture object at 0x7f8e8b620438>.text

test_demo.py:14: AssertionError
------------------------- Captured stderr call --------------------------
2019-02-22 10:02:37.708 | WARNING  | test_demo:some_func:9 - Oh no!
======================= 1 failed in 0.20 seconds ========================

Version information

$ python --version
Python 3.6.7
(.venv) Previous Dir: /home/dthor
10:10:03 dthor@Thorium /home/dthor/temp/loguru
$ pip list
Package                Version
---------------------- -----------
ansimarkup             1.4.0
atomicwrites           1.3.0
attrs                  18.2.0
better-exceptions-fork 0.2.1.post6
colorama               0.4.1
loguru                 0.2.5
more-itertools         6.0.0
pip                    19.0.3
pkg-resources          0.0.0
pluggy                 0.8.1
py                     1.8.0
Pygments               2.3.1
pytest                 4.3.0
setuptools             40.8.0
six                    1.12.0
(.venv) Previous Dir: /home/dthor
10:10:07 dthor@Thorium /home/dthor/temp/loguru
$ uname -a
Linux Thorium 4.4.0-17763-Microsoft #253-Microsoft Mon Dec 31 17:49:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
(.venv) Previous Dir: /home/dthor
10:11:33 dthor@Thorium /home/dthor/temp/loguru
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic

loguru and mingw64

Hi,
When I use mingw64 to compile my code, I get the following message:

C:\Users\Amber\Desktop\tsn-milestone2\module\lib\loguru\loguru.cpp:117:0: warning: "NOMINMAX" redefined
#define NOMINMAX
In file included from C:/msys64/mingw64/include/c++/7.3.0/x86_64-w64-mingw32/bits/c++config.h:533:0,
from C:/msys64/mingw64/include/c++/7.3.0/utility:68,
from C:/msys64/mingw64/include/c++/7.3.0/algorithm:60,
from C:\Users\Amber\Desktop\tsn-milestone2\module\lib\loguru\loguru.cpp:29:
C:/msys64/mingw64/include/c++/7.3.0/x86_64-w64-mingw32/bits/os_defines.h:45:0: note: this is the location of the previous definition
#define NOMINMAX 1

This is only a warning message, but I cannot tell whether it will cause an error or not. Can I ignore this warning, or what should I do?

Thank you.

Example for logger.configure activation wrong

should be
activation=[("my_module.secret", False), ("another_library.module", True)]
instead of
activation=[("my_module.secret": False, "another_library.module": True)]
since the activation has to be a list of tuples.

Should backtrace work in a console?

The backtrace output works in a Python script run via an interpreter, but not in a Python shell, instead printing less information. Is this expected?

Python 3.6.6 | packaged by conda-forge | (default, Jul 26 2018, 09:55:02)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from loguru import logger
>>>
>>> logger.start("output.log", backtrace=True)  # Set 'False' to avoid leaking sensitive data in prod
1
>>>
>>> def func(a, b):
...     return a / b
...
>>> def nested(c):
...     try:
...         func(5, c)
...     except ZeroDivisionError:
...         logger.exception("What?!")
...
>>> nested(0)
2018-12-09 18:20:15.019 | ERROR    | __main__:nested:5 - What?!
Traceback (most recent call last):

  File "<stdin>", line 1, in <module>
> File "<stdin>", line 3, in nested
  File "<stdin>", line 2, in func

ZeroDivisionError: division by zero
>>>

Pass log kwargs to custom sink

Loguru supports string formatting, but it seems that these kwargs are dropped if there is nothing to format in the string. I would think these would be passed to the custom sink, but it does not seem like this is the case.

See the below example.

>>> from loguru import logger
>>> 
>>> def callback(payload):
...     print(payload)
... 
>>> logger.start(callback, serialize=True)
>>> logger.info("testing", hello="world")
2018-12-08 15:56:07.675 | INFO     | __main__:<module>:1 - yoo
{"text": "2018-12-08 15:56:07.675 | INFO     | __main__:<module>:1 - testing\n", "record": {"elapsed": {"repr": "0:01:35.065585", "seconds": 95.065585}, "exception": null, "extra": {}, "file": {"name": "<stdin>", "path": "<stdin>"}, "function": "<module>", "level": {"icon": "\u2139\ufe0f", "name": "INFO", "no": 20}, "line": 1, "message": "testing", "module": "<stdin>", "name": "__main__", "process": {"id": 22013, "name": "MainProcess"}, "thread": {"id": 140289032955392, "name": "MainThread"}, "time": {"repr": "2018-12-08 15:56:07.675593-06:00", "timestamp": 1544306167.675593}}}
>>> logger.info("testing {hello}", hello="world")
2018-12-08 15:58:37.702 | INFO     | __main__:<module>:1 - testing world
{"text": "2018-12-08 15:58:37.702 | INFO     | __main__:<module>:1 - testing world\n", "record": {"elapsed": {"repr": "0:04:05.092374", "seconds": 245.092374}, "exception": null, "extra": {}, "file": {"name": "<stdin>", "path": "<stdin>"}, "function": "<module>", "level": {"icon": "\u2139\ufe0f", "name": "INFO", "no": 20}, "line": 1, "message": "testing world", "module": "<stdin>", "name": "__main__", "process": {"id": 22013, "name": "MainProcess"}, "thread": {"id": 140289032955392, "name": "MainThread"}, "time": {"repr": "2018-12-08 15:58:37.702382-06:00", "timestamp": 1544306317.702382}}}

I would expect {"hello": "world"} to appear somewhere in the payload passed to the custom sink.

AttributeError: '_Code' object has no attribute 'co_consts'

loguru==0.2.5

[2019-02-06 17:44:42,144: WARNING/ForkPoolWorker-2] --- Logging error in Loguru Handler #0 ---
[2019-02-06 17:44:42,145: WARNING/ForkPoolWorker-2] Record was:
[2019-02-06 17:44:42,145: WARNING/ForkPoolWorker-2] {'elapsed': datetime.timedelta(0, 3, 912801), 'exception': <loguru._recattrs.ExceptionRecattr object at 0x7fde0b437ba8>, 'extra': {}, 'file': 'logging_settings.py', 'function': 'emit', 'level': 'Level 40', 'line': 15, 'message': 'Task epex.tasks.run_user_worker[012ace7d-8264-4ac9-8305-0c243e20e8bc] raised unexpected: NameError("name \'blabla\' is not defined",)', 'module': 'logging_settings', 'name': 'navydog.settings.logging_settings', 'process': '20371', 'thread': '140591889479488', 'time': datetime(2019, 2, 6, 17, 44, 42, 144557, tzinfo=datetime.timezone(datetime.timedelta(0), 'UTC'))}
[2019-02-06 17:44:42,146: WARNING/ForkPoolWorker-2] Traceback (most recent call last):
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/celery/app/trace.py", line 382, in trace_task
    R = retval = fun(*args, **kwargs)
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/celery/app/trace.py", line 641, in __protected_call__
    return self.run(*args, **kwargs)
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] File "/path/to/project/navydog/apps/epex/tasks.py", line 37, in run_user_worker
    blabla(epex_user)
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] NameError: name 'blabla' is not defined
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] During handling of the above exception, another exception occurred:
[2019-02-06 17:44:42,147: WARNING/ForkPoolWorker-2] Traceback (most recent call last):
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/loguru/_handler.py", line 174, in emit
    error = exception.format_exception(self.backtrace, self.colorize, self.encoding)
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/loguru/_recattrs.py", line 160, in format_exception
    self._extended_traceback = self._extend_traceback(traceback_, self._decorated)
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/loguru/_recattrs.py", line 92, in _extend_traceback
    tb = self._make_catch_traceback(tb.tb_frame, tb.tb_lasti, tb.tb_lineno, tb.tb_next)
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] File "/path/to/project/venv/lib/python3.6/site-packages/loguru/_recattrs.py", line 119, in _make_catch_traceback
    c.co_consts,
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] AttributeError: '_Code' object has no attribute 'co_consts'
[2019-02-06 17:44:42,148: WARNING/ForkPoolWorker-2] --- End of logging error ---

Can't import loguru (error in time.struct_time)

$ python
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from loguru import logger
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Program Files (x86)\Python35-32\lib\site-packages\loguru\__init__.py", line 10, in <module>
    from ._logger import Logger as _Logger
  File "C:\Program Files (x86)\Python35-32\lib\site-packages\loguru\_logger.py", line 30, in <module>
    start_time = now()
  File "C:\Program Files (x86)\Python35-32\lib\site-packages\loguru\_datetime.py", line 78, in now
    tzinfo = timezone(timedelta(seconds=local.tm_gmtoff), local.tm_zone)
AttributeError: 'time.struct_time' object has no attribute 'tm_gmtoff'

Add/Remove are not thread-safe

import threading
from loguru import logger

def open_log_file(fn: str):
    logger.add(fn)
    logger.info("test message")

logger.info("write to 10 files")
for i in range(10):
    fn = "test_" + str(i) + ".log"
    logger.info(f"fn = {fn}")
    t = threading.Thread(target=open_log_file, kwargs={"fn": fn})
    t.start()

Set LOGURU_AUTOINIT to False

I don't mind having an option to auto-init the sinks, but having it set to True by default seems off to me. As a novice user of the library, I would think that I don't have any sinks set up at the start, and therefore that calling the .info method would yield nothing. So wouldn't it be better to set it to False?

Data scrubbing

sentry.io (https://docs.sentry.io/data-management/sensitive-data/) comes with a nice way to scrub sensitive data (as stipulated by data privacy guidelines or regulations, such as the GDPR).

We are looking for a convenient way to hide/obfuscate sensitive data which can be enabled/disabled by configuration for staging/production environments.
We would like to use a predefined filter (identical to sentry.io) that allows customization, so we could add more filters to it.

custom log level not working

This doesn't produce any log output:

new_level = logger.level("SNAKY", no=8, color="<yellow>", icon="🐍")
logger.log("SNAKY", "Here we go!")

Consolidated email notification

The Exhaustive Notifier section gives a great example of how to get email notifications on logging events. Are email notifications separate or consolidated? In other words, will each logging event be sent in a separate email, or in a single email? If sent as separate emails, is there any way to consolidate them into one?

How to configure multiple logger like standard logging.config.dictConfig

Hi, I read through the readme and API reference but did not find out how to get multiple logger instances.
Here is my configuration of logging loggers:
[image omitted]

In my project, I rely on getLogger("xxx") to process the log instead of getLogger() to distribute it, and I don't want some of the logs to be passed to multiple loggers, so I added propagate: no to each logger.
In loguru's API reference, I only found the filter parameter of logger.add for handling multiple loggers.
But my multiple loggers are stored in the same log file; if I use filter="xxx.log", the log record will be sent to multiple loggers. How should I deal with this scenario?

Enable suppression of DEBUG messages on default logger

I don't want the default logger to show DEBUG messages on the console in my scripts.

I see that I could just change the min level by going:
logger._min_level = 20
but I hate to touch the internals.

If there is a supported way to do this, I didn't see it in the docs.

colorize=True makes the file output garbled

^[[32m2018-12-27 14:04:17.103^[[0m | ^[[1mINFO ^[[0m | ^[[36m__main__^[[0m:^[[36m^[[0m:^[[36m9^[[0m - ^[[1mb'in8'^[[0m

but actually the output should be
2018-12-27 14:07:20.106 | INFO | main::9 - b'in8'

and my config is as below:

from loguru import logger

_log_info = dict(
    rotation='1KB',
    enqueue=True,
    # colorize=True,
    # format="{time} {message}",
    encoding='utf-8',
)
fp = 'name_{time}.log'
logger.add(fp, **_log_info)

multithread log error with TimedRotatingFileHandler

I am using logging now and I have many threads, which all log to one file.
If I don't use TimedRotatingFileHandler, everything is fine.
If I use TimedRotatingFileHandler, it causes an error at rotation time.
So I want to know: can loguru solve this problem?

logging full warning message

Hi @Delgan ,

Thanks for the great package! I'm swapping from the core logging package to loguru in a python package I am developing. In general the swap has been really easy - but I'm having a problem with warnings/error messages.

I have made a minimal example of the conversion I am trying to make and the log outputs. From the logging package:

import logging

def make_logging_warnings():
    logging.basicConfig(filename="output.log", 
            format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
            level=logging.INFO)
    logger = logging.getLogger()

    logger.info("Let's start logging")
    try:
        raise UserWarning("I'm warning you!")
    except UserWarning:
        logger.warning('Warning encountered in function:', exc_info=True)

make_logging_warnings()

Which returns the following log:

2019-01-09 12:37:55,666 root         INFO     Let's start logging
2019-01-09 12:37:55,666 root         WARNING  Warning encountered in function:
Traceback (most recent call last):
  File "<ipython-input-1-59e413187e43>", line 12, in make_logging_warnings
    raise UserWarning("I'm warning you!")
UserWarning: I'm warning you!

Converting this to loguru, the warning traceback doesn't seem to work:

from loguru import logger

def make_loguru_warnings():
    logger.add("output.log", level='INFO')

    logger.info("Let's start logging")
    try:
        raise UserWarning("I'm warning you!")
    except UserWarning:
        logger.warning('Warning encountered in function:', exc_info=True)

make_loguru_warnings()

I get the output:

2019-01-09 12:40:17.751 | INFO     | __main__:make_loguru_warnings:6 - Let's start logging
2019-01-09 12:40:17.754 | WARNING  | __main__:make_loguru_warnings:10 - Warning encountered in function:

See how I'm missing the traceback here?

I have a similar issue with errors, but they give back too much information in loguru - I think this has to do with better_exceptions; can this be turned off?

To make the above code work for errors I have just changed:

  • UserWarning("I'm warning you!") to AssertionError("You're not assertive!")
  • logger.warning("Warning encountered in function: ", exc_info=True) to logger.exception("Error encountered in function: ")

I get the following logs for both packages:

From logging

2019-01-09 12:44:06,843 root         INFO     Let's start logging
2019-01-09 12:44:06,843 root         ERROR    Error encountered in function:
Traceback (most recent call last):
  File "<ipython-input-2-759d24f4d830>", line 9, in make_logging_errors
    raise AssertionError("You're not assertive")
AssertionError: You're not assertive

From loguru

2019-01-09 12:45:36.377 | INFO     | __main__:make_loguru_errors:6 - Let's start logging
2019-01-09 12:45:36.379 | ERROR    | __main__:make_loguru_errors:10 - Error encountered in function:
Traceback (most recent call last):

  File "C:\Users\kmarks\AppData\Local\Continuum\miniconda3\envs\cm\Scripts\ipython-script.py", line 10, in <module>
    sys.exit(start_ipython())
    |   |    -> <function start_ipython at 0x00000181A3133598>
    |   -> <built-in function exit>
    -> <module 'sys' (built-in)>

  File "C:\Users\kmarks\AppData\Local\Continuum\miniconda3\envs\cm\lib\site-packages\IPython\__init__.py", line 125, in start_ipython
    return launch_new_instance(argv=argv, **kwargs)
           |                   |    |       -> {}
           |                   |    -> None
           |                   -> None
           -> <bound method Application.launch_instance of <class 

            .... + loads more lines .....

  File "<ipython-input-3-e3410db626c5>", line 12, in <module>
    make_loguru_errors()
    -> <function make_loguru_errors at 0x00000181A3DEA0D0>
> File "<ipython-input-3-e3410db626c5>", line 8, in make_loguru_errors
    raise AssertionError("You're not assertive")
          -> <class 'AssertionError'>

AssertionError: You're not assertive

I see information on using the backtrace=True/False argument, but that still gives much more information on the error than I am looking for. I am guessing the error problem is to do with better_exceptions? I'm not sure about the warning one. I would really appreciate some help with this.

Thanks!

Changing format of handler

It appears that you cannot change the format (easily), but have to start a new logger/sink for it (and then stop the previous one), or am I missing something?

I think it would be good to have a method to update the format of a handler (one which would take care of handling caches, etc.).

Currently I am using:

log_format = loguru._defaults.LOGURU_FORMAT + ' ({extra[reqid]}:{extra[ip]}:{extra[user]})'
logger = loguru.logger
logger.start(sys.stderr, format=log_format)
logger = logger.bind(reqid='-', ip='-', user="-")
logger.stop(0)

And then later bind extra per request:

def get_logger_for_request(request):
    global request_id
    request_id += 1
    ip = request.headers.get("x-real-ip", "?")
    return logger.bind(reqid=request_id, ip=ip, user="?")

Something like logger._handlers[0].set_format(loguru._defaults.LOGURU_FORMAT + ' ({extra[reqid]}:{extra[ip]}:{extra[user]})') could replace the first block here.

I've found that there is Handler.update_format, which looked promising, but then it is only about updating formats for color.

Another way would be to use LOGURU_AUTOINIT=0, but I wonder what you think about changing the format on the fly.

Request: Default formatter remembers length of previous message prefixes

As an example of current output:

2019-02-17 12:06:09.989 | TRACE    | utils.plex:get_movies:65 - Skipping row as unable to parse guid local://112573 for: {'library_id': 12, 'item_id': 115667, 'item_guid': 'local://112573', 'item_title': 'Von Grey Poison in the Water Demo 06', 'item_year': None}
2019-02-17 12:06:09.990 | TRACE    | utils.plex:get_movies:65 - Skipping row as unable to parse guid local://112624 for: {'library_id': 12, 'item_id': 115725, 'item_guid': 'local://112624', 'item_title': "Michael Schenker's Temple of Rock on a Mission Live in Madrid", 'item_year': 2016}
2019-02-17 12:06:09.990 | INFO     | utils.plex:get_movies:72 - Found 5414 movies from all movie sections
2019-02-17 12:06:09.992 | INFO     | __main__:run:103 - Processing movies from 2 sections

This would end up being, with the change requested:

2019-02-17 12:06:09.989 | TRACE    | utils.plex:get_movies:65 - Skipping row as unable to parse guid local://112573 for: {'library_id': 12, 'item_id': 115667, 'item_guid': 'local://112573', 'item_title': 'Von Grey Poison in the Water Demo 06', 'item_year': None}
2019-02-17 12:06:09.990 | TRACE    | utils.plex:get_movies:65 - Skipping row as unable to parse guid local://112624 for: {'library_id': 12, 'item_id': 115725, 'item_guid': 'local://112624', 'item_title': "Michael Schenker's Temple of Rock on a Mission Live in Madrid", 'item_year': 2016}
2019-02-17 12:06:09.990 | INFO     | utils.plex:get_movies:72 - Found 5414 movies from all movie sections
2019-02-17 12:06:09.992 | INFO     | __main__:run:103         - Processing movies from 2 sections

This way, if a log message prefix is longer than any of the previous ones, all further ones would use the same spacing to keep the log messages as aligned as possible.

why does the default INFO log have padding after "INFO"?

default style of INFO:

2019-01-29 20:15:39.455 | INFO     | __main__:main:105 - Epoch: 11, iter: 3, lossA: 0.15654006600379944, lossB: 0.13806544244289398

"INFO" is followed by padding that makes the line too long... why this design?

How to properly configure LOG_LEVEL

Hey there

I am trying to figure out how to configure LOG_LEVEL for all output (not just the .log file).

Assuming I have set the LOG_LEVEL to ERROR, I am expecting only .error and .critical messages to appear in the console. When executing this sample, the info written to the log file is correct (as I would assume based on logger.add), but how do I also make sure only .error and .critical are written to the console?

...
logger.add("{0}sample_{1}.log".format(LOG_PATH, ts), rotation="12:00", level="ERROR")

logger.error("Error log...")
logger.debug("Debug log...")
logger.info("Info log...")
logger.warning("Warning log...")
logger.critical("Critical log...")

Dispatch different level messages to different log files

I should say that I am facing a logging task for the very first time, and I could be completely ignorant about the whole concept of "severity levels", so I beg your pardon for the silliness of this question.

code sample:

from loguru import logger

LOG_FOLDER = '//var/log/will_crash/'

logger.add(LOG_FOLDER + 'info.log', level="INFO")
logger.add(LOG_FOLDER + 'error.log', level="ERROR")
logger.add(LOG_FOLDER + 'traceback.log', level="TRACE", backtrace=True)

def will_crash(var):
    # this should go to info.log
    logger.info('Here we go!')
    try:
        raise Exception
    except Exception:
        # this should go to error.log
        logger.error('crashed with var= ', var)

        # this should go to traceback.log
        logger.exception('Exception traceback  ↓ ↓ ↓')

var = 'foo'
will_crash(var)

console output seems perfect

2019-01-29 18:16:25.829 | INFO     | __main__:will_crash:13 - Here we go!
2019-01-29 18:16:25.830 | ERROR    | __main__:will_crash:18 - crashed with var = foo
2019-01-29 18:16:25.830 | ERROR    | __main__:will_crash:21 - Exception traceback  ↓ ↓ ↓
Traceback (most recent call last):

  File "/home/developer/Projects/test/test_logger_3.py", line 24, in <module>
    will_crash(var)
    │          └ 'foo'
    └ <function will_crash at 0x7f5a1f85fea0>

> File "/home/developer/Projects/test/test_logger_3.py", line 15, in will_crash
    raise Exception
          └ <class 'Exception'>

Exception

but then..

info.log

2019-01-29 18:35:52.030 | INFO     | __main__:will_crash:13 - Here we go!
2019-01-29 18:35:52.030 | ERROR    | __main__:will_crash:18 - crashed with var = foo
2019-01-29 18:35:52.031 | ERROR    | __main__:will_crash:21 - Exception traceback  ↓ ↓ ↓
Traceback (most recent call last):

  File "/home/developer/Projects/test/test_logger_3.py", line 24, in <module>
    will_crash(var)
    │          └ 'foo'
    └ <function will_crash at 0x7f6c9b4cbea0>

> File "/home/developer/Projects/test/test_logger_3.py", line 15, in will_crash
    raise Exception
          └ <class 'Exception'>

Exception

I'm aware of why traceback.log and error.log get the same output; they have the same level, 'ERROR'.
But why did info.log get 'ERROR'-level messages? Isn't that completely wrong?

I mean, why are levels stacked upon each other instead of separated?
If I always get the full traceback in my info.log, I won't be able to read anything helpful.

And if the answer is "because that's exactly how levels are supposed to work!", how could I achieve a "channels"-like behaviour,

where logger.%level_name%('message') dispatches the log only to the file linked here
-> logger.add(LOG_FOLDER + '%level_name%.log', level="%LEVEL_NAME%")

P.S. I read the readme and docs several times, but still can't get this done.
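
For what it's worth, a filter comparing the record's exact level (in the spirit of the README's filter example) gives channel-like routing; a minimal sketch:

from loguru import logger

logger.add("info.log", filter=lambda record: record["level"].name == "INFO")
logger.add("error.log", filter=lambda record: record["level"].name == "ERROR")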
