
fast_log's Introduction

fast_log


A logging implementation built for extreme speed: crossbeam channels, batched log writes, fast date formatting, and an appender-per-thread architecture.

  • High performance, low overhead; logs are merged automatically and files are written in full APPEND mode
  • Built-in ZIP and LZ4 compression
  • Supports the log::logger().flush() method to wait until logs are flushed to disk
  • Supports custom appenders (implement the LogAppender trait)
  • Supports rolling logs (ByDate, BySize, ByDuration)
  • Supports log retention (All, KeepTime, KeepNum) that deletes old logs so they cannot fill the disk
  • Uses #![forbid(unsafe_code)]: 100% safe Rust
              ----------------------------
log data ->   | main channel (crossbeam) |  ->
              ----------------------------
                    ------------------                          -------------
               ->   | thread channel |  -> background thread -> | appender1 |
                    ------------------                          -------------

                    ------------------                          -------------
               ->   | thread channel |  -> background thread -> | appender2 |
                    ------------------                          -------------

                    ------------------                          -------------
               ->   | thread channel |  -> background thread -> | appender3 |
                    ------------------                          -------------

                    ------------------                          -------------
               ->   | thread channel |  -> background thread -> | appender4 |
                    ------------------                          -------------

  • How fast is it?

  • no flush (chan_len = 1000000), benches/log.rs:

//MACOS(Apple M1 Max, 32GB)
test bench_log ... bench:          85 ns/iter (+/- 1,800)

  • all logs flushed to file (chan_len = 1000000), example/bench_test_file.rs:

//MACOS(Apple M1 Max, 32GB)
test bench_log ... bench:         323 ns/iter (+/- 0)
  • How to use?
log = "0.4"
fast_log = { version = "1.7" }

or enable the zip/lz4/gzip compression features:

log = "0.4"
# "lz4","zip","gzip"
fast_log = { version = "1.7", features = ["lz4", "zip", "gzip"] }

Performance optimization (important)

  • Use chan_len(Some(100000)) to pre-allocate the channel's memory; this reduces memory-allocation overhead. For example:
use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().file("target/test.log").chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
    // wait for the background thread to write everything to disk
    log::logger().flush();
}

Use Log(Console)

use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().console().chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
    log::logger().flush();
}

Use Log(Console Print)

use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().console().chan_len(Some(100000))).unwrap();
    fast_log::print("Commencing print\n".into());
    log::logger().flush();
}

Use Log(File)

use fast_log::config::Config;
use log::info;

fn main() {
    fast_log::init(Config::new().file("target/test.log").chan_len(Some(100000))).unwrap();
    log::info!("Commencing yak shaving{}", 0);
    info!("Commencing yak shaving");
    log::logger().flush();
}

Split Log(ByLogDate)

use fast_log::config::Config;
use fast_log::plugin::file_split::{RollingType, KeepType, DateType, Rolling};
use std::thread::sleep;
use std::time::Duration;
use fast_log::plugin::packer::LogPacker;
fn main() {
    fast_log::init(Config::new().chan_len(Some(100000)).console().file_split(
        "target/logs/",
        Rolling::new(RollingType::ByDate(DateType::Day)),
        KeepType::KeepNum(2),
        LogPacker {},
    ))
        .unwrap();
    for _ in 0..60 {
        sleep(Duration::from_secs(1));
        log::info!("Commencing yak shaving");
    }
    log::logger().flush();
    println!("you can see log files in path: {}", "target/logs/")
}

Split Log(ByLogSize)

use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_split::{RollingType, KeepType, Rolling};
use fast_log::plugin::packer::LogPacker;
fn main() {
    fast_log::init(Config::new().chan_len(Some(100000)).console().file_split(
        "target/logs/",
        Rolling::new(RollingType::BySize(LogSize::KB(500))),
        KeepType::KeepNum(2),
        LogPacker {},
    ))
        .unwrap();
    for _ in 0..40000 {
        log::info!("Commencing yak shaving");
    }
    log::logger().flush();
    println!("you can see log files in path: {}", "target/logs/")
}
Custom Log (implement the LogAppender trait's do_logs method)
use fast_log::appender::{FastLogRecord, LogAppender};
use fast_log::config::Config;
use fastdate::DateTime;
use log::Level;

struct CustomLog {}

impl LogAppender for CustomLog {
    fn do_logs(&mut self, records: &[FastLogRecord]) {
        for record in records {
            let now = DateTime::from(record.now);
            // `formated` below is the field's spelling in the crate itself.
            let data = match record.level {
                Level::Warn | Level::Error => format!(
                    "{} {} {} - {}  {}\n",
                    now, record.level, record.module_path, record.args, record.formated
                ),
                _ => format!(
                    "{} {} {} - {}\n",
                    now, record.level, record.module_path, record.args
                ),
            };
            print!("{}", data);
        }
    }
}

fn main() {
    fast_log::init(Config::new().custom(CustomLog {})).unwrap();
    log::info!("Commencing yak shaving");
    log::error!("Commencing error");
    log::logger().flush();
}

fast_log's People

Contributors

zhuxiujia


fast_log's Issues

About ModuleFilter

Hello!
I use fast_log 1.6.16 for log output in my application. The usage is very close to the log4j I was used to in Java, and the experience is great, but recently I have two small issues to report.

  1. Filtering the tracing crate's logs with ModuleFilter behaves differently on Linux than on Windows. On Windows, a filtered module outputs no logs at any level, while on Linux the filter does not seem to take effect: a filtered module is still governed by the LevelFilter, and anything below the configured level is printed.
    Configuration code:
let filter = ModuleFilter::new();
filter.modules.push("tracing::span".to_string());

fast_log::init(fast_log::Config::new()
        .console()
        .chan_len(Some(100000))
        .level(LevelFilter::Debug)
        .add_filter(filter)
        .file_split(&log_path, LogSize::MB(5), RollingType::All, LogPacker {})
    ).unwrap();

On Linux, log lines are still printed at runtime:

2024-04-17 17:50:08.078557605 [INFO] socket reader;

  2. The second issue: filtering with ModuleFilter works well, but the application depends on many layers of third-party crates, and when a log message suddenly appears at runtime I have no idea where it came from. During debugging I had to switch to another logging framework (simple_logger) to find the offending crate name, note it down, then switch back to fast_log and add it to the filter. Is there a more direct way to see which crate emitted a log line?
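
For the second question, one possible approach, sketched from the Custom Log example above: a custom appender can print record.module_path, which names the crate and module that emitted each record (ModulePathLog is a hypothetical name):

use fast_log::appender::{FastLogRecord, LogAppender};
use fast_log::config::Config;

struct ModulePathLog;

impl LogAppender for ModulePathLog {
    fn do_logs(&mut self, records: &[FastLogRecord]) {
        for record in records {
            // module_path identifies the emitting crate/module,
            // e.g. "tracing::span" or "hyper::proto".
            println!("[{}] {} - {}", record.level, record.module_path, record.args);
        }
    }
}

fn main() {
    fast_log::init(Config::new().custom(ModulePathLog)).unwrap();
    log::info!("who logged this?");
    log::logger().flush();
}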

Different level on console and file?

Does fast_log support outputting logs of different levels in the console and file?
For example, outputting "info" in the console and "trace" in the file
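
Config exposes a single global .level(...), but the effect can be approximated with a custom appender that drops records below its own threshold, following the LogAppender pattern from the Custom Log example above. A minimal, hypothetical sketch (InfoConsole is not part of the crate):

use fast_log::appender::{FastLogRecord, LogAppender};
use log::Level;

// Hypothetical console appender that only emits Info and above, while the
// global level stays low enough (e.g. Trace) for a file appender registered
// alongside it to still receive everything.
struct InfoConsole;

impl LogAppender for InfoConsole {
    fn do_logs(&mut self, records: &[FastLogRecord]) {
        for record in records {
            // log::Level orders Error < Warn < Info < Debug < Trace,
            // so `<= Level::Info` keeps Error, Warn and Info only.
            if record.level <= Level::Info {
                print!("{}", record.formated);
            }
        }
    }
}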

Issue with running program

Archive.zip

It compiles but cannot run; the crash appears to happen on logger flush. Let me know if I'm doing something wrong.

thread '' panicked at 'overflow when subtracting duration from instant', library/std/src/time.rs:600:31
stack backtrace:
0: rust_begin_unwind
at /rustc/1.62.1/library/std/src/panicking.rs:584:5
1: core::panicking::panic_fmt
at /rustc/1.62.1/library/core/src/panicking.rs:142:14
2: core::panicking::panic_display
at /rustc/1.62.1/library/core/src/panicking.rs:72:5
3: core::panicking::panic_str
at /rustc/1.62.1/library/core/src/panicking.rs:56:5
4: core::option::expect_failed
at /rustc/1.62.1/library/core/src/option.rs:1854:5
5: core::option::Option::expect
at /rustc/1.62.1/library/core/src/option.rs:718:21
6: <std::time::SystemTime as core::ops::arith::Sub<core::time::Duration>>::sub
at /rustc/1.62.1/library/std/src/time.rs:600:9
7: fastdate::datetime::DateTime::now
8: <fast_log::appender::FastLogFormat as fast_log::appender::RecordFormat>::do_format
note: Some details are omitted, run with RUST_BACKTRACE=full for a verbose backtrace.

Bus error when calling flush with compression enabled

    init_split_log(
        "logs/",
        LogSize::MB(100),
        RollingType::KeepNum(10),
        log::Level::Info,
        None,
        Box::new(LZ4Packer {}),
        false,
    ).expect("Failed to init log");
    fast_log::flush().expect("Failed to flush log");

GZipPacker and ZipPacker hit a bus error; LogPacker and LZ4Packer work fine.

How to configure printing only the executed SQL statements, not the data output

Print only the bold part, not the italic part:

**2020-07-10T21:28:40.073506+08:00 INFO rbatis::rbatis - [rbatis] Query ==> SELECT create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version FROM biz_activity WHERE delete_flag = ? LIMIT 0,20**

**2020-07-10T21:28:40.073506+08:00 INFO rbatis::rbatis - [rbatis] Args ==> [1]**

_2020-07-10T21:28:40.076506500+08:00 INFO rbatis::rbatis - [rbatis] Total <== 5_

```json
{
	"records": [{
		"id": "12312",
		"name": "null",
		"pc_link": "null",
		"h5_link": "null",
		"pc_banner_img": "null",
		"h5_banner_img": "null",
		"sort": "null",
		"status": 1,
		"remark": "null",
		"create_time": "2020-02-09T00:00:00+00:00",
		"version": 1,
		"delete_flag": 1
	}],
	"total": 5,
	"size": 20,
	"current": 1,
	"serch_count": true
}
```

Cannot output debug logs

No matter how I set the log level, nothing below info is output. How can I get debug logs to print?

Question about file log levels

In production, how can I set the level to Info while still outputting Error, with info and error written to two separate files? I also need to configure the file size, e.g. 1 MB per file at most, with parameters configurable the way log4j in Java allows.

console appender uses stdout instead of stderr

I have been using the fast_log crate for my logging, as it happened to be used in example code I started from. It was good enough for me until my code had to generate some output on stdout; unexpectedly, the logging messages got mixed with that output.

Expected behaviour: 'console' logging goes to stderr, regular program output (including println!() or explicit writes to stdout) go to stdout, so consumers can differentiate those.

Actual behaviour: console logging goes to stdout, where the actual program output belongs.

Note: changing this now could be a breaking change for anyone who relies on current (broken) behaviour, e.g. when calling binaries built with this crate in a shell scripts and parsing its logging output.
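
Until the default changes, a workaround is a custom appender that writes to stderr, again following the Custom Log pattern (StderrAppender is a hypothetical name):

use fast_log::appender::{FastLogRecord, LogAppender};
use fast_log::config::Config;

struct StderrAppender;

impl LogAppender for StderrAppender {
    fn do_logs(&mut self, records: &[FastLogRecord]) {
        for record in records {
            // eprint! targets stderr, leaving stdout to the program itself.
            eprint!("{}", record.formated);
        }
    }
}

fn main() {
    fast_log::init(Config::new().custom(StderrAppender)).unwrap();
    log::info!("this line goes to stderr");
    println!("this line goes to stdout");
    log::logger().flush();
}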

The demo example runs but prints nothing

use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().console()).unwrap();
    log::info!("Commencing yak shaving{}", 0);
}

Manual save interface

Is there a way to save unwritten logs on CTRL+C? And a method to rotate manually: I would like to rotate at 00:00 every day.
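
For the CTRL+C part, a sketch: register a signal handler that flushes before exiting. This assumes the third-party ctrlc crate; fast_log itself contributes only the flush call.

use fast_log::config::Config;

fn main() {
    fast_log::init(Config::new().file("target/test.log").chan_len(Some(100000))).unwrap();

    // ctrlc = "3" in Cargo.toml (third-party crate, not part of fast_log).
    ctrlc::set_handler(|| {
        // Drain the channel and write buffered records before exiting.
        log::logger().flush();
        std::process::exit(0);
    })
    .expect("failed to set CTRL+C handler");

    log::info!("running until CTRL+C...");
    loop {
        std::thread::sleep(std::time::Duration::from_secs(1));
    }
}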

add_sub_sec method missing

In version 1.5.51 the add_sub_sec method is missing. Was a DateTime (fastdate) upgrade not followed up on?

no method named `add_sub_sec` found for struct `DateTime` in the current scope
  --> /Users/jiashiwen/.cargo/registry/src/github.com-1ecc6299db9ec823/fast_log-1.5.51/src/formats.rs:25:59
   |
25 |                         fastdate::DateTime::from(arg.now).add_sub_sec(fastdate::offset_sec() as i64)
   |                                                           ^^^^^^^^^^^ method not found in `DateTime`

error[E0599]: no method named `add_sub_sec` found for struct `DateTime` in the current scope
  --> /Users/jiashiwen/.cargo/registry/src/github.com-1ecc6299db9ec823/fast_log-1.5.51/src/formats.rs:84:59
   |
84 |                         fastdate::DateTime::from(arg.now).add_sub_sec(fastdate::offset_sec() as i64)
   |                                                           ^^^^^^^^^^^ method not found in `DateTime`

For more information about this error, try `rustc --explain E0599`.

Why must flush be called before logs are written/output?

While using the crate, I found that logs only appear on the console or in the file if I add log::logger().flush() at the end of my own code; otherwise nothing is output.
I saw this behavior described in your introduction, but not all users like this design. Please also support log output that does not require an explicit flush.

use fast_log::config::Config;
use log::LevelFilter;

fn main() {
    fast_log::init(Config::new().console().level(LevelFilter::Info)).unwrap();
    log::info!("----test2---------");
}
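
For reference, the pattern the README's own examples use: because appending happens on a background thread, buffered records can be lost if main returns first, so flush before exiting.

use fast_log::config::Config;
use log::LevelFilter;

fn main() {
    fast_log::init(Config::new().console().level(LevelFilter::Info)).unwrap();
    log::info!("----test2---------");
    // Blocks until the channel is drained and appenders have written.
    log::logger().flush();
}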

Full flush and wait support

Hi, I am very happy to use this crate, but there are some problems with flush. Since logging happens on another thread, we can lose data when the main thread terminates (if there is data in the channel or unflushed data in appenders). In my opinion, log::logger().flush() should wait until the channel is empty and every appender has flushed its data.

ModuleFilter usage

Hello,
in an older version of fast_log I used:

let mut mail_config = Config::new()
    .chan_len(Some(100000))
    .level(level)
    .filter(ModuleFilter::new_exclude(vec![
        "rustls".to_string(),
        "hyper".to_string(),
        "mio".to_string(),
    ]))
    .format(LogFormat::new().set_display_line_level(CONFIG.log_level));

But this function has changed, so I tried something like:

let filter = ModuleFilter::new();
filter.modules.pushes(vec![
    "rustls".to_string(),
    "hyper".to_string(),
    "mio".to_string(),
]);

let mut mail_config = Config::new()
    .chan_len(Some(100000))
    .level(level)
    .add_filter(filter)
    .format(LogFormat::new().set_display_line_level(CONFIG.log_level));

But this didn't work. Can you explain how to solve this?
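
A sketch of what should compile, based on the usage shown in the module-filter issue above: modules is a plain Vec<String>, which has push and extend but no pushes. The fast_log::filter import path is an assumption.

use fast_log::config::Config;
use fast_log::filter::ModuleFilter; // import path assumed

fn main() {
    let mut filter = ModuleFilter::new();
    // Vec<String> offers push/extend; `pushes` does not exist.
    filter.modules.extend(vec![
        "rustls".to_string(),
        "hyper".to_string(),
        "mio".to_string(),
    ]);

    fast_log::init(
        Config::new()
            .console()
            .chan_len(Some(100000))
            .add_filter(filter),
    )
    .unwrap();
    log::info!("filtered logging initialized");
    log::logger().flush();
}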

Support multiple processes using the same log file

let mut config = fast_log::Config::new()
    .custom(ConsoleErrAppender {})
    .file_split(
        file_path,
        fast_log::consts::LogSize::MB(log_file_mb),
        fast_log::plugin::file_split::RollingType::KeepNum(log_file_keep_num.into()),
        LogPacker {},
    )
    .level(log::LevelFilter::Info);
config.format = Box::new(MyLogFormat {});
fast_log::init(config).expect("failed to init log");

Currently fast_log opens the file with read + write, seeks to the end, and writes logs there.
When multiple processes use the same log file, they repeatedly overwrite the same position.
Opening the file in append mode would solve this, but additional code may be needed to decide when to roll the file and to synchronize that state between processes.
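
For context, what append mode changes, in plain std terms and unrelated to fast_log's internals: with O_APPEND the kernel moves the offset to the end atomically on every write, so concurrent writers do not clobber each other the way seek-then-write does.

use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // append(true) maps to O_APPEND: each write lands at the current
    // end of file, even if another process wrote in between.
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open("target/shared.log")?;
    writeln!(file, "a line from process {}", std::process::id())?;
    Ok(())
}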

lost logs

just test the following code:

use std::time::Duration;

use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_split::RollingType;
use fast_log::plugin::packer::LogPacker;
use fast_log::sleep;
use log::info;

pub fn main() {
    fast_log::init(Config::new().file_split(
        "target/logs/",
        LogSize::MB(100),
        RollingType::All,
        LogPacker {},
    ))
    .unwrap();
    for i in 0..1_000_000 {
        info!("Commencing yak shaving {}", i);
    }
    sleep(Duration::from_secs(1));
}

  1. With sleep, we do not get all 1,000,000 logs.
  2. Using fast_log::flush().unwrap() does not help either.
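
A likely fix, keeping the issue's file_split signature but applying the README's own guidance: pre-allocate the channel and wait on log::logger().flush() instead of a fixed sleep, which can return before the background thread has drained a million records.

use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_split::RollingType;
use fast_log::plugin::packer::LogPacker;
use log::info;

fn main() {
    fast_log::init(Config::new().chan_len(Some(100000)).file_split(
        "target/logs/",
        LogSize::MB(100),
        RollingType::All,
        LogPacker {},
    ))
    .unwrap();
    for i in 0..1_000_000 {
        info!("Commencing yak shaving {}", i);
    }
    // Blocks until all queued records are written, unlike sleep().
    log::logger().flush();
}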

support rolling log file?

like the log4j feature below?

<appender name="roll-by-size" class="org.apache.log4j.RollingFileAppender">
    <param name="file" value="target/log4j/roll-by-size/app.log" />
    <param name="MaxFileSize" value="5KB" />
    <param name="MaxBackupIndex" value="2" />
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n" />
        </layout>
</appender>
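
For reference, a rough equivalent of that log4j appender with the current API, mirroring the Split Log (BySize) example above: MaxFileSize 5KB becomes LogSize::KB(5) and MaxBackupIndex 2 becomes KeepType::KeepNum(2).

use fast_log::config::Config;
use fast_log::consts::LogSize;
use fast_log::plugin::file_split::{KeepType, Rolling, RollingType};
use fast_log::plugin::packer::LogPacker;

fn main() {
    fast_log::init(Config::new().chan_len(Some(100000)).file_split(
        "target/log4j/roll-by-size/",
        Rolling::new(RollingType::BySize(LogSize::KB(5))),
        KeepType::KeepNum(2),
        LogPacker {},
    ))
    .unwrap();
    log::info!("rolled by size, two backups kept");
    log::logger().flush();
}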

Inconsistent slash directions on Windows when line numbers shown

In my app, which is meant to primarily run on Windows targets, I'm using fast_log and have been mostly happy with it.

However, I've noticed that when line numbers are shown, the slash direction is very inconsistent.

2024-08-05 18:33:42.7056249 [INFO] [src\app.rs:303] Sending shutdown signal to threads!
2024-08-05 18:33:42.7056667 [DEBUG] [src\app.rs:328] Joining OSC thread
2024-08-05 18:33:42.7056977 [INFO] [src\heart_rate_dummy.rs:60] Shutting down Dummy thread!
2024-08-05 18:33:42.7057416 [INFO] [src\osc.rs:326] Shutting down OSC thread!
2024-08-05 18:33:42.7058752 [DEBUG] [src\app.rs:342] Joining Dummy thread
2024-08-05 18:33:42.7058867 [INFO] [src/main.rs:83] Shutting down gracefully...
2024-08-05 18:37:57.8477734 [INFO] [src/main.rs:63] Starting app...
2024-08-05 18:37:57.8482919 [DEBUG] [src/app.rs:277] Spawning Dummy thread
2024-08-05 18:37:57.8483503 [DEBUG] [src/app.rs:251] Spawning OSC thread
2024-08-05 18:38:00.0461669 [INFO] [src/app.rs:303] Sending shutdown signal to threads!

It seems like it could be fixed with a simple .replace("\\", "/") in formats.rs, but I'm not sure if that's the ideal solution. :)
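
The suggested fix in isolation: normalize whatever file!() reports to forward slashes. A sketch; where it would live inside formats.rs is up to the maintainers.

fn normalize_path(file: &str) -> String {
    // On Windows, file!() yields "src\app.rs" for some code but
    // "src/main.rs" style paths for other crates; unify the separators.
    file.replace('\\', "/")
}

fn main() {
    assert_eq!(normalize_path(r"src\app.rs"), "src/app.rs");
    assert_eq!(normalize_path("src/main.rs"), "src/main.rs");
}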

rolling does not work?

Running the examples, I found that when temp.log reaches the maximum size it is cleared, but no rolled file is generated?
