
systemd plugin for Fluentd


Overview

  • systemd input plugin to read logs from the systemd journal
  • systemd filter plugin for basic manipulation of systemd journal entries

Support

Join the #plugin-systemd channel on the Fluentd Slack

Requirements

fluent-plugin-systemd   fluentd           td-agent   ruby
> 0.1.0                 >= 0.14.11, < 2   3          >= 2.1
0.0.x                   ~> 0.12.0         2          >= 1.9
  • The 1.x.x series is developed from this branch (master)
  • The 0.0.x series (compatible with fluentd v0.12, and td-agent 2) is maintained on the 0.0.x branch

Installation

Simply use RubyGems:

gem install fluent-plugin-systemd -v 1.0.3

or

td-agent-gem install fluent-plugin-systemd -v 1.0.3

Upgrading

If you are upgrading to version 1.0 from a previous version of this plugin, take a look at the upgrade documentation. A number of deprecated config options were removed, so you might need to update your configuration.

Input Plugin Configuration

<source>
  @type systemd
  tag kubelet
  path /var/log/journal
  matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
  read_from_head true

  <storage>
    @type local
    path /var/log/fluentd-journald-kubelet-cursor.json
  </storage>

  <entry>
    fields_strip_underscores true
    fields_lowercase true
  </entry>
</source>

<match kubelet>
  @type stdout
</match>

<system>
  root_dir /var/log/fluentd
</system>

path

Path to the systemd journal, defaults to /var/log/journal

filters

This parameter name is deprecated and should be renamed to matches

matches

Expects an array of hashes defining desired matches to filter the log messages with. When this property is not specified, this plugin will default to reading all logs from the journal.

See matching details for a more exhaustive description of this property and how to use it.
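If the plugin follows journal match semantics (as its matching documentation suggests), hashes in the array are ORed together, while keys within a single hash are ANDed. A hedged sketch, with illustrative unit names:

```
# Read entries coming from either the docker or the kubelet unit
matches [{ "_SYSTEMD_UNIT": "docker.service" }, { "_SYSTEMD_UNIT": "kubelet.service" }]
```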

storage

Configuration for a storage plugin used to store the journald cursor.

read_from_head

If true, reads all available journal entries from the head; otherwise starts reading from the tail. This setting is ignored if a valid cursor exists in storage. Defaults to false.

entry

Optional configuration for an embedded systemd entry filter. See the Filter Plugin Configuration for config reference.

tag

Required

A tag that will be added to events generated by this input.

Filter Plugin Configuration

<filter kube-proxy>
  @type systemd_entry
  field_map {"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}
  field_map_strict false
  fields_lowercase true
  fields_strip_underscores true
</filter>

Note that the following configurations can be embedded in a systemd source block within an entry block; you only need to use the filter directly for more complicated workflows.

field_map

Object / hash defining a mapping of source fields to destination fields. Destination fields may be existing or new user-defined fields. If multiple source fields are mapped to the same destination field, the contents of the fields will be appended to the destination field in the order defined in the mapping. A field map declaration takes the form of:

{
  "<src_field1>": "<dst_field1>",
  "<src_field2>": ["<dst_field1>", "<dst_field2>"],
  ...
}

Defaults to an empty map.

field_map_strict

If true, only destination fields from field_map are included in the result. Defaults to false.
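As an illustrative sketch, combining field_map with field_map_strict can act as a field whitelist (the destination name "log" here is an assumption for demonstration):

```
<filter **>
  @type systemd_entry
  # Keep only the message, renamed to "log"; drop every other journal field
  field_map {"MESSAGE": "log"}
  field_map_strict true
</filter>
```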

fields_lowercase

If true, lowercase all non-mapped fields. Defaults to false.

fields_strip_underscores

If true, strip leading underscores from all non-mapped fields. Defaults to false.

Filter Example

Given a systemd journal source entry:

{
  "_MACHINE_ID": "bb9d0a52a41243829ecd729b40ac0bce",
  "_HOSTNAME": "arch",
  "MESSAGE": "this is a log message",
  "_PID": "123",
  "_CMDLINE": "login -- root",
  "_COMM": "login"
}

The resulting entry using the above sample configuration:

{
  "machine_id": "bb9d0a52a41243829ecd729b40ac0bce",
  "hostname": "arch",
  "log": "this is a log message",
  "pid": "123",
  "cmd": "login",
  "process": "123 login -- root"
}

Common Issues

When I look at the fluentd logs everything looks fine, but no journal logs are read?

This is commonly caused when the user running fluentd does not have the correct permissions to read the systemd journal.

According to the systemd documentation:

Journal files are, by default, owned and readable by the "systemd-journal" system group but are not writable. Adding a user to this group thus enables her/him to read the journal files.
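For example, if fluentd runs as the td-agent user (adjust the user name to your setup; this is a sketch, not a definitive procedure), adding it to the group might look like:

```
# Append the td-agent user to the systemd-journal supplementary group
usermod -aG systemd-journal td-agent

# Restart the service so the new group membership takes effect
systemctl restart td-agent
```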

How can I deal with multi-line logs?

Ideally you want to ensure that your logs are saved to the systemd journal as a single entry regardless of how many lines they span.

It is possible for applications to support this natively (but seemingly only if they have tight integration with systemd); see systemd/systemd#5188.

Typically you would not be able to do this, so another approach is to configure your logger to replace newline characters with something else. See this blog post for an example of configuring a Java logging library to do so: https://fabianlee.org/2018/03/09/java-collapsing-multiline-stack-traces-into-a-single-log-event-using-spring-backed-by-logback-or-log4j2/

Another strategy would be to use a plugin like fluent-plugin-concat to combine multi-line logs into a single event. This is trickier, though, because you need to be able to identify the first and last lines of a multi-line message with a regex.
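A hedged sketch of such a fluent-plugin-concat filter (the tag and the start regex are assumptions you would adapt to your own log format):

```
<filter kubelet>
  @type concat
  key MESSAGE
  # Treat a line starting with a timestamp as the first line of a new event
  multiline_start_regexp /^\d{4}-\d{2}-\d{2}/
  flush_interval 5
</filter>
```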

How can I use this plugin inside of a docker container?

  • Install the systemd dependencies if required.
  • You can use an official fluentd docker image as a base (choose a Debian-based version, as Alpine Linux doesn't support systemd).
  • Bind mount /var/log/journal into your container.
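Putting these steps together, a minimal invocation might look like the following sketch (the image tag and config path are assumptions to adapt):

```
docker run -d \
  -v /var/log/journal:/var/log/journal:ro \
  -v /path/to/fluent.conf:/fluentd/etc/fluent.conf \
  fluent/fluentd:v1.16-debian-1
```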

I am seeing lots of logs being generated very rapidly!

This commonly occurs when a loop is created: fluentd logs to STDOUT, and the collected logs are then written back to the systemd journal. This can happen if you run fluentd as a systemd service, or as a docker container with the systemd log driver.

Workarounds include:

  • Use another fluentd output
  • Don't read every message from the journal; set some matches so you only read the messages you are interested in.
  • Disable the systemd log driver when you launch your fluentd docker container, e.g. by passing --log-driver json-file

Example

For an example of a full working setup including the plugin, take a look at the fluentd kubernetes daemonset

Dependencies

This plugin depends on libsystemd

On Debian or Ubuntu you might need to install the libsystemd0 package:

apt-get install libsystemd0

On AlmaLinux or RHEL you might need to install the systemd package:

yum install -y systemd

If you want to do this in an AlmaLinux docker image you might first need to remove the fakesystemd package.

yum remove -y fakesystemd

Running the tests

To run the tests with docker on several distros, simply run rake.

For systems with systemd installed, you can run the tests against your installed libsystemd with rake test.

License

Apache-2.0

Contributions

Issues and pull requests are very welcome.

If you want to make a contribution but need some help or advice, feel free to message me (@errm) on the Fluentd Slack, or send me an email: [email protected]

We have adopted the Contributor Covenant and thus expect anyone interacting with contributors, maintainers and users of this project to abide by it.

Maintainer

Contributors

Many thanks to our fantastic contributors


fluent-plugin-systemd's Issues

Not able to ship logs even after adding td-agent user to systemd-journal group

Hi @errm, we are facing an issue shipping logs from systemd. We have followed all the instructions and added the "td-agent" user to the "systemd-journal" group. Ours is a CentOS box and we have installed td-agent version 3 (td-agent.x86_64 3.8.0-0.el7). It looks like a permission issue; when we add the "td-agent" user to the "root" group it works. Do we really need to provide root access to the td-agent service for this plugin to work? Can you please help us with this?

Regards,
Sri Ramanathan

Duplicate entries from journald in 0.0.11

Our team has found this issue in one of our fluentd images. Steps to reproduce:

  1. Create file /etc/fluent/fluent.conf with content
<source>
    type systemd
    filters [{ "_SYSTEMD_UNIT": "docker.service" }]
    pos_file /var/log/gcp-journald-docker.pos
    read_from_head true
    tag docker
</source>

<match docker>
    @type stdout
</match>
  2. Run the command: docker run -ti --rm -v /etc/fluent/fluent.conf:/etc/fluent/fluent.conf -v /usr/lib64:/host/lib -v /var/log:/var/log --network=host gcr.io/google-containers/fluentd-gcp:2.0.18

It prints the latest message from docker every second.

If we use version 2.0.17 instead, we don't see the same problem. The biggest difference between those two images is the systemd gem version (0.0.11 vs 0.0.9).

systemd plugin fails to recognise journal file rotation

RHEL 7.3
systemd plugin 0.1.0

It looks like the plugin stops feeding the entries on journal file rotation.

How to reproduce:

Update /etc/systemd/journald.conf with the following config entries (just for demo purposes):

[Journal]
Storage=persistent
SystemMaxUse=8M
SystemMaxFileSize=1M
SystemMaxFiles=3
MaxRetentionSec=1d
MaxFileSec=5m

reload systemd config:
systemctl force-reload systemd-journald

Once the file is rotated after 5 min the logs stop flowing.

Example configuration issues with td-agent 3

Just wanted to let you know I had some issues getting this plugin to run with the example config, because the fluentd config parser (td-agent 3) failed to parse the matches field.

To resolve the issue I had to escape the quotes, so my configuration ended up looking like this

matches "[{ \"_SYSTEMD_UNIT\": \"consul.service\" }]"

In addition, I had trouble getting the storage configuration used in the example to work. I was only able to get it running once I changed <storage in-systemd-consul> to <storage>.

systemd plugin causes immediate, silent crash

2016-08-16 17:12:07 -0400 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2016-08-16 17:12:07 -0400 [info]: starting fluentd-0.12.26
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-concat' version '0.5.0'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-forest' version '0.3.1'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-mongo' version '0.7.13'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-multi-format-parser' version '0.0.2'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-s3' version '0.6.8'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-sar' version '0.0.4'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-secure-forward' version '0.4.3'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-systemd' version '0.0.3'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-tail-multiline' version '0.1.5'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-td' version '0.10.28'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2016-08-16 17:12:07 -0400 [info]: gem 'fluent-plugin-webhdfs' version '0.4.2'
2016-08-16 17:12:07 -0400 [info]: gem 'fluentd' version '0.12.26'
2016-08-16 17:12:07 -0400 [info]: gem 'fluentd' version '0.10.61'
2016-08-16 17:12:07 -0400 [info]: adding filter in @DEFAULT pattern="**" type="stdout"
2016-08-16 17:12:07 -0400 [info]: adding match in @DEFAULT pattern="**" type="copy"
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="**" type="record_transformer"
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="fluent.{fatal,error,warn,info,debug,trace}.**" type="record_transformer"
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="docker-container.elasticsearch-soraka.**" type="concat"
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="docker-container.fluentd-soraka.**" type="concat"
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="docker-container.nmap-scanner.**" type="concat"
2016-08-16 17:12:07 -0400 [info]: adding match pattern="filter.service.fluentd-forwarder.**" type="rewrite_tag_filter"
2016-08-16 17:12:07 -0400 [info]: adding rewrite_tag_filter rule: rewriterule1 ["log", /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [-+]\d{4} \[\w+?\]: Timeout flush:/, "", "skip.${tag}"]
2016-08-16 17:12:07 -0400 [info]: adding rewrite_tag_filter rule: rewriterule2 ["log", /.*/, "", "${tag}"]
2016-08-16 17:12:07 -0400 [info]: adding match pattern="skip.service.fluentd-forwarder.**" type="rewrite_tag_filter"
2016-08-16 17:12:07 -0400 [info]: adding rewrite_tag_filter rule: rewriterule1 ["log", /.*/, "", "${tag}"]
2016-08-16 17:12:07 -0400 [info]: adding filter pattern="service.fluentd-forwarder.**" type="concat"
2016-08-16 17:12:07 -0400 [info]: adding match pattern="**" type="relabel"
2016-08-16 17:12:07 -0400 [info]: adding source type="dummy"
2016-08-16 17:12:07 -0400 [info]: adding source type="forward"
2016-08-16 17:12:07 -0400 [info]: adding source type="systemd"
2016-08-16 17:12:07 -0400 [info]: adding source type="tail"
2016-08-16 17:12:07 -0400 [info]: using configuration file: <ROOT>
  <source>
    @type dummy
    tag fluent.heartbeat.buffered
    rate 1
    dummy {"message":"heartbeat","level":"debug"}
  </source>
  <source>
    @type forward
    bind 0.0.0.0
    port 24224
  </source>
  <source>
    @type systemd
    tag systemd.docker-engine
    @log_level trace
    path /var/log/journal/
    filters [{"_SYSTEMD_UNIT":"docker.service"}]
    strip_underscores true
    pos_file /var/lib/td-agent/pos/docker-engine.pos
    read_from_head true
  </source>
  <source>
    @type tail
    tag filter.service.fluentd-forwarder
    path /var/log/td-agent/td-agent.log
    pos_file /var/lib/td-agent/pos/td-agent.pos
    read_from_head true
    format /(?<log>^((?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [-+]\d{4}) \[(?<level>\w+?)\])?.*$)/
    keep_time_key false
  </source>
  <filter **>
    @type record_transformer
    <record>
      hostname ${hostname}
    </record>
  </filter>
  <filter fluent.{fatal,error,warn,info,debug,trace}.**>
    @type record_transformer
    <record>
      level ${tag_parts[1]}
    </record>
  </filter>
  <filter docker-container.elasticsearch-soraka.**>
    @type concat
    key log
    stream_identity_key source
    multiline_start_regexp /^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}\]/
    flush_interval 10
    timeout_label @DEFAULT
  </filter>
  <filter docker-container.fluentd-soraka.**>
    @type concat
    key log
    stream_identity_key source
    multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
    flush_interval 10
    timeout_label @DEFAULT
  </filter>
  <filter docker-container.nmap-scanner.**>
    @type concat
    key log
    stream_identity_key source
    multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}:/
    flush_interval 120
    timeout_label @DEFAULT
  </filter>
  <match filter.service.fluentd-forwarder.**>
    @type rewrite_tag_filter
    remove_tag_prefix filter
    rewriterule1 log ^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [-+]\d{4} \[\w+?\]: Timeout flush:  skip.${tag}
    rewriterule2 log .* ${tag}
  </match>
  <match skip.service.fluentd-forwarder.**>
    @type rewrite_tag_filter
    @label @DEFAULT
    remove_tag_prefix skip
    rewriterule1 log .* ${tag}
  </match>
  <filter service.fluentd-forwarder.**>
    @type concat
    key log
    multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/
    flush_interval 10
    timeout_label @DEFAULT
  </filter>
  <match **>
    @type relabel
    @label @DEFAULT
  </match>
  <label @DEFAULT>
    <filter **>
      @type stdout
    </filter>
    <match **>
      @type copy
      <store>
        @type secure_forward
        buffer_type file
        buffer_path /var/spool/td-agent/buf/fek.*.buffer
        secure true
        shared_key xxxxxx
        self_hostname [REDACTED]
        ca_cert_path /etc/td-agent/ssl/certs/ca_cert.pem
        <server>
          host [REDACTED]
          port 24284
        </server>
      </store>
      <store>
        @type forest
        subtype copy
        <case docker-container.**>
          <store>
            @type file
            path /var/log/td-agent/docker-containers/${tag_parts[1..-1]}
            time_slice_format %Y%m%d
            time_slice_wait 10m
            time_format %Y%m%d@%H%M%S%z
            utc
            compress gzip
          </store>
        </case>
      </store>
    </match>
  </label>
</ROOT>
2016-08-16 17:12:07 -0400 [info]: listening fluent socket on 0.0.0.0:24224
2016-08-16 17:12:07 -0400 [info]: following tail of /var/log/td-agent/td-agent.log
2016-08-16 17:12:07 -0400 [info]: out_forest plants new output: copy for tag 'service.fluentd-forwarder'
2016-08-16 17:12:07 -0400 systemd.docker-engine: {"PRIORITY":"6","UID":"0","GID":"0","SYSTEMD_SLICE":"system.slice","BOOT_ID":"9bdaf4b9512f493c99a8be69ae697588","MACHINE_ID":"ab6857580005461cab71a95470ac18e6","HOSTNAME":"[REDACTED]","SYSLOG_FACILITY":"3","CAP_EFFECTIVE":"1fffffffff","TRANSPORT":"stdout","SYSLOG_IDENTIFIER":"docker","MESSAGE":"time=\"2016-07-28T10:24:03.361026938-04:00\" level=info msg=\"New containerd process, pid: 2089\\n\"","PID":"941","COMM":"docker","EXE":"/usr/bin/docker","CMDLINE":"/usr/bin/docker daemon -H fd://","SYSTEMD_CGROUP":"/system.slice/docker.service","SYSTEMD_UNIT":"docker.service","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:34:11 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:34:21 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:34:31 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:34:41 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:34:51 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 service.fluentd-forwarder: {"log":"2016-08-16 03:35:01 -0400 [info]: Timeout flush: service.fluentd-forwarder:default","level":"info","hostname":"[REDACTED]"}
2016-08-16 17:12:07 -0400 [info]: out_forest plants new output: copy for tag 'systemd.docker-engine'
2016-08-16 17:12:08 -0400 [info]: process finished code=134
2016-08-16 17:12:08 -0400 [warn]: process died within 1 second. exit.
  • Crash does not happen if the systemd <source> entry is removed from the .conf file
  • '/var/log/journal/' exists and is readable (previously, it existed but was not readable; crash did not happen but systemd plugin produced no events)
  • '/var/lib/td-agent/pos/docker-engine.pos' does not exist and is not created.

fluentd restart loop when reading journal logs

I'm running fluentd with the systemd-journal plugin. After running fine for a few days, fluentd keeps crashing with the below error:

/opt/td-agent/embedded/lib/ruby/2.1.0/json/common.rb:128: warning: previous definition of UnparserError was here
2016-05-02 22:26:04 +0000 [info]: adding match pattern="**" type="aws-elasticsearch-service"
2016-05-02 22:26:05 +0000 [info]: adding source type="tail"
2016-05-02 22:26:06 +0000 [info]: adding source type="systemd"
2016-05-02 22:26:06 +0000 [error]: unexpected error error="Invalid argument"
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.2/lib/systemd/journal/navigable.rb:106:in `seek'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.2/lib/fluent/plugin/in_systemd.rb:23:in `configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/root_agent.rb:150:in `add_source'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/root_agent.rb:91:in `block in configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/root_agent.rb:88:in `each'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/root_agent.rb:88:in `configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/engine.rb:117:in `configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/engine.rb:91:in `run_configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:515:in `run_configure'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:146:in `block in start'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:352:in `call'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:352:in `main_process'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:325:in `block in supervise'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:324:in `fork'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:324:in `supervise'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/supervisor.rb:142:in `start'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/lib/fluent/command/fluentd.rb:171:in `<top (required)>'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.20/bin/fluentd:6:in `<top (required)>'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/bin/fluentd:23:in `load'
  2016-05-02 22:26:06 +0000 [error]: /opt/td-agent/embedded/bin/fluentd:23:in `<top (required)>'
  2016-05-02 22:26:06 +0000 [error]: /usr/sbin/td-agent:7:in `load'
  2016-05-02 22:26:06 +0000 [error]: /usr/sbin/td-agent:7:in `<main>'
2016-05-02 22:26:06 +0000 [info]: process finished code=256
2016-05-02 22:26:06 +0000 [error]: fluentd main process died unexpectedly. restarting.

For reference here is our dockerfile:
https://github.com/Nordstrom/docker-fluentd-aws-elasticsearch/blob/master/Dockerfile

OS version: CoreOS beta 1010.1.0
Systemd version: 225

How can i use multiline ??



@test@ FATAL io.robustperception.java_examples.JavaSimple$ExampleServlet Fatal : You chose a number > 100
java.lang.NullPointerException: NullError
at io.robustperception.java_examples.JavaSimple$ExampleServlet.doGet(JavaSimple.java:35)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:648)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:365)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)

Heavy/rapid fluentd container logs are generated when the docker daemon logging driver is set to journald and fluent-plugin-systemd is used

Hi There,

Looks like fluent-plugin-systemd rapidly logs huge fluentd container logs when the logging driver of the docker daemon is set to journald. Below is an example; note the thousands of backslash characters. My log files have grown to GBs now. Things go back to normal when I set the docker logging driver to json. Is this expected behaviour?

May 16 11:01:09 ip-172-25-0-204 debfd82b3369: 2018-05-16 11:01:08.389579000 +0000 kube-proxy: {"priority":"6","container_name":"fluentd","container_partial_message":"true","transport":"journal","pid":"1
432","uid":"0","gid":"0","comm":"dockerd","exe":"/usr/bin/dockerd","cmdline":"/usr/bin/dockerd","cap_effective":"1fffffffff","systemd_cgroup":"/system.slice/docker.service","systemd_unit":"docker.servic
e","systemd_slice":"system.slice","selinux_context":"system_u:system_r:container_runtime_t:s0","boot_id":"bd7c6704228948fc9a8a7ee6f8795586","machine_id":"f073c429a7456b53ec3e2c53460c5c8f","hostname":"ip
-172-25-0-204","container_tag":"debfd82b3369","syslog_identifier":"debfd82b3369","container_id":"debfd82b3369","container_id_full":"debfd82b33696a2c9162d6afa6e6e2e5ebcdf104fa985f6e2430e8e82782c57e","mes
sage":"2018-05-16 11:01:07.925657000 +0000 kube-proxy: {"priority":"6","container_name":"fluentd","transport":"journal","pid":"1432","uid":"0","gid":"0","comm":"dockerd","ex
e":"/usr/bin/dockerd","cmdline":"/usr/bin/dockerd","cap_effective":"1fffffffff","systemd_cgroup":"/system.slice/docker.service","systemd_unit":"docker.service","systemd_slice":"sys
tem.slice","selinux_context":"system_u:system_r:container_runtime_t:s0","boot_id":"bd7c6704228948fc9a8a7ee6f8795586","machine_id":"f073c429a7456b53ec3e2c53460c5c8f","hostname":"ip-172-25
-0-204","container_tag":"debfd82b3369","syslog_identifier":"debfd82b3369","container_id":"debfd82b3369","container_id_full":"debfd82b33696a2c9162d6afa6e6e2e5ebcdf104fa985f6e2430e8e82782c
57e","message":"\\\\ [... thousands of escaped backslash characters truncated ...]

Thanks,
Amey

How to get Gke node reboot and shutdown logs using journald Systemd service unit

What happened:
I have implemented the journald configuration in my Splunk setup, but I still cannot see node reboot logs.

I want to know whether my GKE node was restarted by checking the logs in Splunk, but I am not sure which systemd service unit holds that data.

What you expected to happen:

I want to see logs reflected whenever a reboot happens on that GKE node.

How to reproduce it (as minimally and precisely as possible):

I have a GKE cluster with a node running Container-Optimized OS. Log in to the GKE node, reboot it, and in Splunk we should then see some logs stating that the GKE node rebooted.

Anything else we need to know?:
Configuration used to read the journal:

source.journald.conf: |-
  # This fluentd conf file contains configurations for reading logs from systemd journal.
  <source>
    @id journald-all
    @type systemd
    @label @CONCAT
    tag journald.journal:all
    path "/var/log/journal"
    matches []
    read_from_head true
    <storage>
      @type local
      persistent true
      path /var/log/splunkd-fluentd-journald-all.pos.json
    </storage>
    <entry>
      field_map {"MESSAGE": "log", "_SYSTEMD_UNIT": "source"}
      field_map_strict true
    </entry>
  </source>

Environment:

Kubernetes version (use kubectl version): 1.15.12-gke.2
Ruby version (use ruby --version):
OS (e.g: cat /etc/os-release): Container-Optimized OS (cos)
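Before touching the fluentd config, journalctl itself can help narrow down which unit records reboot events. These are standard journalctl flags; which units actually log shutdown messages depends on the Container-Optimized OS image:

```shell
# List recorded boots; a new entry appears after every reboot.
journalctl --list-boots

# Messages from the previous boot for the login/shutdown manager.
journalctl -b -1 -u systemd-logind.service

# Kernel messages from the previous boot often show the shutdown sequence.
journalctl -b -1 -k | tail -n 20
```

Once a suitable unit is identified, it can be targeted with a matches entry such as `matches [{ "_SYSTEMD_UNIT": "systemd-logind.service" }]`.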

`checking for ffi.h... *** extconf.rb failed ***`

FROM fluent/fluentd:v1.10-debian

USER root 

RUN apt update
RUN apt install -y libffi-dev
RUN apt install -y libsystemd0

RUN gem install fluent-plugin-systemd -v 1.0.1 || true
RUN echo "test"
RUN cat /usr/local/bundle/extensions/x86_64-linux/2.6.0/ffi-1.13.1/gem_make.out 

USER fluent
 docker build -t fld .
Sending build context to Docker daemon  83.89MB
Step 1/9 : FROM fluent/fluentd:v1.10-debian
 ---> a8898cd41b95
Step 2/9 : USER root
 ---> Using cache
 ---> 96bf61600028
Step 3/9 : RUN apt update
 ---> Using cache
 ---> c81580e5433f
Step 4/9 : RUN apt install -y libffi-dev
 ---> Using cache
 ---> 8877b4ec056c
Step 5/9 : RUN apt install -y libsystemd0
 ---> Running in 1017ac975fed

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Reading package lists...
Building dependency tree...
Reading state information...
The following packages will be upgraded:
  libsystemd0
1 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.
Need to get 331 kB of archives.
After this operation, 1024 B of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 libsystemd0 amd64 241-7~deb10u4 [331 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 331 kB in 0s (1557 kB/s)
(Reading database ... 8571 files and directories currently installed.)
Preparing to unpack .../libsystemd0_241-7~deb10u4_amd64.deb ...
Unpacking libsystemd0:amd64 (241-7~deb10u4) over (241-7~deb10u3) ...
Setting up libsystemd0:amd64 (241-7~deb10u4) ...
Processing triggers for libc-bin (2.28-10) ...
Removing intermediate container 1017ac975fed
 ---> f69ba7460617
Step 6/9 : RUN gem install fluent-plugin-systemd -v 1.0.1 || true
 ---> Running in 2aa1b03a4aac
Building native extensions. This could take a while...
ERROR:  Error installing fluent-plugin-systemd:
	ERROR: Failed to build gem native extension.

    current directory: /usr/local/bundle/gems/ffi-1.13.1/ext/ffi_c
/usr/local/bin/ruby -I /usr/local/lib/ruby/2.6.0 -r ./siteconf20200616-6-1uy72yx.rb extconf.rb
checking for ffi.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
	--with-opt-dir
	--without-opt-dir
	--with-opt-include
	--without-opt-include=${opt-dir}/include
	--with-opt-lib
	--without-opt-lib=${opt-dir}/lib
	--with-make-prog
	--without-make-prog
	--srcdir=.
	--curdir
	--ruby=/usr/local/bin/$(RUBY_BASE_NAME)
	--with-ffi_c-dir
	--without-ffi_c-dir
	--with-ffi_c-include
	--without-ffi_c-include=${ffi_c-dir}/include
	--with-ffi_c-lib
	--without-ffi_c-lib=${ffi_c-dir}/lib
	--enable-system-libffi
	--disable-system-libffi
	--with-libffi-config
	--without-libffi-config
	--with-pkg-config
	--without-pkg-config
/usr/local/lib/ruby/2.6.0/mkmf.rb:467:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:601:in `try_cpp'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:1109:in `block in have_header'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:959:in `block in checking_for'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block (2 levels) in postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block in postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:357:in `postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:958:in `checking_for'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:1108:in `have_header'
	from extconf.rb:10:in `system_libffi_usable?'
	from extconf.rb:42:in `<main>'

To see why this extension failed to compile, please check the mkmf.log which can be found here:

  /usr/local/bundle/extensions/x86_64-linux/2.6.0/ffi-1.13.1/mkmf.log

extconf failed, exit code 1

Gem files will remain installed in /usr/local/bundle/gems/ffi-1.13.1 for inspection.
Results logged to /usr/local/bundle/extensions/x86_64-linux/2.6.0/ffi-1.13.1/gem_make.out
Removing intermediate container 2aa1b03a4aac
 ---> 12f73b2ac698
Step 7/9 : RUN echo "test"
 ---> Running in 17c67d27b2e8
test
Removing intermediate container 17c67d27b2e8
 ---> b36b9d804837
Step 8/9 : RUN cat /usr/local/bundle/extensions/x86_64-linux/2.6.0/ffi-1.13.1/gem_make.out
 ---> Running in c169523e07c5
current directory: /usr/local/bundle/gems/ffi-1.13.1/ext/ffi_c
/usr/local/bin/ruby -I /usr/local/lib/ruby/2.6.0 -r ./siteconf20200616-6-1uy72yx.rb extconf.rb
checking for ffi.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
	--with-opt-dir
	--without-opt-dir
	--with-opt-include
	--without-opt-include=${opt-dir}/include
	--with-opt-lib
	--without-opt-lib=${opt-dir}/lib
	--with-make-prog
	--without-make-prog
	--srcdir=.
	--curdir
	--ruby=/usr/local/bin/$(RUBY_BASE_NAME)
	--with-ffi_c-dir
	--without-ffi_c-dir
	--with-ffi_c-include
	--without-ffi_c-include=${ffi_c-dir}/include
	--with-ffi_c-lib
	--without-ffi_c-lib=${ffi_c-dir}/lib
	--enable-system-libffi
	--disable-system-libffi
	--with-libffi-config
	--without-libffi-config
	--with-pkg-config
	--without-pkg-config
/usr/local/lib/ruby/2.6.0/mkmf.rb:467:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:601:in `try_cpp'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:1109:in `block in have_header'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:959:in `block in checking_for'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block (2 levels) in postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block in postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:357:in `postpone'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:958:in `checking_for'
	from /usr/local/lib/ruby/2.6.0/mkmf.rb:1108:in `have_header'
	from extconf.rb:10:in `system_libffi_usable?'
	from extconf.rb:42:in `<main>'

To see why this extension failed to compile, please check the mkmf.log which can be found here:

  /usr/local/bundle/extensions/x86_64-linux/2.6.0/ffi-1.13.1/mkmf.log

extconf failed, exit code 1
Removing intermediate container c169523e07c5
 ---> 1f220ec273b6
Step 9/9 : USER fluent
 ---> Running in 0910aba9f78b
Removing intermediate container 0910aba9f78b
 ---> d1c1151d58d9
Successfully built d1c1151d58d9
Successfully tagged fld:latest
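The key line in the build log is "You have to install development tools first": the runtime image has no C compiler, so the ffi native extension cannot even probe for ffi.h, and libsystemd0 alone ships only the runtime library, not the headers. A sketch of a Dockerfile that addresses both (package names assume the Debian buster base image; not tested against every tag):

```dockerfile
FROM fluent/fluentd:v1.10-debian

USER root

# build-essential provides the compiler; libffi-dev and libsystemd-dev
# provide the headers needed by the ffi and systemd-journal native extensions.
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential libffi-dev libsystemd-dev \
 && gem install fluent-plugin-systemd -v 1.0.1 \
 # Optionally drop the toolchain again to keep the image small.
 && apt-get purge -y build-essential \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*

USER fluent
```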

Plugin throughput too slow

Hi, I'm using this plugin to read logs from the journal and save them to files, split by container name.
The configuration I'm using looks like this:

# Logs from docker-systemd
<source>
  @type systemd
  @id in_systemd_docker
  matches [{ "_SYSTEMD_UNIT": "docker.service" }]
  <storage>
    @type local
    persistent true
    path /var/log/fluentd/journald-docker-cursor.json
  </storage>
  read_from_head false
  tag docker.systemd
</source>
<match docker.systemd>
      @type copy
      <store>
        @type file
        @id out_file_docker
        path /file-logs/${$.CONTAINER_TAG}/%Y/%m/%d/std${$.PRIORITY}
        append true
        <format>
          @type single_value 
          message_key MESSAGE
        </format>
        <buffer $.CONTAINER_TAG,$.PRIORITY,time>
          @type file
          path /var/log/fluentd/file-buffers/
          timekey 1d
          flush_thread_interval 10
          flush_mode interval
          flush_interval 10s
          flush_at_shutdown true
        </buffer>
      </store>
      <store>
        @type prometheus
        <metric>
          name fluentd_output_status_num_records_total
          type counter
          desc The total number of outgoing 
        </metric>
    </store>
</match>

With this setting I'm only getting a throughput of ~1,000 lines per second, while according to https://docs.fluentd.org/deployment/performance-tuning-single-process Fluentd should be able to handle 5,000 lines per second.
A few additional details:

  • I'm running Fluentd inside a Docker container with 4 GB of memory and 4096 CPU shares

  • Tried with local storage as well as shared storage

  • Tried removing the file output and using only the metrics output

How to use this plugin via fluentd docker?

I have tried to run fluent-plugin-systemd in a Docker container, but it is not able to find systemd (dependency failures).

Is there a way to make this plugin work inside a Docker container?

License change request (Apache License v2.0)

answering @errm requirement

The Fluentd organization and CNCF are happy to receive this plugin and host it under the official fluent/ organization. The only requirement is changing the license to Apache License v2. If all authors agree and the license is changed, we can start the transition.

How do you handle time keys?

I need to store timestamps under the "time" key for Kibana, but journald logs appear with an @timestamp key. This results in empty search results in Kibana until I remove the index and point it at the @timestamp key.
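One workaround, sketched under the assumption that the record really carries an @timestamp field by the time it reaches the filter chain: copy it into the time key with the stock record_transformer filter (the match pattern below is a placeholder for your journald tag). If you are shipping to Elasticsearch with fluent-plugin-elasticsearch, its time_key option may be the cleaner fix.

```
<filter your.journald.tag>
  @type record_transformer
  enable_ruby true
  <record>
    # Copy the journald timestamp into the key Kibana expects.
    time ${record["@timestamp"]}
  </record>
</filter>
```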

No input from systemd

From td-agent.conf I @include ui_server.conf, which contains the following:

<source>
  type systemd
  path /var/log/journal
  filters [{ "_EXE": "/home/xvampo01/server_devel/ui_server/ui_server" }]
  pos_file /var/log/td-agent/ui_server.log.pos
  tag ant-2.ui_server
  #read_from_head true
  strip_underscores true
</source>

<match ant-2.ui_server>
#  type stdout
  type file
  path var/log/td-agent/ant-2/ui_server/ui_server.log
</match>

The problem is I can't get any logs into td-agent.

Manually getting logs from journal works just fine:

 journalctl -f _EXE=/home/xvampo01/server_devel/ui_server/ui_server
 Mar 18 12:29:12 ant-2.fit.vutbr.cz ui_server[18054]: |7|~139801982396160:~18.3.2016~12:29:12:513~access granted
 Mar 18 12:29:12 ant-2.fit.vutbr.cz ui_server[18054]: |7|~139801982396160:~18.3.2016~12:29:12:513~DB:getCommunicationXml
 Mar 18 12:29:12 ant-2.fit.vutbr.cz ui_server[18054]: |5|~139801982396160:~18.3.2016~12:29:12:514~MSGOUT: <?xml version="1.0" encoding="UTF-8"?<response ver...

I can see it included when checking td-agent.log, and the following lines seem to indicate everything went well:

2016-03-18 12:35:27 +0000 [info]: adding match pattern="ant-2.ui_server" type="file"
2016-03-18 12:35:27 +0000 [info]: adding source type="systemd"

Plugin is also present in list of installed plugins:
2016-03-18 12:35:27 +0000 [info]: gem 'fluent-plugin-systemd' version '0.0.2'

While everything seems to work just fine, I can't see logs on stdout, nor is the log file created at the specified path. Previously I had a problem with access rights, which are now:
drwxr-xr-x 2 td-agent td-agent 4096 Mar 17 21:30 ui_server

I have no idea what is wrong...

Hard dependency on libsystemd for filter_systemd_entry

Hi there,

I'm looking at using the filter_systemd_entry plugin independently of the in_systemd plugin, running in Docker. Currently, this requires pulling the Debian-based fluentd Docker image because the fluent-plugin-systemd gem has a hard dependency on systemd-journal which requires libsystemd, even though filter_systemd_entry doesn't use it.

The Alpine-based image is significantly smaller and works fine if I download and install the gem's files manually, as long as I don't use in_systemd, which does require libsystemd.

To expand on the use case for this, we use Fluent Bit to collect logs from the journal on all hosts, and forward those to fluentd aggregators running in Docker. It makes the most sense to centralise the filtering of these logs on the aggregators, and we'd like to use filter_systemd_entry for that.

I can see two possible solutions:

  1. Split filter_systemd_entry into its own gem with no dependency on systemd-journal.
  2. Remove the systemd-journal dependency from the existing gemspec, and document that it must be installed in order to use in_systemd. (This would likely complicate the most common use case, but it should work.)

Please let me know if there is anything I can do to help fix this.

Thanks,
Steven

Crash with Systemd::JournalError: Bad message

The plugin crashes on startup with version 0.0.5. Version 0.0.4 works fine.
Backtrace

2016-11-10 12:42:33 +0000 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2016-11-10 12:42:34 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"elasticsearch-logging", :port=>9200, :scheme=>"http"}
2016-11-10 12:42:34 +0000 [info]: Template configured and already installed.
2016-11-10 12:42:34 +0000 [error]: unexpected error error_class=Systemd::JournalError error=#<Systemd::JournalError: Bad message>
2016-11-10 12:42:34 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:284:in `enumerate_helper'
2016-11-10 12:42:34 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:106:in `current_entry'
2016-11-10 12:42:34 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:76:in `watch'
2016-11-10 12:42:34 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:58:in `run'
2016-11-10 12:42:34 +0000 [warn]: unexpected error while shutting down input plugin plugin=Fluent::SystemdInput plugin_id="object:160bd68" error_class=Systemd::JournalError error=#<Systemd::JournalError: Bad message>
2016-11-10 12:42:34 +0000 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:284:in `enumerate_helper'
2016-11-10 12:42:34 +0000 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:106:in `current_entry'
2016-11-10 12:42:34 +0000 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:76:in `watch'
2016-11-10 12:42:34 +0000 [warn]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:58:in `run'
2016-11-10 12:42:34 +0000 [warn]: process died within 1 second. exit.

fluentd crashes if journal path is not available

Using fluent-plugin-systemd version 0.0.5 on fluentd-0.12.31 (Dockerized in K8s environment)

When starting fluentd with this plugin, if the journal path is not available on the server, fluentd crashes.

2017-01-13 14:13:19 +0000 [info]: adding source type="systemd"
2017-01-13 14:13:19 +0000 [error]: unexpected error error="No such file or directory"
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/systemd-journal-1.2.3/lib/systemd/journal.rb:52:in `initialize'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:21:in `new'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-systemd-0.0.5/lib/fluent/plugin/in_systemd.rb:21:in `configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/root_agent.rb:154:in `add_source'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/root_agent.rb:95:in `block in configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/root_agent.rb:92:in `each'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/root_agent.rb:92:in `configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/engine.rb:129:in `configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/engine.rb:103:in `run_configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:489:in `run_configure'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:160:in `block in start'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:366:in `call'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:366:in `main_process'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:339:in `block in supervise'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:338:in `fork'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:338:in `supervise'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/supervisor.rb:156:in `start'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/lib/fluent/command/fluentd.rb:173:in `<top (required)>'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/site_ruby/2.1.0/rubygems/core_ext/kernel_require.rb:54:in `require'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-0.12.31/bin/fluentd:5:in `<top (required)>'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/bin/fluentd:23:in `load'
  2017-01-13 14:13:19 +0000 [error]: /opt/td-agent/embedded/bin/fluentd:23:in `<top (required)>'
  2017-01-13 14:13:19 +0000 [error]: /usr/sbin/td-agent:7:in `load'
  2017-01-13 14:13:19 +0000 [error]: /usr/sbin/td-agent:7:in `<main>'
2017-01-13 14:13:19 +0000 [info]: process finished code=256
2017-01-13 14:13:19 +0000 [warn]: process died within 1 second. exit.

Is there any way to handle this more gracefully?

Plugin @id or path for <storage> required when 'persistent' is true

I have a config like:

    <source>
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      pos_file /var/log/gcp-journald-kubelet.pos
      read_from_head true
      tag kubelet
      @id in-systemd-kubelet
      <storage>
        @type local
        @id in-systemd-kubelet-storage
        persistent true
      </storage>
    </source>

and getting the following error:

2017-09-13 09:18:25 +0000 [error]: config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Plugin @id or path for <storage> required when 'persistent' is true"

If I remove the @id from the storage section and add the path it all works fine.
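For anyone hitting the same error: as far as I can tell, when persistent true is set without a path, fluentd derives the cursor file location from the system-level root_dir plus the owning plugin's @id, so both need to be present. A minimal sketch under that assumption (paths are illustrative):

```
<system>
  root_dir /var/log/fluentd
</system>

<source>
  @type systemd
  @id in-systemd-kubelet
  matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
  tag kubelet
  <storage>
    @type local
    persistent true   # cursor stored under root_dir using the source @id
  </storage>
</source>
```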

Exception emitting record: "\xC2" from ASCII-8BIT to UTF-8

Using fluent-plugin-systemd version 0.0.5 on fluentd-0.12.31 to read journald logs from kube-apiserver, kubelet, kube-proxy, etc., and getting the following errors from time to time:

Jan 19 07:58:49 ldpr-tga-kub01 docker[18358]: 2017-01-19 07:58:49 +0000 [error]: Exception emitting record: "\xC2" from ASCII-8BIT to UTF-8
Jan 19 07:58:49 ldpr-tga-kub01 docker[18358]: 2017-01-19 07:58:49 +0000 [warn]: suppressed same stacktrace
Jan 19 07:58:49 ldpr-tga-kub01 docker[18358]: 2017-01-19 07:58:49 +0000 [warn]: emit transaction failed: error_class=Encoding::UndefinedConversionError error="\"\\xC2\" from ASCII-8BIT to UTF-8" tag="system.kube-apiserver"

Any ideas why?

I stumbled upon this while trying to figure out why I stop receiving events (I use the splunkapi output plugin) exactly when it turns midnight 😕

I have other tail sources in the same setup and they work fine with the same output. I'm not sure whether the above is related, but maybe?
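The error itself comes from journald fields arriving as raw ASCII-8BIT byte strings containing bytes that are not valid UTF-8. A minimal sketch of the usual Ruby-side remedy, independent of this plugin: reinterpret the bytes as UTF-8 and scrub anything invalid before serializing (the sample bytes are made up):

```ruby
# A byte string ending in 0xC2, an incomplete UTF-8 sequence -- the same
# kind of payload that triggers Encoding::UndefinedConversionError.
raw = "caf\xC2".b   # .b returns an ASCII-8BIT copy

# Reinterpret the bytes as UTF-8, then replace invalid sequences so the
# string can be converted and serialized safely.
clean = raw.force_encoding(Encoding::UTF_8).scrub("?")

clean                  # => "caf?"
clean.valid_encoding?  # => true
```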

Segmentation fault reading from systemd

I'm seeing continuous segmentation faults on STDERR, seemingly when trying to initialize the journal:

/var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:230: [BUG] Segmentation fault at 0x00000000050b20
ruby 2.3.1p112 (2016-04-26) [x86_64-linux-gnu]

-- Control frame information -----------------------------------------------
c:0013 p:---- s:0056 e:000055 CFUNC  :sd_journal_open_directory
c:0012 p:0102 s:0050 e:000049 METHOD /var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:230
c:0011 p:0046 s:0042 e:000041 METHOD /var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:51 [FINISH]
c:0010 p:---- s:0034 e:000033 CFUNC  :new
c:0009 p:0027 s:0030 e:000029 METHOD /var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:68
c:0008 p:0009 s:0026 e:000025 METHOD /var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:107 [FINISH]
c:0007 p:---- s:0023 e:000022 IFUNC 
c:0006 p:0014 s:0021 e:000020 METHOD /var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/timer.rb:80 [FINISH]
c:0005 p:---- s:0017 e:000016 CFUNC  :run_once
c:0004 p:0042 s:0013 e:000012 METHOD /var/lib/gems/2.3.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88
c:0003 p:0033 s:0009 e:000008 BLOCK  /var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/event_loop.rb:93
c:0002 p:0081 s:0006 e:000005 BLOCK  /var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/thread.rb:78 [FINISH]
c:0001 p:---- s:0002 e:000001 (none) [FINISH]

-- Ruby level backtrace information ----------------------------------------
/var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
/var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/event_loop.rb:93:in `block in start'
/var/lib/gems/2.3.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run'
/var/lib/gems/2.3.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run_once'
/var/lib/gems/2.3.0/gems/fluentd-1.2.5/lib/fluent/plugin_helper/timer.rb:80:in `on_timer'
/var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:107:in `run'
/var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:68:in `init_journal'
/var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:68:in `new'
/var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:51:in `initialize'
/var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:230:in `open_journal'
/var/lib/gems/2.3.0/gems/systemd-journal-1.3.3/lib/systemd/journal.rb:230:in `sd_journal_open_directory'

While STDOUT is showing the following repeatedly (with pid changing):

2018-11-01 00:18:00 +0000 [info]: fluent/log.rb:322:info: Worker 0 finished unexpectedly with signal SIGSEGV
2018-11-01 00:18:00 +0000 [info]: fluent/log.rb:322:info: gem 'fluent-plugin-systemd' version '1.0.1'
2018-11-01 00:18:00 +0000 [info]: fluent/log.rb:322:info: gem 'fluentd' version '1.2.5'
2018-11-01 00:18:00 +0000 [info]: fluent/log.rb:322:info: adding source type="systemd"
2018-11-01 00:18:00 +0000 [info]: #0 fluent/log.rb:322:info: starting fluentd worker pid=1304 ppid=25 worker=0

Source config:

<source>
  @type systemd
  filters [{ "_SYSTEMD_UNIT": "docker.service" }]
  <storage>
      @type local
      persistent true
      path /var/log/docker.log.pos
  </storage>
  tag docker
</source>

Let me know if there are any other details I can provide or experiments I can run.

pause reading from journal when there is back pressure

If the buffer queue is full, router.emit will by default throw a BufferOverflowError exception:
https://github.com/fluent/fluentd/blob/master/lib/fluent/plugin/buffer.rb#L196
When this happens, the journal plugin just throws away the record:
https://github.com/reevoo/fluent-plugin-systemd/blob/master/lib/fluent/plugin/in_systemd.rb#L95
Instead, the plugin should back off and allow the queue to drain, as the in_tail plugin does. Right now you can get the same sort of behaviour by using buffer_queue_full_action block, but you have to configure every output plugin with it. It would be better if the in_systemd plugin itself did the flow control.
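To illustrate the requested behaviour, here is a minimal, self-contained sketch of emit-with-backoff. BufferFullError is a stand-in for Fluent::Plugin::Buffer::BufferOverflowError and the block stands in for router.emit; this is not the plugin's actual code:

```ruby
# Stand-in for Fluent::Plugin::Buffer::BufferOverflowError.
class BufferFullError < StandardError; end

# Retry the emit with exponential backoff instead of dropping the record.
def emit_with_backoff(record, max_retries: 5)
  attempt = 0
  begin
    yield record                    # stands in for router.emit(tag, time, record)
  rescue BufferFullError
    attempt += 1
    raise if attempt > max_retries  # give up only after repeated failures
    sleep(0.01 * (2**attempt))      # back off so the buffer queue can drain
    retry
  end
end

# Demo: an emitter that reports a full buffer twice, then accepts the record.
failures = 2
emitted = []
emit_with_backoff("journal entry") do |rec|
  if failures > 0
    failures -= 1
    raise BufferFullError
  end
  emitted << rec
end
emitted  # => ["journal entry"]
```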

Fluentd not reading journal logs

I'm unable to determine why fluentd is not reading my journal logs:

I'm using v0.2.0 of fluent-plugin-systemd and here's my fluentd config:

 <source>
      @type systemd
      tag systemd
      path /var/log/journal
      filters [{ "PRIORITY": [0,1,2,3,4,5,6] }]
      <storage>
        @type local
        persistent true
        path /var/log/systemd.pos
      </storage>
      read_from_head true
      strip_underscores true
    </source>

When I look at fluentd logs, everything looks fine but no journal logs are read:

2017-07-11 16:42:35 +0000 [info]: starting fluentd-0.14.18 pid=1
2017-07-11 16:42:35 +0000 [info]: spawn command to main:  cmdline=["/opt/td-agent/embedded/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/sbin/td-agent", "--under-supervisor"]
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-aws-elasticsearch-service' version '0.1.6'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.5'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-kafka' version '0.5.5'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.27.0'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-mongo' version '0.8.0'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-s3' version '0.8.2'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-systemd' version '0.2.0'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-td' version '0.10.29'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2017-07-11 16:42:35 +0000 [info]: gem 'fluent-plugin-webhdfs' version '0.4.2'
2017-07-11 16:42:35 +0000 [info]: gem 'fluentd' version '0.14.18'
2017-07-11 16:42:35 +0000 [info]: gem 'fluentd' version '0.12.35'
2017-07-11 16:42:35 +0000 [info]: adding match pattern="fluent.**" type="null"
2017-07-11 16:42:35 +0000 [info]: adding filter pattern="kubernetes.**" type="kubernetes_metadata"
2017-07-11 16:42:36 +0000 [info]: adding match pattern="**" type="aws-elasticsearch-service"
2017-07-11 16:42:36 +0000 [info]: adding source type="tail"
2017-07-11 16:42:36 +0000 [info]: adding source type="systemd"
2017-07-11 16:42:36 +0000 [info]: adding source type="tail"
2017-07-11 16:42:36 +0000 [info]: #0 starting fluentd worker pid=10 ppid=1 worker=0
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/apiserver/audit-2017-07-11T05-08-32.246.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/apiserver/audit-2017-07-11T11-36-07.066.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/apiserver/audit.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-apiserver-691905349-mgv64_kube-system_kube-apiserver-7e51a4a7e0f62142ff2283baef2eb21d1dcbbeda9ea1d55690df5de41372bd2a.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-apiserver-691905349-mgv64_kube-system_kube-apiserver-certs-ed10c15ac4d85549d283ca2d502df834c4dfecff36b01a7191b40e5654cc2515.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-apiserver-691905349-mgv64_kube-system_kube-apiserver-e3a38cba0b83f173cfba79d0b6e932463db1480b2236376edeb73f6bb2426223.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-scheduler-2843403865-st7rr_kube-system_kube-scheduler-f48d416a6137390862153285539c726d8b69209140d29f637c4df712daa6e76f.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/filebeat-j77lr_utils_mtail-5c9fd5e3da3f2f3481ef9dad8ff9e1519e40b15301ebec35d9ca554ca30c6933.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-controller-manager-3483369422-sr70f_kube-system_kube-controller-manager-c0b8fc7726c2e061ff58ccc9a1440a1ebaf0fa859bf16670161e1ca98ece2735.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube2iam-fws2m_kube-system_kube2iam-a12483e11d47da4d9b7921e7e505c607a23dda582670483ac688da0e9975a810.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/checkpoint-installer-nzrv0_kube-system_checkpoint-installer-d3306302fd0d5913d12cecf47e6cf32d3c52a7fbee668b18c8ee9de0d72c3425.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-flannel-78n4t_kube-system_install-cni-607280a9d91d4246175f4c7234a94475f78b1d9561570b71bb2f0a3653e56a15.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube2iam-fws2m_kube-system_kube2iam-214a2fad45a08f486658da11dda2ab5ec94a3fcca95bf9235e3bed92ae989026.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/node-problem-detector-ccn62_utils_node-problem-detector-e612875bf76304204081b5035720ea56a3bced13e5c6b73b5bd2039c38cbd404.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-proxy-36h0q_kube-system_kube-proxy-certs-06cfa5cfc74820286d959185957e53132b3d34499df3635ac5f0912225f9fa6f.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-controller-manager-3483369422-sr70f_kube-system_kube-controller-manager-077dbf430fe8d378e61345c0d407d379aafdca1948243c17f5ccf8c572e54412.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/fluentd-zhbwb_utils_fluentd-67d889e7f5e6f82176eb1a6963ed782d40f81f0917503b4d62f794e01f836366.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/node-exporter-etcd-79m4q_monitoring_node-exporter-9d9cff481824895361bc5383cfd5a570ec4fd0f4e02d4d3f472bb358ce72130b.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-flannel-78n4t_kube-system_kube-flannel-59d6cc8529fee663dafcf41155d2e12d9e194c7c4029da4f41d7478c48d6b22d.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/filebeat-j77lr_utils_filebeat-310b2ce0175c39277e70a7b333a77220f902a7da7a233d0ec466cb09fe31ae36.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-proxy-36h0q_kube-system_kube-proxy-bfd66f3b7263f07cc6d04253d39fd54dad2691170f50082fe7a5bef81eeb0765.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-scheduler-2843403865-st7rr_kube-system_kube-scheduler-84b9e10f118eca2488a946c6d5e148416d913971d0d6487f9c2a51c3fa570e73.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/pod-checkpointer-ip-172-16-198-15.us-west-2.compute.internal_kube-system_checkpoint-2b38afa4eadc1b0367edc37482076a0f3faf476eacdbd40cd7736091e078e57a.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/fluentd-6zmqb_utils_fluentd-325cda21aa13232c1edc85817dc0d21e3c48acebf54bd164c14e3ea4ceacfa24.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/rescheduler-v0.3.0-712020263-fw8fg_kube-system_rescheduler-1ecb0a4df97c3c5e84aa6eb16b656275531ed952b4f3b44d18a85958f360cd7f.log
2017-07-11 16:42:36 +0000 [info]: #0 following tail of /var/log/containers/kube-apiserver-691905349-mgv64_kube-system_etcd-client-certs-4cb97b0572d0b61e71f7b942ef9c0e162ba00dd35eb8bc5967caaab310a5db7b.log
2017-07-11 16:42:36 +0000 [info]: #0 fluentd worker is now running worker=0
2017-07-11 16:42:41 +0000 [info]: #0 Connection opened to Elasticsearch cluster => {:host=>"search-xxxxx.us-west-2.es.amazonaws.com", :port=>443, :scheme=>"https", :aws_elasticsearch_service=>{:credentials=>#<Aws::Credentials access_key_id="xxxxxxx">, :region=>"us-west-2"}}
2017-07-11 16:42:47 +0000 [info]: #0 detected rotation of /var/log/containers/fluentd-zhbwb_utils_fluentd-67d889e7f5e6f82176eb1a6963ed782d40f81f0917503b4d62f794e01f836366.log; waiting 5 seconds

I'm running this on CoreOS v1437.3.0

Support customizing `pos_file` path

[Problem]
When specifying a customized pos_file for the in_systemd plugin, the value is ignored.

[Version]
gem 'fluent-plugin-systemd' version '1.0.2'

[Config]

  <source>
    @type systemd
    filters [{"_SYSTEMD_UNIT":"docker.service"}]
    pos_file /var/log/k8s-gcp-journald-docker.pos
    read_from_head true
    tag "docker"
  </source>

[Warnings in the log]

2019-07-25 16:22:39 +0000 [warn]: parameter 'pos_file' in <source>
  @type systemd
  filters [{"_SYSTEMD_UNIT":"docker.service"}]
  pos_file /var/log/k8s-gcp-journald-docker.pos
  read_from_head true
  tag "docker"
</source> is not used.
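For context, in the 1.x series the journal cursor is tracked through a `<storage>` section rather than `pos_file`; a sketch adapted from the README (the cursor path shown is illustrative):

```
<source>
  @type systemd
  matches [{ "_SYSTEMD_UNIT": "docker.service" }]
  read_from_head true
  tag docker

  <storage>
    @type local
    path /var/log/k8s-gcp-journald-docker-cursor.json
  </storage>
</source>
```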

no patterns matched tag

Hello, I am trying to play with journal logs using the plugin, but I see a no patterns matched tag error. I am new to fluentd, so I'm not sure if I am missing something.

My config is:

<source>
  type systemd
  path /run/log/journal
  filters [{ "_SYSTEMD_UNIT": "docker.service" }]
  tag docker
  read_from_head true
</source>
$ fluentd -c /tmp/conf/journal.conf 
2016-02-25 20:01:49 +0000 [info]: reading config file path="/tmp/conf/journal.conf"
2016-02-25 20:01:49 +0000 [info]: starting fluentd-0.12.20
2016-02-25 20:01:49 +0000 [info]: gem 'fluent-plugin-systemd' version '0.0.2'
2016-02-25 20:01:49 +0000 [info]: gem 'fluentd' version '0.12.20'
2016-02-25 20:01:49 +0000 [info]: adding source type="systemd"
2016-02-25 20:01:49 +0000 [info]: using configuration file: <ROOT>
  <source>
    type systemd
    path /run/log/journal
    filters [{"_SYSTEMD_UNIT":"docker.service"}]
    tag docker
    read_from_head true
  </source>
</ROOT>
2016-02-25 20:01:49 +0000 [warn]: no patterns matched tag="docker"
2016-02-25 20:01:49 +0000 [warn]: no patterns matched tag="docker"
(the same warning repeats continuously)
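One reading of the warning above: fluentd emits "no patterns matched" when events carry a tag that no `<match>` block covers. A minimal sketch, assuming the `docker` tag from the source block in this report:

```
<match docker>
  @type stdout
</match>
```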

Thank you.

Problem Understanding Filters

Hello.

First off, thanks for this awesome plugin. It has me quite excited! That said, I'm totally ignorant about almost everything Ruby, so I'm having a hard time understanding how to construct filters.

I would like to filter everything that contains a _SYSTEMD_UNIT value of docker.service, as well as everything that contains any value in CONTAINER_NAME.

I understand how to do the first as per this site's example. I looked at the ruby docs you have a link to in the README, but it didn't help me understand how to chain multiple filters together, nor how to have any filter with a wildcarded value.

Any help would be appreciated.
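One grounded detail from the README: `matches` (or the deprecated `filters`) takes an array of hashes, so listing several hashes expresses alternatives. A sketch, assuming the array entries are OR'd together as the matching docs describe; note that journal matches are exact FIELD=value pairs, so a wildcarded value is not expressible at this layer:

```
<source>
  @type systemd
  # entries matching either hash are read
  matches [{ "_SYSTEMD_UNIT": "docker.service" }, { "CONTAINER_NAME": "my-app" }]
  tag containers
</source>
```

The `CONTAINER_NAME` value above is a hypothetical example, not a wildcard.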

Cannot install the plugin

Hello,

I'm trying to install the plugin on CentOS 7:

cat /etc/redhat-release

CentOS Linux release 7.8.2003 (Core)

I see an error:
[root@kbn-wn-01 log]# gem install fluent-plugin-systemd -v 1.0.2
Building native extensions. This could take a while...
ERROR: Error installing fluent-plugin-systemd:
ERROR: Failed to build gem native extension.

/usr/bin/ruby extconf.rb

checking for ruby/st.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib64
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/bin/ruby
/usr/share/ruby/mkmf.rb:434:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /usr/share/ruby/mkmf.rb:565:in `try_cpp'
	from /usr/share/ruby/mkmf.rb:1038:in `block in have_header'
	from /usr/share/ruby/mkmf.rb:889:in `block in checking_for'
	from /usr/share/ruby/mkmf.rb:340:in `block (2 levels) in postpone'
	from /usr/share/ruby/mkmf.rb:310:in `open'
	from /usr/share/ruby/mkmf.rb:340:in `block in postpone'
	from /usr/share/ruby/mkmf.rb:310:in `open'
	from /usr/share/ruby/mkmf.rb:336:in `postpone'
	from /usr/share/ruby/mkmf.rb:888:in `checking_for'
	from /usr/share/ruby/mkmf.rb:1037:in `have_header'
	from extconf.rb:3:in `<main>'

Gem files will remain installed in /usr/local/share/gems/gems/msgpack-1.3.3 for inspection.
Results logged to /usr/local/share/gems/gems/msgpack-1.3.3/ext/msgpack/gem_make.out

The mkmf.log:
"gcc -o conftest -I/usr/include -I/usr/include/ruby/backward -I/usr/include -I. -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -mtune=generic -fPIC conftest.c -L. -L/usr/lib64 -L. -Wl,-z,relro -fstack-protector -rdynamic -Wl,-export-dynamic -m64 -lruby -lpthread -lrt -ldl -lcrypt -lm -lc"
checked program was:
/* begin */
1: #include "ruby.h"
2:
3: int main(int argc, char **argv)
4: {
5:   return 0;
6: }
/* end */
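A hedged reading of the error above: mkmf's "You have to install development tools first" means the compiler toolchain and Ruby headers are missing. A setup sketch for CentOS 7 (package names are the usual ones for that distribution):

```shell
# Install the compiler toolchain and Ruby headers needed to build
# native gem extensions, then retry the install.
yum install -y gcc gcc-c++ make ruby-devel
gem install fluent-plugin-systemd -v 1.0.2
```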

Please, help.

Systemd::JournalError: No such file or directory retrying in 1s

Hello, I'm trying out your plugin, but I'm stuck at this error.

#0 Systemd::JournalError: No such file or directory retrying in 1s

Configuration:

<source>
    @type systemd
    @log_level debug
    path /var/log/journal
    tag systemd
    read_from_head true
    path /var/fluentd/journal.pos
</source>

<match systemd>
  @type stdout
</match>
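One thing to note in the config above: the `<source>` block sets `path` twice, and the second value (`/var/fluentd/journal.pos`) overrides the journal directory, which would explain the "No such file or directory" error. A sketch of the likely intended shape, with the cursor moved into a `<storage>` section per the README:

```
<source>
  @type systemd
  @log_level debug
  path /var/log/journal
  tag systemd
  read_from_head true

  <storage>
    @type local
    path /var/fluentd/journal.pos
  </storage>
</source>
```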

Directories (Fluentd is run as root):

 ▲ com.anrisoftware/fluentd-elk/fluentd-elk-image docker exec -it fluentd-elk-debian bash
root@20ffbc8bdcd3:/home/fluent# cd /var/log/journal/
root@20ffbc8bdcd3:/var/log/journal# ls -l 
total 4
drwxr-xr-x 2 root root 4096 Apr 23 14:43 c956513c70744b3483ee4ea2b0d1b5c9
root@20ffbc8bdcd3:/var/log/journal# cd c956513c70744b3483ee4ea2b0d1b5c9/
root@20ffbc8bdcd3:/var/log/journal/c956513c70744b3483ee4ea2b0d1b5c9# ls -l
total 49156
-rw-r-----  1 root root 41943040 Apr 23 16:26 system.journal
-rw-r-----+ 1 root root  8388608 Apr 23 15:47 user-1000.journal

Changing path to /var/log/journal/c956513c70744b3483ee4ea2b0d1b5c9 or /var/log/journal/c956513c70744b3483ee4ea2b0d1b5c9/system.journal doesn't help.

Versions:

2017-04-23 16:25:12 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2017-04-23 16:25:12 +0000 [info]: starting fluentd-0.14.14 pid=1
2017-04-23 16:25:12 +0000 [info]: spawn command to main:  cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2017-04-23 16:25:12 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.3'
2017-04-23 16:25:12 +0000 [info]: gem 'fluent-plugin-kubernetes_metadata_filter' version '0.26.3'
2017-04-23 16:25:12 +0000 [info]: gem 'fluent-plugin-record-reformer' version '0.9.0'
2017-04-23 16:25:12 +0000 [info]: gem 'fluent-plugin-secure-forward' version '0.4.3'
2017-04-23 16:25:12 +0000 [info]: gem 'fluent-plugin-systemd' version '0.2.0'
2017-04-23 16:25:12 +0000 [info]: gem 'fluentd' version '0.14.14'

Systemd not able to ship logs if the instance is stopped and started

Hi

I am using the software versions below. I am shipping system logs generated by the journal to Graylog using fluentd.

td-agent --> 1.2.2
fluent-plugin-systemd --> 1.0.1

I am shipping systemd logs by reading journal using systemd plugin. It absolutely works fine without any issues. Below is my config.

<system>
  log_level debug
</system>

<source>
  @type systemd
  tag journal
  path /var/log/journal
  #matches [{ "docker-compose": "" }]
  read_from_head false
  <storage>
    @type local
    persistent false
    path /var/log/td-agent/docker-compose.pos
  </storage>
  <entry>
    fields_strip_underscores true
    fields_lowercase true
  </entry>
</source>

<match journal>
@type copy
  <store>
    @type gelf
    host x.x.x.x
    port 12201
    flush_interval 5s
  </store>
#  <store>
#    @type stdout
#  </store>
</match>

However, there seems to be an issue. I have daemonised fluentd using systemd. Whenever the instance is rebooted, fluentd reads the journal file only up to the last line recorded under the previous boot ID. New messages generated under the latest boot ID go unrecognised.

Is this an issue?

I have to restart fluentd process in order to fix this issue.

Thanks,
Amey

1.0.0

This plugin has been used by a number of users, e.g. https://github.com/fluent/fluentd-kubernetes-daemonset

It seems to be mostly stable so I am considering cutting a 1.0 branch and cleaning up a little of the backwards compatibility code e.g. https://github.com/reevoo/fluent-plugin-systemd/blob/master/lib/fluent/plugin/systemd/pos_writer.rb#L62

If there are any other issues/features that I need to look at before then, please mention them now, as I plan to favour stability over features once this goes 1.0.

TIMELINE:

  • Cut a 1.0.0 branch (March 2018)
  • Replace reevoocop with rubocop
  • Ensure tests are run against latest fluentd/systemd
  • Clean up deprecated config options and migration code (March - April 2018)
  • Do some pre-release versions (April - May 2018)
  • Improve Documentation
  • Release 1.0.0 (by end of May 2018)

Corrupted pos file

Forced termination caused corruption of the pos file. Writes to the pos file could be made atomic to increase reliability.

fluent-plugin-systemd version 1.0.1

From Logs

2018-07-05 07:02:41 +0000 [error]: [systemd.log] failed to read data from plugin storage file path="/var/log/systemd.log.pos" error_class=Yajl::ParseError error="lexical error: invalid char in json text.\n                                       s=97c17292555e41258bd6733e9836c\n                     (right here) ------^\n"
2018-07-05 07:02:41 +0000 [error]: config error file="/etc/fluent/fluent.conf" error_class=Fluent::ConfigError error="Unexpected error: failed to read data from plugin storage file: '/var/log/systemd.log.pos'"
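The atomic-update idea suggested here is the standard write-temp-then-rename pattern; a minimal Ruby sketch (the names are illustrative, not the plugin's actual code):

```ruby
# Write the data to a temporary file in the same directory, flush it to
# disk, then rename it over the target. rename(2) is atomic within a
# filesystem, so a crash mid-write leaves either the old or the new
# cursor file intact, never a truncated one.
def atomic_write(path, data)
  tmp = "#{path}.tmp.#{Process.pid}"
  File.open(tmp, "w") do |f|
    f.write(data)
    f.fsync # ensure the bytes hit disk before the rename
  end
  File.rename(tmp, path)
end
```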

#0 thread doesn't exit correctly (killed or other reason) plugin=Fluent::Plugin::SystemdInput

We are seeing this error happen on start up when using fluentd 0.14.16 and fluent-plugin-systemd 0.3.1. From what I can tell systemd events are getting processed as expected.

2017-10-19 16:55:58 +0000 [error]: #0 Exception emitting record: undefined method `emit' for nil:NilClass
2017-10-19 16:55:58 +0000 [error]: #0 Exception emitting record: undefined method `emit' for nil:NilClass
2017-10-19 16:55:58 +0000 [error]: #0 Exception emitting record: undefined method `emit' for nil:NilClass
(the same error repeats many times)
2017-10-19 16:55:58 +0000 [warn]: #0 killing existing thread thread=#<Thread:0x007f81502f9b48@/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-0.14.16/lib/fluent/plugin_helper/thread.rb:70 sleep>
2017-10-19 16:55:58 +0000 [warn]: #0 thread doesn't exit correctly (killed or other reason) plugin=Fluent::Plugin::SystemdInput title=:event_loop thread=#<Thread:0x007f81502f9b48@/opt/td-agent/embedded/lib/ruby/gems/2.4.0/gems/fluentd-0.14.16/lib/fluent/plugin_helper/thread.rb:70 aborting> error=nil
2017-10-19 16:55:58 +0000 [info]: Worker 0 finished unexpectedly with status 0
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.9.5'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-kafka' version '0.5.5'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-s3' version '1.0.0.rc3'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-secure-forward' version '0.4.5'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-systemd' version '0.3.1'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-td' version '1.0.0.rc1'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2017-10-19 16:55:59 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.1.1'
2017-10-19 16:55:59 +0000 [info]: gem 'fluentd' version '0.14.16'
2017-10-19 16:55:59 +0000 [info]: adding filter pattern="*.**" type="record_transformer"
2017-10-19 16:55:59 +0000 [info]: adding match pattern="*.**" type="forward"
2017-10-19 16:55:59 +0000 [info]: #0 'flush_interval' is configured at out side of <buffer>. 'flush_mode' is set to 'interval' to keep existing behaviour
2017-10-19 16:55:59 +0000 [warn]: #0 secondary type should be same with primary one primary="Fluent::Plugin::ForwardOutput" secondary="Fluent::Plugin::FileOutput"
2017-10-19 16:55:59 +0000 [info]: #0 adding forwarding server '10.*.*.*:24224' host="one" port=24224 weight=60 plugin_id="object:3f817d56f188"
2017-10-19 16:55:59 +0000 [info]: #0 adding forwarding server '10.*.*.*:24224' host="two" port=24224 weight=60 plugin_id="object:3f817d56f188"
2017-10-19 16:55:59 +0000 [info]: #0 adding forwarding server '10.*.*.*:24224' host="three" port=24224 weight=60 plugin_id="object:3f817d56f188"
2017-10-19 16:55:59 +0000 [info]: adding source type="systemd"
2017-10-19 16:55:59 +0000 [info]: adding source type="forward"
2017-10-19 16:55:59 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type systemd
    tag "systemd"
    path "/var/log/journal"
    read_from_head true
    <storage>
      @type "local"
      persistent true
      path "/var/log/td-agent/systemd.pos"
    </storage>
    <entry>
      field_map {"MESSAGE":"log","_PID":["process","pid"],"_CMDLINE":"process","_COMM":"cmd"}
      fields_strip_underscores true
      fields_lowercase true
    </entry>
</source>

Process finished code=134

Hi,

I tried to use this plugin to read my app's logs, but I ran into some issues.

First of all, here is a Dockerfile that I created to show my current progress:

FROM centos:7

RUN rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent \
      && printf "[treasuredata]\nname=TreasureData\nbaseurl=http://packages.treasuredata.com/2/redhat/\$releasever/\$basearch\ngpgcheck=1\ngpgkey=https://packages.treasuredata.com/GPG-KEY-td-agent\n" > /etc/yum.repos.d/td.repo \
      && yum install -y td-agent make gcc-c++ systemd

ENV PATH /opt/td-agent/embedded/bin/:$PATH

RUN fluent-gem install fluent-plugin-systemd -v 0.0.4

# Until now, nothing new; it's just a copy of the test Dockerfile

# Append a systemd source with tag kube-proxy (similar to the documentation sample)
RUN printf "<source>\ntype systemd\npath /run/log/journal\nfilters [{ \"_SYSTEMD_UNIT\": \"td-agent.service\" }]\npos_file kube-proxy.pos\ntag kube-proxy\nread_from_head true\n</source>" >> /etc/td-agent/td-agent.conf

# Without the next line, td-agent does not read any data from the journal; it seems to be a permissions issue
RUN usermod -G systemd-journal td-agent
# With it, td-agent seems to access the journal data, but dies after that.


CMD /usr/sbin/init

#docker run --privileged -t fluenttest:1 .
#docker exec -it  .... bash
#$ systemctl restart td-agent

Initially, I tried to follow the documentation and just added this to /etc/td-agent/td-agent.conf:

<source>
  type systemd
  path /var/log/journal
  filters [{ "_SYSTEMD_UNIT": "kube-proxy.service" }]
  pos_file kube-proxy.pos
  tag kube-proxy
  read_from_head true
</source>

It didn't work, and I realized that journald was writing the data to /run/log/journal because of its persistence setting.

Then I updated the config to:

<source>
  type systemd
  path /run/log/journal
  filters [{ "_SYSTEMD_UNIT": "kube-proxy.service" }]
  pos_file kube-proxy.pos
  tag kube-proxy
  read_from_head true
</source>

It still didn't work!

Then I saw that the td-agent service runs as a user called td-agent, and I tested whether this user has permission to access those files.
I ran:

root@localhost$ sudo -u td-agent ls -la /run/log/journal/fe65ef0463ab46989303f72fcbc58aef
ls: cannot open directory '/run/log/journal/fe65ef0463ab46989303f72fcbc58aef': Permission denied

Then I added td-agent to the systemd-journal group (the owner of /run/log/journal).

After that, the td-agent service seems to be reading the journal, but it dies after the first read.

Here is the log:

2016-09-21 16:59:42 +0000 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2016-09-21 16:59:42 +0000 [info]: starting fluentd-0.12.26
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-mixin-config-placeholders' version '0.4.0'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-mongo' version '0.7.13'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.5.5'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-s3' version '0.6.8'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-scribe' version '0.10.14'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-systemd' version '0.0.4'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-td' version '0.10.28'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-td-monitoring' version '0.2.2'
2016-09-21 16:59:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version '0.4.2'
2016-09-21 16:59:42 +0000 [info]: gem 'fluentd' version '0.12.26'
2016-09-21 16:59:42 +0000 [info]: adding match pattern="td.*.*" type="tdlog"
2016-09-21 16:59:42 +0000 [info]: adding match pattern="debug.**" type="stdout"
2016-09-21 16:59:42 +0000 [info]: adding source type="forward"
2016-09-21 16:59:42 +0000 [info]: adding source type="http"
2016-09-21 16:59:42 +0000 [info]: adding source type="debug_agent"
2016-09-21 16:59:42 +0000 [info]: adding source type="systemd"
2016-09-21 16:59:42 +0000 [info]: using configuration file: <ROOT>
  <match td.*.*>
    @type tdlog
    apikey xxxxxx
    auto_create_table 
    buffer_type file
    buffer_path /var/log/td-agent/buffer/td
    <secondary>
      @type file
      path /var/log/td-agent/failed_records
      buffer_path /var/log/td-agent/failed_records.*
    </secondary>
  </match>
  <match debug.**>
    @type stdout
  </match>
  <source>
    @type forward
  </source>
  <source>
    @type http
    port 8888
  </source>
  <source>
    @type debug_agent
    bind 127.0.0.1
    port 24230
  </source>
  <source>
    type systemd
    path /run/log/journal
    filters [{"_SYSTEMD_UNIT":"td-agent.service"}]
    pos_file kube-proxy.pos
    tag kube-proxy
    read_from_head true
  </source>
</ROOT>
2016-09-21 16:59:42 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2016-09-21 16:59:42 +0000 [info]: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
2016-09-21 16:59:42 +0000 [warn]: no patterns matched tag="kube-proxy"  -----> It did read something from journal!!!! <-----
2016-09-21 16:59:42 +0000 [info]: process finished code=134
2016-09-21 16:59:42 +0000 [warn]: process died within 1 second. exit.

I have no idea what is wrong...

Issue with missing date stamps (logs arrive in the logstash-1970-01-01 index)

I'm running a simple config:

<source>
  @type http
  port 8686
  bind 127.0.0.1
  body_size_limit 32m
  keepalive_timeout 10s
</source>

<source>
  @type systemd
  tag systemd
  path /var/log/journal
  read_from_head true
  <storage>
    @type local
    persistent true
    path systemd.pos
  </storage>
  <entry>
    field_map {"MESSAGE": "log", "_PID": ["process", "pid"], "_CMDLINE": "process", "_COMM": "cmd"}
    fields_strip_underscores true
    fields_lowercase true
  </entry>
</source>

<filter **>
  @type parser
  key_name log
  reserve_data true
  emit_invalid_record_to_error false
  <parse>
    @type json
  </parse>
</filter>

<match **>
  @type stdout
</match>

<match **>
  @type elasticsearch

  host elasticsearch.metrics-01.prod.harrow.io
  password for-side-experiment-capital
  port 443
  scheme https
  user harrow

  logstash_format true
  include_tag_key true
  reconnect_on_error true
  request_timeout 60s
  type_name fluentd
  @log_level debug
</match>

I'm on Ubuntu, so /var/log/journal didn't exist and I had to create it (after which it was immediately picked up).

I'm seeing that the Elasticsearch index being created is named logstash-1970-01-01, which is not hugely surprising given the records printed by the out_stdout plugin:

1970-01-01 00:33:37.000000000 +0000 systemd: {"log":"{\"time\":\"2017-11-03T19:00:48Z\",\"level\":\"info\",\"harrow\":\"projector\",\"messag......e":"alcohol","systemd_invocation_id":"b813077e938347a990eba809f6439f7c","level":"info","harrow":"projector","message":"seen=315000"}
1970-01-01 00:33:37.000000000 +0000 systemd: {"log":"{\"time\":\"2017-11-03T19:00:48Z\",\"leve.......09f6439f7c","level":"info","harrow":"projector","message":"operation.started=261086"}

I may be missing a piece of the mental model, but I'd have expected this plugin to take the timestamp from __REALTIME_TIMESTAMP or __MONOTONIC_TIMESTAMP.
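For reference, the journal's wall-clock field __REALTIME_TIMESTAMP is expressed in microseconds since the Unix epoch; converting it in Ruby is a one-liner. The sample value below is illustrative, chosen to match the log time quoted in this report:

```ruby
# __REALTIME_TIMESTAMP is microseconds since the Unix epoch.
usec = 1_509_735_648_000_000 # hypothetical journal field value
# Time.at takes (seconds, microseconds), so split the value accordingly.
time = Time.at(usec / 1_000_000, usec % 1_000_000).utc
puts time.strftime("%Y-%m-%dT%H:%M:%SZ") # => 2017-11-03T19:00:48Z
```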

Journal files are kept open after rotation (deletion)

I'm using fluent-plugin-systemd with log rotation, and old (deleted) log files are kept open.

 lsof -a +L1
COMMAND    PID     USER   FD   TYPE DEVICE  SIZE/OFF NLINK     NODE NAME
ruby    111608 td-agent   10w   REG  202,1     37751     0 26637960 /var/log/td-agent/td-agent.log-20170504 (deleted)
ruby    111613 td-agent   11r   REG  202,1 134033408     0 26609659 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-00000000000cf94f-00054ea107e4faf7.journal (deleted)
ruby    111613 td-agent   12r   REG  202,1 134033408     0 26609660 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@4488e800e4f740b381f1a39009f9d7b4-0000000000051325-00054dd86a499acc.journal (deleted)
ruby    111613 td-agent   13r   REG  202,1 134033408     0 26609667 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@4488e800e4f740b381f1a39009f9d7b4-0000000000079cec-00054de0df71286f.journal (deleted)
ruby    111613 td-agent   15r   REG  202,1  67108864     0 26609668 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/[email protected]~ (deleted)
ruby    111613 td-agent   16r   REG  202,1 134033408     0 26609669 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-0000000000000001-00054e2a6bb784fe.journal (deleted)
ruby    111613 td-agent   17r   REG  202,1   8388608     0 26609633 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/user-33917@c7e514a07ad4430baefc241e8b1fd515-000000000000045e-00054e2b42965f10.journal (deleted)
ruby    111613 td-agent   18r   REG  202,1 134033408     0 26609674 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-0000000000029b97-00054e566e8d4d37.journal (deleted)
ruby    111613 td-agent   19r   REG  202,1 134033408     0 26609662 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-0000000000053e52-00054e704583341b.journal (deleted)
ruby    111613 td-agent   20r   REG  202,1 134033408     0 26609651 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-000000000007e03f-00054e8a0ceeabc9.journal (deleted)
ruby    111613 td-agent   21r   REG  202,1 134033408     0 26609641 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-00000000000a70a3-00054e972f909dd8.journal (deleted)
ruby    111613 td-agent   22r   REG  202,1 134033408     0 26609636 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-00000000000f8dd9-00054eb679545b67.journal (deleted)
ruby    111613 td-agent   23r   REG  202,1   8388608     0 26608856 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/user-33917@c7e514a07ad4430baefc241e8b1fd515-00000000000cec2f-00054ea0d69c6a7d.journal (deleted)
ruby    111613 td-agent   24r   REG  202,1   8388608     0 26609664 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/user-33917@c7e514a07ad4430baefc241e8b1fd515-00000000000d29ef-00054ea1c93d215f.journal (deleted)
ruby    111613 td-agent   25r   REG  202,1 134033408     0 26609665 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/system@a8dc4a3e32d2413aa4d673232c12f111-0000000000122863-00054ecd501586e2.journal (deleted)
ruby    111613 td-agent   27r   REG  202,1   8388608     0 26637962 /var/log/journal/0af04d3c78a943ae8f3cc26602e374f2/user-33917@c7e514a07ad4430baefc241e8b1fd515-0000000000000000-0000000000000000.journal (deleted)

This is my configuration:

      <source>
        @type systemd
        path /var/log/journal
        filters [{"SYSLOG_IDENTIFIER": "sshd" }, {"SYSLOG_IDENTIFIER": "sudo" }]
        <storage>
          @type local
          persistent true
          path /etc/td-agent/cursor/login_audit
        </storage>
        tag login_audit
        read_from_head true
      </source>

The deleted files are still held open:

fluentd    2472                  root  125r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472                  root  159r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2497            root  125r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2497            root  159r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2498            root  125r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2498            root  159r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2503            root  125r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)
fluentd    2472  2503            root  159r      REG              252,1 134217728     539602 /var/log/journal/a87d37f9c4d74e9eb672b97b5dd0c818/system@d059ec38963543fb95c6390453c1f247-0000000007057712-00059fa76e02b10d.journal (deleted)

I'm using the latest version, 1.0.2, and systemd-journal 1.3.3.
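For anyone hitting the same symptom: the plugin holds file descriptors to rotated journal files until the journal is reopened or the process restarts, which is why `lsof` keeps showing them as `(deleted)` while they still occupy disk space. A hedged diagnostic sketch (not part of the plugin; Linux-only, reads `/proc/<pid>/fd`):

```ruby
# Diagnostic sketch (not plugin code): list file descriptors of a process
# that still point at deleted journal files, by resolving the symlinks in
# /proc/<pid>/fd. Linux only; returns [] elsewhere.
def deleted_journal_fds(pid)
  Dir.glob("/proc/#{pid}/fd/*").map do |fd|
    begin
      target = File.readlink(fd)
    rescue SystemCallError
      next  # fd closed between glob and readlink, or not a symlink
    end
    target if target.include?("journal") && target.end_with?("(deleted)")
  end.compact
end
```

Calling `deleted_journal_fds(td_agent_pid)` for the td-agent process should list the same `(deleted)` journal paths that `lsof` reports.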

High CPU usage with v0.0.7

We observed a huge increase in CPU usage between version v0.0.6 and v0.0.7 using the following config:

    # Logs from systemd-journal for interesting services.
    <source>
      type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      pos_file /var/log/gcp-journald-docker.pos
      read_from_head true
      tag docker
    </source>
    <source>
      type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      pos_file /var/log/gcp-journald-kubelet.pos
      read_from_head true
      tag kubelet
    </source>

It looks like there was only a single commit between those versions: bcbd53e

See kubernetes/kubernetes#42515 for more background on how the issue manifested in our setup.

I'm happy to provide more details or repro steps if it would help.

Can't install version "0.0.8"

Environment

$cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
$/opt/td-agent/embedded/bin/ruby -v
ruby 2.1.10p492 (2016-04-01 revision 54464) [x86_64-linux]
$td-agent --version
td-agent 0.12.40

Detail
I've tried $td-agent-gem install fluent-plugin-systemd -v 0.0.8 in the above environment.

The result is below.

$td-agent-gem install fluent-plugin-systemd -v 0.0.8
Building native extensions.  This could take a while...
ERROR:  Error installing fluent-plugin-systemd:
	ERROR: Failed to build gem native extension.

    current directory: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/ffi-1.9.18/ext/ffi_c
/opt/td-agent/embedded/bin/ruby -r ./siteconf20171006-9010-thi6l0.rb extconf.rb
checking for ffi.h... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
	--with-opt-dir
	--with-opt-include
	--without-opt-include=${opt-dir}/include
	--with-opt-lib
	--without-opt-lib=${opt-dir}/lib
	--with-make-prog
	--without-make-prog
	--srcdir=.
	--curdir
	--ruby=/opt/td-agent/embedded/bin/ruby
	--with-ffi_c-dir
	--without-ffi_c-dir
	--with-ffi_c-include
	--without-ffi_c-include=${ffi_c-dir}/include
	--with-ffi_c-lib
	--without-ffi_c-lib=${ffi_c-dir}/lib
	--with-libffi-config
	--without-libffi-config
	--with-pkg-config
	--without-pkg-config
/opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:467:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:598:in `try_cpp'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:1072:in `block in have_header'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:923:in `block in checking_for'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:351:in `block (2 levels) in postpone'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:321:in `open'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:351:in `block in postpone'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:321:in `open'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:347:in `postpone'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:922:in `checking_for'
	from /opt/td-agent/embedded/lib/ruby/2.1.0/mkmf.rb:1071:in `have_header'
	from extconf.rb:16:in `<main>'

To see why this extension failed to compile, please check the mkmf.log which can be found here:

  /opt/td-agent/embedded/lib/ruby/gems/2.1.0/extensions/x86_64-linux/2.1.0/ffi-1.9.18/mkmf.log

extconf failed, exit code 1

Gem files will remain installed in /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/ffi-1.9.18 for inspection.
Results logged to /opt/td-agent/embedded/lib/ruby/gems/2.1.0/extensions/x86_64-linux/2.1.0/ffi-1.9.18/gem_make.out

This is mkmf.log.

$cat /opt/td-agent/embedded/lib/ruby/gems/2.1.0/extensions/x86_64-linux/2.1.0/ffi-1.9.18/mkmf.log
package configuration for libffi is not found
"gcc -o conftest -I/opt/td-agent/embedded/include/ruby-2.1.0/x86_64-linux -I/opt/td-agent/embedded/include/ruby-2.1.0/ruby/backward -I/opt/td-agent/embedded/include/ruby-2.1.0 -I. -I/opt/td-agent/embedded/include -O2 -O3 -g -pipe -I/opt/td-agent/embedded/include   -I/opt/td-agent/embedded/include -O2 -O3 -g -pipe -fPIC conftest.c  -L. -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -Wl,-R/opt/td-agent/embedded/lib -L. -Wl,-rpath,/opt/td-agent/embedded/lib -fstack-protector -L/opt/td-agent/embedded/lib -rdynamic -Wl,-export-dynamic -L/opt/td-agent/embedded/lib  -Wl,-R/opt/td-agent/embedded/lib      -Wl,-R -Wl,/opt/td-agent/embedded/lib -L/opt/td-agent/embedded/lib -lruby  -lpthread -ldl -lcrypt -lm   -lc"
checked program was:
/* begin */
1: #include "ruby.h"
2:
3: int main(int argc, char **argv)
4: {
5:   return 0;
6: }
/* end */

According to the log, I think I need to install some libraries and/or pass the correct configuration options. What should I do next?
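The two key lines in the log are "package configuration for libffi is not found" and "The compiler failed to generate an executable file" / "You have to install development tools first", which point to a missing C toolchain and missing libffi headers rather than anything specific to this plugin. On Ubuntu 16.04, one plausible fix (assuming apt is available; package names are the stock Ubuntu ones, not verified against your box) is:

```shell
# Install a C toolchain plus the libffi headers the ffi gem compiles against.
sudo apt-get update
sudo apt-get install -y build-essential libffi-dev

# Retry the install against td-agent's embedded Ruby.
sudo td-agent-gem install fluent-plugin-systemd -v 0.0.8
```

If the ffi build still fails, the next place to look is the same mkmf.log path for whatever header or library check fails after the toolchain is present.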

Corrupted journal can still cause crashes

Hello,

We noticed today that our fluentd was stuck in a crash loop due to a corrupted journal:

2018-08-10 13:56:42 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:00:24 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:02:41 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:04:26 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:07:33 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:09:20 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:19:15 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:24:31 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:29:28 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:39:27 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:44:28 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 14:59:15 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 15:09:25 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 15:21:36 +0000 [warn]: Error reading from Journal: Systemd::JournalError: Bad message
2018-08-10 15:29:15 +0000 [error]: Unexpected error raised. Stopping the timer. title=:in_systemd_emit_worker error_class=Systemd::JournalError error="Bad message"
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/systemd-journal-1.3.2/lib/systemd/journal/navigable.rb:44:in `move_next'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:131:in `watch'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/fluent-plugin-systemd-1.0.1/lib/fluent/plugin/in_systemd.rb:109:in `run'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-0.14.25/lib/fluent/plugin_helper/timer.rb:77:in `on_timer'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run_once'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/cool.io-1.5.3/lib/cool.io/loop.rb:88:in `run'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-0.14.25/lib/fluent/plugin_helper/event_loop.rb:84:in `block in start'
  2018-08-10 15:29:15 +0000 [error]: /var/lib/gems/2.3.0/gems/fluentd-0.14.25/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2018-08-10 15:29:15 +0000 [error]: Timer detached. title=:in_systemd_emit_worker
2018-08-10 15:37:11 +0000 [info]: fluentd worker is now stopping worker=0
2018-08-10 15:37:11 +0000 [info]: shutting down fluentd worker worker=0
2018-08-10 15:37:11 +0000 [info]: shutting down input plugin type=:systemd plugin_id="object:3f90cf00897c"
2018-08-10 15:37:11 +0000 [info]: shutting down input plugin type=:systemd plugin_id="object:3f90d088561c"
2018-08-10 15:37:11 +0000 [info]: shutting down input plugin type=:systemd plugin_id="object:3f90cf114d5c"
2018-08-10 15:37:11 +0000 [info]: shutting down output plugin type=:rewrite_tag_filter plugin_id="object:3f90d2bb4f5c"
2018-08-10 15:37:11 +0000 [info]: shutting down output plugin type=:google_cloud plugin_id="object:3f90cf07dc40"
2018-08-10 15:37:11 +0000 [info]: shutting down filter plugin type=:kubernetes_metadata plugin_id="object:3f90d08c567c"
2018-08-10 15:37:11 +0000 [info]: shutting down filter plugin type=:record_transformer plugin_id="object:3f90d09b3cf0"
2018-08-10 15:37:11 +0000 [info]: shutting down filter plugin type=:parser plugin_id="object:3f90ce8b1804"
2018-08-10 15:37:11 +0000 [info]: shutting down output plugin type=:null plugin_id="object:3f90d27cbc38"
2018-08-10 15:37:11 +0000 [info]: shutting down output plugin type=:null plugin_id="object:3f90d2baa3a4"
$ journalctl --verify
# ...many files redacted
PASS: /var/log/journal/2889e3617ecbc9faa794ad67c5d2e065/user-1000@0da0f119c85e4374a2e8e71427bc7fa0-0000000009ff5577-000573171be6b567.journal
75eff50: Invalid data object at hash entry 5544 of 233016                 
File corruption detected at /var/log/journal/2889e3617ecbc9faa794ad67c5d2e065/system.journal:75ed1b0 (of 125829120 bytes, 98%).
FAIL: /var/log/journal/2889e3617ecbc9faa794ad67c5d2e065/system.journal (Bad message)
PASS: /var/log/journal/2889e3617ecbc9faa794ad67c5d2e065/system@36b99449ae7042c2904d3eee68a72cf7-0000000009d630aa-000573152a476352.journal

I see that #16 fixed an issue like this. My guess is that there's another code path affected by the same corruption but that does not catch the error. We're running version 1.0.1 with the following other moving pieces:

gem 'fluentd', '~>0.14.24'
gem 'fluent-plugin-systemd', '~>1.0.1'
gem 'fluent-plugin-google-cloud', '~>0.6.21'
gem 'fluent-plugin-kubernetes_metadata_filter', '~>2.1.2'
gem 'fluent-plugin-rewrite-tag-filter', '~>2.0.2'
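The stack trace shows the exception escaping from `move_next` in the watch loop, so the missing rescue is presumably around that call. A minimal sketch of the pattern (hypothetical names and a stub journal standing in for `Systemd::Journal` and `Systemd::JournalError`; NOT the plugin's actual code, and a real fix would also need to advance past or re-seek around the corrupted entry rather than retry it forever):

```ruby
# Stub stand-ins for systemd-journal's classes, used only to make the
# rescue pattern runnable without the native library.
class JournalError < StandardError; end

class StubJournal
  def initialize(entries)
    @entries = entries  # values, with :bad simulating a corrupted entry
  end

  # Mimics Systemd::Journal#move_next: false at end of journal,
  # raises JournalError on a corrupted entry.
  def move_next
    return false if @entries.empty?
    entry = @entries.shift
    raise JournalError, "Bad message" if entry == :bad
    @current = entry
    true
  end

  attr_reader :current
end

# Read loop that logs and skips a corrupted entry instead of letting the
# exception detach the timer and stall the input plugin.
def read_all(journal, warnings = [])
  loop do
    begin
      break unless journal.move_next
    rescue JournalError => e
      warnings << "Error reading from Journal: #{e.message}"
      next  # skip the corrupted entry and keep reading
    end
    yield journal.current
  end
  warnings
end
```

With `StubJournal.new([1, :bad, 2])` this yields 1 and 2 and records a single warning, instead of crashing on the bad entry.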

Dmesg logs

Hi, is there any way to collect dmesg (journalctl -k) logs with fluent-plugin-systemd?

If not, is this somewhere on the roadmap? :)
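For what it's worth, `journalctl -k` is just a filter on the journal's `_TRANSPORT=kernel` field (see systemd.journal-fields(7)), so kernel messages should already be reachable with a `matches` filter. A sketch, untested here and using the same layout as the README example:

```
<source>
  @type systemd
  tag kernel
  path /var/log/journal
  matches [{ "_TRANSPORT": "kernel" }]
  read_from_head true

  <storage>
    @type local
    path /var/log/fluentd-journald-kernel-cursor.json
  </storage>
</source>
```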

Exceptions in emit cause stall

When emit throws an exception, the thread that reads from systemd appears to simply stop. There should likely be a rescue in there; alternatively, you could set Thread.abort_on_exception = true, though that's not the best behavior either.
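A minimal sketch of the rescue idea (hypothetical names, not the plugin's code: the block stands in for `router.emit`): log the failure and continue, so one bad record or a transient downstream error doesn't silently kill the reader thread.

```ruby
# Sketch: guard the per-record emit so a downstream exception is recorded
# instead of propagating up and stopping the journal-reading thread.
def safe_emit(records, errors = [])
  records.each do |record|
    begin
      yield record  # stands in for router.emit(tag, time, record)
    rescue StandardError => e
      errors << "emit failed: #{e.class}: #{e.message}"
      # keep going; do not let one bad record stall the reader
    end
  end
  errors
end
```

For example, `safe_emit([1, 2, 3]) { |r| raise "boom" if r == 2 }` still processes records 1 and 3 and returns one recorded error. Whether skipping or retrying the failed record is right depends on the downstream plugin, which is why abort_on_exception alone isn't a great answer.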
