fluent / fluent-bit-docs
Fluent Bit - Official Documentation
Home Page: https://docs.fluentbit.io
Streams_File is mentioned in the sample StreamProcessor page but not described in the Service page:
[SERVICE]
Flush 1
Log_Level info
Streams_File stream_processor.conf
The Service page could document options like this, e.g. how to specify the stream processor config file. Little things like that which folks may want to know when trying to run the program.
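For reference, a minimal sketch of what such a stream processor file might contain, assuming the [STREAM_TASK] section syntax; the task name and aggregation query are illustrative, not taken from the docs:

```
[STREAM_TASK]
    Name  cpu_avg
    Exec  CREATE STREAM cpu_avg AS SELECT AVG(cpu_p) FROM STREAM:cpu WINDOW TUMBLING (5 SECOND);
```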
[root@k8s-master src]# docker run -ti fluent/fluent-bit:1.3 /fluent-bit/bin/fluent-bit -i cpu -o stdout -f 1
: Unsupported system page size
: Unsupported system page size
: Unsupported system page size
: Unsupported system page size
: Unsupported system page size
[2020/02/10 01:24:27] [error] [src/flb_config.c:128 errno=12] Cannot allocate memory
[root@k8s-master src]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (AltArch)
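The "Unsupported system page size" lines likely come from the bundled jemalloc allocator, which is built for a 4 KiB page size; AltArch CentOS builds (e.g. aarch64 or ppc64le) often run with 64 KiB kernel pages. This is an assumption rather than a confirmed diagnosis, but checking the page size is a quick first step:

```shell
# Print the kernel page size in bytes. A value other than 4096
# (64 KiB pages are common on aarch64/ppc64le) would explain an
# allocator built for 4 KiB pages refusing to start.
getconf PAGESIZE
```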
Looks like there was a copy/paste error when refactoring the documentation structure for 1.4: the input forward plugin docs now have the old output forward plugin contents.
https://github.com/fluent/fluent-bit-docs/blob/master/pipeline/inputs/forward.md
The documentation at https://fluentbit.io/documentation/0.13/parser/decoder.html has 0.12 in the title (from book.json).
Hi, can you shed some light on a DNS resolving question?
We have a fluentd with a DNS name like fluentd.our.domain (IP x.x.x.x). This DNS name is set in the fluent-bit config, so everything works flawlessly.
Now we want to switch the IP to a new one. Do we have to restart fluent-bit in order to re-resolve to the new IP, or will it re-resolve as soon as the DNS change propagates? Or does it use the system-wide resolver info?
For filters, very briefly described in https://docs.fluentbit.io/manual/getting_started/filter -- it is unclear what happens when a filter does not match a record. Does it mean the record is dropped or passed along unaltered?
Specific example: if I use only the kubernetes filter with Match kube.*, will records whose tag does not start with kube. be dropped?
Once this is clear to me I am happy to document that.
Hi everyone,
Is this configuration supposed to work?
[PARSER]
Name my_parser
Format regex
Regex ${MY_REGEX}
I'm trying to inject the regex via environment variable but it does not seem to work.
export MY_REGEX='^(?<dummy>.+)$'
/fluent-bit/bin/fluent-bit -R /fluent-bit/etc/parsers.conf -i forward -F parser -p 'Match=*' -p 'Key_Name=log' -p 'Parser=my_parser' -o stdout -p 'Match=*'
If I hardcode the regex the parser works just fine:
[PARSER]
Name my_parser
Format regex
Regex ^(?<dummy>.+)$
Thanks in advance.
Just FYI.
Recently, the power-of-1024 units have been formally defined as "*bibytes" (*iB).
e.g. http://physics.nist.gov/cuu/Units/binary.html
1024 byte = 1 kibibyte (KiB)
1024 * 1024 byte = 1 mebibyte (MiB)
1024 * 1024 * 1024 byte = 1 gibibyte (GiB)
Of course, we traditionally use "kilobyte" to mean 1024 bytes, and a lot of people may be familiar with that usage.
Do we need to fix them?
Docker input plugin landed in v1.3 release but it misses documentation:
I collect verbose logs and want to write them to a file. The default (out_file) and plain formats work for me, but I don't want the format changed (no tag and timestamp). I tried the template format; however, it does not show the original content.
My config of output part:
[OUTPUT]
Name file
Format template
#Template {message} (default: '{time} {message}')
Path result.txt
result.txt contents:
1573528176.966284 {message}
1573528176.966291 {message}
1573528176.966298 {message}
1573528176.966305 {message}
1573528176.966311 {message}
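One possible cause (an assumption from the snippet above, not a confirmed diagnosis): the Template line is commented out, so the literal {message} text from the default handling ends up in the file. A sketch of the intended config, assuming the record has a message key (if your records use log instead, substitute that):

```
[OUTPUT]
    Name     file
    Match    *
    Format   template
    Template {message}
    Path     result.txt
```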
For some search terms on Google the documentation of version 0.12 hosted at https://fluentbit.io/documentation/0.12/ ranks above the recent documentation. Would be nice to have a non-intrusive banner explaining that the documentation is for an outdated version, with a link to the most recent version.
From:
https://docs.fluentbit.io/manual/configuration/file#config_input
These make very little sense in English, so it's not clear what they are trying to convey.
Match
It sets a pattern to match certain records Tag. It's case sensitive and support the star (*) character as a wildcard.
Match_Regex
It sets a pattern to match certain records Tag.
The page about monitoring https://github.com/fluent/fluent-bit-docs/blob/master/configuration/monitoring.md contains information about how to access the metrics, but it doesn't explain what each of the numbers actually means.
For some of them, it is fairly obvious (e.g. input record count). I got a bit confused about the output metrics though. The errors and retries I assume have the same meaning as https://github.com/fluent/fluent-bit-docs/blob/master/configuration/scheduler.md (would be nice if this page was linked from the metrics page).
What is fluentbit_output_retries_total
though? Number of all retries, or just successful retries (given that there is a separate number for failed retries)? What about fluentbit_output_proc_records_total
? Does it include the records that were not successfully delivered?
Does the number of failed retries mean the number of records for which the retry limit was reached, or the number of individual retry attempts that have failed? Is there some way to see or calculate the total number of undelivered (given-up-on) records?
The monitoring page doesn't answer any of these questions.
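For context, these metrics come from the built-in HTTP server, enabled with a config like the following (listen address and port values are illustrative) and queried at e.g. http://127.0.0.1:2020/api/v1/metrics:

```
[SERVICE]
    HTTP_Server On
    HTTP_Listen 0.0.0.0
    HTTP_Port   2020
```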
The documentation says that the Parser filter plugin has an Unescape_Key option, but I couldn't get it working, nor could I find its presence in the code.
The link to "TLS/SSL" in the section of the same name in page https://docs.fluentbit.io/manual/output/splunk leads to a 404 on Github.
I didn't find a proper repo for the fluent-bit Ubuntu package (td-agent-bit), so posting here.
Packages (Debian and Ubuntu at least, but probably other distros too) should not install stuff in /opt. Instead, /usr/share, /usr/lib, /usr/bin etc. should be used. It would be great if the td-agent and td-agent-bit packages would adhere to this principle.
dpkg-query -L td-agent-bit will list the current install locations. It can also be used to see where other packages store their files.
Is it possible to disable/enable the tail plugin based on an environment variable or another setting?
I want to do this for the tail plugin to customize the log tailing.
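As far as I know there is no documented per-plugin on/off switch, but configuration values can reference environment variables, which at least lets you parameterize the tail input. A sketch (LOG_PATH is a hypothetical variable you would export before starting fluent-bit):

```
[INPUT]
    Name tail
    Path ${LOG_PATH}
```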
in_xbee has been deprecated as of v0.12.
So how do we treat in_xbee in the docs?
Ignore me, this doesn't belong here. Sorry for bothering.
Under "Configuration Parameters" for the forward plugin, fluent-bit suggests setting Buffer_Max_Size and Buffer_Chunk_Size.
However, the "Configuration File" example later on suggests setting Chunk_Size and Buffer_Size.
The relevant documentation page is here: https://github.com/fluent/fluent-bit-docs/blob/1.0/input/forward.md
Which one is it? If these are different variables, why are they not listed in the Parameters section?
The following parameter is supported (and quite useful) but not documented:
Parser | Specify the name of a parser to interpret the entry as a structured message
When I was trying to write my own unit test I found that the unit test documentation is out of date.
First I had to set -DFLB_TESTS_RUNTIME=ON instead of -DFLB_TESTS=ON, and after that I had to run make before running make test, because the binaries were missing.
The "Standard Output" output plugin supports three parameters: format, json_date_format, and json_date_key (see https://github.com/fluent/fluent-bit/blob/master/plugins/out_stdout/stdout.c).
None of these are mentioned in the docs: https://docs.fluentbit.io/manual/output/stdout.
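A sketch of how those three parameters would look in a config, based on the option names in stdout.c (the chosen values are illustrative):

```
[OUTPUT]
    Name             stdout
    Match            *
    Format           json_lines
    json_date_key    timestamp
    json_date_format iso8601
```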
If we talk specifically about systemd as the input plugin and es as the output plugin, below are all the fields being written to the Elasticsearch cluster:
@timestamp
CONTAINER_ID
CONTAINER_ID_FULL
CONTAINER_NAME
CONTAINER_TAG
MESSAGE
PRIORITY
_BOOT_ID
_CAP_EFFECTIVE
_CMDLINE
_COMM
_EXE
_GID
_HOSTNAME
_MACHINE_ID
_PID
_SELINUX_CONTEXT
_SOURCE_REALTIME_TIMESTAMP
_SYSTEMD_CGROUP
_TRANSPORT
_UID
_id
_index
_score
_type
Among the fields above, these are the fields generated by journald itself:
CONTAINER_ID
CONTAINER_ID_FULL
CONTAINER_NAME
CONTAINER_TAG
MESSAGE
My suggestion is to describe in more detail how the other fields are generated and how one can add or modify them; we should have this documented somewhere in the docs.
I've used the repo "https://packages.fluentbit.io/ubuntu/xenial xenial main" and expected version v1.0, but v0.14.9 was installed.
root@jenkins:/tmp# service td-agent-bit status
● td-agent-bit.service - TD Agent Bit
Loaded: loaded (/lib/systemd/system/td-agent-bit.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-12-14 04:22:38 PST; 5min ago
Main PID: 44848 (td-agent-bit)
CGroup: /system.slice/td-agent-bit.service
└─44848 /opt/td-agent-bit/bin/td-agent-bit -c /etc//td-agent-bit/td-agent-bit.conf
Dec 14 04:22:38 jenkins systemd[1]: Started TD Agent Bit.
Dec 14 04:22:38 jenkins td-agent-bit[44848]: Fluent-Bit v0.14.9
Dec 14 04:22:38 jenkins td-agent-bit[44848]: Copyright (C) Treasure Data
Dec 14 04:22:38 jenkins td-agent-bit[44848]: [2018/12/14 04:22:38] [ info] [engine] started (pid=44848)
Dec 14 04:22:38 jenkins td-agent-bit[44848]: [2018/12/14 04:22:38] [ info] [in_syslog] UDP buffer size set to 32768 bytes
root@jenkins:/tmp#
The section which describes the configuration file omits important information about the fluent-bit executable.
Add a description and examples of possible Time_Format values.
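As a starting point for such examples, a sketch of a parser with an explicit Time_Format (strftime-style tokens; the %L token for sub-second precision is my assumption about the supported extensions, and the regex and field names are illustrative):

```
[PARSER]
    Name        my_time_parser
    Format      regex
    Regex       ^(?<time>[^ ]+) (?<message>.*)$
    Time_Key    time
    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
```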
We experienced situations where fluent-bit OOM'd with the systemd input plugin due to lots of data in the systemd journal.
When we looked at the code for fluent-bit we noticed that the systemd plugin supports the Mem_Buf_Limit option, but it is not documented.
Adding Mem_Buf_Limit with an appropriate value to our fluent-bit config solved our OOM problems.
Any reasons this option has been left out of the systemd plugin docs?
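For anyone hitting the same OOM, a sketch of the workaround we used (the tag and the 5MB value are illustrative; tune the limit to your environment):

```
[INPUT]
    Name          systemd
    Tag           host.*
    Mem_Buf_Limit 5MB
```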
From getting_started/parser.md#L28
Parsers are fully configurable, for more details please refer to the [Parsers](../parser/README.md) section.
The file ../parser/README.md does not exist.
On the service documentation page there is a documented option Config_Watch that does not appear to work in my testing. It does appear to be a requested option though, as I know I would love it for watching kubernetes configmap changes: fluent/fluent-bit#365
I did a bit of research and while my C isn't very good, I don't think that this config value is actually defined or implemented in the code: https://github.com/fluent/fluent-bit/blob/master/src/flb_config.c
I assume this should be removed, but I'm not familiar enough with the project to know for sure.
Examples in http://fluentbit.io/documentation/0.12/output/forward.html don't cover the newly added tls functionality of fluentd 1.0's in_forward
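A sketch of what such an example might look like, using the generic tls properties documented for other output plugins (the hostname is a placeholder, and whether all of these apply to the 0.12 forward output is exactly what the docs should confirm):

```
[OUTPUT]
    Name       forward
    Match      *
    Host       fluentd.example.com
    Port       24224
    tls        on
    tls.verify on
```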
The example in the documentation shows:
[OUTPUT]
Name counter
Match *
But how can we send the result of a Counter to a file?
The documentation for the preserve_key and reserve_data options of the parser filter incorrectly shows them being configured on the actual parser rather than on the filter parser, which is where they actually belong.
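A sketch of where those options belong: on the filter, not the parser (the parser name and key are illustrative):

```
[FILTER]
    Name         parser
    Match        *
    Key_Name     log
    Parser       my_parser
    Reserve_Data On
    Preserve_Key On
```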
"TD Agent Bit" is linked in SUMMARY.md.
However, that page is not found.
I have included the following in the conf file
[FILTER]
Name grep
Match *
Exclude log [0-9]*\(space)
Exclude log 2019
The filter does not seem to do anything; it passes everything through. What is the right way to use the grep filter? Does it support standard regex patterns or a specific set of custom regex?
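For what it's worth, a minimal sketch of a grep Exclude that drops records whose log value starts with a year. This assumes the records are structured so that a log key actually exists; if the regex is never matched against the key's value, everything passes through unchanged:

```
[FILTER]
    Name    grep
    Match   *
    Exclude log ^2019
```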
From discussions in fluent/fluent-bit#375, it seems that tail has worked with the logrotate copytruncate option since fluent-bit version 0.12.15.
The current documentation (https://docs.fluentbit.io/manual/v/1.2/input/tail, accessed on 2019-06-28 17:21:52 ET) seems to indicate that logrotate copytruncate mode is not supported.
I could not find any mention of trunc in the release announcements since 0.12.16:
for v in v0.12.{16..19} v0.13.{0..8} v0.14.{0..9} v.1.0.{0..6} v1.1.{0..3} v.1.2.0; do
curl -s https://fluentbit.io/announcements/$v/ | grep trunc ;
done
git log -p input/tail.md seems to indicate that the copytruncate line has never been updated to reflect that it was supported as of version v0.12.15.
If logrotate copytruncate is supported now, may we update the documentation?
A document about filter_stdout is missing.
I can't download the Windows build:
https://fluentbit.io/releases/1.3/td-agent-bit-1.3.5-win64.exe
The requested URL was not found on this server.
According to the announcements page, there is a new Stream Processor input.
It links to a page that doesn't exist (https://docs.fluentbit.io/manual/input/stream_processor/).
Take the Regex parser for example.
There's no list of all the possible parameters (a.k.a. settings). There should be.
The plugins that do have full lists do not indicate in a separate column which parameters are required and which are optional. The docs should have that.
The documentation also fails to say what happens if a regex doesn't match. Is the record dropped? Is it kept under a certain key? Something else entirely? Who knows?
The documentation of the disk input lacks info about read_size and write_size. Can anyone provide some clarification on the unit? Then I may create a table similar to the one in the cpu input docs and submit a pull request.
Hi team,
There's no possibility to download the td-agent-bit package for Windows either from GitHub
(https://github.com/fluent/fluent-bit-docs/blob/1.3/installation/windows.md)
or from the fluentbit.io site.
The new property is just Merge_Log, but your documentation still has the old Merge_JSON_Log. Even your Kubernetes example uses the new property.
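A sketch of the current property name in context (the kubernetes filter section is assumed from the example referenced above):

```
[FILTER]
    Name      kubernetes
    Match     kube.*
    Merge_Log On
```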
this link is an absolute link to 0.12/, ignoring whatever version the reader is currently reading:
for [backpressure](http://fluentbit.io/documentation/0.12/configuration/backpressure.html)
Docker image:
FROM fluent/fluent-bit:1.2-debug
output:
[OUTPUT]
Name firehose
Match *
delivery_stream stream-name
region us-east-1
To be clear, it does work (it transmits fine); I'm just noting that I needed extra configuration and there's no documentation on what the output params are.
The default parsers.conf available in /etc/td-agent-bit/ now contains a new entry crio which fails validation.
Starting td-agent-bit with the following params throws an error:
/opt/td-agent-bit/bin/td-agent-bit -c fluentbit.conf -R /etc/td-agent-bit/parsers.conf
[2018/07/30 07:11:31] [error] [parser:crio] Invalid format Regex
The certificate for fluentbit.io has expired. Please consider automating renewal.
/tmp » date
Mon 10 Dec 2018 06:12:55 GMT
/tmp » openssl s_client -connect packages.fluentbit.io:443 -showcerts
CONNECTED(00000005)
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = fluentbit.io
verify error:num=10:certificate has expired
notAfter=Dec 10 03:18:55 2018 GMT
verify return:1
depth=0 CN = fluentbit.io
notAfter=Dec 10 03:18:55 2018 GMT
verify return:1
The Generate_ID property of the Elasticsearch output plugin is not documented. The default value is false, which was causing duplicates in my logging indices from the kmsg input plugin, and ultimately led me to discover this option in the source.