
docker-filebeat's People

Contributors

bargenson, thiagoalmeidasa


docker-filebeat's Issues

Bump to 5.0.0

It would be great to bump the image to use the latest version.

Problem with docker-filebeat and the Logstash add_field input option

Hi,
I made a container with docker-filebeat that sends logs to Logstash. In Logstash I tried add_field in the input configuration, but it does not work.

My config file is below; I need add_field applied when the event arrives on port 5055:

input {
  beats {
    port => 5044
    id => "comex"
  }
  beats {
    port => 5055
    add_field => {
      "origen" => "docker"
    }
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "docker2"
  }
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "comex-%{+YYYY.MM.dd}"
  }

  file {
    path => "/tmp/logstash.txt"
  }
}
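One thing to note about the output section above: both elasticsearch outputs receive every event, regardless of which beats input it came from. Separating the streams needs a conditional on the added field. A minimal sketch, assuming origen is set by the port-5055 input as configured above (index names are the ones already used):

```
output {
  if [origen] == "docker" {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "docker2"
    }
  } else {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "comex-%{+YYYY.MM.dd}"
    }
  }
}
```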

The output event is:

{"fields":null,"tags":["beats_input_codec_plain_applied"],"count":1,"beat":{"name":"docker01.bbva.internal","hostname":"e080a9066ea7"},"@Version":"1","offset":7073267,"source":"-","type":"filebeat-docker-logs","message":"[competent_yonath] \u0002\u0000[competent_yonath] \u0000[competent_yonath] \u0000[competent_yonath] \u0000[competent_yonath] \u0000[competent_yonath] \u0000[competent_yonath] @2018/02/15 17:14:39.531213 publish.go:104: INFO Events sent: 17","host":"e080a9066ea7","@timestamp":"2018-02-15T17:14:46.009Z","input_type":"stdin"}

Thanks.

Support for SSL certificates

I really like the idea, but for me to deploy something like this, I need to be able to specify the SSL options.

The CA, the client certificate, and the client key all need to be configurable.
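This image ships Filebeat 1.1.1, whose Logstash output configures TLS under a tls key. A hedged sketch of what the generated filebeat.yml would need (the host and the certificate paths are hypothetical; the image would also have to accept them, e.g. as environment variables or mounts):

```yaml
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    tls:
      # Hypothetical paths; all three would need to be
      # mounted into the container.
      certificate_authorities: ["/etc/pki/tls/certs/ca.crt"]
      certificate: "/etc/pki/tls/certs/client.crt"
      certificate_key: "/etc/pki/tls/private/client.key"
```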

No JSON Object Could be decoded

I just tried to use your image on my swarm, but I got the following result.

Here is my service configuration:

  docker-beats:
    image: bargenson/filebeat
    environment:
      - LOGSTASH_HOST=log
      - LOGSTASH_PORT=5044
      - SHIPPER_NAME=docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global

Here's the log output (it just cycles):

support_docker-beats.0.vpfv01m3fgig@docker-engine    | Initializing Filebeat...
support_docker-beats.0.vpfv01m3fgig@docker-engine    | Traceback (most recent call last):
support_docker-beats.0.vpfv01m3fgig@docker-engine    |   File "<string>", line 3, in <module>
support_docker-beats.0.vpfv01m3fgig@docker-engine    |   File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.223218 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
support_docker-beats.0.vpfv01m3fgig@docker-engine    |     return _default_decoder.decode(s)
support_docker-beats.0.vpfv01m3fgig@docker-engine    |   File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
support_docker-beats.0.vpfv01m3fgig@docker-engine    |     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
support_docker-beats.0.vpfv01m3fgig@docker-engine    |   File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.223345 logstash.go:106: INFO Max Retries set to: 3
support_docker-beats.0.vpfv01m3fgig@docker-engine    |     raise ValueError("No JSON object could be decoded")
support_docker-beats.0.vpfv01m3fgig@docker-engine    | ValueError: No JSON object could be decoded
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224152 outputs.go:119: INFO Activated logstash as output plugin.
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224207 publish.go:288: INFO Publisher name: docker
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224316 async.go:78: INFO Flush Interval set to: 1s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224321 async.go:84: INFO Max Bulk Size set to: 2048
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224342 beat.go:147: INFO Init Beat: filebeat; Version: 1.1.1
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224938 beat.go:173: INFO filebeat sucessfully setup. Start running.
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224962 registrar.go:66: INFO Registry file set to: /opt/.filebeat
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.224986 prospector.go:127: INFO Set ignore_older duration to 24h0m0s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225017 prospector.go:127: INFO Set scan_frequency duration to 10s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225050 prospector.go:87: INFO Input type set to: stdin
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225056 prospector.go:127: INFO Set backoff duration to 1s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225059 prospector.go:127: INFO Set max_backoff duration to 10s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225062 prospector.go:107: INFO force_close_file is disabled
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225097 prospector.go:137: INFO Starting prospector of type: stdin
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225122 log.go:113: INFO Harvester started for file: -
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225167 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225175 crawler.go:78: INFO All prospectors initialised with 0 states to persist
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225180 registrar.go:83: INFO Starting Registrar
support_docker-beats.0.vpfv01m3fgig@docker-engine    | 2018/12/15 04:35:54.225187 publish.go:88: INFO Start sending events to output

Container name prepended to logs multiple times

Hi,
All my logs are prepended with the container name 7 times.
If I run the command executed in docker-entrypoint.sh:

curl -s --no-buffer -XGET --unix-socket /var/run/docker.sock "http:/containers/ct1/logs?stderr=1&stdout=1&tail=1&follow=1" | sed "s;^;[ct1] ;"

on Ubuntu with zsh, I get the following output:

[ct1] My log line

and in ash inside the container, I get the following output:

[ct1] [ct1] [ct1] [ct1] [ct1] [ct1] [ct1] My log line

Have you seen this before?

Thanks!
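A likely cause: the raw /containers/&lt;id&gt;/logs API response is a multiplexed stream in which each frame carries an 8-byte header (one stream-type byte, three zero bytes, and a 4-byte big-endian payload size), which is also why stray \u0000/\u0002 bytes show up in the shipped events. Line-oriented tools handle those header bytes differently, and BusyBox sed inside the container appears to treat them as line breaks, prefixing [ct1] once per fragment. A minimal demultiplexer sketch, assuming the raw stream is piped to stdin:

```python
# demux.py - minimal sketch: strip Docker's 8-byte stream-multiplexing
# headers from a raw /containers/<id>/logs response read on stdin.
import struct
import sys


def demux(stream):
    """Yield payload chunks from a multiplexed Docker log stream."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        # header = [stream_type, 0, 0, 0, size (4 bytes, big-endian)]
        _stream_type, size = struct.unpack(">BxxxI", header)
        yield stream.read(size)


if __name__ == "__main__":
    for chunk in demux(sys.stdin.buffer):
        sys.stdout.buffer.write(chunk)
```

Piping the curl output through a filter like this before the sed prefix should leave only the actual log bytes.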

Enable custom filebeat.yml config file

I am trying the following volume mount to replace filebeat.yml:

./filebeat.yml:/opt/filebeat-1.1.1-x86_64/filebeat.yml

And I get this error when trying to run the container:

filebeat         | sed: can't move '/opt/filebeat-1.1.1-x86_64/filebeat.ymlcgJcEM' to '/opt/filebeat-1.1.1-x86_64/filebeat.yml': Resource busy
filebeat exited with code 1

Am I doing something wrong?
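The error happens because the entrypoint rewrites filebeat.yml with sed -i, which writes a temporary file and then renames it over the original; renaming over a bind-mounted file fails with "Resource busy". One possible workaround is to mount the custom config at a staging path and copy it into place before the entrypoint's logic runs. A sketch (the staging path and the entrypoint path are assumptions, not part of the image's documented interface):

```yaml
filebeat:
  image: bargenson/filebeat
  volumes:
    # Stage the custom config outside the path the entrypoint rewrites
    - ./filebeat.yml:/custom/filebeat.yml:ro
    - /var/run/docker.sock:/var/run/docker.sock
  # Copy (rather than bind-mount) the file so in-place edits can rename
  # over it; /docker-entrypoint.sh is an assumed entrypoint path
  entrypoint: sh -c "cp /custom/filebeat.yml /opt/filebeat-1.1.1-x86_64/filebeat.yml && exec /docker-entrypoint.sh"
```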

Any way to filter what containers get logged?

Hey

Got this working with separate images for ELK. Notice that the filebeat container obviously picks up all the containers (including itself!).

Is there any way to filter which containers filebeat picks up?

Cheers,
R.

Improve the image size

Basing the image on Alpine Linux or BusyBox could significantly reduce its current size.

Error publishing events: EOF

I can't seem to troubleshoot this one. Here is what the container's logs look like whenever I try to run it:

8/22/2016 9:25:16 AMInitializing Filebeat...
8/22/2016 9:25:16 AM2016/08/22 14:25:16.392179 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
8/22/2016 9:25:16 AM2016/08/22 14:25:16.392598 logstash.go:106: INFO Max Retries set to: 3
8/22/2016 9:25:16 AM2016/08/22 14:25:16.395099 outputs.go:119: INFO Activated logstash as output plugin.
8/22/2016 9:25:16 AM2016/08/22 14:25:16.395269 publish.go:288: INFO Publisher name: $(hostname)
8/22/2016 9:25:16 AM2016/08/22 14:25:16.395477 async.go:78: INFO Flush Interval set to: 1s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.395490 async.go:84: INFO Max Bulk Size set to: 2048
8/22/2016 9:25:16 AM2016/08/22 14:25:16.395541 beat.go:147: INFO Init Beat: filebeat; Version: 1.1.1
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396217 beat.go:173: INFO filebeat sucessfully setup. Start running.
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396256 registrar.go:66: INFO Registry file set to: /opt/.filebeat
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396312 prospector.go:127: INFO Set ignore_older duration to 24h0m0s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396326 prospector.go:127: INFO Set scan_frequency duration to 10s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396639 prospector.go:87: INFO Input type set to: stdin
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396657 prospector.go:127: INFO Set backoff duration to 1s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396669 prospector.go:127: INFO Set max_backoff duration to 10s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396676 prospector.go:107: INFO force_close_file is disabled
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396695 prospector.go:137: INFO Starting prospector of type: stdin
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396733 log.go:113: INFO Harvester started for file: -
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396644 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
8/22/2016 9:25:16 AM2016/08/22 14:25:16.396950 crawler.go:78: INFO All prospectors initialised with 0 states to persist
8/22/2016 9:25:16 AM2016/08/22 14:25:16.397269 registrar.go:83: INFO Starting Registrar
8/22/2016 9:25:16 AM2016/08/22 14:25:16.397307 publish.go:88: INFO Start sending events to output
8/22/2016 9:25:16 AMProcessing 3187e88fe1c49c4e94d07b67b708527737dea583485b8944f07747290691b453...
8/22/2016 9:25:16 AMProcessing 9961b8bea626736595688dda457a3f614d8b2e7fa123d9412d709808750f4e52...
8/22/2016 9:25:16 AMProcessing e4d87447cc46411b3b25766970848061d3d4936b6583c038ac163c8bdef45178...
8/22/2016 9:25:16 AMProcessing f364dbd8f6b61c5d9baf888c3aacae683e2dbe556e7bc5278c6d6bb429f3b92b...
8/22/2016 9:25:18 AM2016/08/22 14:25:18.899529 single.go:76: INFO Error publishing events (retrying): EOF
8/22/2016 9:25:18 AM2016/08/22 14:25:18.899564 single.go:152: INFO send fail
8/22/2016 9:25:18 AM2016/08/22 14:25:18.899579 single.go:159: INFO backoff retry: 1s
8/22/2016 9:25:19 AM2016/08/22 14:25:19.908694 single.go:76: INFO Error publishing events (retrying): EOF
8/22/2016 9:25:19 AM2016/08/22 14:25:19.908714 single.go:152: INFO send fail
8/22/2016 9:25:19 AM2016/08/22 14:25:19.908720 single.go:159: INFO backoff retry: 2s
8/22/2016 9:25:21 AM2016/08/22 14:25:21.941730 single.go:76: INFO Error publishing events (retrying): EOF
8/22/2016 9:25:21 AM2016/08/22 14:25:21.941763 single.go:152: INFO send fail

No logs in Kibana

Hi

I am trying to use your project to collect logs from our Docker containers.

I start the container as follows:

docker run -d -v /var/run/docker.sock:/tmp/docker.sock -e LOGSTASH_HOST=96.x.x.A -e LOGSTASH_PORT=5044 --name filebeat bargenson/filebeat

I can load up the Kibana UI but I can't see any logs, so I tried the following from the machine running docker-filebeat:

nc -w1 96.x.x.A 5044 <<< "testing again from my home machine"

And on the machine running Logstash, I see the following:

{:timestamp=>"2016-03-09T03:30:18.611000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"96.x.x.B:58768", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: Lumberjack::Beats::Parser::UnsupportedProtocol, unsupported protocol 116>, :level=>:warn}

How can I resolve this?
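One clue: the "unsupported protocol 116" warning is produced by the nc probe itself, not by Filebeat. The beats (lumberjack) input reads the first byte of each frame as a protocol version, and 116 is simply the byte value of 't', the first character of the test string, so plain text over nc cannot validate a beats pipeline:

```python
# The beats input interprets the first byte of a frame as a protocol
# version. A plain-text probe starts with 't', whose byte value is 116,
# matching the "unsupported protocol 116" reported by Logstash.
probe = b"testing again from my home machine"
print(probe[0])  # -> 116
```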
