
docker-logstash-forwarder's People

Contributors

digital-wonderland, josselin-c, kr, wallies


docker-logstash-forwarder's Issues

ca.crt path

I was just looking at this file to figure out how to handle certificates.
https://github.com/digital-wonderland/docker-logstash-forwarder/blob/master/forwarder/config/config.go

I found this function and was wondering: is it ok that SslCa points at the same file as SslCertificate?

// NewFromDefault returns a new default config.
func NewFromDefault(logstashEndpoint string) *LogstashForwarderConfig {
    network := Network{
        Servers:        strings.Split(logstashEndpoint, ","),
        SslCertificate: "/mnt/logstash-forwarder/logstash-forwarder.crt",
        SslKey:         "/mnt/logstash-forwarder/logstash-forwarder.key",
        SslCa:          "/mnt/logstash-forwarder/logstash-forwarder.crt",
        Timeout:        15,
    }

    config := &LogstashForwarderConfig{
        Network: network,
        Files:   []File{},
    }

    return config
}

Custom docker folder

You can change the location of the docker folder from /var/lib/docker to wherever you want by editing /etc/default/docker, and adding:

DOCKER_OPTS="-g /path/to/docker/data"

It would be nice if this wasn't hard-coded here. Maybe read it from an environment variable to make the location easy to set?

Cool tool, BTW. :)
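
A minimal sketch of the environment-variable idea, assuming a hypothetical DOCKER_ROOT variable and falling back to the current hard-coded default:

package main

import (
    "fmt"
    "os"
)

// dockerRoot returns the docker data directory. DOCKER_ROOT is an assumed
// variable name for illustration; when it is unset, the current default is used.
func dockerRoot() string {
    if root := os.Getenv("DOCKER_ROOT"); root != "" {
        return root
    }
    return "/var/lib/docker"
}

func main() {
    fmt.Println(dockerRoot())
}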

Couple of questions

  1. Is there any way to define a more complex network section of the logstash-forwarder config? E.g. can we add multiple logstash servers for redundancy, and can we define settings like the network timeout? (See the sketch after this list.)

  2. Can we define fields to be added to the logstash-forwarder config? E.g. for each container we would want to send a field with the container name for easy identification.
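
For the network part, note that the NewFromDefault function quoted in the "ca.crt path" issue above already splits the endpoint on commas, so a comma-separated LOGSTASH_HOST would seemingly yield multiple servers. A minimal sketch of what such a network section looks like, with a mirrored Network type and example values (the field types, json tags, and Timeout value here are assumptions made to keep the example self-contained):

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// Network mirrors the fields shown in config.go above.
type Network struct {
    Servers        []string `json:"servers"`
    SslCertificate string   `json:"ssl certificate"`
    SslKey         string   `json:"ssl key"`
    SslCa          string   `json:"ssl ca"`
    Timeout        int      `json:"timeout"`
}

func main() {
    // A comma-separated endpoint yields multiple servers, just like the
    // strings.Split call in NewFromDefault; the Timeout is an example value.
    network := Network{
        Servers:        strings.Split("logstash1.example.com:5043,logstash2.example.com:5043", ","),
        SslCertificate: "/mnt/logstash-forwarder/logstash-forwarder.crt",
        SslKey:         "/mnt/logstash-forwarder/logstash-forwarder.key",
        SslCa:          "/mnt/logstash-forwarder/logstash-forwarder.crt",
        Timeout:        30,
    }

    out, err := json.MarshalIndent(network, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}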

Logs for short lived containers

The readme states that after a restart, it only looks for running containers to start ingesting logs. Due to the way the laziness toggle works, this means that if you have a short-lived container (starts, completes a task, exits), it is possible for docker-logstash-forwarder not to see it after a restart, as it is no longer running. This leaves gaps in the exported logs for a server.

Would love to see a solution or workaround to this, open to suggestions or feedback :)

No timestamp when using docker-compose

Hi,

Thanks again for the great project. I am running into an issue when starting this image via docker-compose. To give you some background, when I start the container manually, via the following command, it works.

docker run -it --rm -v /var/lib/docker:/var/lib/docker -v /var/run/docker.sock:/var/run/docker.sock -v /mnt/share/logstash-forwarder:/mnt/logstash-forwarder -e LOGSTASH_HOST=logstash.service.consul:5043 -e SERVICE_NAME=logstash-forwarder logstash-forwarder

When I start the container this way, the log gets created and sent to logstash as expected. I am able to configure an index pattern (based on either @timestamp or time field names) as I would expect.

However, if I start this image via docker-compose using the following config, it doesn't seem to work.

logstash-forwarder:
  image: logstash-forwarder:latest
  volumes:
    - /var/lib/docker:/var/lib/docker
    - /var/run/docker.sock:/var/run/docker.sock
    - /mnt/share/logstash-forwarder:/mnt/logstash-forwarder
  environment:
    LOGSTASH_HOST:          "logstash.service.consul:5043"
    SERVICE_NAME:            "logstash-forwarder"
  restart: always

When I start the container this way, the log is sent to logstash correctly, but there are no timestamps in it. That is, within Kibana I can see the logstash-* logs; however, there is no way to configure an index based on time (as the @timestamp and time field names don't exist).

Do you have any ideas why this is happening? I'm not sure if it's a limitation with docker-compose, or something else.

Thanks!

The output of a container that stops a few seconds after starting is ignored

As an example, I have a container that is started by cron to do a backup using rsync. When there are no changes to the backup, the container stops within a couple of seconds. The output of this container is not forwarded and is effectively ignored. The container is not removed, so the output should still be available.

Otherwise idle logstash-forwarder keeps forwarding its own log output

The logstash-forwarder is continuously forwarding its own log output:

...
2015/04/20 12:20:39.718170 Registrar: processing 1 events
2015/04/20 12:20:44.718054 Registrar: processing 1 events
2015/04/20 12:20:52.226353 Registrar: processing 1 events
2015/04/20 12:20:59.716337 Registrar: processing 1 events
2015/04/20 12:21:04.717699 Registrar: processing 1 events
2015/04/20 12:21:12.216756 Registrar: processing 1 events
2015/04/20 12:21:17.221396 Registrar: processing 1 events
2015/04/20 12:21:22.217041 Registrar: processing 1 events
2015/04/20 12:21:29.720323 Registrar: processing 1 events
2015/04/20 12:21:34.726230 Registrar: processing 1 events
2015/04/20 12:21:39.727031 Registrar: processing 1 events
2015/04/20 12:21:47.220128 Registrar: processing 1 events
....

As a workaround, I'm filtering these out in logstash:

  # Weed out any log messages coming from logstash-forwarder docker image itself
  if [docker.name] == "/logstash-forwarder" {
    drop { }
  }

... but it would be nice if there were an option to not even send these from the logstash-forwarder container in the first place.
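
A possible shape for such an option, sketched with a hypothetical EXCLUDED_CONTAINERS environment variable listing container names to skip during config generation:

package main

import (
    "fmt"
    "os"
    "strings"
)

// shouldSkip reports whether a container's log file should be left out of the
// generated config. EXCLUDED_CONTAINERS is an assumed variable name holding a
// comma-separated list of container names, e.g. "/logstash-forwarder".
func shouldSkip(containerName string) bool {
    for _, excluded := range strings.Split(os.Getenv("EXCLUDED_CONTAINERS"), ",") {
        if name := strings.TrimSpace(excluded); name != "" && name == containerName {
            return true
        }
    }
    return false
}

func main() {
    os.Setenv("EXCLUDED_CONTAINERS", "/logstash-forwarder")
    fmt.Println(shouldSkip("/logstash-forwarder")) // true
    fmt.Println(shouldSkip("/webapp"))             // false
}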

Incremental tags

Hello,
Could you use incremental tags on Docker Hub instead of tagging every version as latest?
Is this a problem for you?

Cheers

How to best handle custom log file configurations?

After setting up the full docker-logstash-forwarder stack, I am now at a point where I need to tell it which containers have which log files to monitor.

After reading the README, I am aware of the /etc/logstash-forwarder.conf file. But now I am left wondering how best to configure this on a per-container basis, without having to write custom wrapper Dockerfiles for each container (e.g. the Postgres container).

I've also been considering using some kind of logstash-configs data-container to share specific configs with specific containers.

Do you have any thoughts on this? How do you handle having 10+ containers and having to configure logging for each of them, while making things flexible and manageable?

vagrant up doesn't run any containers

Hey,

I've tried to run your vagrant machine using vagrant up and unfortunately it doesn't start any containers. Did I forget something? I just ran vagrant up (as described in the readme). It would be great to get some help here.

Thanks,
Andre

[aufs] Not picking up /etc/logstash-forwarder.conf within containers

Hello,

I have a simple docker image (called webapp:latest) which is nothing more than a tomcat app server with a simple web app inside. My Dockerfile is pretty straightforward: it only copies in the webapp and the logstash-forwarder.conf.

FROM tomcat:8.0.28-jre8

MAINTAINER [email protected]
COPY etc/ /etc
COPY webapps/ /usr/local/tomcat/webapps

Within the etc folder, I've got a logstash-forwarder.conf with the following configuration:

{
  "files": [ { "paths": [ "/usr/local/tomcat/logs/*.log" ], "fields": { "type": "webapp"} } ]
}

I've confirmed that when I start the container, I can see the logstash-forwarder.conf within.

When I start the docker-logstash-forwarder image, I can see the configuration pick up the logs from docker. However, I can't see anything with the type of webapp (as defined in my logstash-forwarder.conf):

2015/11/12 19:14:21.767613 {
  "network": {
    "servers": [
      "logstash.service.consul:5043"
    ],
    "ssl certificate": "/mnt/logstash-forwarder/logstash-forwarder.crt",
    "ssl key": "/mnt/logstash-forwarder/logstash-forwarder.key",
    "ssl ca": "/mnt/logstash-forwarder/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/lib/docker/containers/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705-json.log"
      ],
      "fields": {
        "codec": "json",
        "docker/hostname": "e33bad245200",
        "docker/id": "e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705",
        "docker/image": "webapp",
        "docker/name": "/webapp",
        "type": "docker"
      }
    },
    {
      "paths": [
        "/var/lib/docker/containers/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a-json.log"
      ],
      "fields": {
        "codec": "json",
        "docker/hostname": "8f189e657162",
        "docker/id": "8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a",
        "docker/image": "logstash-forwarder",
        "docker/name": "/elegant_aryabhata",
        "type": "docker"
      }
    }
  ]
}

When I stop and start the webapp container and monitor the running docker-logstash-forwarder container, I can see that logstash-forwarder reloads the configuration. However, it only adds back the 2 existing (docker) logs, and never the webapp one:

2015/11/12 19:14:21.769518 Loading registrar data from //.logstash-forwarder
2015/11/12 19:14:21.769662 Waiting for 2 prospectors to initialise
2015/11/12 19:14:21.769737 Launching harvester on new file: /var/lib/docker/containers/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705-json.log
2015/11/12 19:14:21.769898 harvest: "/var/lib/docker/containers/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705/e33bad245200a4b970f986aad34b58be3812e36035fd62f58a9d7157c04df705-json.log" (offset snapshot:0)
2015/11/12 19:14:21.769964 Registrar will re-save state for /var/lib/docker/containers/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a-json.log
2015/11/12 19:14:21.770036 Resuming harvester on a previously harvested file: /var/lib/docker/containers/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a-json.log
2015/11/12 19:14:21.770662 harvest: "/var/lib/docker/containers/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a/8f189e657162185240f7f2abcd683e39628f65acd50636a1a18987e2265ff55a-json.log" position:31738 (offset snapshot:31738)
2015/11/12 19:14:21.770753 All prospectors initialised with 2 states to persist
2015/11/12 19:14:21.771123 Loading client ssl certificate: /mnt/logstash-forwarder/logstash-forwarder.crt and /mnt/logstash-forwarder/logstash-forwarder.key
2015/11/12 19:14:21.995714 Setting trusted CA from file: /mnt/logstash-forwarder/logstash-forwarder.crt
2015/11/12 19:14:30.011208 Connecting to [172.20.13.177]:5043 (logstash.service.consul)
2015/11/12 19:14:30.126830 Connected to 172.20.13.177
2015/11/12 19:14:32.276032 Registrar: processing 160 events
2015/11/12 19:14:36.269787 Registrar: processing 4 events
2015/11/12 19:14:41.776004 Registrar: processing 1 events

As you can see, it only re-registers the two existing logs (from Docker), and never includes the one with a type of webapp.

Do you have any ideas why this is happening?

Thanks in advance!

/etc/logstash-forwarder.conf ignored with aufs

I'm building a custom docker image where I use ADD to include a custom /etc/logstash-forwarder.conf in the container image. This custom configuration is, however, ignored, as it is not found by NewFromContainer. I tried to understand the code a little and it looks like calculateFilePath returns the wrong path.

Currently calculateFilePath uses /var/lib/docker/aufs/diff/{containerId} as the base path, but as I understand the layout of /var/lib/docker/aufs/diff, this directory contains the diff against the container's image, not the image itself.

A simple test (and currently my workaround) can be performed by simply doing a touch on /etc/logstash-forwarder.conf in the CMD of the Dockerfile, as this causes aufs to copy-on-write the file into /var/lib/docker/aufs/diff/{containerId}.

Here is an example Dockerfile that has this problem:

FROM jboss/wildfly

ADD logstash-forwarder.conf /etc/logstash-forwarder.conf

CMD /opt/jboss/wildfly/bin/standalone.sh

If I simply do a touch before calling the real start command, the /etc/logstash-forwarder.conf is not ignored:

CMD touch /etc/logstash-forwarder.conf && /opt/jboss/wildfly/bin/standalone.sh

I have the feeling that calculateFilePath should actually go through the container's image and parent images, test for the existence of the config file in each of them, and return the first file found. I would really like to implement that myself, but I have never done any development in Go...
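
A rough sketch of that idea, assuming the caller can supply the diff directories of the container and its image layers (how to enumerate those via the docker API is left open here):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// findForwarderConf walks the given layer directories in order and returns the
// first /etc/logstash-forwarder.conf it finds. candidateDirs would be the
// container's aufs diff dir followed by its image and parent image layers.
func findForwarderConf(candidateDirs []string) (string, bool) {
    for _, dir := range candidateDirs {
        path := filepath.Join(dir, "etc", "logstash-forwarder.conf")
        if _, err := os.Stat(path); err == nil {
            return path, true
        }
    }
    return "", false
}

func main() {
    // Hypothetical layer directories, most specific first.
    dirs := []string{
        "/var/lib/docker/aufs/diff/container-layer-id",
        "/var/lib/docker/aufs/diff/image-layer-id",
    }
    if path, ok := findForwarderConf(dirs); ok {
        fmt.Println("found:", path)
    } else {
        fmt.Println("no logstash-forwarder.conf in any layer")
    }
}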

digitalwonderland/logstash-forwarder doesn't show actual logstash-forwarder logs in 'docker logs'

I cannot find any log entry from logstash-forwarder in the logs of digitalwonderland/logstash-forwarder.

It cost me half a day to figure out that my certs were invalid.

The actual logstash-forwarder told me:

2015/04/24 14:41:49.132812 Loading client ssl certificate: /etc/sysconfig/logstash-forwarder/lumberjack.crt and /etc/sysconfig/logstash-forwarder/logstash-forwarder.key
2015/04/24 14:41:49.292452 Failed loading client ssl certificate: crypto/tls: private key does not match public key

I started docker-logstash-forwarder even with -debug=true -quiet=false ... it didn't help.

Maybe things get screwed up with #12.

P.S.: Please consider versioned tags when pushing to docker hub...

exec format error on build bnrnhag4ywope6cdg8nmyh6

There seems to be an issue with the latest docker hub build of docker-logstash-forwarder (bnrnhag4ywope6cdg8nmyh6, see: https://registry.hub.docker.com/u/digitalwonderland/logstash-forwarder/builds_history/36101/):

$ sudo docker pull digitalwonderland/logstash-forwarder
Pulling repository digitalwonderland/logstash-forwarder
a109f51d9aea: Download complete 
511136ea3c5a: Download complete 
5b12ef8fd570: Download complete 
dade6cb4530a: Download complete 
99498c96837e: Download complete 
6b6354717759: Download complete 
51916556d23e: Download complete 
d35b1bb69550: Download complete 
Status: Downloaded newer image for digitalwonderland/logstash-forwarder:latest

$ sudo docker run digitalwonderland/logstash-forwarder:latest
Error response from daemon: Cannot start container 6699ffb5473f365ea0ae35775515a3738f1d0d7caa31496b8814d28cf0a3d3e6: [8] System error: exec format error

(I know more arguments are normally needed but that is not the real issue here.)

We pull down new images fairly regularly, so I am about 90% sure the previous build did not have this issue.

$ sudo docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

Support overlay2

Getting this in the docker-logstash-forwarder logs when using overlay2 as a storage driver for the docker daemon:

2017/07/21 00:35:22.152238 ERROR [TriggerRefresh] Unable to look for logstash-forwarder config in df3b70f99a08d6283882f17024b7d1155c41c5b006d1fa4628860be517ebd62a: Unable to calculate file path with unknown driver [overlay2]

It looks like overlay is supported here but not overlay2.
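
Assuming overlay2 can be resolved the same way as overlay, the driver switch might only need an extra case along these lines. This is a sketch only: the base paths below are assumptions, and the directory name under overlay/overlay2 is the storage mount id, not necessarily the container id.

package main

import (
    "fmt"
    "path/filepath"
)

// basePath sketches driver-specific resolution of a container's root
// filesystem, treating overlay2 the same way as overlay.
func basePath(driver, id string) (string, error) {
    switch driver {
    case "aufs":
        return filepath.Join("/var/lib/docker/aufs/mnt", id), nil
    case "overlay", "overlay2":
        return filepath.Join("/var/lib/docker", driver, id, "merged"), nil
    default:
        return "", fmt.Errorf("unable to calculate file path with unknown driver [%s]", driver)
    }
}

func main() {
    path, err := basePath("overlay2", "df3b70f99a08")
    fmt.Println(path, err)
}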

Vagrant setup missing `http.cors.allow-origin` for Elasticsearch 1.4?

Not sure where to post this (your logstash docker repo, Kibana repo, or some other place), but I copy/pasted your Vagrant configurations (or rather, I transferred them into vagrant docker provisioners), and ended up with the following error when visiting localhost:8888:

Elasticsearch 1.4 ships with a security setting that prevents Kibana from connecting. You will need to set http.cors.allow-origin in your elasticsearch.yml to the correct protocol, hostname, and port (if not 80) that you access Kibana from. Note that if you are running Kibana in a sub-url, you should exclude the sub-url path and only include the protocol, hostname and port. For example, http://mycompany.com:8080, not http://mycompany.com:8080/kibana.

I haven't delved into the code yet, and haven't found how to specifically configure elasticsearch.yml, but I thought I'd give you a heads-up that the current setup doesn't seem to be working (unless I missed some specific configuration that sets the http.cors.allow-origin key in your Vagrantfile).

logstash-forwarder.conf not found with aufs driver

Hello!

You've done a great job getting the ELK stack running under docker!
I'm trying to make use of it and have elasticsearch and logstash working, but I'm stuck on the forwarder.
It seems to scan for /etc/logstash-forwarder.conf at the following path:

/var/lib/docker/%s/subvolumes/%s/etc/logstash-forwarder.conf

But it finds nothing on my machine, because the correct paths are:

/var/lib/docker# find . -name "logstash-forwarder.conf"
./aufs/mnt/80f6f8ad6feaa68c4fa5e67022a2e3eb82868d688243a4f7a0b09d1a46cff848/etc/logstash-forwarder.conf
./aufs/mnt/218207076e56f02d33d08dce9ef192d2a7ed9093557bf44b8dbd04a4eab92d82/etc/logstash-forwarder.conf
./aufs/mnt/3fc18eb4f58d9c525e68a4d4e1fcf32d7e934cfe59aaf5eaf3a86b3e99f98b89/etc/logstash-forwarder.conf
./aufs/mnt/44dc64c189c9ed8b8c522956f0d1e083e542a1bc548d0654b8b502cda490dd32/tmp/logstash-forwarder.conf

It's not subvolumes, it's mnt.

I have docker version:

Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): d84a070
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): d84a070

What docker version were you targeting? Is it safe for me to simply replace subvolumes with mnt and then rebuild and use your image?

Log Rotation

It would be nice to add some option to rotate Docker container logs, because a restart of docker-logstash-forwarder leads to all logs being sent again in one batch.

Not finding `/etc/logstash-forwarder.conf` in my container

Hello there, thank you for the nice containers! I've set up elasticsearch, logstash, and logstash-forwarder and everything appears to be working and talking together.

I'm having an issue with the logstash-forwarder not finding my /etc/logstash-forwarder.conf however. I have three containers running:

[root@lemon-tart seanadkinson]# docker ps
CONTAINER ID        IMAGE                                         COMMAND                CREATED             STATUS              PORTS                          NAMES
277318f5989e        example_core:latest                           "/usr/bin/supervisor   17 minutes ago      Up 17 minutes       172.31.41.248:7500->7500/tcp   example_core            
d58fb45f06bb        example_auth:latest                           "/usr/bin/supervisor   22 minutes ago      Up 22 minutes       172.31.41.248:7506->7506/tcp   example_auth            
f879680e663a        digitalwonderland/logstash-forwarder:latest   "/var/lib/golang/bin   30 minutes ago      Up 30 minutes                                      logstash-forwarder   

I've verified that /etc/logstash-forwarder.conf exists in the containers:

[root@lemon-tart seanadkinson]# docker exec -it example_core /bin/bash
root@277318f5989e:/# cat /etc/logstash-forwarder.conf 
{
  "files": [
    {
      "paths": [
        "/var/log/example-core/*.log"
      ],
      "fields": {
        "type": "java",
        "app": "example-core"
      }
    }
  ]
}

But everything being delivered to my logstash server is just the docker logs, not the application-specific logs (which should have type: java and app: example-core from above).

Here is the relevant part of the logstash-forwarder logs, as far as I can tell:

2014/12/18 19:01:20 Triggering refresh...
2014/12/18 19:01:20 Generating configuration...
2014/12/18 19:01:20 Found 3 containers:
2014/12/18 19:01:20 1. 277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7
2014/12/18 19:01:20 2. d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f
2014/12/18 19:01:20 3. f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e
2014/12/18 19:01:20 Wrote logstash-forwarder config to /tmp/logstash-forwarder.conf
2014/12/18 19:01:20 Waiting for logstash-forwarder to stop
2014/12/18 19:01:20 Stopped logstash-forwarder
2014/12/18 19:01:20 Starting logstash-forwarder...
2014/12/18 19:01:20 Config generation took 43.288329ms
2014/12/18 19:01:20.118651  --- options -------
2014/12/18 19:01:20.118735  config-arg:          /tmp/logstash-forwarder.conf
2014/12/18 19:01:20.118854  idle-timeout:        5s
2014/12/18 19:01:20.118973  spool-size:          1024
2014/12/18 19:01:20.119067  harvester-buff-size: 16384
2014/12/18 19:01:20.119180  --- flags ---------
2014/12/18 19:01:20.119295  tail (on-rotation):  false
2014/12/18 19:01:20.119388  use-syslog:          false
2014/12/18 19:01:20.119502  verbose:             false
2014/12/18 19:01:20.119595  debug:               false
2014/12/18 19:01:20.119735 {
  "network": {
    "servers": [
      "logstash.example.com:5043"
    ],
    "ssl certificate": "/mnt/logstash-forwarder/logstash-forwarder.crt",
    "ssl key": "/mnt/logstash-forwarder/logstash-forwarder.key",
    "ssl ca": "/mnt/logstash-forwarder/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/lib/docker/containers/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7-json.log"
      ],
      "fields": {
        "codec": "json",
        "docker.hostname": "277318f5989e",
        "docker.id": "277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7",
        "docker.image": "example_core",
        "docker.name": "/example_core",
        "type": "docker"
      }
    },
    {
      "paths": [
        "/var/lib/docker/containers/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f-json.log"
      ],
      "fields": {
        "codec": "json",
        "docker.hostname": "d58fb45f06bb",
        "docker.id": "d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f",
        "docker.image": "example_auth",
        "docker.name": "/example_auth",
        "type": "docker"
      }
    },
    {
      "paths": [
        "/var/lib/docker/containers/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e-json.log"
      ],
      "fields": {
        "codec": "json",
        "docker.hostname": "f879680e663a",
        "docker.id": "f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e",
        "docker.image": "digitalwonderland/logstash-forwarder",
        "docker.name": "/logstash-forwarder",
        "type": "docker"
      }
    }
  ]
}
2014/12/18 19:01:20.136309 Loading registrar data from //.logstash-forwarder
2014/12/18 19:01:20.136477 Waiting for 3 prospectors to initialise
2014/12/18 19:01:20.137108 Launching harvester on new file: /var/lib/docker/containers/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7-json.log
2014/12/18 19:01:20.137398 harvest: "/var/lib/docker/containers/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7/277318f5989edf505c47830009df82938460b814e28a47bb0eb5c7f49217b7c7-json.log" (offset snapshot:0)
2014/12/18 19:01:20.137471 Registrar will re-save state for /var/lib/docker/containers/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f-json.log
2014/12/18 19:01:20.137600 Registrar will re-save state for /var/lib/docker/containers/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e-json.log
2014/12/18 19:01:20.137798 Resuming harvester on a previously harvested file: /var/lib/docker/containers/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f-json.log
2014/12/18 19:01:20.137949 Resuming harvester on a previously harvested file: /var/lib/docker/containers/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e-json.log
2014/12/18 19:01:20.138094 harvest: "/var/lib/docker/containers/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f/d58fb45f06bb398b20212fc0b9a545acf47b0eea4bb6e6e76ca5acb0bcc29d9f-json.log" position:1635 (offset snapshot:1635)
2014/12/18 19:01:20.138210 All prospectors initialised with 2 states to persist
2014/12/18 19:01:20.138357 harvest: "/var/lib/docker/containers/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e/f879680e663a5cb76e475aec3531ad9bab5ea8648c2954b4cf0cc98ac1d9700e-json.log" position:72018 (offset snapshot:72018)
2014/12/18 19:01:20.138518 Loading client ssl certificate: /mnt/logstash-forwarder/logstash-forwarder.crt and /mnt/logstash-forwarder/logstash-forwarder.key
2014/12/18 19:01:20.477381 Setting trusted CA from file: /mnt/logstash-forwarder/logstash-forwarder.crt
2014/12/18 19:01:20.482561 Connecting to [54.148.125.53]:5043 (logstash.example.com) 
2014/12/18 19:01:20.549543 Connected to 54.148.125.53
2014/12/18 19:01:25.404266 Registrar: precessing 98 events
2014/12/18 19:01:30.144705 Registrar: precessing 1 events
2014/12/18 19:01:37.643012 Registrar: precessing 1 events

Based on this function from the source code, I'd expect to see "Found logstash-forwarder config" somewhere in my logs, but I don't see it.

I looked into the calculateFilePath function here, and I wonder about the switch statement. Based on that code, I would expect to see either /var/lib/docker/aufs or /var/lib/docker/btrfs in the logstash-forwarder container, but I don't see either:

[root@lemon-tart seanadkinson]# docker exec -it logstash-forwarder /bin/bash
bash-4.2# ls /var/lib/docker/
containers  devicemapper  execdriver  graph  init  linkgraph.db  repositories-devicemapper  tmp  trust  vfs  volumes

Am I missing something, or do I perhaps have the wrong version of some library?

In that same function, it looks like it first checks whether the docker container has any volumes, so I wonder if it will find the file if I declare /etc/logstash-forwarder.conf as a volume. I'll try that next, but I wanted to bring this up in case you had any pointers.

Thanks again for the nice code!

Detailed logs

Hello,

I was wondering how I might configure this to show the information that I get from 'docker logs --details '. There doesn't seem to be anything in the wiki.

For example:

2017-09-12T19:06:44.571815000Z  npm info lifecycle [email protected]~poststart: [email protected]
2017-09-12T19:06:44.580673000Z  sending message to pub-sub.status-exchange
2017-09-12T19:06:44.710087000Z  npm info ok 
2017-09-12T19:06:52.050633000Z  npm info it worked if it ends with ok
2017-09-12T19:06:52.051144000Z  npm info using [email protected]
2017-09-12T19:06:52.051607000Z  npm info using [email protected]
2017-09-12T19:06:54.391474000Z  npm info lifecycle [email protected]~prestart: [email protected]
2017-09-12T19:06:54.425010000Z  > [email protected] start /usr/src/app
2017-09-12T19:06:54.425474000Z  > node .
2017-09-12T19:06:54.426294000Z  npm info lifecycle [email protected]~start: [email protected]
2017-09-12T19:07:02.777779000Z  booted
2017-09-12T19:07:02.864732000Z  listening for req-res.v1.bobs.read on req-res.bobs-read
2017-09-12T19:07:02.879187000Z  listening for req-res.v1.bobs-service.status on req-res.bobs-service-status
2017-09-12T19:07:02.879577000Z  receiving from send-rec.bobs-upsert send-rec.v1.bobs.upsert
2017-09-12T19:07:02.902273000Z  listening for req-res.v1.bobs.delete on req-res.bobs-delete

Docker/hostname question

Hello,

I was wondering how the docker/hostname field is calculated. It seems like on some of my running containers I get the real hostname (the hostname of the machine the container started on), but other times I get the hostname of the docker container itself. Here are two examples:

With internal docker hostname:

logstash-forwarder_1 |       "fields": {
logstash-forwarder_1 |         "codec": "json",
logstash-forwarder_1 |         "docker/hostname": "3aebf47bfd1d",
logstash-forwarder_1 |         "docker/id": "3aebf47bfd1dea7e76e7462fb0cbb31202c719ac9557f849b14616986e851073",
logstash-forwarder_1 |         "docker/image": "hsip:latest",
logstash-forwarder_1 |         "docker/name": "/mesos-9b712f4f-1427-4d72-a5cd-56aadbc257e0-S1.3b852b44-0625-4cd1-9a2b-82b977db9f1a",
logstash-forwarder_1 |         "type": "docker"
logstash-forwarder_1 |       }
logstash-forwarder_1 |     },

With real hostname:

logstash-forwarder_1 |       "fields": {
logstash-forwarder_1 |         "codec": "json",
logstash-forwarder_1 |         "docker/hostname": "centOS1",
logstash-forwarder_1 |         "docker/id": "c9496b4947e86151e82425775248432cfd63b29a5031702063c655f6eead4446",
logstash-forwarder_1 |         "docker/image": "paas-4-saas:latest",
logstash-forwarder_1 |         "docker/label/com-docker-compose-config-hash": "7dcfe961eade91ed9c7bf190cc3b91fd3a80b6367de25ad8cfd2c5f679eb437b",
logstash-forwarder_1 |         "docker/label/com-docker-compose-container-number": "1",
logstash-forwarder_1 |         "docker/label/com-docker-compose-oneoff": "False",
logstash-forwarder_1 |         "docker/label/com-docker-compose-project": "paas4saas",
logstash-forwarder_1 |         "docker/label/com-docker-compose-service": "PaaS-4-SaaS",
logstash-forwarder_1 |         "docker/label/com-docker-compose-version": "1.5.1",
logstash-forwarder_1 |         "docker/name": "/paas4saas_PaaS-4-SaaS_1",
logstash-forwarder_1 |         "type": "docker"
logstash-forwarder_1 |       }
logstash-forwarder_1 |     }
logstash-forwarder_1 |   ]
logstash-forwarder_1 | }

Periods in labels cause elasticsearch to break

When starting the docker-logstash-forwarder, periods appear in the labels as shown below:

"files": [
 {
   "paths": [
     "/var/lib/docker/containers/b6776b586eadf879fc5722e33adb0b569efb0fa846801e5137f43f1c64643e4f/b6776b586eadf879fc5722e33adb0b569efb0fa846801e5137f43f1c64643e4f-json.log"
   ],
   "fields": {
     "codec": "json",
     "docker/hostname": "b6776b586ead",
     "docker/id": "b6776b586eadf879fc5722e33adb0b569efb0fa846801e5137f43f1c64643e4f",
     "docker/image": "acook/logstash-forwarder:latest",
     "docker/label/License": "GPLv2",
     "docker/label/Vendor": "CentOS",
     "docker/label/com.docker.compose.config-hash": "2131f95c162c74958e7f131bac9dd20f1795e5d5e96d12045675f5e0d83d85a6",
     "docker/label/com.docker.compose.container-number": "1",
     "docker/label/com.docker.compose.oneoff": "False",
     "docker/label/com.docker.compose.project": "dockerlogstashforwarder",
     "docker/label/com.docker.compose.service": "logstash-forwarder",
     "docker/label/com.docker.compose.version": "1.5.1",
     "docker/name": "/dockerlogstashforwarder_logstash-forwarder_1",
     "type": "docker"
   }
 },

This causes elasticsearch to throw errors. Same as issue #20.
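
One possible mitigation, which matches the dash-separated label keys visible in the "Docker/hostname question" output above, would be to sanitize label keys before using them as field names. A minimal sketch:

package main

import (
    "fmt"
    "strings"
)

// labelField turns a docker label key into a logstash-forwarder field name,
// replacing the periods that elasticsearch chokes on with dashes.
func labelField(key string) string {
    return "docker/label/" + strings.Replace(key, ".", "-", -1)
}

func main() {
    fmt.Println(labelField("com.docker.compose.project")) // docker/label/com-docker-compose-project
}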
