
docker-logging-plugin's Introduction

What does Splunk Connect for Docker do?

Splunk Connect for Docker is a plug-in that extends Docker's logging capabilities so that customers can push their Docker and container logs to their Splunk on-premises or cloud deployment.

Splunk Connect for Docker is a supported open source product. Customers with an active Splunk support contract receive Splunk Extension support under the Splunk Support Policy, which can be found at https://www.splunk.com/en_us/legal/splunk-software-support-policy.html.

See the Docker Engine managed plugin system documentation at https://docs.docker.com/engine/extend/ for information about support for Microsoft Windows and other platforms. See the Prerequisites in this document for more information about system requirements.

Prerequisites

Before you install Splunk Connect for Docker, make sure your system meets the following minimum prerequisites:

  • Docker Engine: Version 17.05 or later. If you plan to configure Splunk Connect for Docker via 'daemon.json', you must have the Docker Community Edition (Docker-ce) 18.03 equivalent or later installed.
  • Splunk Enterprise, Splunk Light, or Splunk Cloud version 6.6 or later. Splunk Connect for Docker plugin is not currently supported on Windows.
  • For customers deploying to Splunk Cloud, HEC must be enabled and a token must be generated by Splunk Support before logs can be ingested.
  • Configure an HEC token on Splunk Enterprise or Splunk Light (either single instance or distributed environment). Refer to the set up and use HTTP Event Collector documentation for more details.
  • Operating System Platform support as defined in Docker Engine managed plugin system documentation.

Install and configure Splunk Connect for Docker

Step 1: Get an HTTP Event Collector token

You must configure the Splunk HTTP Event Collector (HEC) to send your Docker container logging data to Splunk Enterprise or Splunk Cloud. HEC uses tokens as an alternative to embedding Splunk Enterprise or Splunk Cloud credentials in your app or supporting files. For more about how the HTTP event collector works, see http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector

  1. Enable your HTTP Event collector: http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/HECWalkthrough#Enable_HEC
  2. Create an HEC token: http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector (or, using configuration files: http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UseHECusingconffiles)

Note the following when you generate your token:

  • Make sure that indexer acknowledgement is disabled for your token.
  • If you choose to use indexer acknowledgement, enable it by selecting the Enable indexer acknowledgement checkbox.
  • Do not generate your token using the default TLS cert provided by Splunk. The default certificates are not secure. For information about configuring Splunk to use self-signed or third-party certs, see http://docs.splunk.com/Documentation/Splunk/7.0.3/Security/AboutsecuringyourSplunkconfigurationwithSSL.
  • Splunk Cloud customers must file a support request in order to have a token generated.
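Before configuring Docker, it can help to confirm the token works by sending a test event directly to HEC. The host and token below are placeholders for your own values:

```shell
# Send a test event to the HEC event endpoint.
# Replace <splunk_host> and <hec_token> with your own values.
curl -k https://<splunk_host>:8088/services/collector/event \
     -H "Authorization: Splunk <hec_token>" \
     -d '{"event": "hello from docker host"}'
# A working token returns: {"text":"Success","code":0}
```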

Step 2: Install the plugin

There are multiple ways to install Splunk Connect for Docker. Splunk recommends installing from Docker Store (option 1) to ensure you get the most current and stable build.

Install the Plugin from Docker Store

  1. Pull the plugin from Docker Hub:
$ docker plugin install splunk/docker-logging-plugin:latest --alias splunk-logging-plugin
  2. Enable the plugin if needed:
$ docker plugin enable splunk-logging-plugin

Install the plugin from the tar file

  1. Clone the repository and check out the release branch:
$ git clone https://github.com/splunk/docker-logging-plugin.git
  2. Create the plugin package:
$ cd docker-logging-plugin
$ make package # this creates a splunk-logging-plugin.tar.gz
  3. Extract the package:
$ tar -xzf splunk-logging-plugin.tar.gz
  4. Create the plugin:
$ docker plugin create splunk-logging-plugin:latest splunk-logging-plugin/
  5. Verify that the plugin is installed by running the following command:
$ docker plugin ls
  6. Enable the plugin:
$ docker plugin enable splunk-logging-plugin:latest
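Whichever install path you choose, you can confirm the plugin is registered and enabled before pointing containers at it. This is a quick check, assuming the `splunk-logging-plugin` alias used above:

```shell
# List installed plugins; the ENABLED column should read "true"
docker plugin ls
# Or query the enabled flag directly
docker plugin inspect --format '{{.Enabled}}' splunk-logging-plugin
```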

Step 3: Run containers with the plugin installed

Splunk Connect for Docker continually listens for logs, but your containers must also be running so that the container logs are forwarded to Splunk Connect for Docker. The following examples describe how to configure containers to run with Splunk Connect for Docker.

To start your containers, refer to the Docker Documentation found at:

https://docs.docker.com/config/containers/logging/configure/
https://docs.docker.com/config/containers/logging/configure/#configure-the-delivery-mode-of-log-messages-from-container-to-log-driver

Examples

This sample daemon.json configuration applies Splunk Connect for Docker to all containers on the Docker engine. Splunk recommends that, in production environments, you pass your HEC token through daemon.json rather than on the command line.

{
  "log-driver": "splunk-logging-plugin",
  "log-opts": {
    "splunk-url": "<splunk_hec_endpoint>",
    "splunk-token": "<splunk_hec_token>",
    "splunk-insecureskipverify": "true"
  }
}
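Because a malformed daemon.json can prevent the Docker daemon from starting, it is worth validating the file before restarting the daemon so the change takes effect. The commands below are a sketch assuming a systemd-based Linux host:

```shell
# Check that daemon.json is valid JSON before restarting
python3 -m json.tool /etc/docker/daemon.json
# Restart the daemon to pick up the new log driver settings
sudo systemctl restart docker
```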

This sample command configures Splunk Connect for Docker for a single container.

$ docker run --log-driver=splunk-logging-plugin --log-opt splunk-url=<splunk_hec_endpoint> --log-opt splunk-token=<splunk_hec_token> --log-opt splunk-insecureskipverify=true -d <docker_image>
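The same per-container options can be expressed in a Docker Compose file. This is an illustrative fragment; the service name and the `<…>` values are placeholders, and it assumes the plugin was installed under the `splunk-logging-plugin` alias:

```yaml
services:
  my_service:
    image: <docker_image>
    logging:
      driver: splunk-logging-plugin
      options:
        splunk-url: "<splunk_hec_endpoint>"
        splunk-token: "<splunk_hec_token>"
        splunk-insecureskipverify: "true"
```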

Step 4: Set Configuration variables

Use the configuration variables to configure the behaviors and rules for Splunk Connect for Docker. For example, you can configure certificate security or how messages are formatted and distributed. Note the following:

  • Configurations passed through docker run --log-opt take effect immediately.
  • You must restart the Docker engine after configuring through daemon.json.

How to use the variables

The following is an example of the logging options specified for a Splunk Enterprise instance. In this example:

The path to the root certificate and the Common Name to use for verification are specified, using an HTTPS scheme.

$ docker run --log-driver=splunk-logging-plugin \
             --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
             --log-opt splunk-url=https://splunkhost:8088 \
             --log-opt splunk-capath=/path/to/cert/cacert.pem \
             --log-opt splunk-caname=SplunkServerDefaultCert \
             --log-opt tag="{{.Name}}/{{.FullID}}" \
             --log-opt labels=location \
             --log-opt env=TEST \
             --env "TEST=false" \
             --label location=west \
             <docker_image>

Required Variables

  • splunk-token: Splunk HTTP Event Collector token.
  • splunk-url: Path to your Splunk Enterprise, self-service Splunk Cloud instance, or Splunk Cloud managed cluster (including the port and scheme used by HTTP Event Collector), in one of the following formats: https://your_splunk_instance:8088, https://input-prd-p-XXXXXXX.cloud.splunk.com:8088, or https://http-inputs-XXXXXXXX.splunkcloud.com

Optional Variables

  • splunk-source: Event source.
  • splunk-sourcetype: Event source type.
  • splunk-index: Event index. (The HEC token must be configured to accept the specified index.)
  • splunk-capath: Path to the root certificate. (Must be specified if splunk-insecureskipverify is false.)
  • splunk-caname: Name to use for validating the server certificate. By default, the hostname of splunk-url is used.
  • splunk-insecureskipverify: "false" means server certificates are validated; "true" means they are not. Default: false
  • splunk-format: Message format. Values can be inline, json, or raw. For more information about formats, see Message formats. Default: inline
  • splunk-verify-connection: Upon plug-in startup, verify that Splunk Connect for Docker can connect to the Splunk HEC endpoint. false indicates that Splunk Connect for Docker starts up, keeps trying to connect to HEC, and pushes logs to a buffer until a connection is established; logs roll off the buffer once it is full. true indicates that Splunk Connect for Docker does not start up if a connection to HEC cannot be established. Default: false
  • splunk-gzip: Enable or disable gzip compression when sending events to the Splunk Enterprise or Splunk Cloud instance. Default: false
  • splunk-gzip-level: Compression level for gzip. Valid values are -1 (default), 0 (no compression), 1 (best speed) through 9 (best compression). Default: -1
  • tag: Tag for the message, which can interpret some markup. Refer to the log tag option documentation at https://docs.docker.com/v17.09/engine/admin/logging/log_tags/ for customizing the log tag format. Default: {{.ID}} (the first 12 characters of the container ID)
  • labels: Comma-separated list of label keys to include in the message, if those labels are specified for the container.
  • env: Comma-separated list of environment variable keys to include in the message, if those variables are specified for the container.
  • env-regex: A regular expression to match logging-related environment variables, used for advanced log tag options. If there is a collision between a label key and an env key, the env value takes precedence. Both options add additional fields to the attributes of a logging message.

Advanced options - Environment Variables

To override these values through environment variables, use docker plugin set <plugin_name> KEY=VALUE. For more information, see https://docs.docker.com/engine/reference/commandline/plugin_set/.

  • SPLUNK_LOGGING_DRIVER_POST_MESSAGES_FREQUENCY: How often the plug-in posts messages when there is nothing to batch, i.e., the maximum time to wait for more messages to batch. The internal batching buffer is flushed either when it is full (the designated batch size is reached) or when it times out (at this frequency). Default: 5s
  • SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE: The number of messages the plug-in collects before sending them in one batch. Default: 1000
  • SPLUNK_LOGGING_DRIVER_BUFFER_MAX: The maximum number of messages to hold in the buffer and retry when the plug-in cannot connect to the remote server. Default: 10 * 1000
  • SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE: How many pending messages can be in the channel used to send messages to the background logger worker, which batches them. Default: 4 * 1000
  • SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_HOLD_DURATION: Used when reassembling logs that Docker chunks at a 16 KB limit; specifies how long the system waits for the next chunk to arrive. Default: 100ms
  • SPLUNK_LOGGING_DRIVER_TEMP_MESSAGES_BUFFER_SIZE: Used when reassembling logs that Docker chunks at a 16 KB limit; specifies the largest message, in bytes, that the system can reassemble. The value should be smaller than or equal to the Splunk HEC limit; 1 MB is the default HEC setting. Default: 1048576 (1 MB)
  • SPLUNK_LOGGING_DRIVER_JSON_LOGS: Determines whether JSON logging is enabled. See https://docs.docker.com/config/containers/logging/json-file/. Default: true
  • SPLUNK_TELEMETRY: Determines whether telemetry is enabled. Default: true
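As a concrete sketch of the `docker plugin set` workflow: a plugin must be disabled before its settings can be changed, so the sequence looks like the following. The batch-size value 2000 is only an example, and this assumes the variable is declared settable by the plugin:

```shell
# Plugins must be disabled before their settings can be changed
docker plugin disable splunk-logging-plugin
# Example: raise the batch size from the default of 1000
docker plugin set splunk-logging-plugin SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE=2000
# Re-enable the plugin with the new setting
docker plugin enable splunk-logging-plugin
```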

Message formats

There are three logging plug-in message formats, set with the optional variable splunk-format:

  • inline (this is the default format)
  • json
  • raw

The default format is inline, where each log message is embedded as a string and assigned to the "line" field. For example:

// Example #1
{
    "attrs": {
        "env1": "val1",
        "label1": "label1"
    },
    "tag": "MyImage/MyContainer",
    "source":  "stdout",
    "line": "my message"
}

// Example #2
{
    "attrs": {
        "env1": "val1",
        "label1": "label1"
    },
    "tag": "MyImage/MyContainer",
    "source":  "stdout",
    "line": "{\"foo\": \"bar\"}"
}

When messages are JSON objects, you may want to embed them in the message sent to Splunk.

To format messages as JSON objects, set --log-opt splunk-format=json. The plug-in tries to parse every line as a JSON object and embeds the object in the "line" field. If it cannot parse the message, the message is sent inline. For example:

//Example #1
{
    "attrs": {
        "env1": "val1",
        "label1": "label1"
    },
    "tag": "MyImage/MyContainer",
    "source":  "stdout",
    "line": "my message"
}

//Example #2
{
    "attrs": {
        "env1": "val1",
        "label1": "label1"
    },
    "tag": "MyImage/MyContainer",
    "source":  "stdout",
    "line": {
        "foo": "bar"
    }
}
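The parse-or-fall-back behavior described above can be sketched as follows. This is an illustrative Python sketch of the decision logic, not the plugin's actual Go implementation:

```python
import json

def format_line(line, splunk_format):
    """Shape the "line" field of an event the way the splunk-format
    option does: json tries to embed a parsed object, falling back to
    the inline string; inline always keeps the raw string."""
    if splunk_format == "json":
        try:
            return json.loads(line)  # parseable: embed as an object
        except ValueError:
            return line              # not valid JSON: send inline
    return line                      # inline: always a string

print(format_line('{"foo": "bar"}', "json"))   # {'foo': 'bar'}
print(format_line("my message", "json"))       # my message
```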

If --log-opt splunk-format=raw, each message, together with its attributes (environment variables and labels) and tag, is combined into a raw string, with the attributes and tag prefixed to the message. For example:

MyImage/MyContainer env1=val1 label1=label1 my message
MyImage/MyContainer env1=val1 label1=label1 {"foo": "bar"}

Troubleshooting

If Splunk Connect for Docker does not behave as expected, enable the debug functionality and then refer to the following troubleshooting tips.

Enable Debug Mode to find log errors

Plugin logs appear in the Docker daemon log. To enable debug mode, export the environment variable LOGGIN_LEVEL=DEBUG in the Docker engine environment. See the Docker documentation for information about how to enable debug mode in your Docker environment: https://docs.docker.com/config/daemon/

Check the Splunk HEC connection

Check that the HEC endpoint is accessible from your Docker environment. If the endpoint cannot be reached, logs are not sent to Splunk; they accumulate in the buffer and are dropped as they roll off once the buffer is full.

Test that the HEC endpoint is accessible:
$ curl -k https://<ip_address>:8088/services/collector/health
{"text":"HEC is healthy","code":200}

Check your HEC configuration for clusters

If you are using an Indexer Cluster, the current plugin accepts a single splunk-url value. We recommend that you configure a load balancer in front of your Indexer tier. Make sure the load balancer can successfully tunnel the HEC requests to the indexer tier. If HEC is configured in an Indexer Cluster environment, all indexers should have same HEC token configured. See http://docs.splunk.com/Documentation/Splunk/7.0.3/Data/UsetheHTTPEventCollector.

Check your heavy forwarder connection

If you are using a heavy forwarder to preprocess events (e.g., to funnel multiple log lines into a single event), make sure that the heavy forwarder is properly connected to the indexers. To troubleshoot the forwarder and receiver connection, see: https://docs.splunk.com/Documentation/SplunkCloud/7.0.0/Forwarding/Receiverconnection.

Check the plugin's debug log in docker

The stdout of a plugin is redirected to the Docker daemon logs. Such entries are tagged with plugin=<plugin_id>.

To find the plugin ID of Splunk Connect for Docker, run the command below and look for the Splunk Logging Plugin entry.

# list all the plugins
$ docker plugin ls

Depending on your system, the location of the Docker daemon logs may vary. Refer to the Docker documentation for the Docker daemon log location for your specific platform. Here are a few examples:

  • Ubuntu (old, using upstart) - /var/log/upstart/docker.log
  • Ubuntu (new, using systemd) - sudo journalctl -fu docker.service
  • Boot2Docker - /var/log/docker.log
  • Debian GNU/Linux - /var/log/daemon.log
  • CentOS - /var/log/daemon.log | grep docker
  • CoreOS - journalctl -u docker.service
  • Fedora - journalctl -u docker.service
  • Red Hat Enterprise Linux Server - /var/log/messages | grep docker
  • OpenSuSE - journalctl -u docker.service
  • OSX - ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log/docker.log
  • Windows - Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-5) | Sort-Object Time

docker-logging-plugin's People

Contributors

anushjay, bbourbie, dbaldwin-splunk, dtregonning, gp510, hyfather, jenworthington, luckyj5, sharonx


docker-logging-plugin's Issues

differences between log-driver `splunk` vs `splunk-logging-plugin`

Is there some documentation about the functional differences between the splunk logging driver, which docker docs mention here: https://docs.docker.com/config/containers/logging/splunk/
vs the splunk-logging-plugin (the one from this repo?)
I did look around, however the only difference I can find (indirect) reference to is that the splunk-logging-plugin might allow for local container log inspection (i.e. docker logs CONTAINERNAME) whereas the splunk driver may not, per
#2

Multiple labels in Splunk logging driver's "log-opts"

What happened:

unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character ',' after object key

What you expected to happen:

As per documentation: https://docs.docker.com/config/containers/logging/splunk/

labels: "Comma-separated list of keys of labels, which should be included in message, if these labels are specified for container."

How to reproduce it (as minimally and precisely as possible):

{
"log-driver": "splunk",
"log-opts": {
"splunk-format": "json",
"labels": "Label1", "Label2"
}
}

Anything else we need to know?:
Tried array type structure as well.
We need to have multiple (different) labels for containers.

Environment:

  • Docker version (use docker version): 20.10.17
  • OS (e.g: cat /etc/os-release): AWS Linux 4.14.301-224.520.amzn2.x86_64
  • Splunk version:
  • Others:

splunk-driver vs logging-plugin

We are trying to get the container logs into Splunk so we have installed the plugin as per guideline.
We have been able to send the stdout logs to Splunk.

But our problem is how to identify whether Docker is using the splunk driver or this plugin (docker-logging-plugin) to send the data.
We changed the code of the driver.go file and rebuilt the plugin but could not find any impact. So we believe that after installing this plugin, Docker is still using the splunk driver.

Please share your thoughts.

Allow 'splunk-url' param to be a list

Current behavior of docker-logging-plugin only accepts one value for splunk-url field.
We should update the plugin to accept list for splunk-url.

Adding current epoch time to POST body

Hello and please forgive me if this is not the appropriate place to ask this question. I would like to add a "time" field to the POST body with a value of the current epoch time. I understand how to add a field named "time", but is there a way to dynamically set the value? I'm not the expert with this plugin, so please forgive me if this question sounds ignorant.

Thanks in advance.

Delay in sending logs to Splunk

I am using the Splunk logging driver to send logs to splunk with the following command line: docker run -d -p 443:8443 --log-driver=splunk --log-opt splunk-token=REDACTED --log-opt splunk-url=https://myloghost.example.net:8088 --log-opt splunk-sourcetype=idp --log-opt splunk-index=auth_idp --log-opt splunk-insecureskipverify=1 --log-opt splunk-format=raw --log-opt splunk-gzip=true --name shib --restart always --health-cmd 'curl -k -f https://127.0.0.1:8443/idp/status || exit 1' --health-interval=2m --health-timeout=30s

The container runs normally, and logs flow into Splunk. All is good. This is in a testing environment, so it is not always in use, but the container is left running. Sometimes, when I start using the service the container provides, nothing is logged to Splunk immediately. If I wait 10-15 minutes, the logs eventually show up with the correct time stamps, etc.

I've noticed on the docker host that netstat -tpn | grep -e 8088 gives me output similar to this:

Active Internet connections (w/o servers)
Proto Recv-Q    Send-Q  Local Address           Foreign Address         State       PID/Program name    
tcp        0    947     xxx.xxx.x.xxx:49010     xxx.xxx.x.xx:8088       ESTABLISHED 12682/dockerd-curre   

On the Splunk host, the same command shows zeroes in the Recv-Q and Send-Q columns. The Splunk Distributed Management Console doesn't show any events received during the lag time. On the Docker host, there is a message in /var/log/messages from Docker that happens at the same time the logs are finally sent to Splunk:

Jul  6 13:14:19 idpdock0-0 dockerd-current: time="2018-07-06T13:14:19.428396282-04:00" level=error msg="Post https://myloghost.example.net:8088/services/collector/event/1.0: read tcp xxx.xxx.x.xxx:49010->xxx.xxx.x.xx:8088: read: connection timed out"

It seems to me like the logging driver get stuck trying to do some I/O operation, and when it finally times out, it tries again and the logs are sent. However, I have no idea what the condition that causes it to get stuck is, nor do I know of any way to adjust the time out period.

Splunk driver not getting response from splunk makes docker unresponsive

What happened:
We have a cluster of nodes running docker and managed by marathon/mesos. The containers running there are using the docker splunk logging plugin to send logs to the splunk event collector.

The load balancer in front of the splunk event collector was having trouble connecting so from the point of view of the logging plugin, the https connections were being opened, but not replied, so all connections were "hanging". This made all the environment unstable as containers were not passing healthchecks and not able to serve the application running on them.

An example of the logs seen in docker are:

Aug 12 12:50:34 dockerhost.local dockerd[10030]: time="2019-08-12T12:50:34.493818095-07:00" level=warning msg="Error while sending logs" error="Post https://splunk-ec:443/services/collector/event/1.0: context deadline exceeded" module=logger/splunk

The manual connection to the splunk-ec shows that it hangs after sending the headers and will get no response at all:

$ curl -vk https://splunk-ec:443/services/collector/event/1.0
* About to connect() to splunk-ec port 443 (#0)
*   Trying 10.0.0.1...
* Connected to splunk-ec (10.0.0.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
*       subject: CN=<REDACTED>
*       start date: Jan 22 16:45:30 2010 GMT
*       expire date: Jan 23 01:36:42 2020 GMT
*       common name: <REDACTED>
*       issuer: CN=Entrust Certification Authority - L1C,OU="(c) 2009 Entrust, Inc.",OU=www.entrust.net/rpa is incorporated by reference,O="Entrust, Inc.",C=US
> GET /services/collector/event/1.0 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: splunk-ec
> Accept: */*
>
^C

What you expected to happen:
If the splunk logging driver can't send logs for any reason, it should fill the buffer and drop logs when it's full, not make the docker agent unstable and make the application inaccessible

How to reproduce it (as minimally and precisely as possible):
Have a small app (maybe just nc -l -p443) listen in https but not make any reply either successful or unsuccessful, then point the splunk logging plugin there.

Anything else we need to know?:
The docker agent runs with these environment variables:

SPLUNK_LOGGING_DRIVER_BUFFER_MAX=400
SPLUNK_LOGGING_DRIVER_CHANNEL_SIZE=200
SPLUNK_LOGGING_DRIVER_POST_MESSAGES_BATCH_SIZE=20

the containers are running with these options:

--log-driver=splunk
--log-opt=splunk-token=<token>
--log-opt=splunk-url=https://splunk-ec:443
--log-opt=splunk-index=app
--log-opt=splunk-sourcetype=<sourcetype>
--log-opt=splunk-insecureskipverify=true
--log-opt=env=APP_NAME,HOST,ACTIVE_VERSION
--log-opt=splunk-format=raw
--log-opt=splunk-verify-connection=false

Environment:

  • Docker version (use docker version):
Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 03:47:25 2019
  OS/Arch:          linux/amd64
  Experimental:     false
  • OS (e.g: cat /etc/os-release):
CentOS Linux release 7.6.1810 (Core)
Linux hostname 3.10.0-957.12.1.el7.x86_64 #1 SMP Mon Apr 29 14:59:59 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Splunk version:
7.1.6

(this shouldn't affect as the problem was with splunk not getting an https response from the load balancer)

  • Others:

logging plugin consumes large amount of memory

What happened:
logging plugin consumes large amount of memory

Mem: 5498852K used, 2148392K free, 166208K shrd, 50612K buff, 938332K cached
CPU: 63% usr 10% sys 0% nic 26% idle 0% io 0% irq 0% sirq
Load average: 1.48 1.62 1.66 2/507 8054
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
3476 3459 root S 2575m 34% 1 53% /bin/splunk-logging-plugin

What you expected to happen:
logging plugin should consume acceptable amount of memory

How to reproduce it (as minimally and precisely as possible):
It happened after enabling the plugin for some time. We have several containers using this plugin to forward the logs to splunk instance.

Environment:

  • Docker version (use docker version): 17.06.0-ce
  • OS (e.g: cat /etc/os-release): Alpine Linux v3.5

Use HEC health endpoint to verify HEC connection

Currently we send an Option request to the given --splunk-url. So any url that accepts Option returns true for verifySplunkConnection()

Instead, we should use the dedicated HEC health endpoint: /services/collector/health for verification.

Config to remove the default "source" field

Hello and please forgive me if this is not the appropriate place to ask this question. I would like to know if it's possible to add configuration to the daemon.json file to completely remove the value of the 'source' field from the event because I would like to let Splunk automatically set the source at index time. Reading the documentation, I see there is an optional flag 'splunk-source' where I can change the value. But I don't want to change the value, I would like to completely remove it from the POST body. Is there way to do this in the configuration file? It seems all the doc examples show a value for that field.

Here's what our current Splunk config looks like. I'm not the app developer here, so this information was provided to me when I requested the configuration. All the variables are defined and the event is being indexed as I would expect. I just want to remove the source field from the POST body.
"logConfiguration": {
"logDriver": "splunk",
"options": {
"splunk-url": "${splunk_url}",
"splunk-index": "${splunk_index}",
"splunk-insecureskipverify": "true",
"splunk-sourcetype": "${splunk_sourcetype}",
"splunk-format": "json"
},

Thanks in advance.

Allow use of logging tags in source and sourcetype fields

What would you like to be added:
When running docker compose I would like to be able to set the source and sourcetype using the logging tags: https://docs.docker.com/v17.09/engine/admin/logging/log_tags/. This way I can set up a default logging config and have it applied to all my containers:

x-logging:
  &default-logging
  driver: "splunk"
  options:
    splunk-token: "12345678-ABCD-EFGH-IJKL-123456789012"
    splunk-url: "https://localhost:8088"
    splunk-sourcetype: "docker:{{.Name}}"
    splunk-source: "{{.Name}}-{{.ID}}"
    tag: "{{.Name}}-{{.ID}}"

Why is this needed:
The alternative is to manually set the source and sourcetype on each container in the compose file which is not very user friendly.

Logs are being written to disk

We have just found that all the logs sent to splunk are also logged to disk.

[root@hostname docker]# sudo docker plugin ls
ID                  NAME                           DESCRIPTION             ENABLED
a6269368de77        splunk-logging-plugin:latest   Splunk Logging Plugin   true

[root@hostname docker]# pwd
/var/lib/docker/plugins/bd27a6269368de776e3773cf02ceb74cca39e4de3cca7dadc30f832f83ac37ab/rootfs/var/log/docker

[root@hostname docker]# du -sh *
93M     0da81cda054cd9f9b76278b71c743149aafb60fa89ffb4b1daeb137f6052f630
38G     0eb8c9c87335831629655535bce1bf9eb3556ef772744a66ab75f3d43219a845
80M     0f86ee162622f6be17d04fbeb3880d6920b91acaee609f418137a946fcc3c1c9
128M    17403b83b0486836c5115f0c7f549997ea33ff28a418c47b102564b5888b845c
7.7M    39b31d58d1a0da89e8af52bb7efdafb6cfc590fee72186f83cea60f12bcfcb6d
38G     3e3986be739abb8d3903e6e8c5a92e82f6ff32e3bbaf073f546875ce219ce1ef
35M     73096342216ba7ae59448e4fceced401c9c8c3cf9f91d9b94dfb2fa2a8226aca
84M     a41a0c56de2ad9ae83aa985901290b2b3d606c5a01196697c209cd63d9399af1
119M    ae61bf126a10587879b21d91c4a72cdb04a2d6762ebebf90cb617bc176f57de4
12G     dd0661c47c0ece1a02f65b045ed52f6ed475e2957d4cc8ff5d4a165fab28161a
22M     e3269b321d5f6aac6b96ec631ae348c6cca84085658d5c83ddba5fd41777f1b9
25M     e36384dd11bef199e1122657e39ac2033d2dee8b9f6d0c12805489f283d73e17
142M    ecb83fa0102dba193e4acc6f1a7f5441c2560a0b73fe0f67c1529f8565fbe384
276M    f3560b57b96ef1044d441b82f7651de398fa1509867dd661d06ceb295b5e2c6e

[root@hostname docker]# file 0da81cda054cd9f9b76278b71c743149aafb60fa89ffb4b1daeb137f6052f630
0da81cda054cd9f9b76278b71c743149aafb60fa89ffb4b1daeb137f6052f630: ASCII text, with very long lines

The logs persist even through docker service restart or even when the containers are stopped and deleted from the server.

There isn't any indication on the readme for this plugin that the logs would be written to disk as this should be a streaming plugin to send the logs elsewhere.

Splunk Logging Driver splunk-capath: no such file or directory

splunk-capath is read before volume mount: as a result, it can't find the certificate, bc it's not been mounted yet

driver: "splunk-logging-plugin"
options:
splunk-url: "https://xxxxxx/"
splunk-token: "xxxxx"
splunk-capath: "/etc/nginx/XXX.crt"
tag: "{{pll}}-xxx-nginx-{{inu}}"
{% endif %}
volumes:

  • /app/{{platform}}/{{pll}}-xxx-nginx-{{inu}}/config:/etc/nginx

Environment:

  • Docker version (use docker version): 20.10.3
  • OS (e.g: cat /etc/os-release): rhel 8
  • Splunk version: 8.2.9

Allow for json message format to store in root level of log object

It would be nice to have format options, so we can store json logs in the root level.

Instead of:

{
	"line": {
		"message": "app:start",
		"date": "2018-05-22 14:01:35",
		"file": "/opt/api/app/index.js",
		"line": 13,
		"severity": "info"
	},
	"source": "stdout",
	"tag": "docker_container_info"
}

We could instead have:

{
	"message": "app:start",
	"date": "2018-05-22 14:01:35",
	"file": "/opt/api/app/index.js",
	"line": 13,
	"severity": "info",
	"source": "stdout",
	"tag": "docker_container_info"
}

Error response from daemon: logger: no log driver named 'splunk-logging-plugin:latest' is registered

I'm trying to use the docker logging plugin on my environment, but cannot get any container to run when using it.

I've started a Splunk server instance on a docker container using https://hub.docker.com/r/splunk/splunk/, exposing 8000 and 8088 ports, and enabled HEC and generated a token.

Then I run following commands successfully:

docker plugin install splunk/docker-logging-plugin:latest --alias splunk-logging-plugin
docker plugin enable splunk-logging-plugin

However, when I try to run any custom image, I'm getting an error:

docker run \
    --name=my-server \
    --network=$DOCKER_NETWORK \
    -v /logs:/logs \
    -p0.0.0.0:8017:8017 \
    --restart unless-stopped \
    --log-driver=splunk-logging-plugin:latest \
    --log-opt splunk-token=888dee70-6168-4d91-971f-17e977982204 \
    --log-opt splunk-url=http://localhost:8088 \
    --log-opt tag="{{.Name}}/{{.FullID}}" \
    --log-opt labels=my-server \
    --log-opt env=TEST \
    --env "TEST=true" \
    --label source=my-server \
    -d my-server

docker: Error response from daemon: logger: no log driver named 'splunk-logging-plugin:latest' is registered.

I tried also with the alias, and with a global daemon.json, all resulted with the same errors.

Docker version 17.03.2-ce, build f5ec1e2
Ubuntu 16.04.4 LTS

Fix functional tests failure

Functional tests are failing randomly in CI setup when running all the tests together.
But tests are not failing when running them in groups (Splitting the tests in 2 or 3 sets and run them in parallel). Need to investigate and troubleshoot to figure out the root cause and fix the failure.
