
docker's Introduction

Merged repository with pfelk/pfelk

  • 13 August 2023

Elastic Integration

docker-pfelk

Deploy pfelk with docker-compose (video tutorial available on YouTube)


(0) Required Prerequisites

  • Docker
  • Docker-Compose
  • Adequate memory (e.g., 8 GB+; see the quick check below)
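
Before installing anything, a quick way to confirm the host has adequate memory (standard Linux command):

free -h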

(1) Docker Install

sudo apt-get install docker.io
sudo apt-get install docker-compose
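
To confirm the Docker daemon is up before continuing, a quick check (standard Docker commands):

sudo systemctl status docker
sudo docker run hello-world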

(2) Download pfELK Docker

sudo wget https://github.com/pfelk/docker/archive/refs/heads/main.zip

(2a) Unzip main.zip

sudo apt-get install unzip
sudo unzip main.zip
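
GitHub branch archives extract to a folder named after the repository and branch, so this one should land in docker-main (an assumption worth verifying with ls); change into it before the remaining steps:

cd docker-main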

(3) Memory

(3a) Set vm.max_map_count to no less than 262144 (must be run each time the host is booted)

sudo sysctl -w vm.max_map_count=262144
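
To verify the value took effect:

sysctl vm.max_map_count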

(3b) Set vm.max_map_count to no less than 262144 (one-time, persistent configuration)

echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
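
To apply the persisted value immediately, without waiting for a reboot:

sudo sysctl -p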

(4) Configure Variables (Credentials)

(4a) Edit .env File

sudo nano .env

(4b) Amend .env File as Desired

ELK_VERSION=8.9.0
ELASTIC_PASSWORD=changeme
KIBANA_PASSWORD=changeme
LOGSTASH_PASSWORD=changeme
LICENSE=basic

(4c) Update LOGSTASH_PASSWORD and ELASTIC_PASSWORD in the configuration files (replace LOGSTASH-PASSWORD and ELASTIC-PASSWORD below with the values set in .env)

sed -i 's/logstash_system_password/LOGSTASH-PASSWORD/' etc/logstash/config/logstash.yml
sed -i 's/elastic_password/ELASTIC-PASSWORD/' etc/pfelk/conf.d/50-outputs.pfelk

or use the Script

./set-logstash-password.sh
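
Alternatively, a minimal sketch that substitutes the values straight from .env, assuming the commands are run from the repository root where .env lives and the passwords contain no sed-special characters:

. ./.env
sudo sed -i "s/logstash_system_password/${LOGSTASH_PASSWORD}/" etc/logstash/config/logstash.yml
sudo sed -i "s/elastic_password/${ELASTIC_PASSWORD}/" etc/pfelk/conf.d/50-outputs.pfelk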

(5) Start Docker

sudo docker-compose up
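
To run the stack in the background and follow the logs instead (standard docker-compose flags):

sudo docker-compose up -d
sudo docker-compose logs -f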

Once fully running, navigate to the host IP on port 5601 (e.g., 192.168.0.100:5601)
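
A quick way to confirm Kibana is answering before opening a browser (substitute your host's IP):

curl -I http://192.168.0.100:5601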

(6) Install Templates

(7) Finish Configuring

  • Finish Configuring here

(8) Finished

docker's People

Contributors

braunbearded, fktkrt, mmohoney, nibblesandbits007, pclever1, travisboss


docker's Issues

Kibana Visualization error "[esaggs] > Bad Request"

Hello,
thank you for updating the pfelk repo!
I just tried it out, following the steps here and here, but ended up with an empty dashboard with an error mark on each visualization saying:
"[esaggs] > Bad Request"

Only the Firewall - Discover visualization is working.

I can confirm that data is coming in and under "discover" in Kibana I can see a long list of fields and parsed events.


Logstash 401 Unauthorized errors

New installation.
I see 401 Unauthorized errors in the logs. Also, Kibana asks for credentials.

To Reproduce
Steps to reproduce the behavior:

  1. Install
  2. Check logs
logstash    | [WARN ] 2022-04-26 11:28:56.121 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://pfelk_logstash:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}


Operating System (please complete the following information):

Linux 5.15.0-27-generic x86_64
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04 (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

  • Version of Docker (docker version 20.10.12, build 20.10.12-0ubuntu4)
  • Version of Docker-Compose (docker-compose version 1.29.2, build unknown)

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (ELK_VERSION=8.1.1)

Service logs

  • `docker-compose logs pfelk01`

{"@timestamp":"2022-04-26T11:39:31.318Z", "log.level": "INFO", "message":"Authentication of [elastic] was terminated by realm [reserved] - failed to authenticate user [elastic]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#3]","log.logger":"org.elasticsearch.xpack.security.authc.RealmsAuthenticator","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"sghCi4faTMq-0HWfuHE6Wg","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}

 - `docker-compose logs pfelk02`

{"@timestamp":"2022-04-26T11:27:10.187Z", "log.level": "INFO", "message":"added {{es03}{he_uTsJLRVqVdt94ZJZedQ}{XIGGdlwDTu6hmdcuNqYScQ}{172.18.0.5}{172.18.0.5:9300}{cdfhilmrstw}}, term: 2, version: 30, reason: ApplyCommitRequest{term=2, version=30, sourceNode={es01}{sghCi4faTMq-0HWfuHE6Wg}{29Cab6DxSUeO4rLOk3WPYw}{172.18.0.3}{172.18.0.3:9300}{cdfhilmrstw}{ml.machine_memory=16754929664, xpack.installed=true, ml.max_jvm_size=536870912}}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.service.ClusterApplierService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:14.199Z", "log.level": "INFO", "message":"license [9d9bf110-6edb-43d5-80ed-ef06331a806b] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.LicenseService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:14.200Z", "log.level": "INFO", "message":"license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.security.authc.Realms","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:17.829Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-ASN.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-8501814693674185395/geoip-databases/LW-kJ81hQICwVKDyxvp-lA/GeoLite2-ASN.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:18.677Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:34.902Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-City.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-8501814693674185395/geoip-databases/LW-kJ81hQICwVKDyxvp-lA/GeoLite2-City.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:36.504Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-Country.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-8501814693674185395/geoip-databases/LW-kJ81hQICwVKDyxvp-lA/GeoLite2-Country.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:36.716Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-Country.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][generic][T#4]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:36.974Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][generic][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"LW-kJ81hQICwVKDyxvp-lA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster"}

 - `docker-compose logs pfelk03`

{"@timestamp":"2022-04-26T11:27:10.767Z", "log.level": "INFO", "message":"publish_address {172.18.0.5:9200}, bound_addresses {0.0.0.0:9200}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.http.AbstractHttpServerTransport","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:10.768Z", "log.level": "INFO", "message":"started", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:14.211Z", "log.level": "INFO", "message":"license [9d9bf110-6edb-43d5-80ed-ef06331a806b] mode [basic] - valid", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.license.LicenseService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:14.212Z", "log.level": "INFO", "message":"license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.xpack.security.authc.Realms","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:17.828Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-ASN.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-18255990313510465210/geoip-databases/he_uTsJLRVqVdt94ZJZedQ/GeoLite2-ASN.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:18.665Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][generic][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:34.906Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-City.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-18255990313510465210/geoip-databases/he_uTsJLRVqVdt94ZJZedQ/GeoLite2-City.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:36.516Z", "log.level": "INFO", "message":"retrieve geoip database [GeoLite2-Country.mmdb] from [.geoip_databases] to [/tmp/elasticsearch-18255990313510465210/geoip-databases/he_uTsJLRVqVdt94ZJZedQ/GeoLite2-Country.mmdb.tmp.gz]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][clusterApplierService#updateTask][T#1]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:36.693Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-Country.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}
{"@timestamp":"2022-04-26T11:27:37.062Z", "log.level": "INFO", "message":"successfully loaded geoip database file [GeoLite2-City.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es03][generic][T#2]","log.logger":"org.elasticsearch.ingest.geoip.DatabaseNodeService","elasticsearch.cluster.uuid":"ynFu1V-JT7S131Cr1K0OSw","elasticsearch.node.id":"he_uTsJLRVqVdt94ZJZedQ","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}

 - `docker-compose logs logstash`

[WARN ] 2022-04-26 11:41:31.580 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] licensereader - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}
[WARN ] 2022-04-26 11:41:31.996 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://pfelk_logstash:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}
[WARN ] 2022-04-26 11:41:37.001 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://pfelk_logstash:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}
[WARN ] 2022-04-26 11:41:42.016 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://pfelk_logstash:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}
[WARN ] 2022-04-26 11:41:47.021 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://pfelk_logstash:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'https://es01:9200/'"}

 - `docker-compose logs kibana`

[2022-04-26T11:26:36.862+00:00][INFO ][plugins-service] Plugin "metricsEntities" is disabled.
[2022-04-26T11:26:36.978+00:00][INFO ][http.server.Preboot] http server running at http://0.0.0.0:5601
[2022-04-26T11:26:37.028+00:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [interactiveSetup]
[2022-04-26T11:26:37.061+00:00][WARN ][config.deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
[2022-04-26T11:26:37.269+00:00][INFO ][plugins-system.standard] Setting up [112] plugins: [translations,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,sharedUX,share,embeddable,uiActionsEnhanced,screenshotMode,screenshotting,banners,telemetry,newsfeed,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,lists,fileUpload,ingestPipelines,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,savedObjectsManagement,console,controls,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,rollup,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,maps,dashboardEnhanced,expressionTagcloud,expressionPie,visTypePie,expressionMetricVis,expressionHeatmap,expressionGauge,dataViewFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,dataViewManagement]
[2022-04-26T11:26:37.308+00:00][INFO ][plugins.taskManager] TaskManager is identified by the Kibana UUID: c40fdeb9-0c5c-4299-b81a-7a6e03fc4c30
[2022-04-26T11:26:37.556+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.557+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2022-04-26T11:26:37.579+00:00][WARN ][plugins.security.config] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.583+00:00][WARN ][plugins.security.config] Session cookies will be transmitted over insecure connections. This is not recommended.
[2022-04-26T11:26:37.619+00:00][WARN ][plugins.reporting.config] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.626+00:00][WARN ][plugins.reporting.config] Found 'server.host: "0.0.0.0"' in Kibana configuration. Reporting is not able to use this as the Kibana server hostname. To enable PNG/PDF Reporting to work, 'xpack.reporting.kibanaServer.hostname: localhost' is automatically set in the configuration. You can prevent this message by adding 'xpack.reporting.kibanaServer.hostname: localhost' in kibana.yml.
[2022-04-26T11:26:37.636+00:00][WARN ][plugins.encryptedSavedObjects] Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.654+00:00][WARN ][plugins.actions] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.671+00:00][WARN ][plugins.alerting] APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
[2022-04-26T11:26:37.714+00:00][INFO ][plugins.ruleRegistry] Installing common resources shared between all indices
[2022-04-26T11:26:38.549+00:00][INFO ][plugins.screenshotting.config] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
[2022-04-26T11:26:42.165+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.18.0.3:9200
[2022-04-26T11:26:44.118+00:00][INFO ][plugins.screenshotting.chromium] Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell
[2022-04-26T11:27:07.675+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibana_system] for REST request [/_nodes?filter_path=nodes..version%2Cnodes..http.publish_address%2Cnodes.*.ip]
[2022-04-26T11:27:23.854+00:00][INFO ][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
[2022-04-26T11:27:23.855+00:00][INFO ][savedobjects-service] Starting saved objects migrations
[2022-04-26T11:27:24.626+00:00][INFO ][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 94ms.
[2022-04-26T11:27:24.673+00:00][INFO ][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 139ms.
[2022-04-26T11:27:34.506+00:00][INFO ][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 9833ms.
[2022-04-26T11:27:34.668+00:00][INFO ][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 10042ms.
[2022-04-26T11:27:34.756+00:00][INFO ][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 250ms.
[2022-04-26T11:27:34.756+00:00][INFO ][savedobjects-service] [.kibana_task_manager] Migration completed after 10222ms
[2022-04-26T11:27:34.825+00:00][INFO ][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 157ms.
[2022-04-26T11:27:34.825+00:00][INFO ][savedobjects-service] [.kibana] Migration completed after 10293ms
[2022-04-26T11:27:35.306+00:00][INFO ][status] Kibana is now unavailable
[2022-04-26T11:27:35.306+00:00][INFO ][plugins-system.preboot] Stopping all plugins.
[2022-04-26T11:27:35.307+00:00][INFO ][plugins-system.standard] Starting [112] plugins: [translations,licensing,globalSearch,globalSearchProviders,features,mapsEms,licenseApiGuard,usageCollection,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,sharedUX,share,embeddable,uiActionsEnhanced,screenshotMode,screenshotting,banners,telemetry,newsfeed,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,lists,fileUpload,ingestPipelines,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,savedObjectsManagement,console,controls,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,visualizations,canvas,visTypeXy,visTypeVislib,visTypeVega,visTypeTimeseries,rollup,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeHeatmap,visTypeMarkdown,dashboard,maps,dashboardEnhanced,expressionTagcloud,expressionPie,visTypePie,expressionMetricVis,expressionHeatmap,expressionGauge,dataViewFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,dataViewManagement]
[2022-04-26T11:27:36.864+00:00][INFO ][plugins.fleet] Beginning fleet setup
[2022-04-26T11:27:36.882+00:00][INFO ][plugins.monitoring.monitoring] config sourced from: production cluster
[2022-04-26T11:27:37.827+00:00][INFO ][http.server.Kibana] http server running at http://0.0.0.0:5601
[2022-04-26T11:27:38.412+00:00][INFO ][plugins.monitoring.monitoring.kibana-monitoring] Starting monitoring stats collection
[2022-04-26T11:27:39.366+00:00][INFO ][status] Kibana is now degraded (was unavailable)
[2022-04-26T11:27:40.399+00:00][INFO ][plugins.ruleRegistry] Installed common resources shared between all indices
[2022-04-26T11:27:40.399+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
[2022-04-26T11:27:40.400+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-security.alerts
[2022-04-26T11:27:40.400+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .preview.alerts-security.alerts
[2022-04-26T11:27:40.400+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
[2022-04-26T11:27:40.400+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
[2022-04-26T11:27:40.400+00:00][INFO ][plugins.ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
[2022-04-26T11:27:40.838+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
[2022-04-26T11:27:40.980+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
[2022-04-26T11:27:41.107+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
[2022-04-26T11:27:41.436+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-security.alerts
[2022-04-26T11:27:41.714+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
[2022-04-26T11:27:43.429+00:00][INFO ][plugins.securitySolution.endpoint:metadata-check-transforms-task:0.0.1] no endpoint installation found
[2022-04-26T11:27:43.484+00:00][INFO ][plugins.ruleRegistry] Installed resources for index .preview.alerts-security.alerts
[2022-04-26T11:27:45.059+00:00][INFO ][plugins.fleet] Fleet setup completed
[2022-04-26T11:27:45.103+00:00][INFO ][plugins.securitySolution] Dependent plugin setup complete - Starting ManifestTask
[2022-04-26T11:27:47.738+00:00][INFO ][status] Kibana is now available (was degraded)
[2022-04-26T11:27:47.757+00:00][INFO ][plugins.reporting.store] Creating ILM policy for managing reporting indices: kibana-reporting



No indexes in Kibana

I installed pfelk in Docker from the provided zip and ran the sh script for creating templates and dashboards.
All seems OK: port 5140 of Logstash is receiving packets (checked with tcpdump, and I saw logs from the firewall IP), but the dashboard shows me an error and I cannot see any index in the Kibana index management.

These are the logs of logstash

[INFO ] 2021-05-13 12:31:41.197 [[pfelk]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://es01:9200"]}
[ERROR] 2021-05-13 12:31:41.270 [[pfelk]-pipeline-manager] javapipeline - Pipeline error {:pipeline_id=>"pfelk", :exception=>#<Grok::PatternError: pattern %{HAPROXY} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in `block in compile'", "org/jruby/RubyKernel.java:1442:in `loop'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in `compile'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:288:in `block in register'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:282:in `block in register'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:277:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:586:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:240:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137:in `block in start'"], "pipeline.sources"=>["/etc/pfelk/conf.d/01-inputs.conf", "/etc/pfelk/conf.d/02-types.conf", "/etc/pfelk/conf.d/03-filter.conf", "/etc/pfelk/conf.d/05-apps.conf", "/etc/pfelk/conf.d/20-interfaces.conf", "/etc/pfelk/conf.d/30-geoip.conf", "/etc/pfelk/conf.d/37-enhanced_user_agent.conf", "/etc/pfelk/conf.d/38-enhanced_url.conf", "/etc/pfelk/conf.d/45-cleanup.conf", "/etc/pfelk/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x78d20b07@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
[INFO ] 2021-05-13 12:31:41.271 [[pfelk]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>"pfelk"}
[ERROR] 2021-05-13 12:31:41.277 [Converge PipelineAction::Create<pfelk>] agent - Failed to execute action {:id=>:pfelk, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<pfelk>, action_result: false", :backtrace=>nil}
[INFO ] 2021-05-13 12:31:41.325 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2021-05-13 12:31:42.323 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[INFO ] 2021-05-13 12:31:43.319 [LogStash::Runner] runner - Logstash shut down.
Using bundled JDK: /usr/share/logstash/jdk

Are you sure the paths are all correct? Because in the docker-compose file I see:

      - ./etc/pfelk/conf.d/patterns/:/etc/pfelk/patterns:ro
      - ./etc/pfelk/conf.d/databases/:/etc/pfelk/databases:ro

but these directories are empty. The files are in /etc/pfelk/patterns and /etc/pfelk/databases on the host.
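
One way to check what the container actually sees, assuming the compose service is named logstash as in the `docker-compose logs logstash` commands above:

sudo docker-compose config
sudo docker-compose exec logstash ls /etc/pfelk/patterns /etc/pfelk/databases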

The volume for the host Logstash configuration folder does not match the container folder.

Describe the bug
I followed the steps for setting up pfelk on Docker, but I don't see logs coming in. It looks like the correct container folder for .conf files is /usr/share/logstash/pipeline, so I updated the volume in the docker-compose.yml file and was able to get logs, but Logstash shuts down after a few seconds.

Original volume container folder: /etc/pfelk/conf.d:ro
What I changed it to: /usr/share/logstash/pipeline:ro
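
To compare the two candidate locations inside the running container before editing the compose file, a hedged check (again assuming the service is named logstash):

sudo docker-compose exec logstash ls /etc/pfelk/conf.d /usr/share/logstash/pipeline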

To Reproduce
Steps to reproduce the behavior:

  1. Install fresh ELK on Docker using the latest version

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env) 7.11

Service logs

  • docker-compose logs pfelk01

  • docker-compose logs pfelk02

  • docker-compose logs pfelk03

  • docker-compose logs logstash
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:15:02.768 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [INFO ] 2021-03-24 12:15:02.844 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
    logstash | [INFO ] 2021-03-24 12:15:02.878 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
    logstash | [INFO ] 2021-03-24 12:15:04.238 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"e49647e6-91e5-4042-bace-5479b6fe76c0", :path=>"/usr/share/logstash/data/uuid"}
    logstash | [WARN ] 2021-03-24 12:15:05.083 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:15:05.860 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:15:07.623 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:15:08.269 [LogStash::Runner] licensereader - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://es01:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash | [WARN ] 2021-03-24 12:15:08.436 [LogStash::Runner] licensereader - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://es01:9200/, :error_message=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
    logstash | [ERROR] 2021-03-24 12:15:08.465 [LogStash::Runner] licensereader - Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://es01:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
    logstash | [ERROR] 2021-03-24 12:15:08.512 [LogStash::Runner] internalpipelinesource - Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
    logstash | [INFO ] 2021-03-24 12:15:08.641 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [ERROR] 2021-03-24 12:15:08.643 [Agent thread] sourceloader - No configuration found in the configured sources.
    logstash | [INFO ] 2021-03-24 12:15:09.042 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:15:13.812 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:16:03.525 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:16:04.090 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:16:04.260 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:04.738 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:04.975 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:05.278 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:05.279 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:05.358 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:16:05.359 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:16:05.403 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:16:06.457 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 53 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:16:06.734 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:06.769 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:06.775 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:06.785 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:06.785 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:06.857 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:16:06.858 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:16:06.910 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x60e35d64@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:16:08.002 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.08}
    logstash | [INFO ] 2021-03-24 12:16:08.010 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:08.046 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:16:08.199 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:16:09.862 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:10.135 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:16:37.265 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:16:38.002 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:16:38.162 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:38.477 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:38.642 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:38.935 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:38.937 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:39.035 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:16:39.039 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:16:39.072 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:16:40.005 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 60 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:16:40.177 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:16:40.201 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:16:40.209 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:16:40.223 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:16:40.226 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:16:40.275 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:16:40.278 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:16:40.359 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x4f08277@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:16:41.464 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.1}
    logstash | [INFO ] 2021-03-24 12:16:41.485 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:41.564 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:16:41.689 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:16:43.273 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:16:43.643 [LogStash::Runner] runner - Logstash shut down.
    logstash | Using bundled JDK: /usr/share/logstash/jdk
    logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
    logstash | [INFO ] 2021-03-24 12:17:05.330 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +jit [linux-x86_64]"}
    logstash | [WARN ] 2021-03-24 12:17:06.011 [LogStash::Runner] pipelineregisterhook - Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
    logstash | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
    logstash | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
    logstash | [WARN ] 2021-03-24 12:17:06.178 [LogStash::Runner] elasticsearch - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:17:06.485 [LogStash::Runner] licensereader - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:17:06.646 [LogStash::Runner] licensereader - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:17:06.918 [LogStash::Runner] licensereader - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:17:06.919 [LogStash::Runner] licensereader - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:17:07.003 [LogStash::Runner] internalpipelinesource - Monitoring License OK
    logstash | [INFO ] 2021-03-24 12:17:07.005 [LogStash::Runner] internalpipelinesource - Validated license for monitoring. Enabling monitoring pipeline.
    logstash | [INFO ] 2021-03-24 12:17:07.041 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}
    logstash | [INFO ] 2021-03-24 12:17:07.940 [Converge PipelineAction::Create<.monitoring-logstash>] Reflections - Reflections took 77 ms to scan 1 urls, producing 23 keys and 47 values
    logstash | [WARN ] 2021-03-24 12:17:08.095 [Converge PipelineAction::Create<.monitoring-logstash>] elasticsearchmonitoring - Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
    logstash | [INFO ] 2021-03-24 12:17:08.131 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://es01:9200/]}}
    logstash | [WARN ] 2021-03-24 12:17:08.141 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Restored connection to ES instance {:url=>"http://es01:9200/"}
    logstash | [INFO ] 2021-03-24 12:17:08.150 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - ES Output version determined {:es_version=>7}
    logstash | [WARN ] 2021-03-24 12:17:08.150 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    logstash | [INFO ] 2021-03-24 12:17:08.213 [[.monitoring-logstash]-pipeline-manager] elasticsearchmonitoring - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://es01:9200"]}
    logstash | [WARN ] 2021-03-24 12:17:08.215 [[.monitoring-logstash]-pipeline-manager] javapipeline - 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
    logstash | [INFO ] 2021-03-24 12:17:08.340 [[.monitoring-logstash]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x5395f7d0@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125 run>"}
    logstash | [INFO ] 2021-03-24 12:17:09.470 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>1.13}
    logstash | [INFO ] 2021-03-24 12:17:09.479 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:17:09.522 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
    logstash | [INFO ] 2021-03-24 12:17:09.605 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
    logstash | [INFO ] 2021-03-24 12:17:11.211 [[.monitoring-logstash]-pipeline-manager] javapipeline - Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
    logstash | [INFO ] 2021-03-24 12:17:11.600 [LogStash::Runner] runner - Logstash shut down.

  • docker-compose logs kibana


Docker Configuration Bugs

Describe the bug
When following the documentation, the logstash container is unable to reach the es01 instance because the output host is set to http://localhost:9200 rather than http://es01:9200.

Additionally, the MaxMind documentation states to update line 18 to be DatabaseDirectory /usr/share/GeoIP/, but the corresponding path in the docker-compose.yml file for the logstash container is /usr/share/GeoIP/:/usr/share/logstash/GeoIP/ which results in being unable to load the files.
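
A quick way to confirm where the databases actually land inside the container with that volume line in place (assuming the compose service is named logstash):

sudo docker-compose exec logstash ls /usr/share/logstash/GeoIP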

To Reproduce
Steps to reproduce the behavior:

  • follow the docker installation guide step-by-step on a new Ubuntu 20.04 installation.

Screenshots
If applicable, add screenshots to help explain your problem.

Operating System (please complete the following information):

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Linux 5.4.0-59-generic x86_64
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
  • Version of Docker (docker -v): Docker version 20.10.2, build 2291f61
  • Version of Docker-Compose (docker-compose -v): docker-compose version 1.25.0, build unknown

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env)

Service logs

  • docker-compose logs pfelk01
  • docker-compose logs pfelk02
  • docker-compose logs pfelk03
  • docker-compose logs logstash
  • docker-compose logs kibana

Additional context
I'm going to add a PR shortly, so I'm skipping the service logs since I've already fixed the bug locally and it seems fairly obvious. If you'd like me to go back and re-do this, I'm happy to.

Thanks for product, question on config

I read in another issue about updating the config for naming and interfaces. I tried renaming, since it only recognizes my instance as OPNsense when it is in fact a pfSense instance, and I also tried naming each interface and VLAN, but the settings do not seem to stick across docker-compose restarts. Is there something else I should be doing to get the names to match?

Also, it looks like all configuration in the container is pre-done? All I did was start the instance and everything shows up just fine.

And I noticed I can only send BSD syslog; when I try the syslog format, I get nothing.

Thanks!
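On persistence: only files bind-mounted from the host survive a container being recreated, so the naming changes belong in the host copies of the config (e.g. etc/pfelk/conf.d/20-interfaces.conf in this repo's layout), not in files edited inside the container. A minimal sketch, run from the unzipped repo directory:

sudo nano etc/pfelk/conf.d/20-interfaces.conf   # interface/VLAN naming
sudo docker-compose restart logstash            # reload the pipeline

Edits made via docker exec are lost as soon as docker-compose up recreates the container.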

map not working

Hi,

Everything is working perfectly except the map, which gives me a blank page.

Screenshots
https://imgur.com/4ROdgmP
https://imgur.com/sD8jnAr

Operating System (please complete the following information):

  • OS
Linux 4.15.0-96-generic x86_64
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

  • Version of Docker (docker -v)

Docker version 19.03.6, build 369ce74a3c

  • Version of Docker-Compose (docker-compose -v):

docker-compose version 1.17.1, build unknown

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env)
    ELK_VERSION=7.6.1

ERROR: for es01 Container "xxxxxxxxxxxx" is unhealthy

Issue

docker-compose erroring out when instantiating container - error below
tested on Ubuntu 20.04.4 LTS and 18.04.6 LTS VM - clean installs
hypervisor XCPng 8.2
Docker version 20.10.7, build 20.10.7-0ubuntu5~18.04.3
docker-compose version 1.17.1, build unknown

It looks like an issue bringing up interfaces.

What am I missing here? Any assistance warmly received.

Cheers...

error

Starting dockermain_setup_1 ... done

ERROR: for es01 Container "a38b229c83c2" is unhealthy.
ERROR: Encountered errors while bringing up the project.
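A quick way to see why the healthcheck keeps failing (the usual suspects being too little RAM or a missing vm.max_map_count setting) is to ask Docker for the recorded health probes and the container's own logs, using the container ID from the error above:

sudo docker inspect --format '{{json .State.Health}}' a38b229c83c2
sudo docker logs a38b229c83c2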

interfaces

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:5f:29:16:ae brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: br-7b6c673229b7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:2d:80:34:45 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-7b6c673229b7
valid_lft forever preferred_lft forever
inet6 fe80::42:2dff:fe80:3445/64 scope link
valid_lft forever preferred_lft forever

syslog

Mar 26 09:37:30 mrc-node-a5 kernel: [ 2150.140883] br-7b6c673229b7: port 1(veth1fe0cf8) entered blocking state
Mar 26 09:37:30 mrc-node-a5 kernel: [ 2150.140884] br-7b6c673229b7: port 1(veth1fe0cf8) entered disabled state
Mar 26 09:37:30 mrc-node-a5 kernel: [ 2150.141031] device veth1fe0cf8 entered promiscuous mode
Mar 26 09:37:30 mrc-node-a5 systemd-udevd[17589]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 26 09:37:30 mrc-node-a5 systemd-udevd[17589]: Could not generate persistent MAC address for veth23b82c5: No such file or directory
Mar 26 09:37:30 mrc-node-a5 systemd-networkd[2630]: veth1fe0cf8: Link UP
Mar 26 09:37:30 mrc-node-a5 systemd-timesyncd[2699]: Network configuration changed, trying to establish connection.
Mar 26 09:37:30 mrc-node-a5 kernel: [ 2150.142612] IPv6: ADDRCONF(NETDEV_UP): veth1fe0cf8: link is not ready
Mar 26 09:37:30 mrc-node-a5 networkd-dispatcher[1227]: WARNING:Unknown index 8 seen, reloading interface list
Mar 26 09:37:30 mrc-node-a5 systemd-udevd[17591]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 26 09:37:30 mrc-node-a5 systemd-udevd[17591]: Could not generate persistent MAC address for veth1fe0cf8: No such file or directory
Mar 26 09:37:30 mrc-node-a5 dockerd[14398]: time="2022-03-26T09:37:30.501182725Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
Mar 26 09:37:30 mrc-node-a5 dockerd[14398]: time="2022-03-26T09:37:30.501619765Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
Mar 26 09:37:30 mrc-node-a5 containerd[13883]: time="2022-03-26T09:37:30.529884353Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/a38b229c83c27d75d3d70982f84665f4ef094667919c056c3793a9b90be712f2 pid=17612
Mar 26 09:37:30 mrc-node-a5 systemd-timesyncd[2699]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
Mar 26 09:37:31 mrc-node-a5 systemd-timesyncd[2699]: Network configuration changed, trying to establish connection.
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.662400] eth0: renamed from veth23b82c5
Mar 26 09:37:31 mrc-node-a5 systemd-networkd[2630]: veth1fe0cf8: Gained carrier
Mar 26 09:37:31 mrc-node-a5 systemd-networkd[2630]: br-7b6c673229b7: Gained carrier
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.680713] IPv6: ADDRCONF(NETDEV_CHANGE): veth1fe0cf8: link becomes ready
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.680761] br-7b6c673229b7: port 1(veth1fe0cf8) entered blocking state
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.680764] br-7b6c673229b7: port 1(veth1fe0cf8) entered forwarding state
Mar 26 09:37:31 mrc-node-a5 dockerd[14398]: time="2022-03-26T09:37:31.166589765Z" level=info msg="ignoring event" container=a38b229c83c27d75d3d70982f84665f4ef094667919c056c3793a9b90be712f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 09:37:31 mrc-node-a5 containerd[13883]: time="2022-03-26T09:37:31.166818308Z" level=info msg="shim disconnected" id=a38b229c83c27d75d3d70982f84665f4ef094667919c056c3793a9b90be712f2
Mar 26 09:37:31 mrc-node-a5 containerd[13883]: time="2022-03-26T09:37:31.166904591Z" level=warning msg="cleaning up after shim disconnected" id=a38b229c83c27d75d3d70982f84665f4ef094667919c056c3793a9b90be712f2 namespace=moby
Mar 26 09:37:31 mrc-node-a5 containerd[13883]: time="2022-03-26T09:37:31.166929693Z" level=info msg="cleaning up dead shim"
Mar 26 09:37:31 mrc-node-a5 containerd[13883]: time="2022-03-26T09:37:31.174186189Z" level=warning msg="cleanup warnings time="2022-03-26T09:37:31Z" level=info msg="starting signal loop" namespace=moby pid=17721\n"
Mar 26 09:37:31 mrc-node-a5 systemd-networkd[2630]: veth1fe0cf8: Lost carrier
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.830830] br-7b6c673229b7: port 1(veth1fe0cf8) entered disabled state
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.831548] veth23b82c5: renamed from eth0
Mar 26 09:37:31 mrc-node-a5 systemd-udevd[17749]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Mar 26 09:37:31 mrc-node-a5 systemd-networkd[2630]: veth1fe0cf8: Link DOWN
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.860827] br-7b6c673229b7: port 1(veth1fe0cf8) entered disabled state
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.864005] device veth1fe0cf8 left promiscuous mode
Mar 26 09:37:31 mrc-node-a5 kernel: [ 2150.864009] br-7b6c673229b7: port 1(veth1fe0cf8) entered disabled state
Mar 26 09:37:31 mrc-node-a5 networkd-dispatcher[1227]: WARNING:Unknown index 7 seen, reloading interface list
Mar 26 09:37:31 mrc-node-a5 networkd-dispatcher[1227]: ERROR:Unknown interface index 7 seen even after reload
Mar 26 09:37:31 mrc-node-a5 systemd-timesyncd[2699]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).
Mar 26 09:37:31 mrc-node-a5 systemd-networkd[2630]: br-7b6c673229b7: Lost carrier
Mar 26 09:37:31 mrc-node-a5 systemd-timesyncd[2699]: Network configuration changed, trying to establish connection.
Mar 26 09:37:31 mrc-node-a5 systemd-timesyncd[2699]: Synchronized to time server 91.189.89.199:123 (ntp.ubuntu.com).

Dashboard: "Sorry, there was an error" on import

Describe the bug
I can't import the dashboard.

Thank you for your help 👍

To Reproduce
Steps to reproduce the behavior:

  1. Install the latest docker pfelk
  2. Import the template
  3. Try to import the Firewall dashboard v6.0 - Firewall.ndjson


Operating System (please complete the following information):

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 3.10.0-1127.19.1.el7.x86_64 x86_64
    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Version of Docker (docker -v): Docker version 19.03.13, build 4484c46d9d
  • Version of Docker-Compose (docker-compose -v): docker-compose version 1.18.0, build 8dd22a9

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env) 7.9.2

Service logs

  • docker-compose logs pfelk01
  • docker-compose logs pfelk02
  • docker-compose logs pfelk03
  • docker-compose logs logstash
  • docker-compose logs kibana


Installation Error Docker

(3b) Set vm.max_map_count to no less than 262144 (one time configuration)

sudo echo "vm.max_map_count=262144" >> /etc/sysctl.conf

Note: this command fails even with sudo on Debian Buster.
su root was required to run the command.

counciller@buster:~$ sudo echo "vm.max_map_count=262144" >> /etc/sysctl.conf
-bash: /etc/sysctl.conf: Permission denied
counciller@buster:~$ su root
Password:
root@buster:/home/counciller# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
root@buster:/home/counciller# exit
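This is expected shell behavior rather than a pfelk bug: sudo elevates echo, but the >> redirection is performed by the calling (unprivileged) shell before sudo ever runs. A fix that keeps the whole write privileged, without switching to root:

echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p   # apply the setting now, without a reboot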

(4) Start Docker

sudo docker-compose up
Once fully running, navigate to the host ip (ex: 192.168.0.100:5601)

counciller@buster:~$ sudo docker-compose up
WARNING: The ELK_VERSION variable is not set. Defaulting to a blank string.
Creating network "counciller_elastic" with driver "bridge"
Creating volume "counciller_data01" with local driver
Creating volume "counciller_data02" with local driver
Creating volume "counciller_data03" with local driver
Building es01
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
ERROR: Service 'es01' failed to build: invalid reference format
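The invalid reference format error is a direct consequence of the warning above it: with ELK_VERSION empty, the FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION} line ends in a bare colon, which is not a valid image reference. docker-compose only reads .env from the directory it is invoked in, and the Creating network "counciller_elastic" line suggests it was run from the home directory instead of the unzipped repo. A minimal sketch of the fix (directory name assumed from the GitHub main.zip archive):

cd docker-main        # the directory containing docker-compose.yml and .env
cat .env              # confirm ELK_VERSION is set to a real version tag
sudo docker-compose up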

can't log / logstash error

Hi,
1st: Thanks for this great repo!!

Describe the bug
I cannot get Logstash to start (see the log files below).
Do I have to set any file permissions?

.:
total 2,1M
drwxr-xr-x 6 root root 4,0K Apr 26 10:20 .
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 kibana
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 logstash
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 elasticsearch
drwxr-xr-x 4 root root 4,0K Apr 25 23:26 etc
drwxr-xr-x 5 root root 4,0K Apr 25 23:25 ..
-rw-r--r-- 1 root root 62K Apr 22 22:46 pfelkdocker.zip
-rw-r--r-- 1 root root 2,0M Apr 19 20:38 geoipupdate_4.7.1_linux_amd64.deb
-rw-r--r-- 1 root root 2,7K Mar 26 07:57 docker-compose.yml
-rw-r--r-- 1 root root 18 Mar 26 07:57 .env

./kibana:
total 12K
drwxr-xr-x 6 root root 4,0K Apr 26 10:20 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
-rw-r--r-- 1 root root 70 Mar 26 07:57 Dockerfile

./logstash:
total 12K
drwxr-xr-x 6 root root 4,0K Apr 26 10:20 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
-rw-r--r-- 1 root root 74 Mar 26 07:57 Dockerfile

./elasticsearch:
total 12K
drwxr-xr-x 6 root root 4,0K Apr 26 10:20 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
-rw-r--r-- 1 root root 84 Mar 26 07:57 Dockerfile

./etc:
total 16K
drwxr-xr-x 6 root root 4,0K Apr 26 10:20 ..
drwxr-xr-x 5 root root 4,0K Apr 25 23:26 pfelk
drwxr-xr-x 4 root root 4,0K Apr 25 23:26 .
drwxr-xr-x 3 root root 4,0K Apr 25 23:26 logstash

./etc/pfelk:
total 20K
drwxr-xr-x 4 root root 4,0K Apr 26 09:43 conf.d
drwxr-xr-x 5 root root 4,0K Apr 25 23:26 .
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 patterns
drwxr-xr-x 4 root root 4,0K Apr 25 23:26 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 databases

./etc/pfelk/conf.d:
total 88K
drwxr-xr-x 4 root root 4,0K Apr 26 09:43 .
drwxr-xr-x 2 root root 4,0K Apr 26 09:01 databases
drwxr-xr-x 2 root root 4,0K Apr 26 09:01 patterns
drwxr-xr-x 5 root root 4,0K Apr 25 23:26 ..
-rw-r--r-- 1 root root 2,0K Mar 26 07:57 01-inputs.conf
-rw-r--r-- 1 root root 2,3K Mar 26 07:57 02-types.conf
-rw-r--r-- 1 root root 1,2K Mar 26 07:57 03-filter.conf
-rw-r--r-- 1 root root 7,2K Mar 26 07:57 05-apps.conf
-rw-r--r-- 1 root root 4,1K Mar 26 07:57 20-interfaces.conf
-rw-r--r-- 1 root root 4,4K Mar 26 07:57 30-geoip.conf
-rw-r--r-- 1 root root 1005 Mar 26 07:57 35-rules-desc.conf
-rw-r--r-- 1 root root 1,3K Mar 26 07:57 36-ports-desc.conf
-rw-r--r-- 1 root root 2,1K Mar 26 07:57 37-enhanced_user_agent.conf
-rw-r--r-- 1 root root 5,2K Mar 26 07:57 38-enhanced_url.conf
-rw-r--r-- 1 root root 926 Mar 26 07:57 45-cleanup.conf
-rw-r--r-- 1 root root 2,8K Mar 26 07:57 49-enhanced_private.conf
-rw-r--r-- 1 root root 6,6K Mar 26 07:57 50-outputs.conf

./etc/pfelk/conf.d/databases:
total 8,0K
drwxr-xr-x 4 root root 4,0K Apr 26 09:43 ..
drwxr-xr-x 2 root root 4,0K Apr 26 09:01 .

./etc/pfelk/conf.d/patterns:
total 8,0K
drwxr-xr-x 4 root root 4,0K Apr 26 09:43 ..
drwxr-xr-x 2 root root 4,0K Apr 26 09:01 .

./etc/pfelk/patterns:
total 20K
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
drwxr-xr-x 5 root root 4,0K Apr 25 23:26 ..
-rw-r--r-- 1 root root 9,4K Mar 26 07:57 pfelk.grok

./etc/pfelk/databases:
total 132K
drwxr-xr-x 5 root root 4,0K Apr 25 23:26 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
-rw-r--r-- 1 root root 15 Mar 26 07:57 private-hostnames.csv
-rw-r--r-- 1 root root 26 Mar 26 07:57 rule-names.csv
-rw-r--r-- 1 root root 116K Mar 26 07:57 service-names-port-numbers.csv

./etc/logstash:
total 12K
drwxr-xr-x 3 root root 4,0K Apr 25 23:26 .
drwxr-xr-x 4 root root 4,0K Apr 25 23:26 ..
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 config

./etc/logstash/config:
total 16K
drwxr-xr-x 2 root root 4,0K Apr 25 23:26 .
drwxr-xr-x 3 root root 4,0K Apr 25 23:26 ..
-rw-r--r-- 1 root root 720 Mar 26 07:57 logstash.yml
-rw-r--r-- 1 root root 893 Mar 26 07:57 pipelines.yml

To Reproduce
docker-compose up

Operating System (please complete the following information):

  • OS: Linux 4.19.0-16-amd64 x86_64 PRETTY_NAME="Debian GNU/Linux 10 (buster)"
  • Version of Docker: Docker version 20.10.6, build 370c289
  • Version of Docker-Compose: docker-compose version 1.21.0, build unknown

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK: ELK_VERSION=7.11.0

Service logs
logstash | [ERROR] 2021-04-26 08:29:15.555 [Converge PipelineAction::Create] translate - Invalid setting for translate filter plugin:
logstash |
logstash | filter {
logstash | translate {
logstash | # This setting must be a path
logstash | # File does not exist or cannot be opened /etc/pfelk/databases/rule-names.csv
logstash | dictionary_path => "/etc/pfelk/databases/rule-names.csv"
logstash | ...
logstash | }
logstash | }
logstash | [ERROR] 2021-04-26 08:29:15.592 [Converge PipelineAction::Create] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:pfelk, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.(CompiledPipeline.java:119)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:83)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:84)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:361)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:86)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:73)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:939)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:549)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:361)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:92)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:191)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:178)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:208)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:396)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:205)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:325)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:116)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:137)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:60)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52)", "org.jruby.runtime.Block.call(Block.java:139)", "org.jruby.RubyProc.call(RubyProc.java:318)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
logstash | warning: thread "Converge PipelineAction::Create" terminated with exception (report_on_exception is true):
logstash | LogStash::Error: Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<pfelk>
logstash | create at org/logstash/execution/ConvergeResultExt.java:129
logstash | add at org/logstash/execution/ConvergeResultExt.java:57
logstash | converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:380
logstash | [ERROR] 2021-04-26 08:29:15.620 [Agent thread] agent - An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<pfelk>"}
logstash | [FATAL] 2021-04-26 08:29:15.636 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<pfelk>>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:129:in create'", "org/logstash/execution/ConvergeResultExt.java:57:in add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:380:in `block in converge_state'"]}
logstash | [FATAL] 2021-04-26 08:29:15.658 [LogStash::Runner] Logstash - Logstash stopped processing because of an error: (SystemExit) exit
logstash | org.jruby.exceptions.SystemExit: (SystemExit) exit
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.13.0.jar:?]
logstash | at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.13.0.jar:?]
logstash | at usr.share.logstash.lib.bootstrap.environment.(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
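The translate filter aborts because /etc/pfelk/databases/rule-names.csv is not visible inside the container, even though the listing above shows the file on the host with world-readable permissions. Rather than file rights, check that the logstash service bind-mounts the pfelk directories; a minimal sketch of the expected mappings in docker-compose.yml, with paths assumed from the repo layout above:

services:
  logstash:
    volumes:
      # host repo paths : container paths referenced by the pipeline files
      - ./etc/pfelk/conf.d:/etc/pfelk/conf.d:ro
      - ./etc/pfelk/patterns:/etc/pfelk/patterns:ro
      - ./etc/pfelk/databases:/etc/pfelk/databases:ro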

Combined Dockerfile

Is it possible to build a single Dockerfile with all the elements of pfelk in it?

The reason I ask (and I know it's been asked before) is that we could then run this stack on UnRAID.

Thanks. 👍

Dashboard Visualization Errors

Describe the bug
Fresh install using Docker. Visualizations in dashboards showing errors and not presenting data.

To Reproduce
Steps to reproduce the behavior:

  1. Fresh install of Ubuntu 20.04
  2. Fresh install of Docker
  3. Fresh install of MaxMind
  4. pfElk docker install script executed without errors
  5. pfElk configuration followed in order without errors
    Index Mgmt
    Index Templates
    Saved Objects
    Log Enrichment


Operating System (please complete the following information):
Linux 5.4.0-58-generic x86_64
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Docker version 20.10.1, build 831ebea
docker-compose version 1.25.0, build unknown

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK: ELK_VERSION=7.10.0

Service logs
es01.log
es02.log
es03.log
kibana.log
logstash.log

Additional context
Screenshots
DHCP_Dashboard
Firewall_Dashboard
Unbound_Dashboard

Last Note of Interest
I am using pfSense and yet the Observer.Name field shows "OPNSense" in the Discover view where you see the specific data enrichment fields (Log Enrichment screenshot).

volume directories seem to be wrong?

Hi,
When building with docker-compose, the container seems to point to /usr/share/logstash/etc... when in fact the pipelines etc. just point to /etc/...
Also, MaxMind points to /usr/share/logstash/GeoIP and not /usr/share/GeoIP.

If I change it to /usr/share/GeoIP it then works. However, I am still having issues with the first volume issue above, and I am reluctant to make wholesale changes throughout the config files, as obviously I'm doing something wrong if it works for everyone else :) ?

Cheers

Logstash Grok::PatternError: pattern %{SNORT} not defined

Describe the bug
Data does not flow to Elasticsearch.

To Reproduce
Configured by default; I only changed the pfSense IP and added MaxMind in Docker. In the Logstash logs I see this error:
[ERROR] 2020-06-10 08:57:49.898 [[main]-pipeline-manager] javapipeline - Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{SNORT} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:123:in block in compile'", "org/jruby/RubyKernel.java:1442:in loop'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:93:in compile'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:288:in block in register'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:282:in block in register'", "org/jruby/RubyHash.java:1415:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.3.0/lib/logstash/filters/grok.rb:277:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:216:in block in register_plugins'", "org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:215:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:521:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:170:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:125:in block in start'"], "pipeline.sources"=>["/usr/share/logstash/etc/logstash/conf.d/01-inputs.conf", "/usr/share/logstash/etc/logstash/conf.d/05-firewall.conf", "/usr/share/logstash/etc/logstash/conf.d/10-others.conf", "/usr/share/logstash/etc/logstash/conf.d/20-suricata.conf", "/usr/share/logstash/etc/logstash/conf.d/25-snort.conf", "/usr/share/logstash/etc/logstash/conf.d/30-geoip.conf", "/usr/share/logstash/etc/logstash/conf.d/40-dns.conf", "/usr/share/logstash/etc/logstash/conf.d/45-cleanup.conf", "/usr/share/logstash/etc/logstash/conf.d/50-outputs.conf"], :thread=>"#<Thread:0x66ea3b06@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[ERROR] 2020-06-10 08:57:49.913 [Converge PipelineAction::Create] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create, action_result: false", :backtrace=>nil}
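%{SNORT} is a custom pattern shipped in pfelk's pattern file (pfelk.grok in later layouts), so this error generally means grok cannot see the pattern file inside the container. A minimal sketch of the wiring, with paths illustrative rather than taken from this exact release:

# docker-compose.yml: make the pattern directory visible to the container
#   - ./etc/pfelk/patterns:/etc/pfelk/patterns:ro

# pipeline .conf: tell grok where the custom patterns live
filter {
  grok {
    patterns_dir => ["/etc/pfelk/patterns"]   # directory containing the SNORT definition
    match => { "message" => "%{SNORT}" }
  }
}

If patterns_dir points at an empty or unmounted directory, grok falls back to the built-in patterns only, and "pattern %{SNORT} not defined" is exactly the error produced.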

Screenshots
kibana
https://www.dropbox.com/s/xwl1x9578mfdpfo/%D0%A1%D0%BD%D0%B8%D0%BC%D0%BE%D0%BA.PNG?dl=0

Operating System (please complete the following information):

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
    Linux 4.19.0-9-amd64 x86_64
    PRETTY_NAME="Debian GNU/Linux 10 (buster)"
    NAME="Debian GNU/Linux"
    VERSION_ID="10"
    VERSION="10 (buster)"
    VERSION_CODENAME=buster
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

  • Version of Docker (docker -v):
    Docker version 19.03.11, build 42e35e61f3

  • Version of Docker-Compose (docker-compose -v):
    docker-compose version 1.26.0, build unknown
    Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env)
    ELK_VERSION=7.7.0
Service logs

  • docker-compose logs pfelk01

  • docker-compose logs pfelk02

  • docker-compose logs pfelk03

  • docker-compose logs logstash

  • docker-compose logs kibana
    https://www.dropbox.com/s/r4knplbwaxkwi6e/logs.zip?dl=0

Containerization of PFELK #69

This issue was initially opened within pfelk (#69) with the goal of containerizing pfelk, which was accomplished. The remaining item is adding/configuring a cron job to update GeoIP.

Rules & Ports conf Files Not Parsing w/Logstash

Describe the bug
The 35-rules-desc.conf and 36-ports-desc.conf files crash Logstash. Omitting them for further troubleshooting.

To Reproduce
Steps to reproduce the behavior:
Follow the docker install instructions


Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK 7.9.2

Service logs

  • docker-compose logs logstash

Attaching to logstash.log

stuck with docker compose up process and unable to access Kibana

Describe the bug
stuck with docker compose up process and unable to access Kibana

To Reproduce
after sudo docker-compose up

Operating System (please complete the following information):

  • OS: Linux 4.4.180+ x86_64
  • Version of Docker: 20.10.3, build 55f0773
  • Version of Docker-Compose: 1.28.5

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (all latest version)

Service logs
{"@timestamp":"2022-07-16T15:58:01.090Z", "log.level": "INFO", "message":"using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/md2)]], net usable_space [233.1gb], net total_space [3.4tb], types [btrfs]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.env.NodeEnvironment","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}
es01 | {"@timestamp":"2022-07-16T15:58:01.093Z", "log.level": "INFO", "message":"heap size [512mb], compressed ordinary object pointers [true]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.env.NodeEnvironment","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}
es01 | {"@timestamp":"2022-07-16T15:58:01.338Z", "log.level": "INFO", "message":"node name [es01], node ID [ak8K5T76SfidrbZnn07TFw], cluster name [es-docker-cluster], roles [data_cold, data, remote_cluster_client, master, data_warm, data_content, transform, data_hot, ml, data_frozen, ingest]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}
logstash | [INFO ] 2022-07-16 15:58:03.390 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}
logstash | [WARN ] 2022-07-16 15:58:03.394 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}
logstash | [INFO ] 2022-07-16 15:58:08.405 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}
logstash | [WARN ] 2022-07-16 15:58:08.408 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}
logstash | [INFO ] 2022-07-16 15:58:13.423 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}
logstash | [WARN ] 2022-07-16 15:58:13.427 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}
logstash | [ERROR] 2022-07-16 15:58:18.390 [monitoring-license-manager] licensereader - Unable to retrieve license information from license server {:message=>"No Available connections"}
logstash | [INFO ] 2022-07-16 15:58:18.436 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}
logstash | [WARN ] 2022-07-16 15:58:18.437 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}
logstash | [INFO ] 2022-07-16 15:58:20.997 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] licensereader - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}
logstash | [WARN ] 2022-07-16 15:58:21.014 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] licensereader - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://logstash_system:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}
es02 | {"@timestamp":"2022-07-16T15:58:23.065Z", "log.level": "WARN", "message":"unable to install syscall filter: ", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.JNANatives","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"es-docker-cluster","error.type":"java.lang.UnsupportedOperationException","error.message":"seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed","error.stack_trace":"java.lang.UnsupportedOperationException: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and emCallFilter(Natives.java:102)\n\tat org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:112)\n\tat org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:183)\n\tat org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:358)\n\tat org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:166)\n\tat org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:157)\n\tat org.elasticsearch.common.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:81)\n\tat org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:112)\n\tat org.elasticsearch.cli.Command.main(Command.java:77)\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:122)\n\tat org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:80)\n"}logstash | [INFO ] 2022-07-16 15:58:38.507 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Failed to perform request {:message=>"Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)}logstash | [WARN ] 2022-07-16 15:58:38.518 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-11.4.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:213] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error {:url=>"https://elastic:xxxxxx@es01:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [https://es01:9200/][Manticore::SocketException] Connect to es01:9200 [es01/172.22.0.3] failed: Connection refused (Connection refused)"}es01 | {"@timestamp":"2022-07-16T15:58:39.886Z", "log.level": "INFO", "message":"creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=1mb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=4mb, heap_size=512mb}]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.transport.netty4.NettyAllocator","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}es03 | {"@timestamp":"2022-07-16T15:58:40.018Z", "log.level": "INFO", "message":"version[8.2.2], pid[10], build[default/docker/9876968ef3c745186b94fdabd4483e01499224ef/2022-05-25T15:47:06.259735307Z], OS[Linux/4.4.180+/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/18.0.1.1/18.0.1.1+2-6]", "ecs.version": 
"1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}es03 | {"@timestamp":"2022-07-16T15:58:40.028Z", "log.level": "INFO", "message":"JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}es03 | {"@timestamp":"2022-07-16T15:58:40.030Z", "log.level": "INFO", "message":"JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -Djava.security.manager=allow, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-12626546595507795926, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc
,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -XX:MaxDirectMemorySize=268435456, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.node.Node","elasticsearch.node.name":"es03","elasticsearch.cluster.name":"es-docker-cluster"}es01 | {"@timestamp":"2022-07-16T15:58:40.067Z", "log.level": "INFO", "message":"using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.indices.recovery.RecoverySettings","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}es01 | {"@timestamp":"2022-07-16T15:58:40.246Z", "log.level": "INFO", "message":"using discovery type [multi-node] and seed hosts providers [settings]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.discovery.DiscoveryModule","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"es-docker-cluster"}

Logstash PipelineAction ERROR on startup

Continuing to see Logstash fail at startup when docker-compose is set to one node.

logstash     | [ERROR] 2020-10-18 19:10:35.633 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
logstash     | [INFO ] 2020-10-18 19:10:35.661 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.6.2-java/lib/logstash/outputs/elasticsearch/common.rb:40] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash     | [INFO ] 2020-10-18 19:10:35.699 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
logstash     | [INFO ] 2020-10-18 19:10:40.721 [LogStash::Runner] runner - Logstash shut down.
logstash exited with code 1

No visualizations in Squid Dashboard

Describe the bug
I have deployed pfelk through Docker containers. I am able to receive the data from my pfSense in Elastic, I can see the indices created in Kibana, and the data is being added to the index. However, I can't see the visualizations in the Squid dashboard.

To Reproduce
Steps to reproduce the behavior:

  1. Install pfelk with docker
  2. Install the templates(component and index templates)
  3. Set the logstash UDP address([IP]:5140) as remote log server in pfsense
  4. Import the Dashboards into Kibana
  5. Open the Squid Dashboard

Screenshots
Screenshot from 2021-01-08 15-14-51
Screenshot from 2021-01-08 15-15-07
Screenshot from 2021-01-08 15-14-06

Operating System (please complete the following information):

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env): ELK_VERSION=7.10.1

Service logs

  • docker-compose logs pfelk01
es01        | {"type": "server", "timestamp": "2021-01-08T09:31:28,722Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "refreshed keys" }
es01        | {"type": "server", "timestamp": "2021-01-08T09:31:28,746Z", "level": "INFO", "component": "o.e.l.LicenseService", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "license [bef35319-786a-4d30-8a9c-5ca755b1339a] mode [basic] - valid" }
es01        | {"type": "server", "timestamp": "2021-01-08T09:31:28,748Z", "level": "INFO", "component": "o.e.x.s.s.SecurityStatusChangeListener", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "Active license is now [BASIC]; Security is disabled" }
es01        | {"type": "server", "timestamp": "2021-01-08T09:31:28,761Z", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "publish_address {172.18.0.3:9200}, bound_addresses {0.0.0.0:9200}" }
es01        | {"type": "server", "timestamp": "2021-01-08T09:31:28,761Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "started" }
es01        | {"type": "deprecation", "timestamp": "2021-01-08T09:32:51,771Z", "level": "DEPRECATION", "component": "o.e.d.a.b.BulkRequestParser", "cluster.name": "es-docker-cluster", "node.name": "es01", "message": "[types removal] Specifying types in bulk requests is deprecated.", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "5xp6gCnEQvG8Ez3j5X_6og"  }

  • docker-compose logs pfelk02
es02        | {"type": "server", "timestamp": "2021-01-08T09:31:34,711Z", "level": "WARN", "component": "o.e.c.r.a.AllocationService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "[.kibana-event-log-7.10.1-000001][0] marking unavailable shards as stale: [G9qWF4wUT32f_XC-hg5F_Q]", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "3AmE-f8oSlepBeIuHRWaOQ"  }
es02        | {"type": "server", "timestamp": "2021-01-08T09:31:40,944Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[pfelk-firewall-2021.01][0]]]).", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "3AmE-f8oSlepBeIuHRWaOQ"  }
es02        | {"type": "server", "timestamp": "2021-01-08T09:31:42,809Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "es-docker-cluster", "node.name": "es02", "message": "Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.10.1-000001][0]]]).", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "3AmE-f8oSlepBeIuHRWaOQ"  }

  • docker-compose logs pfelk03
es03        | {"type": "server", "timestamp": "2021-01-08T09:31:27,993Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refresh keys", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg"  }
es03        | {"type": "server", "timestamp": "2021-01-08T09:31:28,126Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refreshed keys", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg"  }
es03        | {"type": "server", "timestamp": "2021-01-08T09:31:28,199Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "added {{es01}{5xp6gCnEQvG8Ez3j5X_6og}{ClUVqoGoREuh12Y7eees-Q}{172.18.0.3}{172.18.0.3:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}, term: 79, version: 1263, reason: ApplyCommitRequest{term=79, version=1263, sourceNode={es02}{3AmE-f8oSlepBeIuHRWaOQ}{UIdhHD5_TAOx_JkhP4rH6g}{172.18.0.4}{172.18.0.4:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}", "cluster.uuid": "_XuOhpVCQ_-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg"  }

  • docker-compose logs logstash
logstash    | [INFO ] 2021-01-08 09:31:40.606 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5190", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash    | [INFO ] 2021-01-08 09:31:40.607 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5140", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash    | [INFO ] 2021-01-08 09:31:40.610 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5141", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash    | [INFO ] 2021-01-08 09:31:40.653 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}

  • docker-compose logs kibana
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","watcher"],"pid":10,"message":"Your basic license does not support watcher. Please upgrade your license."}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":10,"message":"Starting monitoring stats collection"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:33Z","tags":["listening","info"],"pid":10,"message":"Server running at http://0:5601"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["info","http","server","Kibana"],"pid":10,"message":"http server running at http://0:5601"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [9190])"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [16])"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [16])"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [22])"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [16])"}
kibana      | {"type":"log","@timestamp":"2021-01-08T09:31:35Z","tags":["warning","plugins","reporting"],"pid":10,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}


Additional context
I see that the grok pattern for the Squid logs is missing in the pfelk.grok file.
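Until an official pattern lands, here is one possible grok entry for Squid's default native access-log format as a stopgap; the pattern and field names are illustrative and do not follow pfelk's ECS naming:

# hypothetical custom pattern, e.g. added to a mounted patterns directory
SQUID %{NUMBER:timestamp}\s+%{INT:duration}\s+%{IP:client_ip}\s+%{WORD:result_code}/%{INT:status_code}\s+%{INT:bytes}\s+%{WORD:method}\s+%{NOTSPACE:url}\s+%{NOTSPACE:user}\s+%{WORD:hierarchy}/%{NOTSPACE:peer}\s+%{NOTSPACE:content_type}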

Logstash Crashing GeoIP Volume Location

Describe the bug
Logstash crashes due to the volume path in docker-compose.yml for GeoIP. According to the error, it is looking in the path /usr/share/GeoIP/, but the volume is mounted to /usr/share/logstash/GeoIP/.

https://github.com/3ilson/docker-pfelk/blob/651a902cfe7a22b637046d6938d3a0d3a996c442/docker-compose.yml#L102

[ERROR] 2020-04-08 22:00:03.265 [Converge PipelineAction::Create<main>] geoip - Invalid setting for geoip filter plugin:

  filter {
    geoip {
      # This setting must be a path
      # File does not exist or cannot be opened /usr/share/GeoIP/GeoLite2-City.mmdb
      database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
      ...
    }
  }

Changing this to /usr/share/GeoIP/:/usr/share/GeoIP/ fixes the issue, but I would like to understand whether that is the intention.
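For reference, a minimal sketch of the corrected line in docker-compose.yml, assuming the geoip filters reference databases under /usr/share/GeoIP/ as in the error above:

services:
  logstash:
    volumes:
      # host directory written by geoipupdate : identical path inside the container
      - /usr/share/GeoIP/:/usr/share/GeoIP/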

To Reproduce
Steps to reproduce the behavior:
Follow the README.md on a fresh install.

Operating System (please complete the following information):

  • OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Ubuntu 18.04.4 LTS
  • Version of Docker (docker -v): 19.03.8
  • Version of Docker-Compose (docker-compose -v): 1.24.1

Elasticsearch, Logstash, Kibana (please complete the following information):

  • Version of ELK (cat /docker-pfelk/.env) Latest zip - afb6bec

Service logs

  • docker-compose logs pfelk01
  • docker-compose logs pfelk02
  • docker-compose logs pfelk03
  • docker-compose logs logstash
  • docker-compose logs kibana

logstash.txt


logstash giving warning and error for pipeline and java

logstash     | [INFO ] 2020-10-22 02:36:14.817 [LogStash::Runner] runner - Logstash shut down.
logstash     | WARNING: An illegal reflective access operation has occurred
logstash     | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby5029500797019692358jopenssl.jar) to field java.security.MessageDigest.provider
logstash     | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash     | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash     | WARNING: All illegal access operations will be denied in a future release
logstash     | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
logstash     | [INFO ] 2020-10-22 02:36:26.616 [main] runner - Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +jit [linux-x86_64]"}
logstash     | [ERROR] 2020-10-22 02:36:28.252 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 6, column 1 (byte 6) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:183:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:44:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:357:in `block in converge_state'"]}
logstash     | [INFO ] 2020-10-22 02:36:28.324 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
logstash     | [INFO ] 2020-10-22 02:36:33.348 [LogStash::Runner] runner - Logstash shut down.
logstash     | WARNING: An illegal reflective access operation has occurred
logstash     | WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby3221644865931324601jopenssl.jar) to field java.security.MessageDigest.provider
logstash     | WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
logstash     | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
logstash     | WARNING: All illegal access operations will be denied in a future release
logstash     | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
logstash     | [INFO ] 2020-10-22 02:36:45.650 [main] runner - Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +jit [linux-x86_64]"}
logstash     | [ERROR] 2020-10-22 02:36:47.318 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 6, column 1 (byte 6) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:183:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:44:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:357:in `block in converge_state'"]}
logstash     | [INFO ] 2020-10-22 02:36:47.383 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
logstash     | [INFO ] 2020-10-22 02:36:52.403 [LogStash::Runner] runner - Logstash shut down.
```
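The parse error above means Logstash hit text it could not read at line 6, column 1 of one of the pipeline files. A quick way to inspect them, assuming this repo's config path and a compose service named logstash:

```
# Sketch: print the first lines of each pipeline file to spot stray text
# outside the input/filter/output blocks (line 6 per the error). Using
# "run --rm" so this works even while the service itself keeps crashing.
docker-compose run --rm logstash sh -c 'head -n 10 /etc/pfelk/conf.d/*.conf'
```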

Disk space

Hi,

thanks for this great project!

I have a question regarding disk space: is there anything that needs to be configured so that the disk of the pfelk host does not fill up with logs? Is there a mechanism that frees up disk space once a certain amount is used?

Please point me in the right direction.

Thanks,
Dago
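Unless retention is configured, Elasticsearch indices grow until the disk fills. One common approach (not pfelk-specific; all values below are examples) is an index lifecycle management policy that deletes indices past a retention age:

```
# Sketch: an ILM policy that deletes indices older than 30 days; adjust
# the URL, credentials (from .env) and min_age to your setup. -k skips
# verification of the self-signed certificate.
curl -k -u elastic:changeme -X PUT \
  "https://localhost:9200/_ilm/policy/pfelk-retention" \
  -H 'Content-Type: application/json' \
  -d '{"policy":{"phases":{"delete":{"min_age":"30d","actions":{"delete":{}}}}}}'
```

The policy still has to be attached to the index templates (or to existing indices) before it takes effect.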

logstash cannot connect to localhost:9200

logstash | [WARN ] 2020-10-21 21:04:11.839 [Ruby-0-Thread-12: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.6.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
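The warning means Logstash is still pointed at localhost, which inside the compose network is the Logstash container itself. A minimal sketch of the fix, assuming the Elasticsearch service is named es01 as in this repo's compose file and the 50-outputs.pfelk output file used by this stack:

```
# Sketch: repoint the elasticsearch output at the compose service name,
# then restart Logstash so the bind-mounted config is re-read.
sed -i 's|http://localhost:9200|http://es01:9200|g' etc/pfelk/conf.d/50-outputs.pfelk
docker-compose restart logstash
```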

logstash crashes with error

Describe the bug
While trying to make it work as described in HOWTO guide, I noticed that logstash crashes repeatedly with error below

To Reproduce
Steps to reproduce the behavior:
Install ELK as described in the guide.

Screenshots
[ERROR LOG]
logstash | Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
logstash | [INFO ] 2020-04-18 18:01:51.803 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.1"}
logstash | [INFO ] 2020-04-18 18:01:57.122 [Converge PipelineAction::Create<main>] Reflections - Reflections took 89 ms to scan 1 urls, producing 20 keys and 40 values
logstash | [ERROR] 2020-04-18 18:01:58.014 [Converge PipelineAction::Create<main>] geoip - Invalid setting for geoip filter plugin:
logstash |
logstash | filter {
logstash | geoip {
logstash | # This setting must be a path
logstash | # File does not exist or cannot be opened /usr/share/logstash/GeoIP/GeoLite2-ASN.mmdb
logstash | database => "/usr/share/logstash/GeoIP/GeoLite2-ASN.mmdb"
logstash | ...
logstash | }
logstash | }
logstash | [ERROR] 2020-04-18 18:01:58.018 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (ConfigurationError) Something is wrong with your configuration.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:103)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:60)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:837)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1169)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:84)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:361)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:86)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:73)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.RubyClass.newInstance(RubyClass.java:915)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:540)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:361)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:92)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:191)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:178)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:208)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:396)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:205)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:325)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:72)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:116)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:143)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:79)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71)", "org.jruby.runtime.Block.call(Block.java:125)", "org.jruby.RubyProc.call(RubyProc.java:274)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105)", "java.base/java.lang.Thread.run(Thread.java:834)"]}
logstash | warning: thread "Converge PipelineAction::Create<main>" terminated with exception (report_on_exception is true):
logstash | LogStash::Error: Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<main>
logstash | create at org/logstash/execution/ConvergeResultExt.java:109
logstash | add at org/logstash/execution/ConvergeResultExt.java:37
logstash | converge_state at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:339
logstash | [ERROR] 2020-04-18 18:01:58.058 [Agent thread] agent - An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash | [FATAL] 2020-04-18 18:01:58.088 [LogStash::Runner] runner - An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::JavaLang::IllegalStateException` for `PipelineAction::Create<main>`>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:109:in `create'", "org/logstash/execution/ConvergeResultExt.java:37:in `add'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:339:in `block in converge_state'"]}
logstash | [ERROR] 2020-04-18 18:01:58.112 [LogStash::Runner] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Operating System (please complete the following information):

Elasticsearch, Logstash, Kibana (please complete the following information):

Service logs

  • docker-compose logs pfelk01
  • docker-compose logs pfelk02
  • docker-compose logs pfelk03
  • docker-compose logs logstash
  • docker-compose logs kibana

Additional context
Add any other context about the problem here.
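The root cause in the log above is that the GeoLite2 database files are missing from /usr/share/logstash/GeoIP. A sketch of a fix, assuming geoipupdate is installed on the host with a valid MaxMind account configured in /etc/GeoIP.conf:

```
# Sketch: download the GeoLite2 databases on the host, then bind-mount
# them to the path the geoip filter expects.
sudo mkdir -p ./etc/logstash/GeoIP
sudo geoipupdate -f /etc/GeoIP.conf -d ./etc/logstash/GeoIP
# In docker-compose.yml, add the mount to the logstash service, e.g.:
#   - ./etc/logstash/GeoIP/:/usr/share/logstash/GeoIP:ro
docker-compose up -d logstash   # recreate logstash so the mount takes effect
```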

docker - logstash can't find conf files

I can get pfelk to run natively (not Docker) just fine, but I'd really prefer a Docker installation. I've watched the YouTube videos.

I get this error: logstash | [INFO ] 2021-03-20 15:39:28.963 [Agent thread] configpathloader - No config files found in path {:path=>"/etc/pfelk/conf.d/*.conf"}

Where do I need to put the folders for it to locate the config files?
- ./etc/logstash/config/:/usr/share/logstash/config:ro
- ./etc/logstash/conf.d/:/etc/pfelk/conf.d:ro
- ./etc/logstash/conf.d/patterns/:/etc/pfelk/patterns:ro
- ./etc/logstash/conf.d/databases/:/etc/pfelk/databases:ro
So my thinking is that it should just look in the local folder where you pulled the git repo.

What am I missing?
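With those mappings, Logstash looks for the pipeline files at /etc/pfelk/conf.d inside the container, which is bound to ./etc/logstash/conf.d relative to wherever docker-compose is run. A quick check, assuming the compose service is named logstash:

```
# Sketch: confirm the files exist on the host side of the bind mount,
# then confirm the container actually sees them ("run --rm" works even
# if the service itself is crash-looping).
ls etc/logstash/conf.d/*.conf
docker-compose run --rm logstash ls /etc/pfelk/conf.d/
```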

Docker installation error

Describe the bug
Error while deploying docker instances.

To Reproduce
Steps to reproduce the behavior:

  1. Deploy Ubuntu 22.04 as a LXC on Proxmox (unprivileged)
  2. Update all the packages and reboot
  3. Follow the steps, type docker compose up
  4. Error:

```
Recreating docker-main_setup_1 ... done
Recreating docker-main_es01_1 ... error
ERROR: for docker-main_es01_1 Cannot start service es01: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting rlimits for ready process: error setting rlimit type 8: operation not permitted: unknown

ERROR: for es01 Cannot start service es01: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting rlimits for ready process: error setting rlimit type 8: operation not permitted: unknown
```

Operating System (please complete the following information):

  • OS
    PRETTY_NAME="Ubuntu 22.04.2 LTS"
    NAME="Ubuntu"
    VERSION_ID="22.04"
    VERSION="22.04.2 LTS (Jammy Jellyfish)"
    VERSION_CODENAME=jammy
    ID=ubuntu
    ID_LIKE=debian
  • Version of Docker: Docker version 20.10.21, build 20.10.21-0ubuntu1~22.04.3
  • Version of Docker-Compose: docker-compose version 1.29.2, build unknown
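rlimit type 8 is RLIMIT_MEMLOCK, which an unprivileged LXC container generally cannot raise, so the memlock ulimits requested for es01 fail at container init. A sketch of two workarounds, assuming Proxmox (container ID 101 is hypothetical):

```
# Option 1 (on the Proxmox host): give the container nesting/keyctl, make
# it privileged, or run Docker in a full VM instead of an unprivileged LXC.
echo "features: keyctl=1,nesting=1" >> /etc/pve/lxc/101.conf
# Option 2: delete the memlock ulimits block for es01 in docker-compose.yml
# so runc never tries to raise RLIMIT_MEMLOCK:
#   ulimits:
#     memlock:
#       soft: -1
#       hard: -1
```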

Logstash crashes during startup "Could not execute action: PipelineAction"

logstash | [2020-09-03T19:43:32,780][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash | [2020-09-03T19:43:32,859][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
logstash | [2020-09-03T19:43:32,864][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/GeoIP/GeoLite2-City.mmdb"}
logstash | [2020-09-03T19:43:32,943][INFO ][logstash.outputs.elasticsearch] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
logstash | [2020-09-03T19:43:33,168][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash | [2020-09-03T19:43:33,172][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
logstash | [2020-09-03T19:43:33,541][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash | [2020-09-03T19:43:38,444][INFO ][logstash.runner ] Logstash shut down.
logstash | [2020-09-03T19:43:38,538][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
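These lines only show that the main pipeline failed to start; the actual cause is logged further up. A quick way to reproduce it in isolation, assuming the stock Logstash image entrypoint (which passes leading flags straight to the logstash binary) and this repo's config path:

```
# Sketch: run a one-off configuration check instead of the full service.
docker-compose run --rm logstash --config.test_and_exit --path.config /etc/pfelk/conf.d
```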

config of 50-output

Describe the bug
A reference to pfelk01 still exists in the logstash config file 50-outputs.conf. This prevents logstash from finding the elasticsearch instance when running with a single elasticsearch instance.
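A sketch of the fix, assuming the single-node service is named es01 as elsewhere in this stack (filename per the issue):

```
# Sketch: replace the stale pfelk01 host reference, then restart Logstash
# so the bind-mounted config is re-read.
sed -i 's/pfelk01/es01/g' etc/logstash/conf.d/50-outputs.conf
docker-compose restart logstash
```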
