HELK's Introduction

HELK

The Hunting ELK or simply the HELK is one of the first open source hunt platforms with advanced analytics capabilities such as SQL declarative language, graphing, structured streaming, and even machine learning via Jupyter notebooks and Apache Spark over an ELK stack. This project was developed primarily for research, but due to its flexible design and core components, it can be deployed in larger environments with the right configurations and scalable infrastructure.

Goals

  • Provide an open source hunting platform to the community and share the basics of Threat Hunting.
  • Expedite the time it takes to deploy a hunt platform.
  • Improve the testing and development of hunting use cases in an easier and more affordable way.
  • Enable Data Science capabilities while analyzing data via Apache Spark, GraphFrames & Jupyter Notebooks.

Current Status: Alpha

The project is currently in an alpha stage, which means that the code and the functionality are still changing. We haven't yet tested the system with large data sources and in many scenarios. We invite you to try it and welcome any feedback.

License: GPL-3.0

HELK's People

Contributors

aarju, badgateway666, bastelfreak, brokenvhs, colinrubbert, crboyd, cyb3rward0g, dev-id, devdua, esebese, ferretesq, freeload101, itsnotapt, jaredcatkinson, leechristensen, neu5ron, nguyenl95, nicholasaleks, pebri96, richiercyrus, rlarabee, robwinchester3, rsimplicio, svch0stz, thomaspatzke, troplolbe

HELK's Issues

Error Code 127

When attempting an install on a fresh Ubuntu 16.04 LTS build, I am seeing this error (screenshot attached).

Any help would be appreciated.

Not able to create index related to winlogbeat [just making sure that's needed]

Hi @Cyb3rWard0g
Thanks a lot for the time used to implement such a great project.

I am using the latest HELK image.
What I see on the index patterns is the below (screenshot attached):

Do I still have to do what your Winlogbeat manual describes as: "You will be able to substitute logstash-* for winlogbeat-* and set the field name to Timestamp as shown in figure 33 below. Click on the create option at the bottom to continue."?

Thanks and sorry for the noobish question

Unable to configure new index pattern in kibana

Firstly, thank you so much for your contributions including your awesome blog posts about setting up a threat hunting lab.

Do you know why my pull-down menu, "Time Filter field name", is empty in Kibana when I go to add a new winlogbeat* index pattern? Thanks for your help. (screenshot attached)

Minimum Disk Space Requirement

Hey @Cyb3rWard0g ,

During our testing (i.e., me breaking HELK) last week, we kept getting a weird Elasticsearch error, which I don't recall off the top of my head, but it was related to disk size.

Documenting a minimum disk space requirement of 50GB for testing purposes and 100GB+ for production should cover the disk space issue.

Secondly, you could look at further optimizing Elasticsearch to fine-tune disk usage; https://www.elastic.co/guide/en/elasticsearch/reference/6.2/tune-for-disk-usage.html shows some good information on how that can be done.
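A hedged sketch of what one such optimization could look like, assuming Elasticsearch 6.x's index template API is reachable on localhost:9200. The template name and index pattern below are illustrative, not HELK's actual settings:

# Illustrative only: enable best_compression for future logs-* indices via an index template
curl -s -XPUT 'http://localhost:9200/_template/helk_disk_tuning_example' \
  -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["logs-*"],
  "settings": {
    "index.codec": "best_compression"
  }
}'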

Access security and alerting

Hi, one more thing.

In order for us to use this setup with live data, there has to be access security on Kibana, preferably at the index level. Search Guard has a free product if you don't need Windows AD integration.

For alerting, ElastAlert does not currently work with version 6 of ELK; Sentinl does, however. Might want to check that out too.

Installing on Ubuntu pip==1.5.4 distribution was not found...

@Cyb3rWard0g
Error message shows up in /var/log/helk-install.log
Error: The 'pip==1.5.4' distribution was not found and is required by...

Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial

Fix:
sudo easy_install pip==1.5.4

After that, the installation went through fine and I got happy message:
IT IS HUNTING SEASON!!!!!

May want to consider dropping that into the installation script. Cheers and thanks for the great work!
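A minimal sketch of what such a guard in the install script could look like, built around the workaround above (the exact version check is an assumption about what the script requires):

# Hypothetical guard for helk_install.sh: bootstrap pip 1.5.4 if it is missing or mismatched
if ! python -c "import pip; assert pip.__version__ == '1.5.4'" >/dev/null 2>&1; then
    sudo easy_install pip==1.5.4
fi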

Use Sentinl instead of ElastAlert

Hi, just an idea: the Sentinl guys support the latest ELK versions.

https://github.com/sirensolutions/sentinl

I've started to amend the Logstash filters on my repository, but work is slow as I have my "normal" tasks at work also. I'm trying to give it priority when I can.

I'm working on adding firewall and network logs and updating Sysmon etc. with the event IDs I shared earlier.

Regarding the integration of Bro: have you seen the Security Onion project? They are in the process of adding ELK functionality, so you might want to take a look at what they are doing; it could already save you some time. I run Security Onion in prod and am very pleased with it; the prospect of adding ELK and Sentinl alerting is on my roadmap.

failed to parse [process_parent_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x1598\

Describe the problem
I started getting the following error messages after ingesting windows security event logs:

[2018-06-11T01:52:12,840][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3149644173", :_index=>"logs-endpoint-winevent-security-2018.06.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x69227a77>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-security-2018.06.11", "_type"=>"doc", "_id"=>"3149644173", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [process_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x10dc\""}}}}}
[2018-06-11T01:52:12,843][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3583471113", :_index=>"logs-endpoint-winevent-security-2018.06.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x2077b005>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-security-2018.06.11", "_type"=>"doc", "_id"=>"3583471113", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [process_parent_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x1bfc\""}}}}}
[2018-06-11T01:52:13,077][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"4289718454", :_index=>"logs-endpoint-winevent-security-2018.06.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x53f9f0c3>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-security-2018.06.11", "_type"=>"doc", "_id"=>"4289718454", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [process_parent_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x1598\""}}}}}

Obviously a hex string was not being accepted as a number of type "integer" for:

  • process_id
  • process_parent_id

What steps did you take trying to fix the issue?
I used the same idea as https://github.com/Cyb3rWard0g/HELK/blob/master/helk-logstash/pipeline/12-winevent-security-filter.conf#L827 and applied it to the other two fields. It seemed to work but I got into the following issue:

#71

I applied the right updates to the Ruby code and it started to work fine. I am pushing a commit with the new code to close this issue

How could we replicate the issue?
Install HELK and ingest Windows Security Logs

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Linux Ubuntu

Any additional context?

Issue installing HELK on Ubuntu 16.04

Hello,
my name is Gourav and I have a problem installing HELK, so please give me a solution for the following issue:

  1. I am trying to install HELK on an Ubuntu 16.04 machine and I am getting stuck at running the Jupyter server. Can you help me resolve this issue? (screenshot attached)

Logstash config IOC additions

Hi,

I'm trying to add what syspanda.com have made to tag IOCs within the Sysmon conf file. As there is a renaming of the fields in there, I can't get the exact code from syspanda.com to work.

Could you help me along? Then I'll complete the task and add a pull request once I have tested it as working.

Here is the relevant post from Syspanda:

https://www.syspanda.com/index.php/2018/05/04/labeling-endpoint-actions-logstash-threat-hunting/

Here is the code from 11-winevent-sysmon-filter.conf in question:

if [event_id] == 1 {
  mutate {
    add_field => { "action" => "processcreate" }
    rename => {
      "[event_data][CommandLine]" => "process_command_line"
      "[event_data][CurrentDirectory]" => "process_current_directory"
      "[event_data][ParentImage]" => "process_parent_path"
      "[event_data][ParentCommandLine]" => "process_parent_command_line"
      "[event_data][IntegrityLevel]" => "process_integrity_level"
      "[event_data][LogonGuid]" => "user_logon_guid"
      "[event_data][ParentProcessGuid]" => "process_parent_guid"
      "[event_data][ParentProcessId]" => "process_parent_id"
      "[event_data][TerminalSessionId]" => "user_terminal_session_id"
      "[event_data][FileVersion]" => "file_version"
      "[event_data][Description]" => "file_description"
      "[event_data][Product]" => "file_product"
      "[event_data][Company]" => "file_company"
    }
    gsub => ["process_parent_guid","[{}]",""]
    gsub => ["user_logon_guid","[{}]",""]
  }
}

IOC

if [event_id] == 1
   # if this runs after the rename above, use the new name [process_parent_path];
   # [event_data][ParentImage] will no longer exist at that point
   and [process_parent_path] =~ /(?i)OUTLOOK\.EXE/
   and [event_data][Image] =~ /(?i)(iexplore\.exe|chrome\.exe|firefox\.exe|edge\.exe)/ {
  mutate {
    add_field => { "IOC" => "Browser Launched From Outlook Sysmon 1" }
  }
}

How do I add this mutate statement into the existing event_id 1 filter? This must be possible to do in an elegant way.

LS-X-PACK - You are using a deprecated config setting "document_type" set in elasticsearch

Describe the problem
When I start Logstash even with the latest Docker image 6.2.4, I get the following error message:

[2018-06-11T03:16:09,429][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://helk-elasticsearch:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, user=>"logstash_system", password=><password>, id=>"a0b13b5af7fb206feba8d8f6ac0efffedc9681e37d35b6382157dd99efbdffbd", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_50a703bf-240b-48fe-bf1d-c250f83d12e0", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}

Apparently this is just a warning message. I am not using document_type anywhere in my Logstash configs.

What steps did you take trying to fix the issue?
I just googled the error message and found this : https://discuss.elastic.co/t/latest-ls-x-pack-still-using-a-deprecated-feature/128866

How could we replicate the issue?
Just installing HELK and then monitoring the helk-logstash container

sudo docker logs --follow helk-logstash

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Ubuntu

Any additional context?
https://discuss.elastic.co/t/latest-ls-x-pack-still-using-a-deprecated-feature/128866
https://discuss.elastic.co/t/logstash-x-pack-is-using-deprecated-feature/108304

Create Wiki

Create FAQ for wiki. Some users may not know how to restart the HELK server if it stops, modify variables etc... A FAQ page in the wiki would help answer some of these questions.

Logstash issue - where located conf file

Hi guys,
HELK installation was perfect. Later I would like to set up event ingestion with Logstash and Winlogbeat.
Following your tutorial ( https://cyberwardog.blogspot.it/2017/02/setting-up-pentesting-i-mean-threat_98.html ), I don't understand where the configuration file for Logstash needs to be located (I suppose in the appropriate docker container) and whether the file needs to be created (the tutorial refers to the 5.x version).

A little bit confused at the moment, sorry :(

All the best, Andrea
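For anyone else landing here: with the dockerized HELK, the pipeline configs live inside the Logstash container rather than on the host. A hedged way to inspect them, assuming the stock Logstash image layout (/usr/share/logstash/pipeline may differ in your build):

# List the pipeline configs shipped inside the helk-logstash container
sudo docker exec helk-logstash ls /usr/share/logstash/pipeline
# Open a shell in the container to inspect or edit a filter
sudo docker exec -it helk-logstash /bin/bash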

New release install over previous install : java.lang.IllegalStateException: failed to obtain node locks

Following a successful build, an attempt was made to upgrade to HELK 6.2.4-050318. The following steps were taken:

  1. Bring down the environment with docker-compose down.
  2. Delete the HELK directory.
  3. Clone the repo for the new release.
  4. Run install script.

The build is successful.
Compose spins up the environment.

Following issues noted:

  1. The Kibana URL is not accessible: ''Connection refused''.
  2. The Kibana entrypoint.sh script fails the curl check on the local URL.
  3. The Elasticsearch container keeps restarting every 3 seconds.
  4. The Elasticsearch docker logs display ''java.lang.IllegalStateException: failed to obtain node locks''.

After searching the web, it would appear that the issue was caused by orphaned java processes.

Resolution:

Kill all java processes: killall -9 java
Removed and cloned repo.
Ran install script.

Build successful.
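A hedged sketch of the cleanup described above, for anyone hitting the same node-lock error before recreating the stack:

# Stop the stack, then check for and kill any orphaned JVMs still holding node locks
sudo docker-compose down
pgrep -a java          # anything listed here survived the compose shutdown
sudo killall -9 java   # the workaround that resolved this issue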

Heap space calc sets to zero

When trying to start up Elasticsearch using the entrypoint script, I noticed an error around incorrect heap space and double-checked. It was in fact setting -Xmx to 0g, so the service was not able to start. I had to change it to 1g and restart the image to get the URI etc. (screenshot attached)
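The entrypoint presumably derives -Xmx from available memory; a minimal sketch of a guarded calculation that can never hit 0g (the variable names are assumptions, not the actual script's):

# Hypothetical heap sizing: half of available memory, floored at 1g
AVAILABLE_GB=$(free -g | awk '/^Mem:/{print $7}')
ES_HEAP_GB=$(( AVAILABLE_GB / 2 ))
if [ "$ES_HEAP_GB" -lt 1 ]; then ES_HEAP_GB=1; fi
export ES_JAVA_OPTS="-Xms${ES_HEAP_GB}g -Xmx${ES_HEAP_GB}g"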

Error during install

from the log file:
Package openjdk-8-jre-headless is a virtual package provided by:
oracle-java9-installer 9.0.1-1webupd80
oracle-java8-installer 8u151-1webupd80

E: Package 'openjdk-8-jre-headless' has no installation candidate
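A hedged workaround on Ubuntu, assuming adding the openjdk-r PPA is acceptable in your environment:

# Add a repository that actually carries openjdk-8-jre-headless, then retry
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install -y openjdk-8-jre-headless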

Ruby exception occurred: undefined method `hex' for 4:Fixnum

Describe the problem
I was updating https://github.com/Cyb3rWard0g/HELK/blob/master/helk-logstash/pipeline/12-winevent-security-filter.conf#L827, adding two other event fields (process_id and process_parent_id):

if [user_logon_id] {
      mutate { gsub => [ "user_logon_id", "0x", "" ]}
      ruby {
        code => "
          userlogonid = event.get('[user_logon_id]')
          userlogonid = userlogonid.hex
          event.set('[user_logon_id]', userlogonid)
        "
        tag_on_exception =>  "_0591_rubyexception"
      }
    }

And I started getting the following error messages:

[2018-06-11T02:32:10,833][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,848][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,863][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,865][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,878][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,883][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,891][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,896][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 4:Fixnum
[2018-06-11T02:32:10,901][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum
[2018-06-11T02:32:10,905][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 4:Fixnum
[2018-06-11T02:32:10,909][ERROR][logstash.filters.ruby    ] Ruby exception occurred: undefined method `hex' for 0:Fixnum

What steps did you take trying to fix the issue?
I updated the ruby code and replaced it with:

if [user_logon_id] {
      mutate { gsub => [ "user_logon_id", "0x", "" ]}
      ruby {
        code => "event.set('user_logon_id', event.get('user_logon_id').to_s.hex)"
        tag_on_exception =>  "_0591_rubyexception"
      }
    }

Adding it to the next commit and closing this issue

How could we replicate the issue?
Just installing HELK and passing Windows Security event logs that contain process_id and process_parent_id (event 4688).

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Linux Ubuntu

Any additional context?

winlogbeat=UNKNOWN_TOPIC_OR_PARTITION

Hi,
I installed the HELK stack with Docker for Windows (didn't use the shell script). I use Sysmon for collecting the Windows logs, and the Winlogbeat service to forward the logs to the Kafka service. I can connect to my local host and see the Kibana dashboard, but I can't see the logs coming in. I checked the logs produced by the Winlogbeat service (see picture 2); we can see that there is something being sent. I also checked the logs produced by the Kafka service to see if anything is coming in (using sudo /opt/helk/kafka/kafka_2.11-1.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.64.131:9092 --topic winlogbeat --from-beginning inside the container) and, again, something seems to be received, but there is an error message: unknown topic or partition.

Docker ps: (screenshot attached)

Logs produced by Winlogbeat: (screenshot attached)

Logs produced by the Kafka service (inside the container): (screenshot attached)

Config of winlogbeat (notice that I don't have the port 9093 used in the docker ps screenshot): (screenshot attached)

Thanks.

*I'm on Windows 10, using Hyper-V and Docker Compose.
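UNKNOWN_TOPIC_OR_PARTITION suggests the winlogbeat topic was never created on the broker. A hedged check from inside the Kafka container, reusing the paths from this report (the zookeeper address and topic parameters are assumptions):

# List existing topics; 'winlogbeat' should be among them
/opt/helk/kafka/kafka_2.11-1.1.0/bin/kafka-topics.sh --zookeeper helk-zookeeper:2181 --list
# If it is missing, create it manually (partition/replication values are illustrative)
/opt/helk/kafka/kafka_2.11-1.1.0/bin/kafka-topics.sh --zookeeper helk-zookeeper:2181 \
  --create --topic winlogbeat --partitions 1 --replication-factor 1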

Install without docker?

Any chance the bash native install option will come back? It looks like you removed it in the latest version?

I can't use the docker version (without some modding) as I've got a load of IP overlaps, plus I'd really prefer it running natively 👍

Build functionality from DeepBlueCLI v2 scripts into the Logstash conf

Hi, thanks for the great work you have shared here!

Do you think it's feasible to incorporate into Logstash the "behavior" checks that DeepBlue CLI does via PowerShell parsing of the evt files?

You are grabbing the data correctly (same event IDs) and it's being split up correctly also (your payload field). Can you, from here, take the data and run the checks that Eric Conrad has in the code below?

https://github.com/sans-blue-team/DeepBlueCLI

..... copy/pasted relevant sections from the scripts for example:

function Check-Command(){
    $text=""
    $base64=""
    # Check to see if command is whitelisted
    foreach ($entry in $whitelist) {
        if ($commandline -Match $entry.regex) {
            # Command is whitelisted, return nothing
            return
        }
    }
    if ($commandline.length -gt $minlength){
        $text += "Long Command Line: greater than $minlength bytes`n"
    }
    $text += (Check-Obfu $commandline)
    $text += (Check-Regex $commandline 0)
    $text += (Check-Creator $commandline $creator)
    # Check for base64 encoded function, decode and print if found (add to a new field in ELK?)
    # This section is highly use case specific, other methods of base64 encoding and/or compressing may evade these checks
    if ($commandline -Match "\-enc.*[A-Za-z0-9/+=]{100}"){
        $base64= $commandline -Replace "^.* \-Enc(odedCommand)? ",""
    }
    ElseIf ($commandline -Match ":FromBase64String\("){
        $base64 = $commandline -Replace "^.*:FromBase64String\(\'*",""
        $base64 = $base64 -Replace "\'.*$",""
    }
    if ($base64){
        if ($commandline -Match "Compression.GzipStream.*Decompress"){
            # Metasploit-style compressed and base64-encoded function. Uncompress it.
            $decoded=New-Object IO.MemoryStream(,[Convert]::FromBase64String($base64))
            $uncompressed=(New-Object IO.StreamReader(((New-Object IO.Compression.GzipStream($decoded,[IO.Compression.CompressionMode]::Decompress))),[Text.Encoding]::ASCII)).ReadToEnd()
            $obj.Decoded=$uncompressed
            $text += "Base64-encoded and compressed function`n"
        }
        else{
            $decoded = [System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($base64))
            $obj.Decoded=$decoded
            $text += "Base64-encoded function`n"
            $text += (Check-Obfu $decoded)
            $text += (Check-Regex $decoded 0)
        }
    }
    if ($text){
        if ($servicecmd){
            $obj.Message = "Suspicious Service Command"
            $obj.Results = "Service name: $servicename`n"
        }
        Else{
            $obj.Message = "Suspicious Command Line"
        }
        $obj.Command = $commandline
        $obj.Results += $text
        Write-Output $obj
    }
    return
}

function Check-Regex($string,$type){
$regextext="" # Local variable for return output
foreach ($regex in $regexes){
if ($regex.Type -eq $type) { # Type is 0 for Commands, 1 for services. Set in regexes.csv
if ($string -Match $regex.regex) {
$regextext += $regex.String + "`n"
}
}
}
#if ($regextext){
# $regextext = $regextext.Substring(0,$regextext.Length-1) # Remove final newline.
#}
return $regextext
}

function Check-Obfu($string){
    # Check for special characters in the command. Inspired by Invoke-Obfuscation: https://twitter.com/danielhbohannon/status/778268820242825216
    #
    $obfutext=""   # Local variable for return output
    $lowercasestring=$string.ToLower()
    $length=$lowercasestring.length
    $noalphastring = $lowercasestring -replace "[a-z0-9/;:|.]"
    $nobinarystring = $lowercasestring -replace "[01]" # To catch binary encoding
    # Calculate the percent alphanumeric/common symbols
    if ($length -gt 0){
        $percent=(($length-$noalphastring.length)/$length)
        # Adjust minpercent for very short commands, to avoid triggering short warnings
        if (($length/100) -lt $minpercent){
            $minpercent=($length/100)
        }
        if ($percent -lt $minpercent){
            $percent = "{0:P0}" -f $percent # Convert to a percent
            $obfutext += "Possible command obfuscation: only $percent alphanumeric and common symbols`n"
        }
        # Calculate the percent of binary characters
        $percent=(($nobinarystring.length-$length/$length)/$length)
        $binarypercent = 1-$percent
        if ($binarypercent -gt $maxbinary){
            #$binarypercent = 1-$percent
            $binarypercent = "{0:P0}" -f $binarypercent # Convert to a percent
            $obfutext += "Possible command obfuscation: $binarypercent zeroes and ones (possible numeric or binary encoding)`n"
        }
    }
    return $obfutext
}

function Check-Creator($command,$creator){
    $creatortext=""   # Local variable for return output
    if ($creator){
        if ($command -Match "powershell"){
            if ($creator -Match "PSEXESVC"){
                $creatortext += "PowerShell launched via PsExec: $creator`n"
            }
            ElseIf($creator -Match "WmiPrvSE"){
                $creatortext += "PowerShell launched via WMI: $creator`n"
            }
        }
    }
    return $creatortext
}

Could not build HELK via docker-compose on Ubuntu 16.04.4 LTS

Hello

Getting errors during installation on Ubuntu 16.04.4 LTS Server:

**********************************************
**          HELK - THE HUNTING ELK          **
**                                          **
** Author: Roberto Rodriguez (@Cyb3rWard0g) **
** HELK build version: 0.9 (Alpha)          **
** HELK ELK version: 6.2.4                  **
** License: BSD 3-Clause                    **
**********************************************

[HELK-INSTALLATION-INFO] HELK being hosted on a Linux box
[HELK-INSTALLATION-INFO] Available Memory: 14
[HELK-INSTALLATION-INFO] Available Disk: 95
[HELK-INSTALLATION-INFO] Obtaining current host IP..
[HELK-INSTALLATION-INFO] Set HELK IP. Default value is your current IP: x.x.x.x
[HELK-INSTALLATION-INFO] HELK IP set to x.x.x.x
[HELK-INSTALLATION-INFO] HELK identified Linux as the system kernel
[HELK-INSTALLATION-INFO] Checking distribution list and version
[HELK-INSTALLATION-INFO] You're using ubuntu version xenial
[HELK-INSTALLATION-INFO] Docker already installed
[HELK-INSTALLATION-INFO] Docker-compose already installed
[HELK-INSTALLATION-INFO] Dockerizing HELK..
[HELK-INSTALLATION-INFO] Checking local vm.max_map_count variable and setting it to 262144
[HELK-INSTALLATION-INFO] Setting KAFKA ADVERTISED_LISTENER value...
[HELK-INSTALLATION-INFO] Setting ES_JAVA_OPTS value...
[HELK-INSTALLATION-INFO] Building HELK via docker-compose
 * ERROR: Could not build HELK via docker-compose (Error Code: 1).
get more details in /var/log/helk-install.log locally
Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/

If I remove "version", then I get the following errors:

The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for volumes: 'esdata'
Unsupported config option for networks: 'helk'
Unsupported config option for services: 'helk-elasticsearch'

Please advise.

Thanks in advance
TK
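Both errors are the classic signature of a docker-compose binary too old for the compose file format HELK ships (v3 support arrived in docker-compose 1.13.0). A hedged upgrade sketch using the official release binary; 1.21.2 is just an example version, pin whatever you need:

docker-compose version   # anything below 1.13 predates the v3 file format
sudo curl -L "https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose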

event_id 4624 from Security logs reports a grokparsefailure

Forwarding data from WEC via Winlogbeat, and I get a grok error on this event_id in Kibana.

JSON output from Kibana; I cannot spot the error :(

{
  "_index": "logs-endpoint-winevent-security-2018.05.17",
  "_type": "doc",
  "_id": "1580881363",
  "_version": 1,
  "_score": null,
  "_source": {
    "opcode": "Info",
    "@timestamp": "2018-05-17T10:25:47.956Z",
    "user_domain": "XXXXX",
    "type": "wineventlog",
    "message": "An account was successfully logged on.\n\nSubject:\n\tSecurity ID:\t\tS-1-0-0\n\tAccount Name:\t\t-\n\tAccount Domain:\t\t-\n\tLogon ID:\t\t0x0\n\nLogon Information:\n\tLogon Type:\t\t3\n\tRestricted Admin Mode:\t-\n\tVirtual Account:\t\tNo\n\tElevated Token:\t\tNo\n\nImpersonation Level:\t\tIdentification\n\nNew Logon:\n\tSecurity ID:\t\tS-1-5-21-2007484102-1456041316-233718849-37790\n\tAccount Name:\t\tSRVP00002$\n\tAccount Domain:\t\tDAC.LOCAL\n\tLogon ID:\t\t0x665F7DB\n\tLinked Logon ID:\t\t0x0\n\tNetwork Account Name:\t-\n\tNetwork Account Domain:\t-\n\tLogon GUID:\t\t{A6BD0686-17D4-384B-DA6D-5A81CDF698F5}\n\nProcess Information:\n\tProcess ID:\t\t0x0\n\tProcess Name:\t\t-\n\nNetwork Information:\n\tWorkstation Name:\t-\n\tSource Network Address:\t10.200.28.241\n\tSource Port:\t\t62536\n\nDetailed Authentication Information:\n\tLogon Process:\t\tKerberos\n\tAuthentication Package:\tKerberos\n\tTransited Services:\t-\n\tPackage Name (NTLM only):\t-\n\tKey Length:\t\t0\n\nThis event is generated when a logon session is created. It is generated on the computer that was accessed.\n\nThe subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.\n\nThe logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).\n\nThe New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.\n\nThe network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.\n\nThe impersonation level field indicates the extent to which a process in the logon session can impersonate.\n\nThe authentication information fields provide detailed information about this specific logon request.\n\t- Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.\n\t- Transited services indicate which intermediate services have participated in this logon request.\n\t- Package name indicates which sub-protocol was used among the NTLM protocols.\n\t- Key length indicates the length of the generated session key. This will be 0 if no session key was requested.",
    "event_data": {},
    "level": "Information",
    "user_sid": "XXXXXXXXXXXXXXXXXXXXXXXX",
    "source_name": "Microsoft-Windows-Security-Auditing",
    "logon_type": "3",
    "logon_restricted_adminmode": "-",
    "user_networkaccount_domain": "-",
    "host_src_name": "-",
    "ip_src": "XXX.XXX.XXX.XXX",
    "logon_key_length": "0",
    "@Version": "1",
    "user_reporter_domain": "-",
    "user_logon_linkedid": "0x0",
    "logon_authentication_package": "Kerberos",
    "provider_guid": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "thread_id": 15204,
    "port_src_number": 62536,
    "process_id": 0,
    "version": 2,
    "logon_elevated_token": "%%1843",
    "task": "Logon",
    "keywords": [
      "Audit Success"
    ],
    "log_name": "Security",
    "user_reporter_name": "-",
    "logon_package_name": "-",
    "logon_virtual_account": "%%1843",
    "logon_process_name": "Kerberos",
    "activity_id": "{DC3C281D-ED96-0000-2128-3CDC96EDD301}",
    "user_networkaccount_name": "-",
    "logon_transmitted_services": "-",
    "reporter_logon_id": "0x0",
    "process_path": "-",
    "user_logon_guid": "XXXXXXXXXXXXXXXXXXXXXXXXX5",
    "user_reporter_sid": "S-1-0-0",
    "event_id": 4624,
    "tags": [
      "_grokparsefailure",
      "_parsefailure"
    ],
    "impersonation_level": "%%1832",
    "user_logon_id": 107345883,
    "user_name": "SRVP00002$",
    "beat": {
      "name": "SRVPXXX",
      "version": "6.0.0",
      "hostname": "SRXXXXXXXXX"
    },
    "host_name": "srvpXXXXXXXXX",
    "record_number": "35697580"
  },
  "fields": {
    "@timestamp": [
      "2018-05-17T10:25:47.956Z"
    ]
  },
  "sort": [
    1526552747956
  ]
}

Overlapping Networks

When trying to install HELK with sudo ./helk_install.sh, the script errors when attempting to deploy docker-compose.yml: * ERROR: Could not run HELK via docker-compose (Error Code: 1). After investigating the logs: Creating network "helk_helk" with driver "bridge" cannot create network 8320bb7d24b22f724839bb65351a3ee058a06a8dc5bb549e14e060cbf5aa5746 (br-8320bb7d24b2): conflicts with network 2e2869986edfcee84d6dd7208ac357a032cc2da1d70f45d5141ba1c49e9f4d37 (br-2e2869986edf): networks have overlapping IPv4

After changing the networks to 10.18.0.X and the subnet to 10.18.0.0/24, the stack comes up. I am not entirely sure if the network overlap error is due to the wide subnet, but this temporarily repairs the issue.
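A hedged way to spot the collision before re-running the install (the helk_helk network name comes from the error above):

# Show each bridge network and the subnet it claims
docker network ls
for n in $(docker network ls -q); do
  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n"
done
# Remove a stale HELK network left over from a previous run, if present
docker network rm helk_helk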

Adding Packetbeat?

Using this more as a learning tool at home right now...hah, but pretty close to trying it on a test production site.

I'd really like to get Packetbeat on it to play around with that too... anyone done that yet?

docker container mem limit

Hi, I've tried to raise the memory limit on the containers but nothing works. Do I have to fix it in the docker-compose.yml file too?

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
0c0f3698ae97 helk-spark-worker2 0.09% 89.47MiB / 983.3MiB 9.10% 14.9kB / 16.2kB 603MB / 26.4MB 27
451397e84543 helk-spark-worker 0.10% 38.79MiB / 983.3MiB 3.94% 13.6kB / 18.1kB 356MB / 26.7MB 27
9a59a4919c83 helk-kafka-broker2 7.47% 533.1MiB / 983.3MiB 54.21% 7.37MB / 7.04MB 2.21GB / 41.6MB 73
05e189e07f3f helk-kafka-broker 1.59% 501MiB / 983.3MiB 50.95% 6.89MB / 208kB 2.26GB / 37.6MB 71
8607ccf98c6e helk-jupyter 0.00% 68KiB / 983.3MiB 0.01% 9.51kB / 0B 189MB / 7.39MB 1
fa59aac38e36 helk-nginx 0.02% 3.324MiB / 983.3MiB 0.34% 48.1kB / 40.1kB 328MB / 21.4MB 7
daf2399a476c helk-zookeeper 0.14% 24.29MiB / 983.3MiB 2.47% 89kB / 93.9kB 148MB / 13.8MB 28
ef6c1d43cf8c helk-spark-master 0.11% 73.64MiB / 983.3MiB 7.49% 41.8kB / 7.76kB 419MB / 22.5MB 37
144720494d8a helk-logstash 106.50% 1.144GiB / 983.3MiB 119.10% 14.6kB / 3.3kB 1.29GB / 53.1MB 36
dcf41cb7edd5 helk-kibana 1.62% 275.9MiB / 983.3MiB 28.06% 476kB / 329kB 486MB / 31.3MB 12
954c7d1aba59 helk-elasticsearch 1.20% 5.575GiB / 983.3MiB 580.56% 311kB / 459kB 980MB / 14.6MB 76
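Per-container limits are normally declared in docker-compose.yml (e.g. mem_limit in the v2 format), but as a quick hedged test you can adjust a running container in place; the 2g value below is illustrative:

# Raise helk-logstash's memory ceiling on the live container
sudo docker update --memory 2g --memory-swap -1 helk-logstash
sudo docker stats --no-stream helk-logstash   # confirm the new LIMIT column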

Could not locate that index-pattern-field (Various Patterns)

I just installed HELK yesterday and I am having some issues with index patterns. I have one host reporting to HELK. I am getting results, but there are some fields that don't show that I'm pretty sure should have data. I'll attach a screenshot of the Sysmon dashboard. The only issue I can see on the server is the following:
root@6c029fbc34fe:/opt/helk/scripts# sudo /opt/helk/kafka/kafka_2.11-1.0.0/bin/zookeeper-shell.sh ls /brokers/ids
Connecting to ls
Exception in thread "main" java.net.UnknownHostException: ls: Temporary failure in name resolution
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
        at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
        at java.net.InetAddress.getAllByName(InetAddress.java:1192)
        at java.net.InetAddress.getAllByName(InetAddress.java:1126)
        at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
        at org.apache.zookeeper.ZooKeeperMain.connectToZK(ZooKeeperMain.java:281)
        at org.apache.zookeeper.ZooKeeperMain.<init>(ZooKeeperMain.java:296)
        at org.apache.zookeeper.ZooKeeperMain.main(ZooKeeperMain.java:289)

I'm not sure how to resolve that issue. I did the install using Ubuntu 16.04 Server and followed the current install instructions on your GitHub page. (screenshot attached)
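The UnknownHostException is the giveaway: zookeeper-shell.sh takes the ZooKeeper connect string as its first argument, so ls was parsed as a hostname. A hedged corrected invocation (the helk-zookeeper:2181 address is an assumption about this deployment):

# Connect string first, then the command to run
sudo /opt/helk/kafka/kafka_2.11-1.0.0/bin/zookeeper-shell.sh helk-zookeeper:2181 ls /brokers/ids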

helk_install.sh won't install with a drive larger than 1TB on 16.04

** HELK ELK version: 6.2.4 **
** License: BSD 3-Clause **


[HELK-INSTALLATION-INFO] HELK being hosted on a Linux box
[HELK-INSTALLATION-ERROR] YOU DO NOT HAVE ENOUGH AVAILABLE MEMORY OR DISK SPACE
[HELK-INSTALLATION-ERROR] Available Memory: 61
[HELK-INSTALLATION-ERROR] Available Disk: 2

The script sees 1.9 instead of 1900GB and won't continue the install; I resized as a workaround instead of modifying the script (so this is mostly informational).

helk@helk:~/HELK$ df -h | awk '$NF=="/"{printf "%.f\t\t", $4}'

2

helk@helk:~/HELK$ df -h

Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 8.6M 6.3G 1% /run
/dev/mapper/atlas--helk--01--vg-root 1.9T 1.8G 1.8T 1% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 472M 58M 390M 13% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/1000
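The root cause is that the check parses df -h output, so a 1.9T root filesystem yields the string 1.9 and rounds to 2. A hedged alternative that always reports whole gigabytes regardless of drive size:

# Force gigabyte units instead of human-readable suffixes (T, G, M)
df -BG --output=avail / | tail -1 | tr -dc '0-9'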

No indices match pattern "powershell-*" and wmiprvse.exe

Hi,

I have a two questions:

  1. All of the below are working fine and are discoverable apart from powershell-*

powershell-*
sysmon-*
winevent-application*
winevent-security-*
winevent-system-*

No matching indices found: No indices match pattern "powershell-*"
Screenshot with error message: https://imgur.com/Ox7ePL1

  2. I've noticed wmiprvse.exe is generating lots of data. I just installed Win 7 on my VM endpoint and configured Sysmon and Winlogbeat using your blog post about the hunting lab. Is this normal? It looks like it generated almost 500,000 entries within an hour.

Screenshot - https://imgur.com/1T5zOMJ

Thanks!
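For the first question, it may help to confirm which index names Elasticsearch actually holds before creating the pattern; a hedged check against the cat API (the host and wildcard are assumptions):

# List any PowerShell-related indices Elasticsearch actually holds
curl -s 'http://localhost:9200/_cat/indices/*powershell*?v'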

Sysmon: tried to parse field [user] as object, but found a concrete value

Describe the problem
After last PR specifically this update: e70eafc , I noticed the following error messages in my logstash container:

[2018-05-31T05:22:35,741][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"2909338251", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x552c1397>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"2909338251", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:35,745][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3915583766", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x7aeff589>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"3915583766", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:35,749][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"972395887", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x1a5c86b>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"972395887", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:35,758][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1068475737", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x508a96b9>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"1068475737", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:37,287][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1346072995", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0xd22f716>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"1346072995", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:37,484][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1346072995", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3d3fc5e0>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"1346072995", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:41,424][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3546335163", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x109a6a35>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"3546335163", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}
[2018-05-31T05:22:41,509][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3546335163", :_index=>"logs-endpoint-winevent-sysmon-2018.05.31", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3776ba84>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-sysmon-2018.05.31", "_type"=>"doc", "_id"=>"3546335163", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [user] tried to parse field [user] as object, but found a concrete value"}}}}

What steps did you take trying to fix the issue?
I updated this line out of the sysmon filter config in the pipeline and renamed [event_data][User] to "user_account":
https://github.com/Cyb3rWard0g/HELK/blob/master/helk-logstash/pipeline/11-winevent-sysmon-filter.conf#L301
How could we replicate the issue?

sudo ./helk_install

send logs to the HELK and then monitor for logstash logs

sudo docker logs --follow helk-logstash

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?

What OS are you using to host the HELK?

Any additional context?
other log files, pictures, etc.

Add custom logstash configs at build-time to support new data sources

Describe the problem
Logstash doesn't have an easy way to add new data because the port isn't exposed in the docker container and I can't add a custom config.

What steps did you take trying to fix the issue?
Lots of Googling LOL

How could we replicate the issue?
Fresh installation of HELK then try to pump data to port 3515

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
HELK 6.2.4-050318

What OS are you using to host the HELK?
Ubuntu 16.04

Any additional context?
other log files, pictures, etc.
Trying to add emails into HELK similar to the article: https://outflank.nl/blog/2018/01/23/public-password-dumps-in-elk/
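Until the build supports this natively, a hedged stopgap is to push a config into the running container and restart it. The pipeline path below is the stock Logstash image layout and may differ in HELK, and the filename is hypothetical:

# Copy a custom input/filter config into the container's pipeline directory
sudo docker cp 20-custom-tcp-input.conf helk-logstash:/usr/share/logstash/pipeline/
sudo docker restart helk-logstash
# Note: receiving on a new port (e.g. 3515) also requires exposing it in docker-compose.yml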

ELK to Helk? Straight from your blog...

Awww, I used your blog awhile ago to build ELK, complete with reverse proxy. How about some steps to go from ELK to HELK? Awesome work dude, you're one of the few I have set to receive Twitter notifications!!

Signed.

Lazy

Error during installation of HELK

Hi,

I've created a new Ubuntu-based VM and I am trying to install HELK; however, I am experiencing a few errors. Do I need to install anything else prior to the installation process? The README file doesn't specify that. Please take a look:

sudo ./helk_install.sh


** HELK - M E N U **


** Author: Roberto Rodriguez (@Cyb3rWard0g) **
** HELK build version: 0.9 (BETA) **
** HELK ELK version: 6.x **
** License: BSD 3-Clause **


  1. Pull the latest HELK image from DockerHub
  2. Build the HELK image from local Dockerfile
  3. Install the HELK from local bash script
  4. Exit

[HELK-INSTALLATION-INFO] Enter choice [ 1 - 4] 1
[HELK-DOCKER-INSTALLATION-INFO] Installing docker first
[HELK-DOCKER-INSTALLATION-INFO] This is a debian-based system..
[HELK-DOCKER-INSTALLATION-INFO] Installing Docker..
[HELK-DOCKER-INSTALLATION-INFO] Installing updates..
[HELK-DOCKER-INSTALLATION-INFO] Adding the GPG key for the official Docker repository to the system..
[HELK-DOCKER-INSTALLATION-INFO] Installing updates..
[HELK-DOCKER-INSTALLATION-INFO] Adding the docker repository to APT sources..
***** ERROR: Could not add the docker repository to APT sources.. (Error Code: 127).****
[HELK-DOCKER-INSTALLATION-INFO] Updating the package database with the Docker packages from the newly added repo..
[HELK-DOCKER-INSTALLATION-INFO] Making sure that Docker is being installed from the Docker repo and not the default Ubuntu 16.04 repo..
[HELK-DOCKER-INSTALLATION-INFO] Installing Docker..
*** ERROR: Could not install Docker.. (Error Code: 100).**
[HELK-DOCKER-INSTALLATION-INFO] Docker has been successfully installed..
[HELK-DOCKER-INSTALLATION-INFO] Docker Version:
scripts/helk_linux_deb_docker_install.sh: line 93: docker: command not found
[HELK-DOCKER-INSTALLATION-INFO] Checking local vm.max_map_count variable
[HELK-DOCKER-INSTALLATION-INFO] Building the HELK container from source..
./helk_install.sh: line 48: docker: command not found
[HELK-DOCKER-INSTALLATION-INFO] Running the HELK container in the background..
./helk_install.sh: line 50: docker: command not found
[HELK-DOCKER-INSTALLATION-INFO] Waiting for Jupyter Server to start..
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
./helk_install.sh: line 54: curl: command not found
^C
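The cascade of "command not found" errors suggests the script's own prerequisites were missing before it ever got to Docker. A hedged pre-flight install of the tools the log complains about:

# curl is missing outright, and add-apt-repository (used to add the Docker repo)
# comes from software-properties-common
sudo apt-get update
sudo apt-get install -y curl software-properties-common apt-transport-https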

Getting an error running "Create a Spark RDD on top of Elasticsearch (logs-endpoint-winevent-sysmon-* as source)"

I've been all over everywhere else trying to figure this out, but so much of it is specific to standard ELK, so I'm wondering if I need to do something specific with HELK to fix this. Please help... thanks in advance.

es_rdd = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf={
        "es.resource": "logs-endpoint-winevent-sysmon-*/doc",
        "es.nodes": "10.0.1.190"
    })
es_rdd.first()

gives the following error:

---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
in ()
5 conf={
6 "es.resource" : "logs-endpoint-winevent-sysmon-*/doc",
----> 7 "es.nodes" : "10.0.1.190"
8 })
9 es_rdd.first()

/opt/helk/spark/spark-2.3.0-bin-hadoop2.7/python/pyspark/context.py in newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
703 jrdd = self._jvm.PythonRDD.newAPIHadoopRDD(self._jsc, inputFormatClass, keyClass,
704 valueClass, keyConverter, valueConverter,
--> 705 jconf, batchSize)
706 return RDD(jrdd, self)
707

/opt/helk/spark/spark-2.3.0-bin-hadoop2.7/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py in call(self, *args)
1158 answer = self.gateway_client.send_command(command)
1159 return_value = get_return_value(
-> 1160 answer, self.gateway_client, self.target_id, self.name)
1161
1162 for temp_arg in temp_args:

/opt/helk/spark/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()

/opt/helk/spark/spark-2.3.0-bin-hadoop2.7/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
318 raise Py4JJavaError(
319 "An error occurred while calling {0}{1}{2}.\n".
--> 320 format(target_id, ".", name), value)
321 else:
322 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[10.0.1.190:9200]]
at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:149)
at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:380)
at org.elasticsearch.hadoop.rest.RestClient.executeNotFoundAllowed(RestClient.java:388)
at org.elasticsearch.hadoop.rest.RestClient.exists(RestClient.java:484)
at org.elasticsearch.hadoop.rest.RestClient.indexExists(RestClient.java:479)
at org.elasticsearch.hadoop.rest.InitializationUtils.checkIndexStatus(InitializationUtils.java:73)
at org.elasticsearch.hadoop.rest.InitializationUtils.validateSettingsForReading(InitializationUtils.java:271)
at org.elasticsearch.hadoop.rest.RestService.findPartitions(RestService.java:218)
at org.elasticsearch.hadoop.mr.EsInputFormat.getSplits(EsInputFormat.java:405)
at org.elasticsearch.hadoop.mr.EsInputFormat.getSplits(EsInputFormat.java:386)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:127)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1337)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.take(RDD.scala:1331)
at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:239)
at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:282)
at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)

Increase overall security of HELK?

I'll start by expressing my gratitude for such an awesome project; the day I found this repo my IT life changed forever.

I'm wondering if you have any plans to implement HTTPS and more flexibility regarding usernames/passwords?

This is not so much of an "issue", more of a request...

failed to parse [powershell.pipeline.id]

Describe the problem

Noticed that the powershell parser was throwing this warning message

[2018-06-11T05:30:51,459][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1410602107", :_index=>"logs-endpoint-winevent-powershell-2018.06.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x52903d4>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-powershell-2018.06.11", "_type"=>"doc", "_id"=>"1410602107", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [powershell.pipeline.id]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"CommandName=\""}}}}}

What steps did you take trying to fix the issue?
I haven't done anything yet ;)

How could we replicate the issue?
Install HELK and ingest powershell logs

If you are having issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Linux Ubuntu

Any additional context?
other log files, pictures, etc.

no matching indices found

Getting this error for all indices on a fresh pull. I have a feeling it may be due to the Sysmon version on the monitored systems... I'm using the newest version (7.01) with the SwiftOnSecurity config with the mimikatz addition from ion... any thoughts would be appreciated.

use WEF (windows event forwarding)

Hi, more of a suggestion than an issue: we have deployed Sysmon on endpoints/servers and use Windows Event Forwarding. On the Windows Event Collector server I installed one Winlogbeat; this handles everything and it scales very well.

SparkContext doesn't appear to be present after install on ESXi

I installed this in AWS the other day with no issues. Trying to install on ESXi in a virtual lab and I've hit a snag twice in a row now where the Spark UI doesn't come up. I don't think it's an environmental issue - DNS, HTTP, and HTTPS all work. Did not see any errors whatsoever in the helk-install log.

Installed on an Ubuntu Server 16.04.2 amd64 xenial VM. Pulled the image from DockerHub and ran it per the install script. I'm getting the Kibana UI and Jupyter notebook (ports 80 and 8880 respectively) but nothing on 4040 for the Spark UI. I looked in the container and noticed the ESXi deployment was missing this process:

AWS: (screenshot attached)

I tried to run the command verbatim on my ESXi deployment inside the container and got this stack trace: (screenshot attached)

I don't have much experience with Spark so there's a chance I'm way off. I guess my question is has anyone had success installing on a VM in ESXI? Thanks!

Help | Problem having winlogbeat send logs to kafka

Hi,
I have been struggling with getting Winlogbeat to send to Kafka.

  • I have installed HELK on Ubuntu Server, and am working with Winlogbeat on Windows 7 SP1.
  • I can reach the HELK server from the Windows machine.
  • The HELK seems to run perfectly.
  • I have applied the suggested winlogbeat.yml file by running:
    .\winlogbeat.exe -c winlogbeat.yml -e

I am getting the following errors:

2018-02-13T12:19:12-08:00 INFO Metrics logging every 30s
2018-02-13T12:19:12-08:00 INFO Beat UUID: dbab4128-3ec4-46da-ae07-496c91d0dc27
2018-02-13T12:19:12-08:00 INFO Setup Beat: winlogbeat; Version: 6.1.2
2018-02-13T12:19:12-08:00 INFO Beat name: machine
2018-02-13T12:19:12-08:00 INFO State will be read from and persisted to C:\ProgramData\winlogbeat\.winlogbeat.yml
2018-02-13T12:19:12-08:00 INFO winlogbeat start running.
2018-02-13T12:19:13-08:00 INFO kafka message: [Initializing new client]
2018-02-13T12:19:13-08:00 INFO client/metadata fetching metadata for all topics from broker [[172.18.39.25:9092]]

2018-02-13T12:19:13-08:00 INFO Connected to broker at [[172.18.39.25:9092]] (unregistered)

2018-02-13T12:19:42-08:00 INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30075 beat.memstats.gc_next=32281296 beat.memstats.memory_alloc=27280256 beat.memstats.memory_total=126464856 libbeat.config.module.running=0 libbeat.output.type=kafka libbeat.outputs.kafka.bytes_write=23 libbeat.pipeline.clients=4 libbeat.pipeline.events.active=4119 libbeat.pipeline.events.published=4116 libbeat.pipeline.events.total=4119 msg_file_cache.ApplicationHits=142 msg_file_cache.ApplicationMisses=21 msg_file_cache.ApplicationSize=21 msg_file_cache.Microsoft-windows-sysmon/operationalHits=1203 msg_file_cache.Microsoft-windows-sysmon/operationalMisses=1 msg_file_cache.Microsoft-windows-sysmon/operationalSize=1 msg_file_cache.SecurityHits=1399 msg_file_cache.SecurityMisses=2 msg_file_cache.SecuritySize=2 msg_file_cache.SystemHits=1576 msg_file_cache.SystemMisses=24 msg_file_cache.SystemSize=24 uptime={"server_time":"2018-02-13T20:19:42.6626885Z","start_time":"2018-02-13T20:19:12.587607Z","uptime":"30.0750815s","uptime_ms":"30075081"}
2018-02-13T12:19:43-08:00 INFO kafka message: [client/metadata got error from broker while fetching metadata: read tcp 172.18.39.2:49197->172.18.39.25:9092: i/o timeout]
2018-02-13T12:19:43-08:00 INFO Closed connection to broker [[172.18.39.25:9092]]

2018-02-13T12:19:43-08:00 INFO client/metadata fetching metadata for all topics from broker [[172.18.39.25:9093]]

2018-02-13T12:19:44-08:00 INFO Connected to broker at [[172.18.39.25:9093]] (unregistered)

2018-02-13T12:19:44-08:00 INFO kafka message: [Successfully initialized new client]
2018-02-13T12:19:44-08:00 INFO client/metadata fetching metadata for [[[winlogbeat] 172.18.39.25:9093]] from broker %!s(MISSING)

2018-02-13T12:19:44-08:00 INFO kafka message: [client/metadata found some partitions to be leaderless]
2018-02-13T12:19:44-08:00 INFO client/metadata retrying after [[250 3]]ms... (%!d(MISSING) attempts remaining)

2018-02-13T12:19:44-08:00 INFO client/metadata fetching metadata for [[[winlogbeat] 172.18.39.25:9093]] from broker %!s(MISSING)

2018-02-13T12:19:44-08:00 INFO kafka message: [client/metadata found some partitions to be leaderless]
2018-02-13T12:19:44-08:00 INFO client/metadata retrying after [[250 2]]ms... (%!d(MISSING) attempts remaining)

It goes like this forever..

Any help would be appreciated! Thanks!
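
The pattern above (connecting to the broker succeeds, but the metadata fetch times out and partitions show up as leaderless) often points at the broker advertising an address the client cannot reach, or at one of the two brokers being down so the partition leader is unavailable. Whether that is the root cause here would need broker-side logs, but as a sketch, the relevant stock-Kafka setting lives in server.properties (where HELK surfaces it is an assumption):

# advertise an address the Winlogbeat host can actually reach
advertised.listeners=PLAINTEXT://172.18.39.25:9092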

failed to parse [target_process_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x4\""

Describe the problem
I forgot to also parse the following hex value in Windows Security event logs:

[2018-06-11T06:14:31,875][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"1287809585", :_index=>"logs-endpoint-winevent-security-2018.06.11", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x4b47e74>], :response=>{"index"=>{"_index"=>"logs-endpoint-winevent-security-2018.06.11", "_type"=>"doc", "_id"=>"1287809585", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [target_process_id]", "caused_by"=>{"type"=>"number_format_exception", "reason"=>"For input string: \"0x4\""}}}}}

What steps did you take trying to fix the issue?
I applied the same fix as in an old issue, #72; a sketch of that approach is below.
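
For reference, a minimal Logstash filter sketch in the spirit of the #72 fix: convert the hex string to an integer before it reaches Elasticsearch. The field name comes from the error above; which HELK config file actually gets patched is not shown here.

filter {
  if [target_process_id] =~ /^0x/ {
    ruby {
      # Ruby's String#hex handles the optional 0x prefix, e.g. "0x4" => 4
      code => "event.set('target_process_id', event.get('target_process_id').hex)"
    }
  }
}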

How could we replicate the issue?
Install HELK and send Windows Security logs

If you are having an issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Linux Ubuntu

Any additional context?

Must be root to run install script

Hey @Cyb3rWard0g! Currently, you must be the root user to run HELK_INSTALL.sh.

I believe this is avoidable with a few modifications, specifically around operations on Docker containers. If Docker is installed and configured properly, the current user will be a member of the docker group and can therefore perform container operations without invoking sudo; see the sketch after this note.

A few parts of the install script are a bit more problematic, but they should still be avoidable. I can submit some PRs with modifications if you think this is a good idea. I just feel icky running scripts as root.
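
For anyone wanting to avoid sudo in the meantime, the standard Docker post-install steps already cover the container-operations part (these are stock Docker commands, not HELK-specific):

sudo usermod -aG docker "$USER"   # add the current user to the docker group
newgrp docker                     # or log out and back in for the change to apply
docker ps                         # container operations should now work without sudo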

logs missing since last major update

Describe the problem
Prior to the major update commit on May 3, logs were available under /var/log/elasticsearch, /var/log/logstash, etc.
What steps did you take trying to fix the issue?
Not sure how best to fix it.
How could we replicate the issue?
Can't find the logs now.
If you are having an issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
latest
What OS are you using to host the HELK?
linux
Any additional context?
other log files, pictures, etc.
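
Not a fix, but since the May 3 update splits HELK across containers, the service logs now come out of Docker rather than /var/log. A sketch, assuming the post-update container names (only helk-logstash is confirmed elsewhere in these issues):

sudo docker ps --format '{{.Names}}'   # list the running HELK containers
sudo docker logs helk-logstash         # confirmed name
sudo docker logs helk-elasticsearch    # assumed name; adjust to what docker ps shows
sudo docker logs helk-kibana           # assumed name; adjust to what docker ps shows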

Logstash-Kafka Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member.

Describe the problem
I started to get the following error message:

[2018-06-11T03:40:54,698][WARN ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Synchronous auto-commit of offsets {winlogbeat-0=OffsetAndMetadata{offset=178360, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.

This is the Kafka input defined in Logstash:

input {
  kafka
  {
    bootstrap_servers => "helk-kafka-broker:9092,helk-kafka-broker2:9093"
    topics => ["winlogbeat"]
    decorate_events => true
    codec => "json"
    auto_offset_reset => "earliest"
    ############################# HELK Optimizing Latency #############################
    fetch_min_bytes => "1"
    ############################# HELK Optimizing Availability #############################
    session_timeout_ms => "6000"
  }
}

What steps did you take trying to fix the issue?
I googled some potential new configs, looked into the default configs for a Kafka consumer, and updated the Logstash Kafka input file to:

input {
  kafka
  {
    bootstrap_servers => "helk-kafka-broker:9092,helk-kafka-broker2:9093"
    topics => ["winlogbeat"]
    decorate_events => true
    codec => "json"
    auto_offset_reset => "earliest"
    ############################# HELK Optimizing Latency #############################
    fetch_min_bytes => "1"
    request_timeout_ms => "305000"
    ############################# HELK Optimizing Availability #############################
    session_timeout_ms => "10000"
    max_poll_records => "550"
    max_poll_interval_ms => "300000"
  }
}

I also increased the pipeline.batch.size property in Logstash from 300 to 550 events.
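
For context, this corresponds to the standard Logstash setting below; the setting name is stock Logstash, but the exact file location inside the HELK Logstash container is an assumption.

# logstash.yml
pipeline.batch.size: 550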

How could we replicate the issue?
Just install HELK and monitor the helk-logstash logs:

sudo docker logs --follow helk-logstash

If you are having an issue during the installation stage, please provide the HELK installation logs located at /var/log/helk-install.log

What version of HELK are you using?
6.2.4

What OS are you using to host the HELK?
Linux Ubuntu

Any additional context?
References: https://kafka.apache.org/documentation/#newconsumerconfigs
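
A useful way to watch this from the broker side is the stock consumer-groups tool, which shows partition assignments and lag for the logstash group. The container name and the Kafka install path inside it are assumptions; adjust both to your deployment.

sudo docker exec -ti helk-kafka-broker \
  /opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server helk-kafka-broker:9092 \
  --describe --group logstash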
