nfr's Introduction

Network Flight Recorder

NFR is a lightweight application which processes network traffic using the AlphaSOC Analytics Engine. NFR can monitor log files on disk (e.g. Microsoft DNS debug logs, Bro IDS logs) or run as a network sniffer under Linux to score traffic. Upon processing the data, alerts are presented in either JSON or CEF format for escalation via syslog.

Installation

Download NFR from the releases section. Once downloaded, run NFR as follows:

# nfr --help
Network Flight Recorder (NFR) is an application which captures network traffic
and provides deep analysis and alerting of suspicious events, identifying gaps
in your security controls, highlighting targeted attacks, and policy violations.

Usage:
  nfr [command] [argument]

Available Commands:
  account register       Generate an API key via the licensing server
  account reset [email]  Reset the API key associated with a given email address
  account status         Show the status of your AlphaSOC API key and license
  read [file]            Read network events from a PCAP file on disk
  start                  Start processing network events (inputs defined in config)
  version                Show the NFR binary version
  help                   Provides help and usage instructions

Use "nfr [command] --help" for more information about a given command.

Configuration

NFR expects to find its configuration file in /etc/nfr/config.yml. If you installed the Debian package, an example config.yml would have been installed for you in /etc/nfr. Otherwise, you can find the example config.yml file in the repository's root directory. The file defines the AlphaSOC Analytics Engine location and configuration, input preferences (e.g. log files to monitor), output preferences, and other variables. If you already have an AlphaSOC API key, update the file with your key and place it within the /etc/nfr/ directory.

If you are a new user, simply run nfr account register (as root) to create the file and generate an API key, e.g.

# nfr account register
Please provide your details to generate an AlphaSOC API key.
A valid email address is required for activation purposes.

By performing this request you agree to our Terms of Service and Privacy Policy
(https://www.alphasoc.com/terms-of-service)

Full Name: Joey Bag O'Donuts
Email: [email protected]

Success! The configuration has been written to /etc/nfr/config.yml
Next, check your email and click the verification link to activate your API key.

Processing events from the network

If you are running NFR under Linux, use the sniffer directive within /etc/nfr/config.yml to specify a network interface to monitor. To monitor interface eth1 you can use the configuration below.

  sniffer:
    enabled: true
    interface: eth1

Processing events from disk

Use the monitor directive within /etc/nfr/config.yml to actively read log files from disk. Bro IDS (Zeek) logs DNS, IP, and HTTP traffic, whereas Suricata only logs DNS traffic. To monitor Bro conn.log, dns.log, and http.log output, you can use this configuration:

monitor:
  - format: bro
    type: dns
    file: /path/to/dns.log
  - format: bro
    type: ip
    file: /path/to/conn.log
  - format: bro
    type: http
    file: /path/to/http.log

To process Suricata DNS output you would use:

monitor:
  - format: suricata
    type: dns
    file: /path/to/eve.json

Microsoft DNS (format: msdns) and BIND over syslog (format: syslog-named) are also supported at this time. Please contact [email protected] if you have a particular use case and wish to monitor a file format that is not listed here. If you wish to process events from a given PCAP file on disk, please use the read command when running NFR.
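
For example, a monitor entry for Microsoft DNS debug logs might look like the following (the file path is illustrative; point it at your own DNS debug log location):

monitor:
  - format: msdns
    type: dns
    file: C:\dns_logs\dns.log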

Processing events from Elasticsearch

Use the elastic directive within /etc/nfr/config.yml to retrieve telemetry from Elasticsearch. Both Elastic Cloud and local deployments are supported. For configuration details, see the comments in config.yml.

If your data is ECS-compliant, configuration is straightforward:

  elastic:
    enabled: true
    hosts:
      - localhost:9200
    # If authorization is needed:
    # api_key: ... # or:
    # username: admin
    # password: password

    searches:
      - event_type: dns
        indices:
          - filebeat-*
        index_schema: ecs
      - event_type: ip
        indices:
          - filebeat-*
        index_schema: ecs
      - event_type: http
        indices:
          - filebeat-*
        index_schema: ecs

Currently ECS, Graylog, and custom schemas are supported. For custom schemas you can define your own search terms and/or list fields that must be present in a document for it to be picked up by nfr for processing.

Under the hood, nfr periodically runs a search:

{
  "docvalue_fields": [
    {
      "field": "@timestamp", // field name defined in config
      "format": "strict_date_time"
    },
    {
      "field": "event.ingested", // field name defined in config
      "format": "strict_date_time"
    }
  ],
  "_source": [
    // configurable field names
    "source.ip",
    "source.port",
    "dns.question.name",
    "dns.question.type"
  ],
  "size": 100,
  "query": {
    "bool": {
      "must": [
        // configurable field names
        {"exists": {"field": "source.ip"}},
        {"exists": {"field": "dns.question.name"}},
        {"exists": {"field": "dns.question.type"}}
      ],
      "filter": [
        {
          // configurable filter term
          "term": {"tags": "zeek.dns"}
        },
        {
          "range": {
            // automatically inserted to handle pagination
            "event.ingested": {
              "gte": "2021-03-05T13:28:49.254Z"
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "event.ingested": "asc"
    }
  ],
  "pit": {
    "id": "w62xAwU..." // Every search runs inside Point-In-Time
  },
  "search_after": [
    1614950929254,
    "S8eTAngB14iTwI_2kzVm"
  ]
}

Monitoring scope

Use directives within /etc/nfr/scope.yml to define the monitoring scope. If you installed the Debian package, an example scope.yml would have been installed for you in /etc/nfr. Otherwise, you can find the example scope.yml file in the repository's root directory. Network traffic from the IP ranges within scope will be processed by the AlphaSOC Analytics Engine, and domains that are whitelisted (e.g. internal trusted domains) will be ignored. Adjust scope.yml to define the networks and systems that you wish to monitor, and the events to discard, e.g.

groups:
  private_network:
    label: "Private network"
    in_scope:
      - 10.0.0.0/8
      - 192.168.0.0/16
    out_scope:
      - 10.1.0.0/16
      - 10.2.0.254/32
    trusted_domains:
      - "*.example.com"
      - "*.alphasoc.net"
      - "google.com"
  public_network:
    label: "Public network"
    in_scope:
      - 131.1.0.0/16
  my_own_group:
    label: "Custom group"
    in_scope:
      - 131.2.0.0/16
    trusted_domains:
      - "site.net"
      - "*.internal.company.org"

Running NFR

You may run nfr start via tmux or screen under Linux, or set up a service (detailed in the following section). NFR writes alert data in JSON format to stderr. Below is an example in which the raw JSON is both stored on disk at /tmp/alerts.json and rendered via jq to make it human-readable in the terminal.

# nfr start 2>&1 >/dev/null | tee /tmp/alerts.json | jq .
{
  "type": "alert",
  "eventType": "dns",
  "flags": [
    "apt",
    "freedns"
  ],
  "groups": [
    {
      "label": "default",
      "desc": "Default"
    }
  ],
  "threats": {
    "c2_communication": {
      "severity": 5,
      "desc": "C2 communication attempt indicating infection",
      "policy": false
    }
  },
  "ts": "2018-09-03T09:39:47Z",
  "srcIp": "10.15.0.4",
  "query": "microsoft775.com",
  "recordType": "A"
}

Running NFR as a service

Under Linux

If you are using a current Linux distribution (e.g. RHEL7, Ubuntu 16), it will have systemd installed. Follow these steps as root to run NFR as a service. NOTE: If you installed the Debian package, you can skip steps 1-3 below.

  1. Create the NFR configuration directory and copy config.yml and scope.yml into it
mkdir /etc/nfr
cp config.yml /etc/nfr
cp scope.yml /etc/nfr
  2. Copy the nfr binary into /usr/local/bin and ensure it's executable
cp nfr /usr/local/bin
chmod a+x /usr/local/bin/nfr
  3. Copy the sample NFR service file nfr.service to /etc/systemd/system/ (a minimal sketch of such a unit is shown after these steps)

  4. Use systemctl to enable NFR, start the service, and review its status

systemctl enable nfr
systemctl start nfr
systemctl status nfr
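
The repository's nfr.service is the file to use in practice; purely as an illustration, a minimal unit of this kind typically looks like the following (the paths and options below are assumptions, not the shipped file):

[Unit]
Description=AlphaSOC Network Flight Recorder
After=network.target

[Service]
ExecStart=/usr/local/bin/nfr start
Restart=on-failure

[Install]
WantedBy=multi-user.target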

Once NFR is installed, you can view logs and troubleshoot using journalctl -u nfr.

To stop and remove the service, follow these steps:

systemctl stop nfr
systemctl disable nfr
rm /etc/systemd/system/nfr.service

Under Microsoft Windows

To run NFR as a service under Windows, first install NSSM, and follow the steps below within PowerShell as Administrator.

  1. Create the NFR configuration directory and move config.yml and scope.yml into it
New-Item -ItemType directory -Path $Env:AppData\nfr
Move-Item -Path config.yml -Destination $Env:AppData\nfr
Move-Item -Path scope.yml -Destination $Env:AppData\nfr
  2. Use NSSM to install the service, start it, and review its status (note: modify the path to nfr.exe as needed)
nssm.exe install nfr C:\path\to\nfr.exe start
nssm.exe start nfr
nssm.exe status nfr

To stop and remove the service, follow these steps:

nssm.exe stop nfr
nssm.exe remove nfr

nfr's People

Contributors

chrisforce1, dantese, dependabot[bot], ioj, kmroz, krhubert, lastsalmonman, tg

nfr's Issues

Events format

Currently the format of events is as follows:
timestamp;ip;record_type;domain;severity;threat_definition;flags

However, according to the API there can be more than one threat_definition and severity.

The format needs to be re-specified, for example by separating multiple values with commas:
timestamp;ip;record_type;domain;severity1,severity2;threat_definition1,threat_definition2;flags

Don't log to file

Clap and riswiz log to stdout/stderr (not sure), which makes them easily runnable in a container and easily fed to journald. That's what namescore should do as well.

Support Bro conn.log processing

To send IP events to the API for scoring, we need to pick up the following from conn.log (full schema here): ts, id.orig_h, id.orig_p, id.resp_h, id.resp_p, proto, orig_bytes, and resp_bytes. NFR should only send IP data for Internet-bound destinations to the API for processing, so RFC 3330 IPv4 and RFC 5156 IPv6 destinations should be ignored.
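
A minimal sketch of such a destination filter in Go, using the standard library's address classification helpers (illustrative only, not NFR's actual implementation; net.IP.IsPrivate requires Go 1.17+):

package main

import (
	"fmt"
	"net"
)

// isInternetBound reports whether a destination looks routable on the public
// Internet, i.e. it is not in one of the special-purpose ranges covered by
// RFC 3330 / RFC 5156 (private, loopback, link-local, multicast, etc.).
func isInternetBound(ip net.IP) bool {
	if ip == nil {
		return false
	}
	return !(ip.IsPrivate() ||
		ip.IsLoopback() ||
		ip.IsLinkLocalUnicast() ||
		ip.IsLinkLocalMulticast() ||
		ip.IsMulticast() ||
		ip.IsUnspecified())
}

func main() {
	for _, s := range []string{"10.1.2.3", "8.8.8.8", "fe80::1"} {
		fmt.Println(s, isInternetBound(net.ParseIP(s)))
	}
}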

Move command line options to config.yml

Off the top of my head, we need to add these to config.yml, and support processing of different files in different locations and then sending alerts to different destinations / services.

  • Data inputs (e.g. monitor this file and it's in this format)
  • Data outputs (e.g. send alerts here using this format)

Regarding inputs, we have monitoring of dns or ip data, but we also need to add http. We then need to tell NFR which file to monitor and tail -f (or the equivalent within Windows), and its format (e.g. msdns, named, bro, suricata).

Regarding outputs, the idea is that we generate alerts in formats that can easily be consumed by other software. Currently we use our own proprietary JSON format and we need to move towards CEF, LEEF, and other formats that are used by SIEM platforms (e.g. HP ArcSight, IBM QRadar, and Graylog).

I'll add additional inputs and outputs as issues here, but let's set up the config.yml so that it's possible to monitor multiple files in different formats and then send alerts to different locations (e.g. send policy violations to ServiceNow, send threats of high severity and above to Graylog, and send all threats to Slack).
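
Purely as an illustration of this proposal, such a config might take a shape like the one below (every field name under outputs is hypothetical, not a final schema):

monitor:
  - format: bro
    type: dns
    file: /var/log/bro/dns.log
outputs:
  - type: graylog
    address: graylog.example.com:12201
    min_severity: high
  - type: syslog
    format: cef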

Packet capture doesn't work

I'd provide a more verbose bug report, but namescore listen just sits there silently and doesn't produce any logs.

Use log15 instead of custom logging wrapper

I suggest using log15 instead of a custom wrapper. We're using log15 in our software and I think it has everything we need. No need to duplicate work which was already done by someone else.

Update code and paths from Namescore -> NFR

e.g. the --help text needs to change, mentions of Namescore throughout, and the /etc/namescore/ path. Each of these needs to change to NFR, as per the commit I just made to the README with the new details (3ea9579).

Support Bro dns.log processing

If we can process the Bro dns.log format (by reading the file from the local filesystem) we can deploy NFR onto Bro IDS sensors and Corelight appliances to submit data to our API for scoring. The schema is described here and we just need to pick up the ts, id.orig_h, query, and qtype_name values for each query.
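
A rough sketch in Go of picking those columns out of a tab-separated Bro/Zeek dns.log, resolving positions from the #fields header rather than hard-coding them (illustrative only):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/path/to/dns.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var idx map[string]int // column name -> position, taken from the #fields header
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "#fields") {
			cols := strings.Split(line, "\t")[1:]
			idx = make(map[string]int, len(cols))
			for i, c := range cols {
				idx[c] = i
			}
			continue
		}
		if strings.HasPrefix(line, "#") || idx == nil {
			continue
		}
		rec := strings.Split(line, "\t")
		if len(rec) < len(idx) {
			continue
		}
		// Only the values the API needs: ts, id.orig_h, query, qtype_name.
		fmt.Println(rec[idx["ts"]], rec[idx["id.orig_h"]], rec[idx["query"]], rec[idx["qtype_name"]])
	}
}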

Support Suricata eve.json DNS processing

Similar to #48, we need to support local pickup and processing of DNS events from Suricata eve.json. The schema is described here and we should look for "type": "query" events and then pull timestamp, source (IP), rrname, and rrtype values to send to the API for scoring.
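
A sketch of that pickup in Go; the JSON field names below are assumptions based on typical eve.json output and should be checked against the schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// eveRecord mirrors only the fields we care about.
type eveRecord struct {
	Timestamp string `json:"timestamp"`
	EventType string `json:"event_type"`
	SrcIP     string `json:"src_ip"`
	DNS       struct {
		Type   string `json:"type"`
		RRName string `json:"rrname"`
		RRType string `json:"rrtype"`
	} `json:"dns"`
}

func main() {
	f, err := os.Open("/path/to/eve.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f) // eve.json is one JSON object per line
	for scanner.Scan() {
		var r eveRecord
		if err := json.Unmarshal(scanner.Bytes(), &r); err != nil {
			continue
		}
		if r.EventType != "dns" || r.DNS.Type != "query" {
			continue
		}
		fmt.Println(r.Timestamp, r.SrcIP, r.DNS.RRName, r.DNS.RRType)
	}
}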

Listen command isn't verbose enough

I installed namescore, didn't configure it and tried to run listen. It returned without any message, which left me unsure what happened:

master ➜  namescore git:(master) namescore
namescore is application which captures DNS requests and provides
deep analysis and alerting of suspicious events,
identifying gaps in your security controls and highlighting targeted attacks.

Usage:
  namescore [command]

Available Commands:
  listen      daemon mode
  register    Acquire and register API key.
  status      Shows status of namescore

Use "namescore [command] --help" for more information about a command.
master ➜  namescore git:(master) namescore listen                              <--- HERE
master ➜  namescore git:(master) namescore status
namescore version:        0.1
Configuration status:     config file does not exist

No linter warnings

The codebase should pass standard Go linter tooling without errors or warnings (no undocumented exported methods, etc.).

Tidy up text in debug mode

Remove .'s, namescore -> Namescore, and exitting -> exiting.

INFO[04-04|20:45:18] Configuration was successfully read. 
INFO[04-04|20:45:18] DNS sniffer was created.                 iface=eth0
INFO[04-04|20:45:18] Whitelist notice                         err="open /etc/alphasoc/whitelist.toml: no such file or directory"
INFO[04-04|20:45:18] namescore daemon started                 version=0.1
INFO[04-04|20:45:18] Handlers are started. 
DBUG[04-04|20:45:38] Sniffed:                                 FQDN=google.com IP=10.0.2.15
DBUG[04-04|20:45:39] Sniffed:                                 FQDN=14.8.217.172.in-addr.arpa IP=10.0.2.15
DBUG[04-04|20:45:42] Sniffed:                                 FQDN=whatever.com IP=10.0.2.15
DBUG[04-04|20:45:42] Sniffed:                                 FQDN=250.151.57.198.in-addr.arpa IP=10.0.2.15
DBUG[04-04|20:45:43] Sniffed:                                 FQDN=250.151.57.198.in-addr.arpa IP=10.0.2.15
^CINFO[04-04|20:45:48] Stopped sending queries. 
INFO[04-04|20:45:48] Stopped sending queries. 
INFO[04-04|20:45:48] Stopped retrieving alerts. 
INFO[04-04|20:45:50] namescore exitting                       signal=interrupt

Improve status details

namescore status looks really good, but we can make some improvements:
[screenshot of the namescore status output]

  1. Capitalize "Namescore status" and "Network interface to use" to make them consistent with the rest of the lines
  2. Show configuration path in the 2nd line:
    Configuration status: /etc/alphasoc/namescore.toml, OK

Fix utils.CreateDirsForFile()

This function is overly complicated and uses a custom FileExists() which doesn't handle errors correctly (i.e. it doesn't fail and returns true on errors other than IsNotExist). That function may be removed anyway, and instead you can just handle errors returned by os.MkdirAll().
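
A minimal sketch of that simplification (the function name is a placeholder):

package utils

import (
	"os"
	"path/filepath"
)

// ensureDir creates the parent directories for path. os.MkdirAll already
// succeeds silently if the directory exists, so a separate FileExists()
// check is unnecessary; just propagate its error.
func ensureDir(path string) error {
	return os.MkdirAll(filepath.Dir(path), 0755)
}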

Support MS Windows

Currently namescore runs only on Linux, but we need to support MS Windows as well.

Mechanism for alphasoc servers down

A buffering mechanism should be introduced for queries, for example for when the AlphaSOC server is down for some time.

My proposal:
All unsuccessful attempts to send would be stored in:
/tmp/asoc/chunk_ID

Create a worker goroutine which checks them every time_interval and tries to resend.
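
A rough sketch of such a retry worker (listChunks and sendChunk are hypothetical placeholders for the buffering helpers):

package buffer

import (
	"os"
	"time"
)

// retryWorker periodically retries chunks that were written to /tmp/asoc/
// after a failed delivery, and removes each chunk once it has been sent.
func retryWorker(interval time.Duration, listChunks func() []string, sendChunk func(string) error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		for _, chunk := range listChunks() {
			if err := sendChunk(chunk); err != nil {
				continue // still unreachable; keep the chunk for the next pass
			}
			os.Remove(chunk)
		}
	}
}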

Allow PCAP files to be loaded for scoring

i.e. provide a path to a PCAP file from the command line; NFR then extracts the DNS query events and sends them to the API for scoring. This will be useful with regard to incident response data and artifacts that consultants and security teams wish to score.

namescore versioning

One of the arguments for /v1/key/request is the platform name and version.

How do you handle versioning? Hardcoding?

Namescore is logging empty lines

I'm running a daemon and it keeps adding empty lines to its log file:

[root@biuro phob0s/alphasoc]# ls -la
total 24
drwx------ 3 root root 4096 Feb 28 12:01 .
drwx------ 3 root root 4096 Feb 28 11:48 ..
drwx------ 2 root root 4096 Feb 28 11:48 backup
-rw-r----- 1 root root    3 Feb 28 12:01 follow
-rw------- 1 root root    3 Feb 28 12:01 namescore.log           <-- size 3
-rw-r----- 1 root root   87 Feb 28 11:48 namescore.toml
[root@biuro phob0s/alphasoc]# ls -la
total 24
drwx------ 3 root root 4096 Feb 28 12:01 .
drwx------ 3 root root 4096 Feb 28 11:48 ..
drwx------ 2 root root 4096 Feb 28 11:48 backup
-rw-r----- 1 root root    3 Feb 28 12:01 follow
-rw------- 1 root root    4 Feb 28 12:01 namescore.log           <-- size 4
-rw-r----- 1 root root   87 Feb 28 11:48 namescore.toml
[root@biuro phob0s/alphasoc]# ls -la
total 24
drwx------ 3 root root 4096 Feb 28 12:02 .
drwx------ 3 root root 4096 Feb 28 11:48 ..
drwx------ 2 root root 4096 Feb 28 11:48 backup
-rw-r----- 1 root root    3 Feb 28 12:02 follow
-rw------- 1 root root    5 Feb 28 12:02 namescore.log           <-- size 5
-rw-r----- 1 root root   87 Feb 28 11:48 namescore.toml
[root@biuro phob0s/alphasoc]# cat namescore.log 





[root@biuro phob0s/alphasoc]# 

Configure CI

We need a circleci.yml file to run tests and make binaries for rpm/deb packages.

Fix default dirs

Lol, namescore just created /home/phob0s/ to keep its files. Make yourself at home (on the Finesti server), phob0s! ;-)

/v1/queries response handling

Do you think namescore should analyze the /v1/queries response?

Let's say I received:

{
  "received": 1000,
  "accepted": 80,
  "rejected": {
    "bad_names": 20,
    "ignored_domains": 800,
    "duplicates": 100
  }
}

Should some mechanism be introduced (e.g. print a warning to syslog for every 1000 rejected queries)? Or a new milestone?

List network interfaces during registration

The question about the network interface should list all interfaces and suggest the best matching one, i.e. this:
Network interface to bind with:
should look like this:

Available network interfaces: eth0, lo, docker0
A network interface to bind to [eth0]:
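
Listing the candidates and suggesting a default is straightforward with the Go standard library (a sketch, not the actual registration code):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	names := make([]string, 0, len(ifaces))
	suggested := ""
	for _, i := range ifaces {
		// Suggest the first interface that is up and not a loopback.
		if suggested == "" && i.Flags&net.FlagUp != 0 && i.Flags&net.FlagLoopback == 0 {
			suggested = i.Name
		}
		names = append(names, i.Name)
	}
	fmt.Printf("Available network interfaces: %s\n", strings.Join(names, ", "))
	fmt.Printf("A network interface to bind to [%s]: ", suggested)
}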

Do not check for root privileges

If I remember correctly, promiscuous mode on an interface may be enabled for unprivileged users by a kernel option (or was it an ACL? I don't remember). Please research it if you can, and if my statement is true, remove that root check.

GELF output support

We need to support sending of alerts using GELF over TCP, as per http://docs.graylog.org/en/2.4/pages/gelf.html#. Once we have the data coming into GELF (likely encoding our content within full_message or using _field1, _field2, _fieldn to communicate threat, severity, flags, and so on) we need to put together a "content pack" which describes the data and how it should be rendered in Graylog, e.g.

https://marketplace.graylog.org/addons/9e13e6bd-5439-48ac-8065-73b24e6ca027
https://github.com/colin-stubbs/graylog-cb-defense/blob/master/content_pack.json

Just FYI, other output formats will later include CEF (HP ArcSight) and LEEF (IBM QRadar). I'll write these up another time. They are far lower priority than Graylog.

Update README

Update README with a more comprehensive installation guide.

Tidy up registration text / flow

When run, if Namescore doesn't find a configuration file, it should prompt the user:

# /home/vagrant/go/bin/namescore
AlphaSOC Namescore Setup and API Key Generation

Select a network interface to monitor for DNS traffic
Detected interfaces:
  - lo
  - eth0

Interface to monitor: eth0

Provide your details to generate an API key and complete setup. A valid email
address is required to activate the key. By performing this request you agree to
our Terms of Service and Privacy Policy (https://www.alphasoc.com/terms-of-service)

Full name: Joey Bag O'Donuts
Organization: AlphaSOC
Email: [email protected]

Success! Check your email and click the verification link to activate your API key

API functions test

server := os.Getenv("ASOC_TEST_SERVER")
if server == "" {
	return
}

key := os.Getenv("ASOC_API_KEY")
if key == "" {
	return
}

I have this code in the tests of the API functions, because I am running clap in the meantime:

ASOC_API_KEY=2d18a990c0587b2078fbab5faa84be02 ./clap --sick --mock mock.toml

Is this approach okay with you?

"monitor" command

We need to be able to continuously process dns.log and other files that are being written on IDS sensors and appliances processing traffic. If we are using the read command to read a file once and process it, we should establish a monitor command which is similar, but does the equivalent of tail -f for the file and monitors it continuously.

asoc.Entry should be a struct.

Please refactor asoc.Entry into a struct instead of []string. I'd suggest using net.IP and time.Time for endpoint and timestamp information respectively.

You can then implement a function which unpacks layers.DNSQuestion into it (along with timestamp and endpoint info) and methods for JSON marshalling, whitelist matching and anything else required.
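
A sketch of what that struct and helper could look like (field and function names here are suggestions, not the final shape):

package asoc

import (
	"net"
	"time"

	"github.com/google/gopacket/layers"
)

// Entry is a single DNS event to be sent to the API for scoring.
type Entry struct {
	Timestamp time.Time `json:"ts"`
	IP        net.IP    `json:"ip"`
	FQDN      string    `json:"fqdn"`
	QType     string    `json:"record_type"`
}

// NewEntry unpacks a sniffed layers.DNSQuestion together with the timestamp
// and source endpoint into an Entry.
func NewEntry(ts time.Time, src net.IP, q layers.DNSQuestion) Entry {
	return Entry{Timestamp: ts, IP: src, FQDN: string(q.Name), QType: q.Type.String()}
}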

There's no way to investigate possible listening problems

I ran the daemon, it started silently, and apparently no data is pushed to AlphaSOC. I can check it in the backend, but a normal user won't be able to do that. Currently a user can't know whether the daemon is working correctly or not. I suggest doing two things:

  1. Add a --verbose flag to the listen command
  2. Log some stats to namescore.log once every few minutes (captured packet stats, etc.)

Use goroutines where it makes sense

As discussed today -- have a think about using goroutines and channels where it makes sense, e.g. on blocking operations like interface sniffing, API communication and on scheduled tasks, like retrieving alerts, etc.

Please build a statically-linked binary

namescore depends on libpcap, which has different versions between CentOS and Ubuntu. Linking statically solves the problem in one go (that's what clap and riswiz do).
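
One common way to do this with cgo and a static libpcap available on the build host is roughly the following (a sketch; the exact flags depend on the distribution and toolchain):

go build -ldflags '-linkmode external -extldflags "-static"' ./...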

Make releases

Add --version and make proper releases (with builds). We could also add a Homebrew formula. goreleaser might be useful, unless there are better tools.
