cb-event-forwarder's People

Contributors

askthedragon, brianebeyer, crothe, csawan-vmware, d-sean, davidpitkin, dkrizhanovsky, dseidel-b9, eekaiboon, jcapolino, jgarman, pstephan-cb, sjones-github, vnagendra, zacharyestep


cb-event-forwarder's Issues

OS X netconn events, local_ip field

Using the latest OS X sensor, the local_ip field of exported netconn events appears not to have its endianness "fixed" before being converted to a dotted-decimal string. Example output from a record exported to S3 using the latest event forwarder and the latest OS X sensor:

{u'cb_server': u'cbserver',
  u'computer_name': u'effluxmini2.local',
  u'direction': u'inbound',
  u'domain': u'',
  u'event_type': u'netconn',
  u'ipv4': u'8.8.8.8',
  u'local_ip': u'55.3.0.10',   <---- BACKWARDS
  u'local_port': 24667,
  u'md5': u'5D3C79EEB4FD77EDFEB1AA6FDB7D82C9',
  u'pid': 804,
  u'port': 24667,
  u'process_guid': u'00000011-0000-0324-01d2-56f7f5883e12',
  u'protocol': 17,
  u'remote_ip': u'8.8.8.8',
  u'remote_port': 53,
  u'sensor_id': 17,
  u'timestamp': 1483689948,
  u'type': u'ingress.event.netconn'}
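
For reference, a minimal Python sketch of the missing byte swap (fix_local_ip is a hypothetical helper, not part of the forwarder; given the backwards '55.3.0.10' above, the corrected address would read '10.0.3.55'):

```python
import socket
import struct

def fix_local_ip(ip_str):
    """Swap the byte order of a dotted-decimal IPv4 address.

    Illustrative only: the OS X sensor appears to emit local_ip without
    the usual network-to-host byte swap, so reversing the four bytes
    recovers the intended address.
    """
    # Parse to 4 raw bytes, reinterpret them in the opposite byte order,
    # and format back to dotted decimal.
    n = struct.unpack(">I", socket.inet_aton(ip_str))[0]
    return socket.inet_ntoa(struct.pack("<I", n))
```

For example, fix_local_ip("55.3.0.10") returns "10.0.3.55".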

Support multiple output types

In event-forwarder.conf, could we support multiple output types?

I tried the following, with all of the corresponding options configured, but with no luck:
output_type=tcp,s3, file

I also tried a variation with each output_type specified on a separate line with its respective configuration, which (as I suspected) didn't work either.
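
If this were supported, the forwarder would presumably need to split the setting into a list and fan each event out to every configured output. A hypothetical sketch (the function names are my own, not the forwarder's):

```python
def parse_output_types(value):
    """Split a comma-separated output_type value, tolerating stray spaces."""
    return [t.strip() for t in value.split(",") if t.strip()]

def fan_out(event, outputs):
    """Deliver one event to every configured output handler."""
    for output in outputs:
        output(event)
```

For example, parse_output_types("tcp,s3, file") yields ['tcp', 's3', 'file'], and each name would then be mapped to its configured output handler.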

watchlist.hit.process only fires on SOLR commit

Testing with 5.2, I'm seeing that the watchlist.hit.process event gets delivered when the process info is actually committed into SOLR and is available in the UI.

I don't even see a watchlist.storage.hit.process event being generated at all.

Support "netconn2" field in Cb 5.1

New protobuf published in Cb5.1 includes the local_ip and local_port of all netconns. Update output and ensure we don't break existing functionality.

Cb Response 5.2 changed feed hit message types

Cb Response 5.2 now sends feed hits over the message bus tagged with the feed id in the message type field. This should be re-normalized back to the appropriate feed hit type on the output side, so as to conform to the strings expected by consumers such as Splunk and QRadar.
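
Assuming the 5.2 routing keys simply append the numeric feed id to the old message type (e.g. feed.hit.process.12; this exact format is an assumption on my part), the re-normalization on the output side could be sketched as:

```python
import re

# Hypothetical pattern: strip a trailing numeric feed id from the routing
# key so consumers see the pre-5.2 message type string.
FEED_HIT_RE = re.compile(r"^(feed\.(?:storage\.)?hit\.(?:process|binary))\.\d+$")

def normalize_type(msg_type):
    """Map a feed-id-tagged message type back to its generic form."""
    m = FEED_HIT_RE.match(msg_type)
    return m.group(1) if m else msg_type
```

For example, normalize_type("feed.hit.process.12") would return "feed.hit.process", while unrelated types pass through unchanged.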

unique_id for alert.* event types incorrectly parsed into process_link

See snippet below:

link_process=https://CBserver/#analyze/7b25edf2-ab85-46ed-be09-7c0c62ce92fc/1
process_guid=00000004-0000-38e8-01d1-ca285bd36b5c
process_id=00000004-0000-38e8-01d1-ca285bd36b5c
unique_id=7b25edf2-ab85-46ed-be09-7c0c62ce92fc
type=alert.watchlist.hit.query.process

The unique_id for events of type alert.* is not a process GUID, unlike other event types. For alert.* event types, ignore the unique_id field and use only the process_id field to populate the link_process link.
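
A sketch of the proposed fix (build_process_link and the URL shape are inferred from the snippet above, not taken from the forwarder's source):

```python
def build_process_link(cb_server, event):
    """Build the link_process URL, preferring process_id for alert.* events."""
    if event.get("type", "").startswith("alert."):
        # For alerts, unique_id is an alert GUID, not a process GUID: ignore it.
        guid = event["process_id"]
    else:
        guid = event.get("unique_id") or event["process_id"]
    return "https://{0}/#analyze/{1}/1".format(cb_server, guid)
```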

Wrong segment ID for LEEF output in netconn events

For processes with many netconn events, the LEEF output for a given netconn contains a link to segment 1, while that particular netconn actually lives in a different segment.

Given the following LEEF record:
LEEF:1.0|CB|CB|5.1|ingress.event.netconn|cb_server=cbserver computer_name=HOSTNAME direction=inbound domain= dst=IP.180 dstPort=41447 event_type=netconn ipv4=IP.180 link_process=https://CB-SERVER/#analyze/00000014-0000-0ccc-01d1-85424831d984/1 local_ip=IP.243 local_port=443 md5=3A31ECC4DD96E63D4193FD13CE2233C3 pid=3276 port=443 process_guid=00000014-0000-0ccc-01d1-85424831d984 proto=6 protocol=6 remote_ip=IP.180 remote_port=41447 sensor_id=20 src=IP.243 srcPort=443 timestamp=1469447484 type=ingress.event.netcon

I was able to find the netconn event in the GUI under segment id 389 (I had to look for it manually):
https://CB-SERVER/#/analyze/00000014-0000-0ccc-01d1-85424831d984/389/cb.urlver=1&q=process_name%3Awspsrv.exe%20sensor_id%3A20&sort=start%20desc&rows=10&start=0

Event type filtering on the raw event exchange

Currently, enabling the raw event exchange forces the user to consume all message types, even if they only want a few high-volume event types. Add filtering into the output pipeline so only the event types the user wants to consume are output.
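
A hypothetical allow-list filter in the output pipeline (function name and patterns are illustrative, not the forwarder's actual configuration keys):

```python
import fnmatch

def should_output(event_type, allowed):
    """Return True if the event type matches any configured glob pattern."""
    return any(fnmatch.fnmatch(event_type, pattern) for pattern in allowed)
```

For example, with allowed = ["ingress.event.netconn", "watchlist.hit.*"], only netconn raw events and watchlist hits would reach the outputs; everything else would be dropped early in the pipeline.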

Feed hits do not include the "title" field from the feed

The feed hit provided by Carbon Black over the event bus does not include the "title" from the feed report. That context would be extremely useful for Splunk and other SIEM integrations.

Technically this is challenging because that would require interacting asynchronously with the Cb REST API - we currently don't even need an API key for the event-forwarder to function. Events would have to be enqueued locally and the title retrieved from the feed document before they could be emitted as output.

Hadoop HDFS output type?

More details are needed, but to help customers who are pushing into "big data" workflows: how can we write events straight into their analysis pipeline?

Add cross-process support

Determine best way to publish cross-process events - do we just convert the event to JSON or do further processing?

Add more context to "storage" event types

Several key fields are present in the original feed/watchlist notification messages, but are missing in the corresponding "storage" events emitted when the hit is committed to disk. Implementing this without changes in the product will require a cache of event data from the original notification so we can augment the "storage" events with that information.

Requested fields:

  • comms_ip
    ...

init script is overwritten upon event-forwarder upgrade

The cb-event-forwarder upgrade process currently overwrites /etc/init/cb-event-forwarder.conf upon an upgrade. Since /etc/init/cb-event-forwarder.conf is commonly modified when running multiple instances of event-forwarder, we should generate an rpmnew instead of overwriting it.
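
The standard RPM mechanism for this is to mark the file as %config(noreplace) in the spec's %files section; rpm then leaves a locally modified on-disk copy alone and writes the packaged version alongside it with an .rpmnew suffix:

```spec
%files
%config(noreplace) /etc/init/cb-event-forwarder.conf
```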

Allow S3 configuration to specify a prefix (sub-folder)

Currently the S3 configuration only allows you to specify a bucket where the files will be saved, but the files are all saved with no prefix.

Please allow a configurable, optional prefix to be used for all created files; this will give users the flexibility to store the events in non-dedicated buckets more easily.
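
A minimal sketch of the requested key layout (s3_key is a hypothetical helper; the prefix and file name below are examples):

```python
def s3_key(prefix, filename):
    """Prepend an optional prefix (sub-folder) to an S3 object key."""
    prefix = prefix.strip("/")
    return "{0}/{1}".format(prefix, filename) if prefix else filename
```

With a configured prefix of "cb/prod", a file named "events.json" would be uploaded as "cb/prod/events.json"; with no prefix configured, behavior stays as it is today.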

Building the package fails with error messages

Trying to build the package fails with the following errors:

./pb_message_processor.go:17: undefined: sensor_events.CbEventMsg
./pb_message_processor.go:31: undefined: sensor_events.CbEventMsg

cb-event-forwarder fails to start when the credential_profile option is used

When the credential_profile option is used, cb-event-forwarder fails to start because the pre-start process fails:

Jan  6 00:28:08 localhost init: Connection from private client
Jan  6 00:28:08 localhost init: cb-event-forwarder goal changed from stop to start
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from waiting to starting
Jan  6 00:28:08 localhost init: Handling starting event
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from starting to pre-start
Jan  6 00:28:08 localhost init: cb-event-forwarder pre-start process (124932)
Jan  6 00:28:08 localhost init: cb-event-forwarder pre-start process (124932) terminated with status 1
Jan  6 00:28:08 localhost init: cb-event-forwarder goal changed from start to stop
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from pre-start to stopping
Jan  6 00:28:08 localhost init: Handling stopping event
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from stopping to killed
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from killed to post-stop
Jan  6 00:28:08 localhost init: cb-event-forwarder state changed from post-stop to waiting
Jan  6 00:28:08 localhost init: Handling stopped event

If we redirect output to a file, we see:

[root@localhost cb-event-forwarder]# cat /tmp/blah.log
2016/01/06 01:18:20 Could not open bucket s3-cb-lts: UserHomeNotFound: user home directory not found.
[root@localhost cb-event-forwarder]#

If we explicitly set the HOME environment variable in the init script, it works:

pre-start script
 export HOME=/root
 /usr/share/cb/integrations/event-forwarder/cb-event-forwarder -check /etc/cb/integrations/event-forwarder/cb-event-forwarder.conf > /tmp/blah.log 2>&1
end script

script
  export HOME=/root
  /usr/share/cb/integrations/event-forwarder/cb-event-forwarder /etc/cb/integrations/event-forwarder/cb-event-forwarder.conf &> /var/log/cb/integrations/cb-event-forwarder/cb-event-forwarder.log
end script

Source and destination IPs messed up for netconn events

It seems that the source and destination IPs are swapped for netconn events. Given the following LEEF record:
LEEF:1.0|CB|CB|5.1|ingress.event.netconn|cb_server=cbserver computer_name=HOSTNAME direction=inbound domain= dst=IP.180 dstPort=41447 event_type=netconn ipv4=IP.180 link_process=https://CB-SERVER/#analyze/00000014-0000-0ccc-01d1-85424831d984/1 local_ip=IP.243 local_port=443 md5=3A31ECC4DD96E63D4193FD13CE2233C3 pid=3276 port=443 process_guid=00000014-0000-0ccc-01d1-85424831d984 proto=6 protocol=6 remote_ip=IP.180 remote_port=41447 sensor_id=20 src=IP.243 srcPort=443 timestamp=1469447484 type=ingress.event.netcon

HOSTNAME has local IP .243
epoch from LEEF: 1469447484 -> Mon, 25 Jul 2016 11:51:24 GMT

According to the LEEF above (and QRadar parses it the same way), this is a connection whose source is .243/443 and whose destination (marked as remote) is .180/41447. That by itself is suspicious, as the direction should be the opposite: .243 is a server listening on tcp/443.

I managed to find this event in the GUI; a screenshot (ev-forwarder-bug) is attached. It confirms that the connection was from .180 to .243, which is the correct direction.

I'm seeing hundreds of such events with a broken direction.

Is there a chance to use the dst IP instead of the remote IP, to avoid confusion?

Testing cb-event-forwarder in debug-mode

cb-event-forwarder should be able to send a test ping or some kind of test message in debug mode.

I tried a couple of integrations: TCP and S3.

It is not clear whether the receiver isn't configured correctly or the sender isn't able to send. When the problem is DNS resolution, the logs (/var/logs/cb/integrations**) are pretty clear in debug mode, but when that's not the problem, there is no indication in the log file of what happened to the TCP packet.

The same issue applies to S3: there is no indication in the log file of where the process failed. In my testing I enabled CloudTrail, which showed no events reaching the bucket, which I suspect means there is some other issue on the sender's side.

Add context to procstart messages

Include additional elements in the proc* events, specifically parent process information (not just the GUID) and signature information from the binary.

Compress output files

Create a configuration option that will compress the output files and add corresponding file extensions (.gz for gzip, for example).
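
A minimal sketch of the compression step (compress_file is a hypothetical helper illustrating the gzip case):

```python
import gzip
import shutil

def compress_file(path):
    """Gzip a finished output file and return the new path with a .gz extension."""
    gz_path = path + ".gz"
    # Stream the source file through gzip rather than reading it into memory,
    # since event output files can be large.
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path
```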

cb-event-forwarder service consistently fails to restart

Once cb-event-forwarder starts, it seems very hard to restart it (for example, after configuration changes). The output below is typical in our setup. Any ideas on how to force a restart? We tried kill -15, kill -9, etc., and also service cb-event-forwarder stop/restart.

service cb-event-forwarder force-reload works well for our configuration changes, but if we accidentally try to restart or stop, the service goes into a funky state (or so we think). The error is the same regardless of the command (the user pressed Ctrl+C after a while).

Logs below

[root@93ccfd89d468 /]# service cb-event-forwarder try-restart
Restarting cb-event-forwarder: 2015-09-28 10:50:08,528: cbint.utils: INFO: parsing configuration
2015-09-28 10:50:08,529: cbint.utils: INFO: section: bridge
2015-09-28 10:50:08,529: cbint.utils: INFO:    message_processor_count: 4
2015-09-28 10:50:08,529: cbint.utils: INFO:    server_name: customer2
2015-09-28 10:50:08,529: cbint.utils: INFO:    debug: 1
2015-09-28 10:50:08,529: cbint.utils: INFO:    cbapi_server: https://localhost/
2015-09-28 10:50:08,529: cbint.utils: INFO:    cbapi_token: REDACTED
2015-09-28 10:50:08,530: cbint.utils: INFO:    cbapi_ssl_verify: False
2015-09-28 10:50:08,530: cbint.utils: INFO:    rabbit_mq_username:
2015-09-28 10:50:08,530: cbint.utils: INFO:    rabbit_mq_password:
2015-09-28 10:50:08,530: cbint.utils: INFO:    output_type: udp
2015-09-28 10:50:08,530: cbint.utils: INFO:    udpout: graylog.internal.REDACTED.com:19384
2015-09-28 10:50:08,530: cbint.utils: INFO:    events_raw_sensor: 0
2015-09-28 10:50:08,530: cbint.utils: INFO:    events_watchlist: ALL
2015-09-28 10:50:08,531: cbint.utils: INFO:    events_feed: ALL
2015-09-28 10:50:08,531: cbint.utils: INFO:    events_alert: ALL
2015-09-28 10:50:08,531: cbint.utils: INFO:    events_binary_observed: 0
2015-09-28 10:50:08,531: cbint.utils: INFO:    events_binary_upload: 0
2015-09-28 10:50:08,531: cbeventforwarder: INFO: Got a shutdown of service
2015-09-28 10:50:08,531: cbint.utils: INFO: daemon stopping...
^CTraceback (most recent call last):
  File "<string>", line 18, in <module>
  File "/home/builduser/rpmbuild/BUILD/cb-event-forwarder-2.0/build/cb-event-forwarder/out00-PYZ.pyz/cbint.utils.daemon", line 215, in restart
  File "/home/builduser/rpmbuild/BUILD/cb-event-forwarder-2.0/build/cb-event-forwarder/out00-PYZ.pyz/cbint.utils.daemon", line 200, in stop
KeyboardInterrupt

Circuit breaker functionality

If the event-forwarder falls behind and the queue grows without bound, kill the AMQP connection and notify the user that we are not keeping up with the incoming volume (to protect the rest of the Cb cluster).
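
A minimal sketch of such a breaker (the class, threshold, and names are illustrative only):

```python
class CircuitBreaker:
    """Trip permanently once the backlog exceeds a configured threshold."""

    def __init__(self, max_backlog=100000):
        self.max_backlog = max_backlog
        self.tripped = False

    def check(self, backlog):
        # Once tripped, the caller should close the AMQP connection and
        # alert the operator; we stay tripped until an explicit reset.
        if backlog > self.max_backlog:
            self.tripped = True
        return self.tripped
```

The consumer loop would call check() with the current queue depth on each batch and tear down the connection as soon as it returns True.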

Improve retry and reconnect code

The code currently does not handle unreachability, restarts, or network partitions of the Carbon Black server well. Refactor and fix so that it re-establishes connectivity when the network/host is back online.
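
The reconnect loop could use capped exponential backoff with jitter, a common pattern for this; a sketch (parameters are illustrative):

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Yield capped exponential backoff delays with jitter, in seconds."""
    delay = base
    for _ in range(attempts):
        # Jitter keeps many reconnecting clients from stampeding at once.
        yield min(cap, delay) * (0.5 + random.random() / 2)
        delay *= 2
```

The caller would sleep for each yielded delay between AMQP reconnect attempts, resetting the sequence once a connection succeeds.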
