
cloudmarker's Introduction

Cloudmarker

Cloudmarker is a cloud monitoring tool and framework.



What is Cloudmarker?

Cloudmarker is a cloud monitoring tool and framework. It can be used as a ready-made tool that audits your Azure or GCP cloud environments as well as a framework that allows you to develop your own cloud monitoring software to audit your clouds.

As a monitoring tool, it performs the following actions:

  • Retrieves data about each configured cloud using the cloud APIs.
  • Saves or indexes the retrieved data into each configured storage system or indexing engine.
  • Analyzes the data for potential issues and generates events that represent the detected issues.
  • Saves the events to configured storage or indexing engines as well as sends the events as alerts to alerting destinations.

Each of the above four aspects of the tool can be configured via a configuration file.

For example, the tool can be configured to pull data from Azure and index it in Elasticsearch while it also pulls data from GCP and indexes the GCP data in MongoDB. Similarly, it is possible to configure the tool to check for unencrypted disks in Azure, generate events for them, and send those events as alerts by email, while it also checks for insecure firewall rules in both Azure and GCP, generates events for them, and saves those events in MongoDB.

This degree of flexibility to configure audits for different clouds in different ways comes from the fact that Cloudmarker is designed as a combination of a lightweight framework and a collection of plugins that do the heavy lifting of retrieving cloud data, storing the data, analyzing the data, generating events, and sending alerts. These four types of plugins are formally known as cloud plugins, store plugins, event plugins, and alert plugins, respectively.

As a result of this plugin-based architecture, Cloudmarker can also be used as a framework to develop your own plugins that extend its capabilities by adding support for new types of clouds or data sources, storage or indexing engines, event generation, and alerting destinations.
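To make the plugin architecture concrete, here is a minimal sketch of how a cloud plugin and a store plugin might fit together. The method names (read(), write(), done()) and the record shape are assumptions made for illustration, not necessarily Cloudmarker's exact plugin interface:

```python
class MockCloud:
    """Sketch of a cloud plugin; yields one record per mock resource."""

    def read(self):
        # A real cloud plugin would call the cloud APIs here.
        for i in range(3):
            yield {'raw': {'data': i}, 'com': {'type': 'mock'}}

    def done(self):
        # Called once after all records have been read.
        pass


class ListStore:
    """Sketch of a store plugin; collects records, then gets a done() call."""

    def __init__(self):
        self.records = []

    def write(self, record):
        self.records.append(record)

    def done(self):
        pass


# Wire a cloud plugin to a store plugin the way a framework worker might.
cloud, store = MockCloud(), ListStore()
for record in cloud.read():
    store.write(record)
store.done()
```

Because every plugin type has such a small surface area, swapping one plugin for another (or writing your own) does not require changes to the framework itself.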

Why Cloudmarker?

One might wonder why we need a new project like this when similar projects exist. When we began working on this project in 2017, we were aware of similar tools that supported AWS and GCP, but none that supported Azure at the time. As a result, we wrote our own tool to support Azure. We later added support for GCP as well. What began as a tiny proof of concept gradually turned into a fair amount of code, so we thought we might as well share this project online so that others could use it and see if they find value in it.

So far, some of the highlights of this project are:

  • It is simple. It is easy to understand how to use the four types of plugins (clouds, stores, events, and alerts) to perform an audit.
  • It is excellent at creating an inventory of the cloud environment.
  • The data inventory it creates is easy to query.
  • It is good at detecting insecure firewall rules and unencrypted disks. More detection mechanisms are in the works.

We also realize that we can add a lot more functionality to make this project even more powerful. See the Wishlist section below for new features we would like to add. Our project is hosted on GitHub at https://github.com/cloudmarker/cloudmarker. Contributions and pull requests are welcome.

We hope you will give this project a shot, see if it addresses your needs, and provide feedback by posting a comment in our feedback thread or by creating a new issue.

Features

Since Cloudmarker is not just a tool but also a framework, a lot of its functionality can be extended by writing plugins. However, Cloudmarker also comes bundled with a default set of plugins that can be used as is without writing a single line of code. Here is a brief overview of the features that come bundled with Cloudmarker:

  • Perform scheduled or ad hoc audits of cloud environments.
  • Retrieve data from Azure and GCP.
  • Store or index retrieved data in Elasticsearch, MongoDB, Splunk, and the file system.
  • Look for insecure firewall rules and generate firewall rule events.
  • Look for unencrypted disks (Azure only) and generate events.
  • Send alerts for events via email and Slack as well as save alerts in one of the supported storage or indexing engines (see the third point above).
  • Normalize firewall rules from Azure and GCP which are in different formats to a common object model ("com") so that a single query or event rule can search for or detect issues in firewall rules from both clouds.
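To illustrate the normalization idea in the last bullet, the sketch below maps hypothetical Azure-style and GCP-style firewall rules into a shared "com" structure so that a single check covers both clouds. All field names here are illustrative assumptions, not Cloudmarker's actual schema:

```python
def normalize_azure_rule(rule):
    # Azure-style property names on the right; 'com' keys are illustrative.
    return {'com': {'record_type': 'firewall_rule',
                    'access': rule['access'].lower(),
                    'source': rule['sourceAddressPrefix']}}


def normalize_gcp_rule(rule):
    # GCP-style property names; the 'com' keys match the Azure case above.
    return {'com': {'record_type': 'firewall_rule',
                    'access': 'allow' if rule.get('allowed') else 'deny',
                    'source': rule['sourceRanges'][0]}}


def is_open_to_internet(record):
    # One query works for both clouds because it reads only 'com' keys.
    com = record['com']
    return com['access'] == 'allow' and com['source'] in ('*', '0.0.0.0/0')


azure_record = normalize_azure_rule(
    {'access': 'Allow', 'sourceAddressPrefix': '*'})
gcp_record = normalize_gcp_rule(
    {'allowed': [{'IPProtocol': 'tcp'}], 'sourceRanges': ['0.0.0.0/0']})
```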

Wishlist

  • Add more event plugins to detect different types of insecure configuration.
  • Normalize other types of data into a common object model ("com") just like we do right now for firewall rules.

Install

Perform the following steps to set up Cloudmarker.

  1. Create a virtual Python environment and install Cloudmarker in it:

    python3 -m venv venv
    . venv/bin/activate
    pip3 install cloudmarker
  2. Run sanity test:

    cloudmarker -n

    The above command runs a mock audit with mock plugins that generate some mock data. The mock data generated can be found at /tmp/cloudmarker/. Logs from the tool are written to the standard output as well as to /tmp/cloudmarker.log.

    The -n or --now option tells Cloudmarker to run right now instead of waiting for a scheduled run.

To learn how to configure and use Cloudmarker with Azure or GCP clouds, see Cloudmarker Tutorial.

Develop

This section describes how to set up a development environment for Cloudmarker. This section is useful for those who would like to contribute to Cloudmarker or run Cloudmarker directly from its source.

  1. We use primarily three tools to perform development on this project: Python 3, Git, and Make. Your system may already have these tools. But if not, here are some brief instructions on how they can be installed.

    On macOS, if you have Homebrew installed, then these tools can be installed easily with the following command:

    brew install python git

    On a Debian GNU/Linux system or in another Debian-based Linux distribution, they can be installed with the following commands:

    apt-get update
    apt-get install python3 python3-venv git make

    On a CentOS Linux distribution, they can be installed with these commands:

    yum install centos-release-scl
    yum install git make rh-python36
    scl enable rh-python36 bash

    Note: The scl enable command starts a new shell for you to use Python 3.

    On any other system, we hope you can figure out how to install these tools yourself.

  2. Clone the project repository and enter its top-level directory:

    git clone https://github.com/cloudmarker/cloudmarker.git
    cd cloudmarker
  3. Create a virtual Python environment for development purpose:

    make venv deps

    This creates a virtual Python environment at ~/.venv/cloudmarker. Additionally, it also creates a convenience script named venv in the current directory to easily activate the virtual Python environment which we will soon see in the next point.

    To undo this step at any time in the future, i.e., delete the virtual Python environment directory, either enter rm -rf venv ~/.venv/cloudmarker or enter make rmvenv.

  4. Activate the virtual Python environment:

    . ./venv
  5. In the top-level directory of the project, enter this command:

    python3 -m cloudmarker -n

    This generates mock data at /tmp/cloudmarker. This step serves as a sanity check that ensures that the development environment is correctly set up and that the Cloudmarker audit framework is running properly.

  6. Now that the project is set up correctly, you can create a cloudmarker.yaml to configure Cloudmarker to scan/audit your cloud or you can perform more development on the Cloudmarker source code. See Cloudmarker Tutorial for more details.
  7. If you have set up a development environment to perform more development on Cloudmarker, please consider sending a pull request to us if you think your development work would be useful to the community.
  8. Before sending a pull request, please run the unit tests, code coverage, linters, and document generator to ensure that no existing test has been broken and the pull request adheres to our coding conventions:

    make test
    make coverage
    make lint
    make docs

    To run these four targets in one shot, enter this "shortcut" target:

    make checks

    Open htmlcov/index.html with a web browser to view the code coverage report.

    Open docs/_build/html/index.html with a web browser to view the generated documentation.


Support

To report bugs, suggest improvements, or ask questions, please create a new issue at http://github.com/cloudmarker/cloudmarker/issues.

License

This is free software. You are permitted to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of it, under the terms of the MIT License. See LICENSE.rst for the complete license.

This software is provided WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See LICENSE.rst for the complete disclaimer.

cloudmarker's People

Contributors

dependabot[bot], jaibhageria, mitprasoon, nishitm, prateeknischal, renusrijith, rosehgal, s-nirali, sunnysharmagts, susam


cloudmarker's Issues

Add scheduler

Add schedule management code in manager.py so that one can configure cloudmarker to run at a fixed time every day.

Move EmailAlert API docs from cloudmarker.stores to cloudmarker.alerts

Commit 6ae974a added the documentation for EmailAlert to the documentation for cloudmarker.stores package. This should have been added to the documentation for cloudmarker.alerts package instead.

Running make checks should fix this because make checks invokes the make docs target which generates the correct Sphinx project files. The following steps need to be carried out to fix this:

# Create a new branch (the usual stuff).
git checkout master
git pull
git checkout -b docfix

# Create an __init__.py file for alerts package. This is necessary for
# this folder to be recognized as a package. Otherwise no documentation
# would be generated for this package. Make the content of this file
# similar to that of cloudmarker/events/__init__.py
vim cloudmarker/alerts/__init__.py

# Regenerate the Sphinx project files for the documentation.
make checks

# View the changes.
git diff docs/api

# Add the changes.
git add docs/api

# Then commit, push, and create a pull request (again, the usual stuff).
git commit
git push origin docfix

Set default values for all params in MongoDBStore

The current signature of MongoDBStore looks like this:

def __init__(self, host, port, username=None, password=None, db='gcp', buffer_size=1000)

After reading @prateeknischal's review comments in an earlier pull request, I think this signature should be like this:

def __init__(self, host='localhost', port=27017, db='cloudmarker', username=None, password=None, buffer_size=1000)

The exact order of the parameters does not matter technically because this signature is using sensible defaults for every parameter. However, just for the sake of following some convention, I have arbitrarily chosen the big-endian order here: a host contains port, a port has one or more MongoDB databases behind it, and a database has one or more users.

Usage Documentation

Is it possible to add usage documentation for this project, like what this project is for and what it does? It would be helpful to get an essence of what it does.

Remove emailalert from audit config in config.base.yaml

Running sanity check on a fresh clone of Cloudmarker leads to this error:

$ python3 -m cloudmarker -f
...
Traceback (most recent call last):
  File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
    self.run()
  File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/susam/git/cloudmarker/cloudmarker/workers.py", line 68, in store_worker
    store_plugin.done()
  File "/home/susam/git/cloudmarker/cloudmarker/alerts/emailalert.py", line 71, in done
    smtp_connection = smtplib.SMTP(host=self.host, port=self.port)
  File "/usr/lib/python3.5/smtplib.py", line 251, in __init__
    (code, msg) = self.connect(host, port)
  File "/usr/lib/python3.5/smtplib.py", line 335, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/usr/lib/python3.5/smtplib.py", line 306, in _get_socket
    self.source_address)
  File "/usr/lib/python3.5/socket.py", line 694, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.5/socket.py", line 733, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

This error occurs because emailalert is enabled for mockaudit by default in config.base.yaml. This should be removed from mockaudit because there is no default configuration for the EmailAlert plugin that would work well for all users. More importantly, a sanity check after a fresh clone should run without any issues. Also, see https://github.com/cloudmarker/cloudmarker/pull/55/files#r270620747 for more discussion on this.

Rename file_handler to file in baseconfig.py

The handler created with TimedRotatingFileHandler is named file_handler in baseconfig.py. Rename it to just file to be consistent with the handler name console that is also present in the same file.

Community Feedback

This issue is actually a thread to capture feedback on the Cloudmarker project from the developer and user community.

Consolidate multiple Azure plugins into AzCloud

We seem to be having a proliferation of Azure cloud plugins. The Azure cloud plugins are heavyweight plugins because they launch multiple workers and threads. They also have overlapping concerns. For example, all of them iterate over all the subscriptions. Sometimes two of them iterate on the same type of resource. There is also a high configuration overhead because the same Azure credentials would need to be entered for each cloud plugin in the config file.

I am thinking we should change the AzCloud design a little bit so that the functionality of all other plugins become part of AzCloud plugin. Here's my proposal:

  • Move azsql.py, azvm.py, azwebapp, etc. to a library package, say, azreaders or azlib.
  • These individual modules no longer launch multiple threads and processes. They just yield each record in a single threaded manner.
  • AzCloud's worker method, i.e., _get_resources() invokes these modules in multiple processes and threads to retrieve the data.
  • The functionality of an individual module can be turned on and off via AzCloud's __init__() parameters. For example, in the params config of AzCloud, we can specify things like whether we want azsql data to be pulled but not azvm data.

Note that we are similarly having a proliferation of event plugins but I think that's fine because the concerns of event plugins are much more isolated and they are very lightweight too.

This is a significant change, so I am looking for a consensus on this idea. If we have consensus, I am willing to perform the entire change myself, or if you prefer, distribute the change among ourselves.

EmailAlert payload should be human readable

Right now, the EmailAlert plugin just dumps the JSON data it receives into the email content. In order to make the emails meaningful to the recipients, the payload format should be made human-friendly.

See this comment for background on this.
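As one possible direction, here is a sketch of a human-friendly formatter. The field names, layout, and the choice of formatting only the "com" bucket are illustrative assumptions, not a decided design:

```python
def format_event(event):
    """Render an event record as labelled lines instead of raw JSON."""
    com = event.get('com', {})
    lines = ['Cloudmarker alert', '-----------------']
    for key in sorted(com):
        lines.append('{}: {}'.format(key, com[key]))
    return '\n'.join(lines)


event = {'com': {'type': 'firewall_rule_event', 'cloud_worker': 'azcloud'}}
body = format_event(event)
```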

Use tenant and secret as parameters in AzureCloud

The AzureCloud plugin currently accepts the following three parameters: tenant_id, client, and key.

I recommend renaming them to:

  • tenant (remove _id from the name to be consistent with client)
  • client (no change required)
  • secret (to be consistent with keyword argument name for ServicePrincipalCredentials)

Also, for future reference, I am noting down the several different terminologies in use for each of these fields. We would have to clarify these differences in documentation in future to avoid any confusion.

Reference | Field 1 | Field 2 | Field 3
ServicePrincipalCredentials parameters | tenant | client_id | secret
az login option names | --tenant | --username | --password
az login --help option descriptions | tenant | service principal | client secret
How-to document on creating service principal | Tenant ID | Application ID | Authentication key
Azure Portal | Azure Active Directory » Properties » Directory ID | Azure Active Directory » App Registrations » (an app) » Application ID | Azure Active Directory » App Registrations » (an app) » Settings » Keys » Passwords

With different terms in use for the same fields in various official artifacts of Microsoft, I like the recommendation I have provided in this ticket. It also clearly expresses that the secret is a secret and therefore must be protected carefully.

Create cloud-independent FirewallRuleEvent

Pull request #65 creates firewall rule records with cloud-independent property names and values in the com bucket of each record.

Now create a FirewallRuleEvent plugin that uses the cloud-independent common properties to detect insecurely exposed specific ports to the entire Internet.

Remove DEBUG level logging from baseconfig.py

The logger handlers in baseconfig.py are configured to log at the DEBUG level. The default should be INFO in favour of keeping sane defaults. If users want DEBUG level logging, they must configure it explicitly.

Check if simply removing the level: DEBUG lines makes the handlers fall back to the root logger's configuration, which is already configured with level: INFO.

Check email payload size limit in EmailAlert

See these comments for background on this:

Some decisions that need to be taken:

  • What should we do if the payload size exceeds, say, 25 MB? Should we truncate the email? Should we split the payload between multiple emails?
  • Should this be handled in the EmailAlert plugin or should it be handled in the utility function util.send_email()?
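To make the first question concrete, here is a sketch of one possible truncation policy. The 25 MB limit and the notice text are placeholders, and whether truncation is even the right answer is exactly what this issue is asking:

```python
def truncate_payload(payload, limit=25 * 1024 * 1024):
    """Truncate a payload to fit a byte limit, noting the truncation."""
    if len(payload) <= limit:
        # Payload fits; send it as is.
        return payload
    notice = '\n[... payload truncated ...]'
    # Keep room for the notice so the total stays within the limit.
    return payload[:limit - len(notice)] + notice
```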

Update command to run audits right now

Point 5 in the instructions provided in the README contains this command:

python3 -m cloudmarker

This does not run audits right now but instead waits until a scheduled time to start audits. This could be confusing to someone new to the project. Replace this command with the following:

python3 -m cloudmarker -n

This command runs the audits right now.

Integrate a simple dashboard

Currently, this project does not have a dashboard to see insights from the data collected, be it anomaly data generated by the FirewallCheck plugin or data fetched from the cloud. This issue is more of an enhancement: I plan to research a simple dashboard that can be easily integrated with the current system and is also easily maintainable and feature-rich.

Remove cloudmarker.events.firewallevent.FirewallEvent plugin

Since cloudmarker.events.firewallruleevent.FirewallRuleEvent can detect weak firewall rules in both Azure and GCP clouds, we should now remove the GCP-specific cloudmarker.events.firewallevent.FirewallEvent plugin.

This would most likely involve:

git rm cloudmarker/events/firewallevent.py
git rm cloudmarker/test/test_firewallevent.py
make checks

Place closing square bracket in its own line in the FileStore output files

Right now, the output of FileStore looks like this:

[
{
  "raw": {
    "data": 0
  },
  ...
  "com": {
    "type": "mock",
    "origin_worker": "mockaudit-mockcloud",
    "origin_type": "cloud",
    "cloud_worker": "mockaudit-mockcloud",
    "store_worker": "mockaudit-filestore"
  }
}]

The opening square bracket is in its own line but the closing square bracket is not. Place the square bracket in its own line, so that the output looks symmetrical.
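For what it's worth, Python's json module already produces the symmetric form when an indent is used, which may be the simplest fix (assuming FileStore serializes whole record lists rather than streaming records):

```python
import json

records = [{'raw': {'data': 0}, 'com': {'type': 'mock'}}]

# Serialising with an indent places both brackets on their own lines.
text = json.dumps(records, indent=2)
print(text)
```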

If `logs` directory already exists, `make deps` exits with error status

This is a development environment only issue. If logs directory already exists, make deps fails with the following error:

mkdir logs
mkdir: logs: File exists
make: *** [deps] Error 1

This issue has no serious impact on developers because the dependencies are installed correctly prior to this error. In the interest of neatness, we should fix this. A simple solution is to use the mkdir -p option, e.g., mkdir -p logs which does not report error for existing directories.

Support tilde expansion on filestore path

We should support tilde expansion in any paths read from configuration as much as possible.

For example, currently the filestore plugin module accepts a directory path as path parameter to write output files.

def __init__(self, path='/tmp/cloudmarker'):
    """Initialize object of this class with specified parameters.

    Arguments:
        path (str): Path of directory where files are written to.
    """
    self._path = path

It should expand a tilde (~ or ~user) in the path to the appropriate home directory.

Python has an os.path.expanduser() function that does exactly this, so it should be used for this task. Here is an example usage:

>>> import os
>>> os.path.expanduser('~susam')
'/Users/susam'
>>> os.path.expanduser('~susam/foo')
'/Users/susam/foo'
>>> os.path.expanduser('~')
'/Users/susam'
>>> os.path.expanduser('~/foo')
'/Users/susam/foo'
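Applied to the plugin, the fix could be as small as the sketch below. This is an illustration only; the real FileStore has more logic in its initializer:

```python
import os


class FileStore:
    """Sketch of the proposed fix for tilde expansion."""

    def __init__(self, path='/tmp/cloudmarker'):
        # Expand ~ and ~user before using the configured path.
        self._path = os.path.expanduser(path)
```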

Functionality to pull only certain subscriptions from cloud services.

Currently we provide a number of subscriptions that we would like to pull and check for misconfigurations. There might be scenarios where a user would like to pull only certain subscriptions.

This would be good functionality, especially for developers, to pull only specific subscriptions so that the checks run only on their own subscriptions and they do not get flooded with data from other subscriptions.

Rename AzureCloud to AzCloud

Pull request #81 introduces EsStore plugin to index data into Elasticsearch. In earlier discussions, we concluded that the abbreviated names (e.g., "Es" for "Elasticsearch") look good for plugins when suitable and popular abbreviations exist.

To keep the remaining plugins consistent with this preference for abbreviations, we can rename azurecloud.AzureCloud to azcloud.AzCloud.

Add more labels for issues

See the list of issue labels at https://github.com/cloudmarker/cloudmarker/issues/labels.

We have the following issue labels right now which were provided in GitHub by default:

Label | Description | Color
bug | Something isn't working | #d73a4a
duplicate | This issue or pull request already exists | #cfd3d7
enhancement | New feature or request | #63d8cd
good first issue | Good for newcomers | #7057ff
help wanted | Extra attention is needed | #008672
invalid | This doesn't seem right | #FFA500
question | Further information is requested | #d876e3
wontfix | This will not be worked on | #ffffff

I am planning to add the following additional labels to better label the issues:

Label | Description | Color
refactoring | Code restructuring without affecting external behaviour | ?
quality | Test, coverage, or coding style improvements | ?
improvement | Functional improvement to an existing feature | ?
documentation | Any fixes or improvements in documentation | ?
discussion | Discussion on process and other non-coding activities | ?
Priority: Low | Issue will be fixed but low priority for now | ?
Priority: Medium | Issue will be fixed | ?
Priority: High | Issue will be fixed on high priority basis | ?
Priority: Critical | Alarming issue that needs to be fixed urgently | ?

Please review these labels and suggest if you would like any additional labels to be added. Please feel free to edit this comment and fill in or alter the color suggestions (if any) in the third column of the new labels.

Remove zone parameter in GCPCloud

The gcloud CLI command can get the list of VMs without requiring the zone parameter.

gcloud auth revoke
gcloud auth activate-service-account --key-file=keyfile.json
project_id=$(grep project_id keyfile.json | sed 's/.*: "\(.*\)".*/\1/')
gcloud compute instances list --project "$project_id"

We should debug the gcloud CLI code and figure out what it is doing to be able to list the VMs without requiring the zone parameter and reuse that in our project.

Rename _logger to _log in workers

The module-level logger variable is named as _log in most places but _logger in one place.

$ grep -r "logging.getLogger" cloudmarker
cloudmarker/stores/mongodbstore.py:_log = logging.getLogger(__name__)
cloudmarker/clouds/azurecloud.py:_log = logging.getLogger(__name__)
cloudmarker/workers.py:_logger = logging.getLogger(__name__)
cloudmarker/manager.py:_log = logging.getLogger(__name__)

Since _log is the more frequently used name, we should settle on using _log everywhere.

Rename _logger to _log in workers.py.

Rename CLI option -f/--force to -n/--now

The current CLI options look like this:

$ python3 -m cloudmarker -h
usage: cloudmarker [-h] [-c CONFIG [CONFIG ...]] [-f]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG [CONFIG ...], --config CONFIG [CONFIG ...]
                        Configuration file paths
  -f, --force           set this flag to force a run

I suggest that -f, --force be changed to the following:

  -n, --now             ignore schedule and run audits now

I suggest this change so that the option name makes more sense to new users. The function cloudmarker.util.parse_cli() needs to be modified for this.

Support version comparison in AzWebAppTLSEvent

AzWebAppTLSEvent currently checks the minimum TLS version enabled in an Azure web app and generates an event if this version does not match the string '1.2'. This is an exact string match, which would lead to incorrect results for, say, an Azure web app that has its minimum TLS version set to '1.3'. Although this is a more secure web app configuration, the event plugin would still generate an event for it because it fails to account for the fact that version '1.3' is more recent than version '1.2'. Therefore, this plugin should perform a proper version comparison instead of an exact string match.

See this comment for more context about this issue: #154 (comment)

@mitprasoon: The key min_tls_version is a string, so this expression will be true only if the value of the key min_tls_version is 1.2. It will evaluate to false even if min_tls_version is 1.3. TLS 1.3 is already made available to customers by many cloud vendors. I suggest making this check configurable, with 1.2 set as the minimum TLS version for now.
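A version comparison along these lines could replace the exact string match. This is a sketch, not the plugin's actual code, and the default of '1.2' reflects the configurable minimum suggested above:

```python
def _parse(version):
    """Turn a dotted version string like '1.2' into a tuple of ints."""
    return tuple(int(part) for part in version.split('.'))


def min_tls_ok(min_tls_version, required='1.2'):
    # '1.3' >= '1.2' numerically, even though '1.10' < '1.2' as strings.
    return _parse(min_tls_version) >= _parse(required)
```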

Resolve YAMLLoadWarning

To reproduce this issue, create a fresh new virtual Python environment, so that it pulls the latest versions of the dependencies including PyYAML 5.1 which is necessary to reproduce this issue:

make rmvenv venv deps

Now on running Cloudmarker, we get YAMLLoadWarning about yaml.load(). For example,

$ make test
cloudmarker/cloudmarker/baseconfig.py:110:
YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated,
as the default Loader is unsafe. Please read
https://msg.pyyaml.org/load for full details.
  config_dict = yaml.load(config_yaml)
...

See https://msg.pyyaml.org/load for more details on this warning.

It looks like replacing the yaml.load() call with a yaml.safe_load() call would be a good fix for this issue.
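The proposed fix, sketched with a small inline YAML string (the configuration content here is a stand-in for whatever baseconfig.py actually loads):

```python
import yaml

config_yaml = 'audits:\n  mockaudit:\n    clouds:\n      - mockcloud\n'

# safe_load uses the safe loader, which silences the YAMLLoadWarning and
# avoids constructing arbitrary Python objects from untrusted YAML.
config_dict = yaml.safe_load(config_yaml)
```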

Add AWSCloud plugin

I believe this project is all about monitoring cloud platforms, so why not add support for AWS. AWS has a robust API package boto3. Below is a snip, I will be glad to take this up if I can get the project and contribution details.

import boto3


class Aws:

    def __init__(self, aws_key, aws_id, aws_reg, end_pt_url):
        self.aws_key = aws_key
        self.aws_id = aws_id
        self.aws_reg = aws_reg
        self.end_pt_url = end_pt_url

    def session_to_aws(self):
        # Create a session and return an EC2 resource for the given
        # AWS account and region.
        session = boto3.session.Session()
        return session.resource(service_name='ec2', use_ssl=True,
                                verify=False,
                                aws_access_key_id=self.aws_id,
                                aws_secret_access_key=self.aws_key,
                                region_name=self.aws_reg,
                                endpoint_url=self.end_pt_url)

This can grab the instances, security groups, and other resources. Then we can work on serializing the data and sending it to the appropriate indexing engine.

Add logs in GCPCloud

See the "Found" logs in AzureCloud plugin. GCPCloud plugin should also have similar logs.

Rename "checks" to "events"

This is a discussion. Right now an example configuration looks like this:

audits:
  mockaudit:
    clouds:
      - mockcloud
    stores:
      - filestore
    checks:
      - mockcheck
    alerts:
      - filestore

We talk about these plugins in this manner:

  • The cloud plugins generate cloud records.
  • The store plugins store the cloud records.
  • The check plugins generate event records.
  • the alert plugins alert on the event records.

I think two separate terms "check" and "event" are superfluous and unnecessary. The word "event" is an industry standard, so I propose we settle only on a single word "event" here. So my proposal is a configuration like this:

audits:
  mockaudit:
    clouds:
      - mockcloud
    stores:
      - filestore
    events:
      - mockevent
    alerts:
      - filestore

We would now talk about the plugins in this manner:

  • The cloud plugins generate cloud records.
  • The store plugins store the cloud records.
  • The event plugins generate event records.
  • the alert plugins alert on the event records.

Let us know what you think.
