
snap's Introduction

DISCONTINUATION OF PROJECT

This project will no longer be maintained by Intel, and it has been identified as having known security escapes. Intel has ceased development of and contributions to this project, including but not limited to maintenance, bug fixes, new releases, or updates, and no longer accepts patches. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.

The Snap Telemetry Framework

Snap is an open telemetry framework designed to simplify the collection, processing and publishing of system data through a single API. The goals of this project are to:

  • Empower systems to expose a consistent set of telemetry data
  • Simplify telemetry ingestion across ubiquitous storage systems
  • Allow flexible processing of telemetry data on the agent (e.g. filtering and decoration)
  • Provide powerful clustered control of telemetry workflows across small or large clusters

  1. Overview
  2. Getting Started
  3. Documentation
  4. Community Support
  5. Contributing
  6. Code of Conduct
  7. Security Disclosure
  8. License
  9. Thank You

Overview

The Snap Telemetry Framework is a project made up of multiple parts:

  • A hardened, extensively tested daemon, snapteld, and CLI, snaptel (in this repo)
  • A growing number of maturing plugins (found in the Plugin Catalog)
  • Lots of example tasks to gather and publish metrics (found in the Examples folder)

These and other terms are explained in the glossary.

(Figure: the collect → process → publish workflow)

The key features of Snap are:

  • Plugin Architecture: Snap has a simple and smart modular design. The four types of plugins (collectors, processors, publishers and streaming collectors) allow Snap to mix and match functionality based on user need. All plugins are designed with versioning, signing and deployment at scale in mind. The open plugin model allows for loading built-in, community, or proprietary plugins into Snap.

    • Collectors - Collectors gather telemetry data at a determined interval. Collectors include plugins for leveraging existing telemetry solutions (Facter, CollectD, Ohai) as well as specific plugins for consuming Intel telemetry (Node, DCM, NIC, Disk), and can reach into new architectures through additional plugins (see Plugin Authoring below). Telemetry data is organized into a dynamically generated catalog of available data points.
    • Processors - Extensible workflow injection. Processors convert telemetry into another data model for consumption by existing systems, allow encryption of all or part of the telemetry payload before publishing, inject remote queries into the workflow for tokens, filtering, or other external calls, and implement filtering at the agent level, reducing the load on the telemetry consumer.
    • Publishers - Store telemetry into a wide array of systems. Snap decouples the collection of telemetry from the implementation of where to send it. Snap comes with a large library of publisher plugins that allow exposure to telemetry analytics systems both custom and common. This flexibility allows Snap to be valuable to open source and commercial ecosystems alike by writing a publisher for their architectures.
    • Streaming Collectors - Streaming collectors act just like collectors, but there is no fixed interval for gathering metrics: they send metrics to the Snap daemon over gRPC as soon as they are available. A buffering mechanism for incoming metrics is also available, configurable through the MaxMetricsBuffer and MaxCollectDuration parameters. Check out STREAMING.md for more details.
  • Dynamic Updates: Snap is designed to evolve. Each scheduled workflow automatically uses the most mature plugin for that step, unless the collection is pinned to a specific version (e.g. get /intel/psutil/load/load1/v1). Loading a new plugin automatically upgrades running workflows in tasks. Plugins can be loaded dynamically, without restarting the service or server; loading a plugin extends the metric catalog on the fly, giving access to new measurements immediately, and a newer plugin version can be swapped for an older one in a safe transaction. All of these behaviors allow for simple and secure bug fixes, security patching, and improved accuracy in production.

  • Snap tribe: Snap is designed for ease of administration. With Snap tribe, nodes work in groups (aka tribes). Requests are made through agreement- or task-based node groups, designed as a scalable gossip-based node-to-node communication process. Administrators can control all Snap nodes in a tribe agreement by messaging just one of them. There is auto-discovery of new nodes and import of tasks and plugins from nodes within a given tribe. It is cluster configuration management made simple.

Snap is not intended to:

  • Operate as an analytics platform: the intention is to allow plugins for feeding those platforms
  • Compete with existing metric/monitoring/telemetry agents: Snap is simply a new option to use or reference

Getting Started

System Requirements

Snap needs Swagger for Go installed in order to update the OpenAPI specification file after a successful build. Swagger is installed automatically during the build process (make or make deps).

To install Swagger manually:

Using go get (recommended):

go get -u github.com/go-swagger/go-swagger/cmd/swagger

From Debian package:

echo "deb https://dl.bintray.com/go-swagger/goswagger-debian ubuntu main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install swagger

From GitHub release:

curl -LO https://github.com/go-swagger/go-swagger/releases/download/0.10.0/swagger_linux_amd64
chmod +x swagger_linux_amd64 && sudo mv swagger_linux_amd64 /usr/bin/swagger

Snap has no external dependencies since it is compiled into a statically linked binary. At this time, we build Snap binaries for Linux and MacOS. We also provide Linux RPM/Deb packages and a MacOS .pkg installer.

Installation

You can obtain Linux RPM/Deb packages from Snap's packagecloud.io repository. After installation, please check and ensure /usr/local/bin:/usr/local/sbin is in your path via echo $PATH before executing any Snap commands.

RedHat 6/7:

$ curl -s https://packagecloud.io/install/repositories/intelsdi-x/snap/script.rpm.sh | sudo bash
$ sudo yum install -y snap-telemetry

Ubuntu 14.04/16.04 (see the known issue with Ubuntu 16.04.1 below):

$ curl -s https://packagecloud.io/install/repositories/intelsdi-x/snap/script.deb.sh | sudo bash
$ sudo apt-get install -y snap-telemetry

We only build and test packages for a limited set of Linux distributions. For distros that are compatible with RedHat/Ubuntu packages, you can use the environment variables os= and dist= to override the OS detection script. For example, for Linux Mint 17/17.* (use dist=xenial for Linux Mint 18/18.*):

$ curl -s https://packagecloud.io/install/repositories/intelsdi-x/snap/script.deb.sh | sudo os=ubuntu dist=trusty bash
$ sudo apt-get install -y snap-telemetry

MacOS X:

If you use Homebrew, install the latest version of the snap-telemetry package:

$ brew install snap-telemetry

If you do not use Homebrew, download and install the Mac .pkg package:

$ curl -sfL mac.pkg.dl.snap-telemetry.io -o snap-telemetry.pkg
$ sudo installer -pkg ./snap-telemetry.pkg -target /

Tarball (choose the appropriate version and platform):

$ curl -sfL linux.tar.dl.snap-telemetry.io -o snap-telemetry.tar.gz
$ tar xf snap-telemetry.tar.gz
$ cp snapteld /usr/local/sbin
$ cp snaptel /usr/local/bin

The intelsdi-x package repo contains additional packaging information.

NOTE: snap-telemetry packages prior to 0.19.0 installed /usr/local/bin/{snapctl|snapd}; these binaries have since been renamed to snaptel and snapteld. snap-telemetry packages prior to 0.18.0 symlinked /usr/bin/{snapctl|snapd} to /opt/snap/bin/{snapctl|snapd}, which may conflict with Ubuntu's snapd package. The Ubuntu 16.04.1 snapd package (version 2.13+) installs its snapd/snapctl binaries in /usr/bin; those executables are not related to snap-telemetry. Running snapctl from the snapd package will result in the following error message:

$ snapctl
error: snapctl requires SNAP_CONTEXT environment variable

NOTE: If you prefer to build from source, follow the steps in the build documentation. The alpha binaries containing the latest master branch are available here for bleeding-edge testing purposes.

Running Snap

If you installed Snap from RPM/Deb package, you can start/stop Snap daemon as a service:

RedHat 6/Ubuntu 14.04:

$ service snap-telemetry start

RedHat 7/Ubuntu 16.04:

$ systemctl start snap-telemetry

If you installed Snap from binary, you can start Snap daemon via the command:

$ sudo mkdir -p /var/log/snap
$ sudo snapteld --plugin-trust 0 --log-level 1 --log-path /var/log/snap &

To view the service logs:

$ tail -f /var/log/snap/snapteld.log

By default, the Snap daemon runs in standalone mode and listens on port 8181. To enable gossip mode, check out the tribe documentation. For additional configuration options such as plugin signing and port configuration, see the snapteld documentation.
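As a rough illustration, a snapteld configuration file covering the options mentioned above might look like the sketch below. The key names here are assumptions based on the flags shown in this README; consult the snapteld documentation for the authoritative schema.

```yaml
# Hypothetical snapteld.conf sketch -- key names are assumptions;
# see the snapteld documentation for the authoritative schema.
log_level: 1              # verbose logging, matching --log-level 1
log_path: /var/log/snap   # matches --log-path
control:
  plugin_trust_level: 0   # disable plugin signing checks, matching --plugin-trust 0
restapi:
  port: 8181              # default standalone REST API port
```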

Load Plugins

Snap gets its power from the use of plugins. The plugin catalog contains a collection of all known Snap plugins with links to their repo and release pages.

First, let's download the file and psutil plugins (also make sure psutil is installed):

$ export OS=$(uname -s | tr '[:upper:]' '[:lower:]')
$ export ARCH=$(uname -m)
$ curl -sfL "https://github.com/intelsdi-x/snap-plugin-publisher-file/releases/download/2/snap-plugin-publisher-file_${OS}_${ARCH}" -o snap-plugin-publisher-file
$ curl -sfL "https://github.com/intelsdi-x/snap-plugin-collector-psutil/releases/download/8/snap-plugin-collector-psutil_${OS}_${ARCH}" -o snap-plugin-collector-psutil

Next, load the plugins into the Snap daemon using snaptel:

$ snaptel plugin load snap-plugin-publisher-file
Plugin loaded
Name: file
Version: 2
Type: publisher
Signed: false
Loaded Time: Fri, 14 Oct 2016 10:53:59 PDT

$ snaptel plugin load snap-plugin-collector-psutil
Plugin loaded
Name: psutil
Version: 8
Type: collector
Signed: false
Loaded Time: Fri, 14 Oct 2016 10:54:07 PDT

Verify plugins are loaded:

$ snaptel plugin list
NAME      VERSION    TYPE         SIGNED     STATUS    LOADED TIME
file      2          publisher    false      loaded    Fri, 14 Oct 2016 10:55:20 PDT
psutil    8          collector    false      loaded    Fri, 14 Oct 2016 10:55:29 PDT

See which metrics are available:

$ snaptel metric list
NAMESPACE                                VERSIONS
/intel/psutil/cpu/cpu-total/guest        8
/intel/psutil/cpu/cpu-total/guest_nice   8
/intel/psutil/cpu/cpu-total/idle         8
/intel/psutil/cpu/cpu-total/iowait       8
/intel/psutil/cpu/cpu-total/irq          8
/intel/psutil/cpu/cpu-total/nice         8
/intel/psutil/cpu/cpu-total/softirq      8
/intel/psutil/cpu/cpu-total/steal        8
/intel/psutil/cpu/cpu-total/stolen       8
/intel/psutil/cpu/cpu-total/system       8
/intel/psutil/cpu/cpu-total/user         8
/intel/psutil/load/load1                 8
/intel/psutil/load/load15                8
/intel/psutil/load/load5                 8
...

Running Tasks

To collect data, you need to create a task by loading a Task Manifest. The Task Manifest specifies the interval at which a set of metrics is gathered, how the data is transformed, and where the information is published. For more information see the task documentation.
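To give a feel for the shape of a Task Manifest, here is a sketch modeled on the psutil example in this repo: collect two load metrics every second and publish them to a file. Field names are best-effort assumptions; the task documentation has the authoritative schema.

```yaml
---
version: 1
schedule:
  type: simple        # run on a fixed interval
  interval: "1s"
workflow:
  collect:
    metrics:
      /intel/psutil/load/load1: {}
      /intel/psutil/load/load5: {}
    publish:
      - plugin_name: file
        config:
          file: /tmp/psutil_metrics.log
```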

Now, download and load the psutil example:

$ curl https://raw.githubusercontent.com/intelsdi-x/snap/master/examples/tasks/psutil-file.yaml -o /tmp/psutil-file.yaml
$ snaptel task create -t /tmp/psutil-file.yaml
Using task manifest to create task
Task created
ID: 8b9babad-b3bc-4a16-9e06-1f35664a7679
Name: Task-8b9babad-b3bc-4a16-9e06-1f35664a7679
State: Running

NOTE: In subsequent commands use the task ID from your CLI output in place of the <task_id>.

This starts a task collecting metrics via psutil, then publishes the data to a file. To see the data published to the file (CTRL+C to exit):

$ tail -f /tmp/psutil_metrics.log

Or directly tap into the data stream that Snap is collecting using snaptel task watch <task_id>:

$ snaptel task watch 8b9babad-b3bc-4a16-9e06-1f35664a7679
NAMESPACE                             DATA             TIMESTAMP
/intel/psutil/cpu/cpu-total/idle      451176.5         2016-10-14 11:01:44.666137773 -0700 PDT
/intel/psutil/cpu/cpu-total/system    33749.2734375    2016-10-14 11:01:44.666139698 -0700 PDT
/intel/psutil/cpu/cpu-total/user      65653.2578125    2016-10-14 11:01:44.666145594 -0700 PDT
/intel/psutil/load/load1              1.81             2016-10-14 11:01:44.666072208 -0700 PDT
/intel/psutil/load/load15             2.62             2016-10-14 11:01:44.666074302 -0700 PDT
/intel/psutil/load/load5              2.38             2016-10-14 11:01:44.666074098 -0700 PDT

Nice work - you're all done with this example. Depending on how you started snap-telemetry service earlier, use the appropriate command to stop the daemon:

  • init.d service: service snap-telemetry stop
  • systemd service: systemctl stop snap-telemetry
  • ran snapteld manually: sudo pkill snapteld

When you're ready to move on, walk through other uses of Snap available in the Examples folder.

Building Tasks

Documentation for building a task can be found here.

Plugin Catalog

All known plugins are tracked in the plugin catalog and are tagged as collectors, processors, publishers and streaming collectors.

If you would like to write your own, read through Author a Plugin to get started. Let us know if you begin to write one by joining our Slack channel. When you finish, please open a Pull Request to add yours to the catalog!

Documentation

Documentation for Snap will be kept in this repository for now, with an emphasis on filling out the docs/ directory. We would also like to link to external how-to blog posts as people write them. Read about contributing to the project for more details.

To learn more about Snap and how others are using it, check out our blog. A good first post to read is My How-to for the Snap Telemetry Framework by @mjbrender.

Examples

More complex examples of using Snap Framework configuration, Task Manifest files and use cases are available under the Examples folder. There are also interesting examples of using Snap in every plugin repository. For the full list of plugins, review the Plugin Catalog.

Community Support

This repository is one of many in the Snap framework and has maintainers supporting it. We love contributions from our community along the way. No improvement is too small.

This note is especially important for plugins. While the Snap framework is hardened through tons of use, plugins mature at their own pace. If you have subject matter expertise related to a plugin, please share your feedback on that repository.

Contributing

We encourage contributions from the community. Snap needs:

  • Contributors: We always appreciate more eyes on the core framework and plugins
  • Feedback: try it and tell us about it on our Slack team, through a blog post, or on Twitter with #SnapTelemetry
  • Integrations: Snap can collect from and publish to almost anything by authoring a plugin

To contribute to the Snap framework, see our CONTRIBUTING.md file. To give back to a specific plugin, open an issue on its repository. Snap maintainers aim to address comments and questions as quickly as possible. To get some attention on an issue, reach out to us on Slack, or open an issue to get a conversation started.

Author a Plugin

The power of Snap comes from its open architecture and its growing community of contributors. You can be one of them:

Add to the ecosystem by building your own plugins to collect, process or publish telemetry.

Become a Maintainer

Snap maintainers are here to help guide Snap, the plugins, and the community forward in a positive direction. Maintainers of Snap and the Intel-created plugins are selected based on contributions to the project and recommendations from other maintainers. The full list of active maintainers can be found here.

Interested in becoming a maintainer? Check out Responsibilities of a Maintainer and open an issue here to discuss your interest.

Code of Conduct

All contributors to Snap are expected to be helpful and encouraging to all members of the community, treating everyone with a high level of professionalism and respect. See our code of conduct for more details.

Security Disclosure

The Snap team takes security very seriously. If you have any issue regarding security, please notify us by sending an email to [email protected] and not by creating a GitHub issue. We will follow up with you promptly with more information and a plan for remediation.

License

Snap is Open Source software released under the Apache 2.0 License.

Thank You

And thank you! Your contribution, through code and participation, is incredibly important to us.


snap's Issues

Automatic discovery of plugins

On startup, pulse should interpret an environment variable for paths from which it will autoload plugins.

We should also consider adding the ability to load (POST) plugins with a flag indicating that they will be saved to the auto-discovery path and be available on restart.

pulse-ctl task list output order changes as tasks run

pulse-ctl task list output is not ordered by task ID. Currently, with multiple tasks running, the last task to fire is listed first, and the order changes as tasks fire. This is not a big deal with two tasks, but it is not ideal when many tasks are listed.

pulse-ctl does not output a version

pulse-ctl is currently set to use gitversion as the version to output when pulse-ctl -v is run; however, main.gitversion is not passed in during the build of pulse-ctl to set gitversion.

REST API URIs

The REST API does not return proper URIs for created or modified resources, so it is not possible to store and link actions from a response.

Update command line argument pattern

Most Linux programs follow a pattern of -a for a shorthand command-line argument and --full-argument for, well, the full argument.

For example, --max_procs would change to --max-procs, --log_level to --log-level, --log_path to --log-path.

By default, the flag package will resolve both -max-procs and --max-procs. I see we are using codegangsta/cli in pulse-ctl, but still using the default flag package in pulse.go. Migrating to codegangsta/cli in pulse.go would allow us to provide alternate (short) names for command-line arguments.

Note: Most global flags should support being set by env.

Plugin loaded response should show what it is loaded as

This is for after #174 merges

{
  "meta": {
    "code": 500,
    "message": "plugin is already loaded",
    "type": "error",
    "version": 1
  },
  "body": {
    "message": "plugin is already loaded",
    "fields": {}
  }
}

The body.fields above should have the name, type, and version in them.

scripts/deps.sh needs to be run before make

Since _workspace is not versioned in GitHub for pulse, scripts/deps.sh needs to be run before make can build Pulse.

This should be added to either the Makefile or build.sh before actually building Pulse.

Add documentation providing usage and examples

The initial pulse readme.md should include usage and example documentation. We will want to link from the project's main README.md to documents in the components (plugins, client, cmd, etc.) for more detailed documentation.

Task export

We need a feature to select an existing task and save a copy of its config in JSON or YAML format. The exported copy should be compatible with loading back into pulse to create a task.

Pool was drained does not create a new available plugin

It seems a task call on a plugin where the pool was drained does not create a new available plugin. We just get errors that one cannot be selected from the pool. Probably because we aren't tracking subscription to calls correctly.

REST API port

There is currently no way to change the default (8181) REST API port from the agent.

Hit count increases correctly. Misses don't.

ID  State     Hit Count  Miss Count  Create Time
1   Spinning  9          3           Sun, 14 Jun 2015 20:50:30 PDT

It looks like misses only count missed intervals; failed workflows don't count. A failed workflow should also increment the miss count.

Event Endpoint / Streaming Feature Spec

Summary:
Scheduler Tasks collect metrics and execute a workflow based on a schedule: a data point is polled or sampled on a schedule and a workflow is then performed upon it. A very different feature is the ability to subscribe to a source of data that emits a stream of events and perform a workflow on one or many of these events. This spec names these new data sources event endpoints, and the events that can be collected an event stream.

A metric namespace looks like:
/system/cpu/load (this is a metric that has to be collected on a schedule)

An event endpoint would look like:
/system/docker/events (this is an event stream, it needs to have a subscription)

Core module responsibilities:
Scheduler - creates, reports, manages event endpoint subscriptions, receives event streams and fires a workflow.
Control - ensures plugins for event endpoints are started, subscribes to event endpoint, collects events into event stream, sends to Scheduler with subscription ID.

Scheduler would own the active Subscriptions just like it owns Tasks.

Possible Flow:

  1. User/MGMT to Scheduler - please subscribe to /foo/bar/events with a time window of 1 second and the default maximum event stream
  2. Scheduler to Control - subscribe to event stream with config
  3. Control:
    1. ensures plugin for event endpoint is available (running) and binds subscription ID to this plugin
    2. subscribes to event endpoint within available plugin and wires to event endpoint listener on control
  4. Plugin event endpoint
    1. starts event endpoint collection (triggered or polling with index)
    2. event endpoint collection triggers on time window or maximum event parameters and calls event endpoint listener API on control with event stream.
  5. Control
    1. receives event stream on event endpoint listener
    2. correlates event stream to the subscription
    3. calls Scheduler with event stream data
  6. Scheduler -
    1. receives event stream from Control
    2. fires workflow with event stream data

Example functions

Scheduler:

// CreateEventStreamSubscription subscribes a workflow to an event stream.
//   e          - the event stream to subscribe to
//   wf         - the workflow to perform on the event(s)
//   timeWindow - optional window of time to collect events from the event stream before triggering the workflow
//   maxEvents  - optional high-water mark
func CreateEventStreamSubscription(e eventStream, wf workflow, timeWindow time.Duration, maxEvents int) EventStreamSubscription {
    // ...
}

Collector Plugin:

es := NewEventStream()

e1 := Event{
    Key:      "facter-value-change",
    Property: "disks",
    Value:    5,
}
e2 := Event{
    Key:      "facter-value-change",
    Property: "disk-space",
    Value:    "200GB",
}
es.AddEvents(e1, e2)
err := EventListener.Send(es)

Taxonomy:

  • event endpoint - a source of data that can be subscribed to receive events. It is provided by a collector plugin.
  • event stream - a collection of ordered events that have been received from an event endpoint.
  • event listener callback - a callback endpoint on the Control module that can receive event streams from a collector plugin.
  • event subscription - a contract for a collector plugin to collect and send event streams to a subscriber. A subscription has a unique ID.

Additional considerations

  • Event streams should have an index per event that is maintained over the subscription

Tasks should optionally have a user provided friendly name

Something like "MyTask" or "I Like Flowers" or "CPU to InfluxDB".

Task Names should:

  1. Default to something like "Task-" when not provided to prevent duplicate IDs (task IDs don't repeat).
  2. Never be used as a primary key. Should be completely a user-friendly facade to see what they do without having to look at the workflow.

Automatic discovery of plugins: Load Only

Specific version of #178

From original issue above, via @lynxbat:

If I put a plugin in the discovery directory, it will be loaded automatically on startup of the agent. It will not be unloaded if removed from the directory. I can unload it through an API call, but I can only put it back if I restart, load through the API (POST binary), or possibly a new API call for Reload From Discovery Dir.

This makes the feature safe as it occurs before external changes of state to the plugin inventory are allowed; it is a read-once pattern. This would also work well in that all new load API calls could use this directory to persist the plugin for the next startup. It avoids conflicts from trying to sync file system changes and API changes at the same time.

Inconsistent behavior on task creation

Using the sample task included in cmd/, task creation fails if the passthru plugin is not loaded; however, task creation succeeds with the influxdb plugin not loaded.

Worker queue sizes should be exposed

I personally prefer:

  1. command line args (--collect-wpool --process-wpool --publish-wpool)
  2. EnvVars (now easy with new cli)
  3. Eventual pulsed.conf

Auto Discover bug

If there are folders in the auto-discover path, the agent quits. This is probably a combination of not ignoring directories and an erroneous log.Fatal that should be log.Error.

INFO[0000] Starting PulseD                              
DEBU[0000] maxprocs                                      _block=main _module=pulse-agent maxprocs=1
INFO[0000] pulse agent starting                          _module=pulse-agent block=main
DEBU[0000] pevent controller created                     _block=new _module=control
DEBU[0000] metric catalog created                        _block=new _module=control
DEBU[0000] plugin manager created                        _block=new _module=control
DEBU[0000] runner created                                _block=new _module=control
DEBU[0000] started                                       _block=start _module=control-runner
DEBU[0000] metric manager linked                         _block=set-metric-manager _module=scheduler
INFO[0000] started                                       _block=start _module=control
INFO[0000] module started                                _module=pulse-agent block=main pulse-module=control
INFO[0000] scheduler started                             _block=start-scheduler _module=scheduler
INFO[0000] module started                                _module=pulse-agent block=main pulse-module=scheduler
INFO[0000] plugin load called                            _block=load _module=control path=./build/plugin/collector
INFO[0000] plugin load called                            _block=load-plugin _module=control-plugin-mgr path=collector
ERRO[0000] load plugin error                             _block=load-plugin _module=control-plugin-mgr error=fork/exec ./build/plugin/collector: permission denied
FATA[0000] fork/exec ./build/plugin/collector: permission denied  _block=main _module=pulse-agent logpath=./build/plugin plugin=&{collector 68 2147484141 {63571388044 0 0x7386c0} 0xc20804e360}
