insights-core's Introduction

Insights Core

Insights Core is a data collection and analysis framework that is built for extensibility and rapid development. Included is a set of reusable components for gathering data in myriad ways and providing a reliable object model for commonly useful unstructured and semi-structured data.

>>> from insights import run
>>> from insights.parsers import installed_rpms as rpm
>>> lower = rpm.Rpm("bash-4.4.11-1.fc26")
>>> upper = rpm.Rpm("bash-4.4.22-1.fc26")
>>> results = run(rpm.Installed)
>>> rpms = results[rpm.Installed]
>>> rpms.newest("bash")
0:bash-4.4.12-7.fc26
>>> lower <= rpms.newest("bash") < upper
True

Features

  • Over 200 Enterprise Linux data parsers
  • Support for Python 2.6+ and 3.3+
  • Built-in support for local host collection
  • Data collection support for several archive formats

Installation

Releases can be installed via pip:

$ pip install insights-core

Documentation

There are several resources for digging into the details of how to use insights-core; the Jupyter notebooks under docs/notebooks are a good place to start.

To Run the Jupyter Notebooks

If you would like to execute the Jupyter notebooks locally, install Jupyter:

pip install jupyter

To start the notebook server:

jupyter notebook

This should start a web server and open a tab in your browser. From there, you can navigate to docs/notebooks and select a notebook of interest.

Motivation

Almost everyone who deals with diagnostic files and archives such as sosreports or JBoss server.log files eventually automates the process of rummaging around inside them. Usually, the automation consists of fairly simple scripts, but as these scripts get reused and shared, their complexity grows and a more sophisticated design becomes worthwhile.

A general process one might consider is:

  1. Collect some unstructured data (e.g. from a command, an archive, a directory, or directly from a system)
  2. Convert the unstructured data into objects with standard APIs.
  3. Optionally combine some of the objects to provide a higher-level interface than they provide individually (for example, all the networking components might be combined into a single high-level API, or several objects might expose the same information, since the same information can come from multiple sources and not all of them are available in a given system or archive).
  4. Use the data model above at any granularity to write rules that formalize support knowledge, persisters that build database tables, metadata components that extract contextual info for other systems, and more.

Insights Core provides this functionality. It is an extensible framework for collecting and analyzing data on systems, from archives, directories, etc. in a standard way.
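
As a concrete illustration of step 4, here is a minimal sketch of a rule built on the same installed_rpms parser used in the example at the top of this page. It assumes the rule and make_fail helpers exported by the top-level insights package; the rule name and "fixed in" version are made up for the example.

from insights import rule, make_fail, run
from insights.parsers import installed_rpms as rpm

FIXED_IN = rpm.Rpm("bash-4.4.22-1.fc26")  # hypothetical "fixed in" version

@rule(rpm.Installed)
def bash_needs_update(rpms):
    newest = rpms.newest("bash")
    if newest is not None and newest < FIXED_IN:
        return make_fail("BASH_NEEDS_UPDATE", installed=str(newest))

if __name__ == "__main__":
    results = run(bash_needs_update)
    print(results.get(bash_needs_update))  # the make_fail response, or None if the rule did not fire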

Insights Core versus Red Hat Insights

A common confusion about this project is how it relates to Red Hat Insights. Red Hat Insights is a product produced by Red Hat for automated discovery and remediation of issues in Red Hat products. The insights-core project is used by Red Hat Insights, but only represents the data collection and rule analysis infrastructure. This infrastructure is meant to be reusable by other projects.

So insights-core can be used by individuals wanting to perform analysis locally, or it can be integrated into other diagnostic systems. Parsers or rules written using insights-core can be executed in Red Hat Insights, but that is not a requirement.

insights-core's Issues

Remove direct dependency on netstat

From the netstat man page:

This program is mostly obsolete. Replacement for netstat is ss. Replacement for netstat -r is ip route. Replacement for netstat -i is ip -s link. Replacement for netstat -g is ip maddr.

Port Scaffold Script to Master Branch

The scaffold script to build a template rule development environment was somehow removed from the code. I have added it back into the 1.x branch in this commit. This code needs to be ported to master but may require changes.

chkconfig.py cannot parse RHEL 7.3 'chkconfig --list'

Transferred from Trello

The mapper cannot handle the content below:

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
rhnsd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off

xinetd based services:
	chargen-dgram: 	off
	chargen-stream:	off
	daytime-dgram: 	off
	daytime-stream:	off
	discard-dgram: 	off
	discard-stream:	off
	echo-dgram:    	off
	echo-stream:   	off
	rsync:         	on
	tcpmux-server: 	off
	time-dgram:    	off
	time-stream:   	off

Error:

Traceback (most recent call last):
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/evaluators.py", line 197, in run_mappers
    self.add_result(self._execute_mapper(plugin, context),
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/evaluators.py", line 76, in _execute_mapper
    return mapper(context)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/mappers/chkconfig.py", line 66, in __init__
    super(ChkConfig, self).__init__(*args, **kwargs)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/__init__.py", line 112, in __init__
    self.parse_content(context.content)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/mappers/chkconfig.py", line 104, in parse_content
    states = self.level_states['xinetd']
KeyError: 'xinetd'
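
For reference, below is a minimal sketch of parsing both halves of this output, the SysV run-level table and the "xinetd based services:" block, while skipping the leading systemd note. It only illustrates the shape of the data and is not a patch to the ChkConfig mapper; called on the content above, sysv["network"][3] is True and xinetd["rsync"] is True.

import re

def parse_chkconfig_list(lines):
    """Split 'chkconfig --list' output into SysV run-level states and
    xinetd-based service states, ignoring the leading systemd note."""
    sysv, xinetd = {}, {}
    level_re = re.compile(r"^(\S+)\s+((?:\d:(?:on|off)\s*)+)$")
    in_xinetd = False
    for line in lines:
        if "xinetd based services" in line:
            in_xinetd = True
            continue
        if in_xinetd and ":" in line:
            name, state = line.strip().split(":", 1)
            xinetd[name.strip()] = state.strip() == "on"
            continue
        match = level_re.match(line.strip())
        if match:  # e.g. "network  0:off 1:off 2:on 3:on 4:on 5:on 6:off"
            name, levels = match.group(1), match.group(2).split()
            sysv[name] = {int(lvl.split(":")[0]): lvl.endswith(":on") for lvl in levels}
    return sysv, xinetd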

Content for Custom Rules

(This is part of a list of the major stuff that needs to be done as part of a 'Custom Rules' MVP; before we can say Custom Rules work.)

This issue cuts across every part of Insights so I'm putting it here. It is also just easier to put all the Custom Rules stuff in one place.

By content here I mean the text / markup / markdown for a Custom Rule: the stuff that is, or controls, what gets displayed in the frontend.

There needs to be a mechanism that gets content from the place that the customer writes it to the Insights frontend.

From our interviews with potential users of Custom Rules, it seems likely that the person who writes the rule will write all parts of the rule (plugin, content, and remediation). So having different mechanisms for enabling/installing different parts of a custom rule would be a burden on the customer that we would want to avoid if we can.

We currently use two such mechanisms: upload to the API for content and remediation, and "install" into production for plugins.

We could upload all three parts of a custom rule to the API from wherever they write their rules (some script we give them does the upload), and then download the plugin part of the rule back to just that customer's systems after the client downloads the new insights-core egg.

Alternately we could expect the customer to install all three parts of a rule (content, plugin, remediation) to
each system they want that rule to run on (and make it easy to do so). When the collector runs, it looks for and runs the rules installed on that system, and uploads the content and remediation if the rule hits.

Either way it is a big change to one interface or another: the API in the first case, and the engine-frontend interface in the second.

In our conversations with potential users I felt that they seemed to prefer the second alternative (they install rules on individual hosts), though this may have been because they were all SysAdminy types who prefer installing stuff over uploading stuff.

Background:

In our current system, Rule Plugins (the Python) are completely separate from the Rule Content. Rule Plugins only know the name of a rule. When a rule runs in the engine, and "hits" for a given machine, it produces a 'record' containing the name of the rule and data about the machine. No Content. Content is uploaded separately to the frontend through the API. Content and "data about the machine" are not combined until a customer wants to look at that rule hit. I don't know what would happen if a Rule Plugin produced a Rule name for which no Content had been uploaded, but it doesn't matter because we control both the plugins that are run and the content that is uploaded.

Vgdisplay spec does not match sos report or rule

From @bfahr on May 22, 2017 23:9

The current VgDisplay mapper correctly parses the output of the vgdisplay command. However, the sos report uses vgdisplay -vv, which includes additional information on logical and physical volumes, and the current mapper appears to have issues parsing the sos report data.

Also, the cmirror_perf_issue rule utilizes a local mapper for vgdisplay which does parse this information. However, since the spec does not use the -vv switch, no logical volume information is provided in the insights archive, so the rule will never trigger in Insights, though it should work on sos reports.

The spec may need to be changed to add the -vv or -v switch if this information is not available via other specs. The shared mapper VgDisplay needs to be updated so that it works with sos reports and with the spec if it is updated. The rule needs to be revised to replace the local vgdisplay mapper with the shared mapper.

Copied from original issue: RedHatInsights/falafel#232

Improve the examples in lvm.py

The examples for the Lvs, Vgs and Pvs parsers in insights/parsers/lvm.py give very little information about how to use them, and this needs to be improved.

It would also be good to break some of the data in these parser classes out into separate properties rather than keeping everything in the data dictionary.
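
As a hypothetical sketch of the property-style access this asks for, reusing the {'content': [...]} layout shown in the "Lvs mapper fails" issue below; the class and property names are illustrative only, not the parsers' real API.

class LvsExample(object):
    def __init__(self, data):
        self.data = data

    @property
    def volume_names(self):
        """All logical volume names, without digging through data directly."""
        return [row["LV"] for row in self.data["content"]]

    def volume(self, name):
        """The first row whose LV column matches name, or None."""
        return next((row for row in self.data["content"] if row["LV"] == name), None)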

include and run custom rules during data collection on individual hosts

(This is part of a list of the major stuff that needs to be done as part of a 'Custom Rules' MVP; before we can say Custom Rules work).

In the collector (client), a mechanism that notices that there are Custom Rules "installed" on the machine, loads and runs those rules, and then includes the results of those rules with the rest of the data it has collected.

Get 100% test coverage of insights/core/__init__.py

At the moment the stats are:

Name                           Stmts   Miss Branch BrPart  Cover   Missing
--------------------------------------------------------------------------
insights/core/__init__.py        330     28    106      9    90%   44, 57-60, 296, 333-334, 400, 415-418, 967, 970-975, 978, 981-991, 41->44, 42->44, 52->51, 62->51, 295->296, 385->exit, 399->400, 915->878, 960->exit

Items to do:

  • get_module_names() never hits return False in name_filter().
  • get_module_names() should check what happens when walk_packages sets ispkg to True. What does that mean?
  • get_module_names() should check what happens when name_filter() returns False.
  • get_module_names() should test AttributeError handling.
  • Scannable should test registering two scanners with the same name.
  • Scannable should test __contains__() method.
  • AlternativesOutput should test handling of non-matched lines past "Current best version is".
  • Scannable should test parse() method - and parsers should use it.
  • LogFileOutput should test registering two scanners with the same name.
  • LogFileOutput should test token_scan() method.
  • ErrorCollector class needs to be thoroughly tested (967-991)
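
For reference, a command along these lines should reproduce the table above locally (it assumes pytest-cov is installed; run it from a checkout of the repository):

pip install pytest pytest-cov
py.test --cov=insights.core --cov-branch --cov-report=term-missing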

Transaction check error w/python-requests on falafel-1.38.0-25 installation

From @wduffeeb on May 24, 2017 16:52

When installing falafel-1.38.0-25 from insights-cli repo on a RHEL 7.2 (or 7.3, and presumably 7.1) system that already has python-requests installed (version python-requests-2.6.0-1.el7_1.noarch, seems to come installed with Red Hat CSB) there is a transaction check error on multiple files. Example line from error:

file /usr/lib/python2.7/site-packages/requests/__init__.py from install of requests-2.13.0-8.noarch conflicts with file from package python-requests-2.6.0-1.el7_1.noarch

Details:
http://pastebin.test.redhat.com/487458

Thanks!

Copied from original issue: RedHatInsights/falafel#242

Use consistent heading and class style for parser catalogue

The proposal is to eventually replace the current .rst file for parsers with one of the form:

{{ module title }}
==================

{{ for each class in the module:}}
{{ file or command }}
---------------------
.. autoclass:: insights.parsers.{{module}}.{{parser class name}}
   :members:
   :show-inheritance:
{{ endfor }}

This provides a more structured way to organise the information currently presented in an ad-hoc fashion within the parsers catalogue.
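
For illustration, here is the proposed form filled in for a single module; the module and class names are only an example.

Hostname
========

hostname command
----------------
.. autoclass:: insights.parsers.hostname.Hostname
   :members:
   :show-inheritance: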

"runnable rules"

(This is part of a list of the major stuff that needs to be done as part of a 'Custom Rules' MVP; before we can say Custom Rules work.)

For people to be able to generate Custom Rules, they are going to need to be able to run them. They will not have a large body of sosreports available to them, nor the accumulated data from lots of Insights archives.

They can test their rules by building py.tests as we do, but that assumes they know python and py.test, and forces them to learn how to use our integration tests framework.

They can set up some test boxes and install their fledgling Custom Rules, and look at the result in Insights. But if nothing shows up, what do they do then?

At minimum we need a way to run a rule against the data on this box. See results, see errors, see debugging information.

It would be nice to be able to run against another box, sorta like ansible. See results, see errors, see debugging information.

It would be nice if you could run something like our integration tests without having to know python or py.test or our integration testing framework.
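
As a sketch of that minimum case, the run() entry point shown at the top of this page can already be pointed at a single rule on the local box. The module name below is hypothetical, and the exceptions attribute on the returned broker is an assumption about the current API, offered only as an illustration.

from insights import run
from my_rules import bash_needs_update  # hypothetical module holding the rule

broker = run(bash_needs_update)
print(broker.get(bash_needs_update))  # the rule's result, if it produced one

# debugging information: components that raised, and what they raised
for component, errors in broker.exceptions.items():
    print(component, errors)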

run parsers and combiners (and rules) during collection

(This is part of a list of the major stuff that needs to be done as part of a 'Custom Rules' MVP; before we can say Custom Rules work, though I think we want to do this one even if we don't do Custom Rules.)

In the collector (client), run the parsers and combiners, and upload their results instead of the raw specs results.

  1. the parsers and combiners become a published API that we can no longer change at will. Customers will be writing rules depending on the existing parsers and combiners.
  2. the result of any individual parser or combiner might be needed by Custom Rules, Server Rules, both, or neither. If the results of a parser or combiner are not needed by the Server, they should not be uploaded (for security and space reasons). If they are needed by neither, the parser or combiner need not be run at all.

Enhance httpd_conf to support nested sections #84

A new PR to the 1.x branch corresponding to #84 is needed.
However, before carrying this out, it's necessary to update insights-plugins to replace the deprecated interface (e.g. get_valid_setting) with the new one (e.g. get_active_setting).

Drop pyparsing dependency

From @jhjaggars on May 24, 2017 17:13

This currently includes rewriting all the RabbitMQ mappers. The multipath mapper already has a non-pyparsing implementation, but it just needs to be merged.

Copied from original issue: RedHatInsights/falafel#248

For Fava, alter the doc for each Parser and Combiner

(This is part of a list of the major stuff that needs to be done as part of a 'Custom Rules' MVP; before we can say Custom Rules work).

For Fava to be successful, we need to alter the documentation for each Parser and Combiner so that it is not Python-specific. It needs to be general enough that someone who knows no Python, but does know the basics of Fava, can use it.

It also needs to be easier for anyone to find it.

Stacked Decorators are Bad

From @kylape on May 4, 2017 21:1

We shouldn't let developers stack insights decorators. Currently, they only work if all specs in a stack are pattern specs or if only one spec in a stack is present, and even then, the expected output is hard to reason about.

Example:

@mapper('foo')
@mapper('bar')
class FooBar(Mapper):
    pass

Here's the current code that attempts to handle the pattern spec case. Note this forces the evaluator to assume all mapper outputs are lists.

def collect_results(results_dict):
    plugin_output = defaultdict(dict)
    shared_output = {}
    for producer, output in results_dict.iteritems():
        if producer.shared:
            if not producer._reducer:
                is_pattern = pattern_file(producer.symbolic_names[0])
                shared_output[producer] = output if is_pattern else output[0]
            else:
                shared_output[producer] = output
        else:
            plugin = sys.modules[producer.__module__]
            plugin_output[plugin].update(autobox(producer.symbolic_names, output))
    return plugin_output, shared_output

Current stacked mappers:

(falafel-mark2) [10026 csams@localhost mappers]$ ag '@mapper.*\n@mapper'
lvm.py
77:@mapper('pvs')
78:@mapper('pvs_noheadings')
141:@mapper('vgs')
142:@mapper('vgs_noheadings')
211:@mapper('lvs')
212:@mapper('lvs_noheadings')

grub_conf.py
100:@mapper('grub2.cfg')
101:@mapper("grub.conf")

xinetd_conf.py
99:@mapper("xinetd.conf")
100:@mapper("xinetd.d")

limits_conf.py
43:@mapper("limits.conf")
44:@mapper("limits.d")
85:@mapper("limits.conf")
86:@mapper("limits.d")

httpd_conf.py
91:@mapper('httpd.conf', filters=['IncludeOptional'])
92:@mapper('httpd.conf.d')

modprobe.py
5:@mapper('modprobe.conf')
6:@mapper('modprobe.d')

pvs, vgs, and lvs work, but only because of the NoneGroup hack. grub works because both versions are simple file specs and neither should show up at the same time. httpd_conf works because its httpd.conf spec was changed to a pattern file spec. xinetd, limits_conf, and modprobe are currently broken.
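
For contrast, a minimal sketch of the unstacked shape this issue argues for, using the same falafel-era decorator and base class as the example above: one spec per mapper class, with a shared reducer combining them later if a single view is needed.

@mapper('foo')
class Foo(Mapper):
    pass

@mapper('bar')
class Bar(Mapper):
    pass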

Copied from original issue: RedHatInsights/falafel#190

Asserts should be try/except in Prod

Transferred from Trello

There are some assert statements in the falafel code that should be replaced with an appropriate try/except block. Here's a preliminary list from the mappers directory:

crontab.py:            assert len(parts) == 6, "Crontab line appears corrupted, not enough parts: %r" % line
lsblk.py:            assert 'TYPE' in self.data
mdstat.py:    assert tokens.pop(0) == "Personalities"
mdstat.py:    assert tokens.pop(0) == ":"
mdstat.py:        assert token.startswith('[') and token.endswith(']')
mdstat.py:    assert device_name.startswith("md")
mdstat.py:    assert tokens.pop(0) == ":"
mdstat.py:        assert active_string == "inactive"
mdstat.py:        assert len(subtokens) > 1
mdstat.py:        assert comp_name
mdstat.py:    assert len(upstring) == len(component_list)
mdstat.py:        assert up_indicator == 'U' or up_indicator == "_"
netstat.py:        assert self.name in NETSTAT_SECTION_ID
netstat.py:        assert 'PID/Program name' in self.datalist[-1]
netstat.py:            assert ':' in local_addr
netstat.py:        assert ACTIVE_INTERNET_CONNECTIONS in self.data
netstat.py:        assert ACTIVE_INTERNET_CONNECTIONS in self.datalist
nfnetlink_queue.py:            assert len(parts) == 9
redhat_release.py:        assert len(content) == 1
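
For example, the crontab.py entry above could become an explicit check that raises a dedicated exception. ParseException here is an assumption about the error type, not necessarily what the codebase uses.

class ParseException(Exception):
    pass

def split_crontab_line(line):
    parts = line.split(None, 5)  # minute, hour, day, month, weekday, command
    if len(parts) != 6:
        raise ParseException(
            "Crontab line appears corrupted, not enough parts: %r" % line)
    return parts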

Design and develop client runtime API

From @kylape on May 24, 2017 17:10

This will be used to execute mappers on the client. This also includes moving the upload code to insights-core.

Copied from original issue: RedHatInsights/falafel#245

Uname not properly parsed for RHEL 6.3 sosreports

For RHEL 6.3 (sos-2.2) sosreports, the command output files under the sos_commands folders all end with a newline. Below are two examples to compare (check hostname, blkid, uname_-a):

Example rhel 6.3 sosreport : https://api.access.redhat.com/rs/cases/01857736/attachments/bd1496bd-317f-43af-ad9f-fb802751667a
Example rhel 6.8 sosreport https://api.access.redhat.com/rs/cases/01857704/attachments/39b557e2-22ee-41bd-876e-a961d2985d65

Some mappers raise errors because of this.

Example uname.py error:

ERROR:eval:Mapper failed
Traceback (most recent call last):
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/evaluators.py", line 197, in run_mappers
    self.add_result(self._execute_mapper(plugin, context),
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/evaluators.py", line 76, in _execute_mapper
    return mapper(context)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/mappers/uname.py", line 195, in __init__
    super(Uname, self).__init__(context)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/core/__init__.py", line 112, in __init__
    self.parse_content(context.content)
  File "/opt/insights-plugins/lib/python2.7/site-packages/falafel/mappers/uname.py", line 223, in parse_content
    raise UnameError("Uname string appears invalid", uname_line)
UnameError: Uname string appears invalid:''

The root cause is that uname.py does not take into account the newline at the end of the file.

The error is raised by the code below; note that it uses content[-1] to get the last line, which is empty.

uname_line = content[-1]  # read the last line instead of the first
uname_parts = uname_line.split(' ')
if len(uname_parts) < 3:
    ver_rel_match = re.match("[0-9](\.[0-9]+){2}-[0-9]+", uname_parts[0])
    if not ver_rel_match:
        raise UnameError("Uname string appears invalid", uname_line)
    data['kernel'] = uname_parts[0]

For RHEL 6.4 (sos-2.2-38.el6.noarch) and above, the command output files do not include the newline at the end of the file.

Accordingly, the insights-cli does not work well for RHEL 6.3 and earlier versions.
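
A minimal sketch of one possible fix is to take the last non-empty line instead of blindly using content[-1]; this is only an illustration, not the actual patch to uname.py.

def last_nonempty_line(content):
    # walk backwards past any trailing blank lines
    for line in reversed(content):
        if line.strip():
            return line
    return ""

# uname_line = last_nonempty_line(content) would then replace content[-1]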

Support Python Version 2.6.x

From @jhjaggars on May 24, 2017 17:8

This change really means supporting both runtimes; we aren't looking to take advantage of language feature changes.

Copied from original issue: RedHatInsights/falafel#244

Lvs mapper fails on lists with log and image LVs

Sample data:

LVS_MATCH2 = """
      LV             VG       LSize   Region  Log      Attr       Devices
      lv0            vg0       52.00m 511.00k lv0_mlog mwi-a-m--- lv0_mimage_0(0),lv0_mimage_1(0)
      [lv0_mimage_0] vg0       52.00m      0           iwi-aom--- /dev/sdb1(0)
      [lv0_mimage_1] vg0       52.00m      0           iwi-aom--- /dev/sdb2(0)
      [lv0_mlog]     vg0        4.00m      0           lwi-aom--- /dev/sdb3(3)
      lv1            vg0        3.50t   2.00m lv1_mlog mwi-a-m--- lv1_mimage_0(0),lv1_mimage_1(0)
      [lv1_mimage_0] vg0        3.50t      0           iwi-aom--- /dev/sdb1(13)
      [lv1_mimage_1] vg0        3.50t      0           iwi-aom--- /dev/sdb2(13)
      [lv1_mlog]     vg0        4.00m      0           lwi-aom--- /dev/sdb3(0)
      lv2            vg0     5122.00g   4.00m lv2_mlog mwi-a-m--- lv2_mimage_0(0),lv2_mimage_1(0)
      [lv2_mimage_0] vg0     5122.00g      0           iwi-aom--- /dev/sdb1(13)
      [lv2_mimage_1] vg0     5122.00g      0           iwi-aom--- /dev/sdb2(13)
      [lv2_mlog]     vg0        8.00m      0           lwi-aom--- /dev/sdb3(0)
      lv_root        vg_test1   6.71g      0           -wi-ao---- /dev/sda2(0)
      lv_swap        vg_test1 816.00m      0           -wi-ao---- /dev/sda2(1718)
""".strip()

Contents:

lvs.data
Out[6]: 
{'content': [{'Attr': 'mwi-a-m---',
   'Devices': 'lv0_mimage_0(0),lv0_mimage_1(0)',
   'LSize': '52.00m',
   'LV': 'lv0',
   'Log': 'lv0_mlog',
   'Region': '511.00k',
   'VG': 'vg0'},
  {'Attr': '/dev/sdb1(0)',
   'LSize': '52.00m',
   'LV': '[lv0_mimage_0]',
   'Log': 'iwi-aom---',
   'Region': '0',
   'VG': 'vg0'},
  {'Attr': '/dev/sdb2(0)',
   'LSize': '52.00m',
   'LV': '[lv0_mimage_1]',
   'Log': 'iwi-aom---',
   'Region': '0',
   'VG': 'vg0'},
  {'Attr': '/dev/sdb3(3)',
   'LSize': '4.00m',
   'LV': '[lv0_mlog]',
   'Log': 'lwi-aom---',
   'Region': '0',
   'VG': 'vg0'},
...

Note that the attribute strings end up in the 'Log' key due to bad parsing.
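
One possible approach, sketched below, is to locate each whitespace-delimited token and assign it to the column whose header text it overlaps, so an empty Log cell simply receives no token instead of pulling the Attr value into it. This only illustrates the idea and is not the Lvs parser's actual implementation.

import re

def parse_aligned_table(lines):
    lines = [line for line in lines if line.strip()]
    header, rows = lines[0], lines[1:]
    # (name, start, end) for each column heading in the header line
    cols = [(m.group(), m.start(), m.end()) for m in re.finditer(r"\S+", header)]
    parsed = []
    for row in rows:
        record = dict.fromkeys([name for name, _, _ in cols], "")
        for m in re.finditer(r"\S+", row):
            for name, start, end in cols:
                if m.start() < end and m.end() > start:  # token overlaps this heading
                    record[name] = m.group()
                    break
        parsed.append(record)
    return parsed

rows = parse_aligned_table(LVS_MATCH2.splitlines())
# rows[1] now has 'Log': '' and 'Attr': 'iwi-aom---'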

Should we convert this multinode parser (metadata.json) to class type?

This is related to PR: https://github.com/RedHatInsights/insights-core/pull/161

AFAIK:
Fact-1: we like class-type parsers better than function-type ones.
Fact-2: we'd like to do our best to keep the plugins (the @rule reducers) out of trouble (i.e., let them handle fewer things).

Well, based on Fact-1, I am converting this multinode parser from function to class type (suggested by @PaulWay).
What I did:
I wrapped all the "metadata.json" parsing functions into one class, MetaData,
and added (rhev, osp, docker, metadata) as MetaData's attributes.
To fit this on the insights-plugins side, plugins can be changed like this
example: https://github.com/RedHatInsights/insights-plugins/pull/143

I tested the above changes by uploading a rhev-coordinator archive to my local insights-engine server, and it works
(no guarantee of full test coverage).

Thinking about Fact-2: yes, this change becomes a burden on plugins
(the existence of a metadata.json file means the parser hits for every @rule that requires it).

So my question here is:
Should we convert this multinode parser to class type? Or does someone have a better idea of how to achieve this?
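
For what it's worth, here is a hypothetical sketch of the class shape described above. The constructor follows the context.content pattern visible in the tracebacks elsewhere on this page, and the "product" key is purely an assumption for illustration; it is not the implementation in the linked PR.

import json

class MetaData(object):
    def __init__(self, context):
        # metadata.json arrives as a list of lines; keep the parsed dict around
        self.metadata = json.loads("\n".join(context.content))
        product = self.metadata.get("product", "").lower()  # assumed key
        self.rhev = product == "rhev"
        self.osp = product == "osp"
        self.docker = product == "docker"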

Create log transaction-id for log messages associated with a single transaction.

When analyzing logs (e.g., splunk logs) across systems, it can be very convenient to have some message_id or transaction_key to collect all log messages related to a given request.

Having a message_id/transaction_key/something that is consistently stored with all of that activity, on both the insights-upload and insights-content servers and the calling systems, would be awesome.
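
Independent of where the id comes from, attaching it to every log record is straightforward with the standard library. The sketch below uses logging.LoggerAdapter, and the field name transaction_id is just an example, not an agreed convention.

import logging
import uuid

logging.basicConfig(format="%(asctime)s %(transaction_id)s %(name)s: %(message)s")
log = logging.getLogger("insights.upload")

def handle_upload():
    # one id per request/transaction, carried by every record logged through the adapter
    adapter = logging.LoggerAdapter(log, {"transaction_id": uuid.uuid4().hex})
    adapter.warning("archive received")
    adapter.warning("parsers finished")

handle_upload()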
