
infra-integrations-sdk's Introduction



Golang SDK for New Relic integrations

Infrastructure monitoring provided by New Relic offers flexible, dynamic server monitoring, including integrations for many popular services.

If our on-host integrations don't meet your needs, we provide two options for creating your own:

  • Our Flex integration tool: a simple way to report custom metrics by creating a configuration file that defines what data will be reported. This is recommended for most use cases.
  • Our Integrations SDK: a more robust solution. We give you access to the complete set of tools we use to build our integrations and report all infrastructure integrations data types.

The Integrations SDK helps take the complexity out of building an integration by providing a set of useful Go language functions and data structures. For instance, some common use cases like reading values from command-line arguments or environment variables, initializing a structure with all the necessary fields for an integration defined by our SDK, or generating and printing a JSON to stdout, are covered and simplified by this package.
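To illustrate the kind of boilerplate the SDK wraps, here is a hand-rolled sketch of the "generate and print a JSON to stdout" step using only the standard library. The struct fields mirror the protocol-v4 payload shown later in this document; they are not the SDK's actual types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// payload is a minimal shape of a protocol-v4 document; the real SDK
// provides richer types, builders, and argument parsing on top of this.
type payload struct {
	ProtocolVersion string                   `json:"protocol_version"`
	Integration     integrationInfo          `json:"integration"`
	Data            []map[string]interface{} `json:"data"`
}

type integrationInfo struct {
	Name    string `json:"name"`
	Version string `json:"version"`
}

// buildPayload assembles an empty payload; an integration would append
// entity data to Data before printing.
func buildPayload() payload {
	return payload{
		ProtocolVersion: "4",
		Integration:     integrationInfo{Name: "com.myorg.example", Version: "0.1.0"},
		Data:            []map[string]interface{}{},
	}
}

func main() {
	out, err := json.Marshal(buildPayload())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out)) // the agent consumes this stdout JSON
}
```

The SDK replaces all of this scaffolding with ready-made data structures and a single publish step.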

If you want to know more or you need specific documentation about the structures and functions provided by this package, you can take a look at the official package documentation in godoc.org (see below).

SDK v4 Internal Release Notice

This is an internal release of the new SDK v4. It contains breaking changes, therefore it's highly recommended to take a look at the migration guide from v3 to v4.

Most of the documentation hasn't been updated yet to reflect the changes made in this new release.

Installation

Before writing any Go code, we suggest taking a look at Go's documentation to set up your environment and familiarize yourself with the language.

The minimum supported Go version is 1.13. You can check your Go version by executing the following command in a bash shell:

$ go version

You can download the SDK code to your GOPATH with the following command:

$ go get github.com/newrelic/infra-integrations-sdk/...

Read the SDK documentation to learn about all the packages and functions it provides. If you need ideas or inspiration to start writing integrations, follow the tutorial.

API specification

You can find the latest API documentation generated from the source code in godoc.

Agent API

Infrastructure on-host integrations are executed periodically by the infrastructure agent, which consumes the integration's stdout. The stdout data is formatted as JSON.

The agent supports different JSON data-structures called integration protocols:

  • v1: Legacy data structure to monitor local entity.
  • v2: This version adds support for monitoring remote entities while keeping support for the local entity. Official doc
  • v3: Improves remote entities support. See protocol v3 documentation.
  • v4: Adds support for dimensional metrics format and introduces new metric types: count, summary, cumulative-count and cumulative-rate.

Host Entity vs Entities

An entity is a specific thing we collect data about. We use this deliberately generic term because we want to support hosts, pods, load balancers, DBs, etc. in a uniform way. The previous SDK v3 had the Local Entity and Remote Entities.

In this new version the reporting host is called the HostEntity, and adding data to it is optional. It represents the host the agent is running on. If your entity belongs to a different host, or it's something abstract that isn't attached to the host where the integration runs, you can create an Entity, which requires a unique name and an entity type.

You can add metrics, events and inventory on both types of entities.

Upgrading from SDK v3 to v4

https://github.com/newrelic/infra-integrations-sdk/blob/master/docs/v3tov4.md

SDK and agent-protocol compatibility

SDK v1 and v2 use protocol-v1.

SDK v3 could use either protocol-v2 or protocol-v3.

SDK v4 only uses protocol-v4.

Libraries

JMX support

The Integrations Golang SDK supports getting metrics through JMX by calling the jmx.Open(), jmx.OpenWithSSL, jmx.Query(), and jmx.Close() functions. This JMX support relies on the nrjmx tool. Follow the steps in the nrjmx repository to build it and set the NR_JMX_TOOL environment variable to point to the location of the nrjmx executable. If the NR_JMX_TOOL variable is not set, the SDK uses /usr/bin/nrjmx by default.

HTTP support

The GoSDK provides a helper HTTP package to create secure HTTPS clients that require loading credentials from a Certificate Authority Bundle (stored in a file or in a directory). You can read more here.

Tools and FAQs related to the previous SDK v3

https://github.com/newrelic/infra-integrations-sdk/blob/master/docs/v3_tools_faqs.md

Support

Should you need assistance with New Relic products, you are in good hands with several support diagnostic tools and support channels.

If the issue has been confirmed as a bug or is a feature request, file a GitHub issue.

Support Channels

Privacy

At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.

We define “Personal Data” as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address, and email address.

For more information, review New Relic’s General Data Privacy Notice.

Contribute

We encourage your contributions to improve this project! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project.

If you have any questions, or to execute our corporate CLA (which is required if your contribution is on behalf of a company), drop us an email at [email protected].

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through HackerOne.

If you would like to contribute to this project, review these guidelines.

To all contributors, we thank you! Without your contribution, this project would not be what it is today.

License

infra-integrations-sdk is licensed under the Apache 2.0 License.


infra-integrations-sdk's Issues

Document SDK v4

Feature Description

Once #283 is implemented and the Design Change: Entity synthesis for prometheus-based OHIs has been tested and is working, create a document with all the v4 fields, their usage, and their consequences (data generated).

  • Host entity
  • New entity
  • Metrics/events
  • ignore_host_entity
  • common, metadata, labels

Feel free to add more items into the list.

The document should be uploaded in the repository and Confluence.

JMX Package Can Silently Drop Stderr When Command Fails

Description of the problem

I spent a couple hours on this but due to the package variables (global sadness) and the fine grained locking, I realized I would likely have ended up completely rewriting the package. So instead I'll simply highlight the problem and allow someone to make an incremental change.

The JMX command is run here with cmd.Wait:

https://github.com/newrelic/infra-integrations-sdk/blob/master/jmx/jmx.go#L99

If that fails, it sends on the err channel in the goroutine, but the Open call still returns a nil error, since the failure happens in a separate goroutine.

Then a doQuery func runs and sends a string to an os.Pipe. Since it's a valid buffered pipe, the write will succeed if the command is in the process of failing (still listening on the pipe) but has not yet exited. So I think there might be a race condition.

https://github.com/newrelic/infra-integrations-sdk/blob/master/jmx/jmx.go#L135

Then we have the receiveResult func, which has a select. The select is random, so the first case statement can be chosen and you'll get a "got empty result for query" error message, while the data in the stderr pipe is silently dropped.

https://github.com/newrelic/infra-integrations-sdk/blob/master/jmx/jmx.go#L179

Integrations SDK version

v2.1

Golang version

go 1.10

OS
  • Red Hat 7.5

(Oct) SDK v4 Beta: Support for entities registration and dimensional metrics Champion

Note: this points to the same codebase target as the Register/Entity work. Parallelizing this work could lead to merge conflicts and slowdowns in development.

Make DM pipeline production ready (error handling, rate limiting handling, perf testing)

Check if the Telemetry SDK can be configured to send data to the agent instead of the DM endpoint.

Proper reporting of errors for integrations (this was discussed during the Flex GA MMF development).

This includes public documentation for customers who are developing their own integrations and for the FIT team, plus enablement of the support team.

Epic description: https://docs.google.com/document/d/1MBvijlZBJx96ICbtj7_y9dZSegz1oaIqrKRm4DFIyj4/edit

Action items identified from Blue Medora's HAProxy PoC are included in the above MMF definition: DMI - BM's feedback about SDK v4

[v4] Integration package naming inconsistencies

  • The Entity is actually the Data field of the v4 protocol.
  • The metadata.Metadata field inside the Entity actually contains the definition of the Entity.
  • The Integration doesn't have any func to add metrics that are not related to any Entity. A workaround for this is being added in #291

Common dimension not aligned with telemetry SDK

Description

The SDK v4 is not generating a valid payload compared with the one the agent expects.
In SDK v4, each entity can provide a map of common dimensions. PowerDNS example:

"data": [
        {
            "common": {
                "scrapedTargetKind": "user_provided",
                "scrapedTargetName": "localhost: 9122",
                "scrapedTargetURL": "http: //localhost:9122/metrics",
                "targetName": "localhost:9122"
            },
...

The problem is that the agent defines this common field as the following data structure:

type Common struct {
	Timestamp  *int64                 `json:"timestamp"`
	Interval   *int64                 `json:"interval.ms"`
	Attributes map[string]interface{} `json:"attributes"`
}

Why change the SDK and not the agent mapping?

SDK v4 payload data is transformed by the agent into a valid structure for the NR telemetry API. The Telemetry API defines the common structure similarly to the agent's:

type metricCommonBlock struct {
	timestamp time.Time
	interval time.Duration
	forceIntervalValid bool
	attributes MapEntry
}

Telemetry expected payload example.

Expected behaviour

If we want to align the integrations SDK with the telemetry SDK, the commonDimension field should be aligned with the telemetry CommonBlock. The entity common data should be:

"data": [
        {
            "common": {
                "attributes": {
                                "scrapedTargetKind": "user_provided",
                                "scrapedTargetName": "localhost: 9122",
                                "scrapedTargetURL": "http: //localhost:9122/metrics",
                                "targetName": "localhost:9122"
                },
            },
...

Common timestamp and interval will be added but not covered in this issue as they are currently populated by the agent.

Release

These changes should not break current integrations, as common dimensions should be added with AddCommonDimension. Nonetheless, the CommonDimension field is public/exported, so we should sync with core integrations to ensure CommonDimension is not used directly.

Missing "EventSet" implementation / Feature Request

It would appear, based on the help docs (https://docs.newrelic.com/docs/infrastructure/integrations-sdk/file-specifications/integration-executable-file-specifications#event-data), that integration binaries should be able to emit "Event Set" data to allow tracking of "one-off messages for key activities on a system". However, I do not see any code in this SDK that would facilitate or exemplify what the format of that JSON might be.

sdk version
  • dev
  • 1.0
  • ...
Golang version
  • All of them
  • 1.8
  • 1.7
  • ...
OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Downloading sdkv4

SDK v4 has been released, but we had issues importing it:

$go get github.com/newrelic/infra-integrations-sdk/[email protected]
go get github.com/newrelic/infra-integrations-sdk/[email protected]: github.com/newrelic/[email protected]: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v4

$go get github.com/newrelic/[email protected]
go get github.com/newrelic/[email protected]: github.com/newrelic/[email protected]: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v4

If I run simply $go get github.com/newrelic/infra-integrations-sdk I get version:
github.com/newrelic/infra-integrations-sdk v3.6.5+incompatible // indirect

Should we add the major-version suffix to the module path in go.mod?
Something like this:

golang/go#35732 (comment)

Feature Request: Ability to Override/Mock out globalLogger in log Package

Description of the problem

It would be very helpful to be able to mock out/override the global logger in the log package. It's not trivial right now to test error or other cases that just log rather than return a certain value.

Having a function in the log package that can set the global logger to a struct that implements the Logger interface would be helpful.

Something similar to this:

func SetGlobalLogger(logger Logger) {
    globalLogger = logger
}
Integrations SDK version

N/A

Golang version

N/A

OS

N/A

Storer file name not unique for multiple integration execution

Description

When the storer is created, it uses the name of the integration as the file name.
So if you have multiple discoveries, the same integration is spawned many times, and all instances try to read and write the same storer JSON file.

Impact: all OHIs that use RATEs and DELTAs with container discovery.

Internal discussion: https://newrelic.slack.com/archives/C5A2QGLKT/p1613449983171900

Expected Behavior

Each executed integration should have their own store file.

Steps to Reproduce

Set up multiple integration manifests from the same on-host integration.
Run the containerized agent.
Observe integration metrics in NR with incorrect values.

Your Environment

A containerized environment with multiple YAML configs from the same on-host integration (e.g. Flex, HAProxy)

Additional context

For SDK < v4
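One hypothetical shape for the fix (not the SDK's actual storer code): derive the file name from the integration name plus a hash of the invocation arguments, so each spawned instance gets its own file.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"path/filepath"
	"strings"
)

// storerFile builds a per-invocation storer path by hashing the
// command-line arguments, so concurrently discovered instances of the
// same integration no longer share one JSON file.
func storerFile(dir, integrationName string, args []string) string {
	h := sha1.Sum([]byte(strings.Join(args, " ")))
	suffix := hex.EncodeToString(h[:])[:8]
	return filepath.Join(dir, fmt.Sprintf("%s-%s.json", integrationName, suffix))
}

func main() {
	a := storerFile("/tmp", "nri-haproxy", []string{"-host", "10.0.0.1"})
	b := storerFile("/tmp", "nri-haproxy", []string{"-host", "10.0.0.2"})
	fmt.Println(a)
	fmt.Println(b) // a different path for a different discovered target
}
```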

Field to prevent metrics/events to be attached to the host entity

Problem

The agent has a binary behaviour regarding the metrics from an integration: either attach them all to the host entity, or none (FlagDMRegisterEnable = "dm_register_enabled"). Most integrations should be fine without attaching to the host entity, as the backend performs the entity synthesis, but if in the near future we want to decouple core integrations (cpu, mem, etc.), the agent should be able to differentiate them.

Feature Description

Add a field to each entity's data to let the agent know whether to add the host entity or not. For example, ignore_host_entity, defaulting to false:

{
  "protocol_version":"4",                      # protocol version number
  "integration":{                              # this data will be added to all metrics and events as attributes,                                               
                                               # and also sent as inventory
    "name":"integration name",
    "version":"integration version"
  },
  "data":[                                    # List of objects containing entities, metrics, events and inventory
    {
      "ignore_host_entity": true,               # don't attach metrics to the host entity
      "metrics":[                             # list of metrics using the dimensional metric format
        {
          "name":"redis.metric1",
          "type":"count",                     # gauge, count, summary, cumulative-count, rate or cumulative-rate
          "value":93, 
          "attributes":{}                     # set of key-value pairs that define the dimensions of the metric
        }
      ],
      "common":{...},                         # Map of dimensions common to every entity metric. Only strings supported.
      "inventory":{...},                      # Inventory remains the same
      "events":[...]                          # Events remain the same
    }
  ]
}

Additional context

newrelic/infrastructure-agent#865

NewEntity does not add the new entity to the integration

Description

When calling integration.NewEntity in an integration, the entity is not added to the integration's list of entities. This results in the integration data being missing.

Expected Behavior

The entity should be added to the integration's list of entities and the data serialized.

Steps to Reproduce

go run redis.go -metrics -hostname localhost 

{"protocol_version":"4","integration":{"name":"com.myorg.redis","version":"0.1.0"},"data":[]}

Your Environment

MacOS 10.15.7
infra-integrations-sdk v4.1.0

Additional context

The issue can be worked around by creating the entity and then appending it manually:

i.Entities = append(i.Entities,entity)
go run redis.go -metrics -hostname localhost 

{"protocol_version":"4","integration":{"name":"com.myorg.redis","version":"0.1.0"},"data":[{"common":{},"entity":{"name":"redis_01","displayName":"RedisServer","type":"my-redis","metadata":{}},"metrics":[{"timestamp":1631239424,"name":"query.instantaneousOpsPerSecond","type":"gauge","attributes":{},"value":0}],"inventory":{},"events":[]}]}

Cache collisions for metric.RATE metrics

I am getting the following error from the sdk for the custom integration I am writing which uses the metric.RATE option:

WARN[0000] Error setting value: Samples for queue.jobsPerSecond are too close in time, skipping sampling 
WARN[0000] Error setting value: Samples for queue.jobsPerSecond are too close in time, skipping sampling 
WARN[0000] Error setting value: Samples for queue.jobsPerSecond are too close in time, skipping sampling 

The above example came from trying to add a metric named queue.jobsPerSecond to 4 different MetricSets. The problem seems to stem from the fact that names are global in the cache rather than being unique per MetricSet, which means that MetricSets which use the same metric name will have a collision.

Adding a unique prefix per MetricSet to the key would get rid of the problem.

Integrations SDK version
  • 2.0
Golang version
  • All of them
OS
  • All of them
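The suggested fix can be sketched with a hypothetical key-building helper (names invented for illustration; this is not the SDK's actual cache code):

```go
package main

import "fmt"

// rateKey namespaces a metric name by its metric set, so two metric
// sets can both store "queue.jobsPerSecond" without colliding in a
// shared cache.
func rateKey(metricSetID, metricName string) string {
	return metricSetID + "::" + metricName
}

func main() {
	cache := map[string]float64{}
	cache[rateKey("set-1", "queue.jobsPerSecond")] = 10
	cache[rateKey("set-2", "queue.jobsPerSecond")] = 20
	fmt.Println(len(cache)) // two distinct entries: no collision
}
```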

JMX: handle commented lines

Description

The JMX server endpoint may return "Java comment" lines like # An error report file with more information is saved as:.

In this case JMX will return an error like error: invalid character '#' looking for beginning of value.

It'd be great to handle this case properly and log this as a warning.

Ideally an integration execution instance should be able to keep fetching the rest of the requested data/queries defined.

Nice to haves

  1. A JMX server error may report several lines. The current jmx package is limited to reading just one line.
    It'd be great to get the whole error message logged as a single entry.

  2. Circuit breaker. Whenever several queries fail in a row (here we are handling "Java comment" lines, but this could be extrapolated to other errors), the jmx package client (let's call it that, although the API is a set of awful global functions sharing global state) should prevent further queries from being submitted and log an error instead. This avoids worsening the JMX endpoint's situation when it has already returned N errors in a row.
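A minimal sketch of the requested handling (illustrative only; the real fix would live inside the jmx package's output parsing):

```go
package main

import (
	"fmt"
	"strings"
)

// filterComments drops "Java comment" lines (starting with '#') from
// raw nrjmx output before it reaches the JSON decoder, collecting them
// so they can be logged as warnings instead of failing the query.
func filterComments(raw string) (jsonLines, comments []string) {
	for _, line := range strings.Split(raw, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "#") {
			comments = append(comments, line)
			continue
		}
		if line != "" {
			jsonLines = append(jsonLines, line)
		}
	}
	return jsonLines, comments
}

func main() {
	out, warn := filterComments("# An error report file with more information is saved as:\n{\"ok\":true}")
	fmt.Println(out)  // JSON payload lines survive
	fmt.Println(warn) // comment lines available for a warning log
}
```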

Update metrics API endpoint for FedRAMP - Infra agent

For FedRAMP customers there is a special gov-infra-api.newrelic.com domain.

In order to move Infra to dimensional metrics in a FedRAMP approved way, we need to make sure the dimensional metrics capability is following the same approach: going straight to CHI and avoid CloudFlare and cells.

Testing internal notification feature

Description of the problem

This form is for integrations-sdk bug reports and feature requests only.
This is NOT a help site. Do not ask help questions here.
If you need help, please use New Relic support.

Describe the bug or feature request in detail.

Test workflow on v3 branch

Add a workflow that runs unit tests on v3 branch.

This should be valuable for reviewers of v3 fix branches.

include jmx fix on v3

Description

This bug was fixed for v4 but needs to be ported to v3 so that nri-kafka, nri-cassandra and nri-jmx can use it.

Expected Behavior

Keep a v3 branch that includes the hotfixes. At this moment the latest version of v3 is v3.6.5. I would expect a tag v3.6.6 with this fix.

Provide default path for nrjmx when running under windows

Is your feature request related to a problem? Please describe.

When running under Windows, the SDK does not provide a default path, making it harder to configure integrations that use JMX (Cassandra, Kafka, etc.). Kafka provides its own flag for it; Cassandra does not, so the path has to be provided through a "hidden" env var, which is not ideal.

Feature Description

Add a default value for the nrjmx path when running under Windows, similar to what is done for Linux.

Priority

Please help us better understand this feature request by choosing a priority from the following options:
[Really Want]

Clean up and publish SDK v4

Make sure that SDK v4 branch is ready and all the documentation is correct. Once happy then we can publish this and tag it as the new version of the SDK.

Feature Request: Provide Standardized JSON Encoder/Decoder

Dozens of OHIs have to manage JSON Decoding

Over the course of the last 3 weeks, there have been a couple high severity issues with both decoding JSON responses and Marshalling them. Specific commits/issues like the following demonstrate the need to have a more standardized method for managing JSON responses.

Examples

Details

The issues listed above have created critical situations where data was not being reported from the servers/nodes. The larger issue being, unless an actual person is monitoring the data streams, there is often no way to know that these issues are occurring.

Although some marshaling is managed by the SDK, a method allowing all OHIs to benefit from one standardized encoding, decoding, unmarshaling, and marshaling solution could reduce the bug-tail currently being seen in production sites. Some of this occurs because the monitored software's interfaces can unpredictably return values that are valid JSON but of types inappropriate to convert.

OS
  • All of them

Setters and document the "common" attribute

There is a common field in the payload that could be used for attributes shared by entities and metrics. This is not documented, and there are also no methods provided to add these values.

It would be good to document this attribute's behavior with some examples, and also to add setters for it.
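A hypothetical shape for such a setter (the type and method below are invented for illustration, not current SDK API):

```go
package main

import "fmt"

// Entity carries a map of dimensions shared by all of its metrics,
// mirroring the payload's "common" field.
type Entity struct {
	CommonDimensions map[string]string
}

// AddCommonDimension is the kind of setter the issue asks for: it
// lazily initializes the map and records one shared dimension.
func (e *Entity) AddCommonDimension(key, value string) {
	if e.CommonDimensions == nil {
		e.CommonDimensions = map[string]string{}
	}
	e.CommonDimensions[key] = value
}

func main() {
	var e Entity
	e.AddCommonDimension("targetName", "localhost:9122")
	fmt.Println(e.CommonDimensions)
}
```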

Tutorial specifies v3 however the sample code uses v4

https://github.com/newrelic/infra-integrations-sdk/blob/master/docs/tutorial.md#building-a-redis-integration-using-the-integration-golang-sdk-v30 specifies v3; however, https://github.com/newrelic/infra-integrations-sdk/blob/master/docs/tutorial-code/multiple-entities/redis-multi.go specifies v4.

Description

When trying to build a custom integration because Flex is not able to meet my requirements, I'm running into the following log message:

time=“2021-11-04T20:14:13Z” level=debug msg=“Missing event_type field for metric.” action=EmitDataSet component=PluginRunner integration= metric=“map[attributes:map[] label.env:production label.role:cache name:query.instantaneousOpsPerSecond timestamp:1.636056853e+09 type:gauge value:2112]”
$ ./myorg-redis-multi  --pretty --metrics
{
	"protocol_version": "4",
	"integration": {
		"name": "com.myorganization.redis-multi",
		"version": "0.1.0"
	},
	"data": [
		{
			"common": {},
			"entity": {
				"name": "instance-1",
				"displayName": "redis",
				"type": "instance-1",
				"metadata": {}
			},
			"metrics": [
				{
					"timestamp": 1636057006,
					"name": "query.instantaneousOpsPerSecond",
					"type": "gauge",
					"attributes": {},
					"value": 2112
				}
			],
			"inventory": {},
			"events": []
		},
		{
			"common": {},
			"entity": {
				"name": "instance-2",
				"displayName": "redis",
				"type": "my-instance",
				"metadata": {}
			},
			"metrics": [
				{
					"timestamp": 1636057006,
					"name": "query.instantaneousOpsPerSecond",
					"type": "gauge",
					"attributes": {},
					"value": 2112
				}
			],
			"inventory": {},
			"events": []
		}
	]
}

This output does not include event_type; however, the tutorial shows event_type.

Expected Behavior

The sample code is able to submit metrics without errors

Troubleshooting or NR Diag results

Steps to Reproduce

Install newrelic-infra 1.20.5

in /etc/newrelic-infra.yml

set

log_file: /tmp/newrelic.log
verbose: 1

grep event_type /tmp/newrelic.log

grab https://github.com/newrelic/infra-integrations-sdk/blob/master/docs/tutorial-code/multiple-entities/redis-multi.go

  • copy the yaml files to the correct place and run

Your Environment

$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"
$ dpkg --list |grep newrelic
ii  newrelic-infra                    1.20.5                              amd64        New Relic Infrastructure provides flexible, dynamic server monitoring. With real-time data collection and a UI that scales from a handful of hosts to thousands, Infrastructure is designed for modern Operations teams with fast-changing systems.

Update metrics API endpoint for FedRAMP - POMI

Currently dimensional metrics are going through CloudFlare and are routed to a Cell. Both of these mechanisms are not FedRAMP approved. For FedRAMP customers there is a special gov-infra-api.newrelic.com domain that doesn't use CloudFlare or cells.

In order to move Infra to dimensional metrics in a FedRAMP approved way, we need to make sure the dimensional metrics capability follows the same approach: going straight to CHI and avoiding CloudFlare and cells.

Instead of using the current domain: metric-api.newrelic.com, we will be using infra-api.newrelic.com. For POMI we need to update the default url.

invalid character '\\'' looking for beginning of object key string

Description

I am unable to decipher what string formatting is unacceptable

Steps to Reproduce

this is the output from my program, it is just mock up output for now:
{"integration_version":"0.1.0","protocol_version":"2","data":[{"metrics":[{"some-data":4000,"event_type":"CustomSample"}],"inventory":{"instance":{"version":"3.0.1"}},"events":[{"category":"status","summary":"restart"}]}],"name":"com.myorganization.svctest"}

Your Environment

Amazon Linux2

Additional context

May 31 15:20:17 ip-10-0-0-224.ec2.internal newrelic-infra-service[3961]: time="2023-05-31T15:20:17Z" level=warning msg="Cannot emit integration payload" component=integrations.runner.Runner error="invalid character '\'' looking for beginning of object key string" integration_name=svctest payload="{'integration_version': '0.1.0', 'protocol_version': '2', 'data': [{'metrics': [{'some-data': 4000, 'event_type': 'CustomSample'}], 'inventory': {'instance': {'version': '3.0.1'}}, 'events': [{'category': 'status', 'summary': 'restart'}]}], 'name': 'com.myorganization.svctest'}" runner_uid=89e32cfcbd
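The root cause here is that the payload uses single quotes, which is not valid JSON. Building the payload with encoding/json instead of formatting strings by hand guarantees double-quoted keys (a minimal sketch):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encodePayload marshals the payload with encoding/json, which always
// emits double-quoted keys and strings, i.e. JSON the agent can parse.
func encodePayload(p map[string]interface{}) ([]byte, error) {
	return json.Marshal(p)
}

func main() {
	payload := map[string]interface{}{
		"protocol_version":    "2",
		"integration_version": "0.1.0",
		"name":                "com.myorganization.svctest",
		"data":                []interface{}{},
	}
	out, err := encodePayload(payload)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```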

For Maintainers Only or Hero Triaging this bug

Suggested Priority (P1,P2,P3,P4,P5):
Suggested T-Shirt size (S, M, L, XL, Unknown):

Implement the go module

Create a Go module pipeline that generates code from the thrift file. We need a target that will generate the thrift code (preferably in Docker, so we can run it locally without installing thrift dependencies). In the pipeline, call this target and make sure there are no differences.

  • Create a make generate target that spawns the container with the repo mounted as a volume. When I run this target I will have the thrift code generated
  • Call the target and make sure there is no difference in the generated code

Make ttl of temporary stored metric value configurable

Is your feature request related to a problem? Please describe.

There may be cases where the rate calculation is not done because the time between arrivals of metric values is more than the default (and fixed) TTL of 60 seconds.

Feature Description

Make the TTL configurable, with the default set to the current 60-second value.

Priority

Please help us better understand this feature request by choosing a priority from the following options:
[Nice to Have]

Jmx closed connection [possible bug]

After a timeout of a single query it seems that all the connections are closed:

func TestJmxNoTimeoutQuery(t *testing.T) {

	defer Close()

	if err := openWait("", "", "", "", openAttempts); err != nil {
		t.Error(err)
	}

	if _, err := Query(cmdTimeout, 0); err != nil {
		t.Error(err)
	}
	if _, err := Query(cmdBigPayload, timeoutMillis); err != nil {
		t.Error(err)
	}
}

This produces:

    jmx_test.go:146: timeout waiting for query: timeout
    jmx_test.go:149: EOF

While the first failure is expected, the second one is not, since the timeout applies to the single query and there is no reason for the second one to fail. We have had users complaining about this behaviour.

However, this could be expected behaviour if we close the connection and prefer to fail because the library is not able to "trash" the result of the timed-out query, which is no longer interesting. Further investigation is needed.

HTTP client can't be configured to accept invalid hostnames

Description of the problem

The HTTP client currently does not have many configuration options as far as tolerance of invalid certificates. A few customers have requested the ability to accept certificates that don't match the hostname of the server they are connecting to (newrelic/nri-elasticsearch#45).

Integrations SDK version
  • dev
  • 1.0
  • ...
Golang version
  • All of them
  • 1.8
  • 1.7
  • ...
OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Build no longer works on Travis-ci

The Travis CI build is broken as we do not pin the version of Testify.

Description

The project needs to move to go mod so that we can pin the version of Testify. This also involves updating the build so that it runs on Github Actions.
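Once on Go modules, pinning Testify is one require directive in go.mod. A sketch (the version shown is illustrative; pin whatever the build expects):

```
module github.com/newrelic/infra-integrations-sdk

go 1.13

require github.com/stretchr/testify v1.4.0
```

With this in place, CI builds resolve the same Testify version every run instead of tracking its latest commit.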

Expected Behavior

The build to pass.

[v3] Metrics in the storer file are not garbage collected

Description

There is no mechanism that removes metrics (after a TTL has expired) from the storer file, which persists metrics to disk in order to calculate deltas on each execution.

On initialization, the integration checks the TTL of the entire file and removes it if it has expired. But there are cases where metric identifiers are based on ephemeral entities, so new entries are constantly added to the file. If the integration keeps executing normally, the current clean-up mechanism is never triggered, the file grows without control, and the integration loads it on each execution.

Expected Behavior

Metrics on the storer should be garbage collected if they are not being updated after a TTL.

Similar thing is done in the Kubernetes integration

Steps to Reproduce

Your Environment

One example happens in nri-varnish, where backend entities can be ephemeral in some environments.
More context in this issue.

Additional context

Deal with inf upper bound values

Currently, if the upper bound of a bucket is +Inf, the bucket is discarded. I believe we should handle such a range in a better way to avoid losing values. For example, with a metric like:

# HELP powerdns_recursor_response_time_seconds Histogram of PowerDNS recursor response times in seconds.
# TYPE powerdns_recursor_response_time_seconds histogram
powerdns_recursor_response_time_seconds_bucket{le="0.001"} 0
powerdns_recursor_response_time_seconds_bucket{le="0.01"} 0
powerdns_recursor_response_time_seconds_bucket{le="0.1"} 0
powerdns_recursor_response_time_seconds_bucket{le="1"} 0
powerdns_recursor_response_time_seconds_bucket{le="+Inf"} 0

we would lose every observation in the range 1 < x < +Inf, because the current code simply skips the bucket:

// ignore +Inf buckets

Use jmx.Query example bug

Description of the problem


The jmx.Query usage example does not compile:
error "./main.go:14:79: cannot use 5 * time.Second (type time.Duration) as type int in argument to jmx.Query"


Integrations SDK version
  • dev
  • 1.0
  • ...
Golang version
  • All of them
  • 1.8
  • 1.7
  • ...
OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs
