
nri-elasticsearch's Introduction


New Relic integration for Elasticsearch

The New Relic integration for Elasticsearch captures critical performance metrics and inventory reported by Elasticsearch clusters. Data on the cluster, nodes, shards, and indices is collected.

Inventory data is obtained from the elasticsearch.yml file, and metrics and additional inventory data are obtained from the REST API.

Installation and usage

For installation and usage instructions, see our documentation website.

Compatibility

For compatibility and requirements, see our documentation website.

Building

Go is required to build the integration. We recommend Go 1.11 or higher.

After cloning this repository, go to the directory of the Elasticsearch integration and build it:

$ make

The command above executes the tests for the Elasticsearch integration and builds an executable file called nri-elasticsearch under the bin directory.

To start the integration, run nri-elasticsearch:

$ ./bin/nri-elasticsearch

To learn more about the usage of ./bin/nri-elasticsearch, pass the -help parameter:

$ ./bin/nri-elasticsearch -help

Testing

To run the tests, execute:

$ make test

Spin up a local testing environment

Running a local testing environment is straightforward: you just need a running Kubernetes cluster (or access to a remote one). Then, run:

helm repo add elastic https://helm.elastic.co 
helm install elasticsearch elastic/elasticsearch --values ./values.yaml

For example, using a values.yaml such as:

replicas: 1
minimumMasterNodes: 1

secret:
  enabled: true
  password: "testPass" 

Then, port-forward the 9200 service port and run the integration against it:

kubectl port-forward service/elasticsearch-master 9200:9200
go run ./src/... -metrics=true -hostname=localhost -username=elastic -password='testPass' -use_ssl=true --tls_insecure_skip_verify=true

Support

Should you need assistance with New Relic products, you are in good hands with several support diagnostic tools and support channels.

New Relic offers NRDiag, a client-side diagnostic utility that automatically detects common problems with New Relic agents. If NRDiag detects a problem, it suggests troubleshooting steps. NRDiag can also automatically attach troubleshooting data to a New Relic Support ticket.

If the issue has been confirmed as a bug or is a feature request, please file a GitHub issue.

Support Channels

Privacy

At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.

We define “Personal Data” as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address, and email address.

For more information, review New Relic’s General Data Privacy Notice.

Contribute

We encourage your contributions to improve this project! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project.

If you have any questions, or to execute our corporate CLA (which is required if your contribution is on behalf of a company), drop us an email at [email protected].

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through HackerOne.

If you would like to contribute to this project, review these guidelines.

To all contributors, we thank you! Without your contribution, this project would not be what it is today.

License

nri-elasticsearch is licensed under the MIT License.


nri-elasticsearch's Issues

Skipping SSL validation is not as easy as it could be

Is your feature request related to a problem? Please describe.

Currently, in order to skip server certificate verification, we need to set up an ssl_alternative_hostname and a ca_bundle_dir (or file).

Feature Description

The ssl_alternative_hostname option might not work in all cases, so we should add a simple boolean flag (InsecureSkipVerify) to skip all server certificate validation (which is what most users want), like most other integrations have.

Priority

[Really Want]

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

| ❗ Error | ❌ Fail | ⚠️ Warn | ✅ Pass | Ignored | Total |
| ------- | ------ | ------- | ------ | ------- | ----- |
| 0       | 1      | 0       | 6      | 0       | 7     |

Fail

readme-starts-with-community-plus-header

The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.md: The first 5 lines do not contain the pattern(s): Open source Community Plus header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend the latest code snippet found at https://github.com/newrelic/opensource-website/wiki/Open-Source-Category-Snippets#code-snippet-2 to file

Passed


license-file-exists

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g., non-viral, non-copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-discuss-topic

Contains a link to the appropriate discuss.newrelic.com topic (README.md). New Relic recommends directly linking your appropriate discuss.newrelic.com topic in the README, allowing developers an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-should-not-exist-here

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. Note that you will need to adjust any links to the local CODE_OF_CONDUCT file in your documentation to point to the central file (README and CONTRIBUTING will probably have links that need updating). For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

Increase Index Limit to 500

Description of the problem

Increase the hardcoded index limit to 500.
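For illustration only, a hedged sketch of the kind of cap being discussed (the function and constant names are hypothetical, not the integration's actual code): collecting at most N indices and reporting when the list is truncated.

```go
package main

import "fmt"

const maxIndices = 500 // the limit this issue asks to raise to

// capIndices returns at most maxIndices entries and whether
// the input list was truncated.
func capIndices(indices []string) ([]string, bool) {
	if len(indices) <= maxIndices {
		return indices, false
	}
	return indices[:maxIndices], true
}

func main() {
	names := make([]string, 600)
	for i := range names {
		names[i] = fmt.Sprintf("index-%d", i)
	}
	kept, truncated := capIndices(names)
	fmt.Println(len(kept), truncated) // 500 true
}
```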

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Improve logging while running under the New Relic infra agent

Description

It is difficult to configure the Elasticsearch integration because, if you make changes, the only indication of whether something is working is checking whether data is making it into New Relic.

What I would like to see is logging output that would tell me whether the nri-elasticsearch plugin is actually pulling data and, if not, why not: a verbose option that reports whether it is getting no data, timing out, or hitting an SSL error, and, if it is getting data, optionally how much it is pulling (in bytes, number of characters, or similar).

I don't know if nri-elasticsearch is responsible for this while running under New Relic Infra, or if this is the correct place to request it. But I figure it is a good idea to put it out there, because it trips up a lot of people. While talking to other people in my org, they all described mysterious "no data" results that were confusing at different times.

Acceptance Criteria

Enable an optional verbose mode while running under New Relic Infra or, if such an option already exists, document how to use it.

Describe Alternatives

Executing nri-elasticsearch on the command line with the options you would put in /etc/newrelic-infra/integrations.d/elasticsearch-config.yml is useful for troubleshooting issues. But this is sub-optimal because, with modern cloud-based infrastructure, it is not always easy to get onto the host running the integration to test things manually.

Dependencies

Probably something to do with the New Relic infra agent itself. I don't know how logging works for plugins like this.

Additional context

Estimates

I am hoping that it is an "S": just adding an env option that can be used in elasticsearch-config.yml or in a container and that would give us better logging.

For Maintainers Only or Hero Triaging this bug

Suggested Priority (P1,P2,P3,P4,P5):
Suggested T-Shirt size (S, M, L, XL, Unknown):

The nri-elasticsearch fails to execute after Linux hardening

The NRI rpm places executables on a filesystem that is recommended to be mounted as noexec.

Description

The New Relic Elasticsearch integration was installed via rpm from the repository:

baseurl=https://download.newrelic.com/infrastructure_agent/linux/yum/el/7/$basearch

The rpm contains the following files:

# rpm -qlv nri-elasticsearch-4.3.3-1.x86_64
-rw-r--r--    1 root    root                     1757 Nov 27 02:00 /etc/newrelic-infra/integrations.d/elasticsearch-config.yml.sample
-rwxr-xr-x    1 root    root                  7009219 Nov 27 02:00 /var/db/newrelic-infra/newrelic-integrations/bin/nri-elasticsearch
-rw-r--r--    1 root    root                      370 Nov 27 02:00 /var/db/newrelic-infra/newrelic-integrations/elasticsearch-definition.yml
-rw-r--r--    1 root    root                      380 Nov 27 02:00 /var/db/newrelic-infra/newrelic-integrations/elasticsearch-win-definition.yml 

A system without the noexec option on /var is identified as vulnerable with: unix-partition-mounting-weakness

On a system with the noexec option on /var, NRI Elasticsearch fails with:

fork/exec /var/db/newrelic-infra/newrelic-integrations/bin/nri-elasticsearch: permission denied

Expected Behavior

Executables are placed in /usr/bin or another location where execution is allowed.
The NRI starts and sends data on a system where /var is mounted with the noexec option.

Steps to Reproduce

  1. Set the noexec option for /var
# grep '/var' /etc/fstab
UUID="<uuid>"     /var    ext4    defaults,nodev,noexec,nosuid,nofail     0 0
  2. Remount /var
# mount | grep '/var'
/dev/mapper/rootvg-varlv on /var type ext4 (rw,nosuid,nodev,noexec,relatime,seclabel,data=ordered)
  3. Install New Relic Infra and NRI Elasticsearch from download.newrelic.com
# rpm -qa | grep relic
newrelic-infra-1.14.2-1.el7.x86_64
# rpm -qa | grep nri
nri-elasticsearch-4.3.3-1.x86_64
  4. Start New Relic Infra
# systemctl restart newrelic-infra
  5. Look for the error in journalctl
# journalctl -u newrelic-infra | grep denied

Your Environment

newrelic-infra-1.14.2-1.el7.x86_64
nri-elasticsearch-4.3.3-1.x86_64
System has the noexec option set for /var, /tmp, and /var/tmp (as listed in /etc/fstab).
RHEL 7.9
System is required to be CIS L2 compliant (https://www.cisecurity.org/cis-benchmarks/)
System has FIPS enabled (https://www.nist.gov/publications/security-requirements-cryptographic-modules-includes-change-notices-1232002)
System needs to pass vulnerability scans.

[Repolinter] Open Source Policy Issues

Repolinter Report


This Repolinter run generated the following results:

| ❗ Error | ❌ Fail | ⚠️ Warn | ✅ Pass | Ignored | Total |
| ------- | ------ | ------- | ------ | ------- | ----- |
| 0       | 0      | 0       | 7      | 0       | 7     |

Passed

readme-starts-with-community-plus-header

The first 5 lines contain all of the requested patterns. (README.md).

The remaining six rules (license-file-exists, readme-file-exists, readme-contains-link-to-security-policy, readme-contains-discuss-topic, code-of-conduct-should-not-exist-here, and third-party-notices-file-exists) also passed; their descriptions are identical to those in the first Repolinter report above.

Missing metrics on recent elasticsearch versions


Description

The following metrics are not reported by the integration with Elasticsearch 7.17.1 or Elasticsearch 8.1.0:

  • fs.bytesReadsInBytes
  • fs.iOOperations
  • fs.reads
  • fs.writesInBytes
  • fs.writeOperations
  • get.requestsDcoumentExists
  • get.requestsDcoumentMissing
  • jvm.gc.majorCollectionsYoungGenerationObjects
  • jvm.gc.majorCollectionsYoungGenerationObjectsInMilliseconds
  • threadpool.activefetchShardStarted
  • threadpool.bulkActive
  • threadpool.bulkQueue
  • threadpool.bulkRejected
  • threadpool.bulkThreads
  • threadpool.indexActive
  • threadpool.indexQueue
  • threadpool.indexRejected

Expected Behavior

Those metrics should be reported in the same way they are reported for previous Elasticsearch versions.

NR Diag results

Steps to Reproduce

  • Run an Elasticsearch cluster (7.17.1+) and install the nri-elasticsearch integration.
  • This can easily be done using coreint-canaries.

Your Environment

  • minikube kubernetes cluster for testing purposes.

Additional context

Some Elasticsearch stats have changed in recent versions. For example:

Fragment from a _nodes/stats request in Elasticsearch 6.0.0:

{
// ...
            "thread_pool": {
                "bulk": {
                    "active": 0,
                    "completed": 20,
                    "largest": 6,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 6
                },
                "fetch_shard_started": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "fetch_shard_store": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "flush": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "force_merge": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "generic": {
                    "active": 0,
                    "completed": 157,
                    "largest": 8,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 8
                },
                "get": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "index": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "listener": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "management": {
                    "active": 1,
                    "completed": 73,
                    "largest": 3,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 3
                },
                "ml_autodetect": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "ml_datafeed": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "ml_utility": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "refresh": {
                    "active": 0,
                    "completed": 92,
                    "largest": 1,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 1
                },
                "search": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "security-token-key": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "snapshot": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "warmer": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                },
                "watcher": {
                    "active": 0,
                    "completed": 0,
                    "largest": 0,
                    "queue": 0,
                    "rejected": 0,
                    "threads": 0
                }
            },
            "fs": {
                "data": [
                    {
                        "available_in_bytes": 48402522112,
                        "free_in_bytes": 51619237888,
                        "mount": "/ (overlay)",
                        "path": "/usr/share/elasticsearch/data/nodes/0",
                        "total_in_bytes": 62725623808,
                        "type": "overlay"
                    }
                ],
                "io_stats": {},
                "least_usage_estimate": {
                    "available_in_bytes": 48402067456,
                    "path": "/usr/share/elasticsearch/data/nodes/0",
                    "total_in_bytes": 62725623808,
                    "used_disk_percent": 22.835255326983585
                },
                "most_usage_estimate": {
                    "available_in_bytes": 48402067456,
                    "path": "/usr/share/elasticsearch/data/nodes/0",
                    "total_in_bytes": 62725623808,
                    "used_disk_percent": 22.835255326983585
                },
                "timestamp": 1647511922101,
                "total": {
                    "available_in_bytes": 48402522112,
                    "free_in_bytes": 51619237888,
                    "total_in_bytes": 62725623808
                }
            },
//...
}
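Comparing the top-level keys of thread_pool between the 6.0.0 fragment above and the 8.1.0 fragment below makes the gap visible: pools such as bulk and index no longer exist, which is consistent with the missing threadpool.bulk* and threadpool.index* metrics. A minimal Go sketch of that comparison, using abbreviated stand-ins (pool names only) for the real responses:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// poolNames extracts the top-level keys of "thread_pool" from a
// _nodes/stats response fragment.
func poolNames(raw string) map[string]bool {
	var stats struct {
		ThreadPool map[string]json.RawMessage `json:"thread_pool"`
	}
	if err := json.Unmarshal([]byte(raw), &stats); err != nil {
		panic(err)
	}
	names := make(map[string]bool, len(stats.ThreadPool))
	for k := range stats.ThreadPool {
		names[k] = true
	}
	return names
}

func main() {
	// Abbreviated fragments: only the pool names matter here.
	v6 := `{"thread_pool":{"bulk":{},"index":{},"search":{},"generic":{}}}`
	v8 := `{"thread_pool":{"search":{},"generic":{},"flush":{},"get":{}}}`
	newer := poolNames(v8)
	var removed []string
	for name := range poolNames(v6) {
		if !newer[name] {
			removed = append(removed, name)
		}
	}
	sort.Strings(removed)
	fmt.Println(removed) // [bulk index]
}
```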

Fragment from the same request in Elasticsearch 8.1.0:

{
// ...
            "thread_pool": {
                "analyze": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "auto_complete": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "azure_event_loop": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "ccr": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "fetch_shard_started": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "fetch_shard_store": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "flush": {
                    "threads": 1,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 1,
                    "completed": 3
                },
                "force_merge": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "generic": {
                    "threads": 11,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 11,
                    "completed": 577
                },
                "get": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "management": {
                    "threads": 2,
                    "queue": 0,
                    "active": 1,
                    "rejected": 0,
                    "largest": 2,
                    "completed": 85
                },
                "ml_datafeed": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "ml_job_comms": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "ml_utility": {
                    "threads": 2,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 2,
                    "completed": 376
                },
                "refresh": {
                    "threads": 1,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 1,
                    "completed": 307
                },
                "repository_azure": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "rollup_indexing": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "search": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "search_coordination": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "search_throttled": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "searchable_snapshots_cache_fetch_async": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "searchable_snapshots_cache_prewarming": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "security-crypto": {
                    "threads": 2,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 2,
                    "completed": 2
                },
                "security-token-key": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "snapshot": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "snapshot_meta": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "system_critical_read": {
                    "threads": 1,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 1,
                    "completed": 1
                },
                "system_critical_write": {
                    "threads": 3,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 3,
                    "completed": 3
                },
                "system_read": {
                    "threads": 3,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 3,
                    "completed": 94
                },
                "system_write": {
                    "threads": 3,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 3,
                    "completed": 45
                },
                "vector_tile_generation": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "warmer": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "watcher": {
                    "threads": 0,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 0,
                    "completed": 0
                },
                "write": {
                    "threads": 6,
                    "queue": 0,
                    "active": 0,
                    "rejected": 0,
                    "largest": 6,
                    "completed": 47
                }
            },
            "fs": {
                "timestamp": 1647506922692,
                "total": {
                    "total_in_bytes": 62725623808,
                    "free_in_bytes": 53532381184,
                    "available_in_bytes": 50315665408
                },
                "data": [
                    {
                        "path": "/usr/share/elasticsearch/data",
                        "mount": "/ (overlay)",
                        "type": "overlay",
                        "total_in_bytes": 62725623808,
                        "free_in_bytes": 53532381184,
                        "available_in_bytes": 50315665408
                    }
                ],
                "io_stats": {}
            }
// ...
}

Unsupported data type '[]interface {}' for yaml key

While running the nri-elasticsearch integration I'm getting these errors:
[DEBUG] Unsupported data type '[]interface {}' for yaml key discovery.seed_hosts
[DEBUG] Unsupported data type '[]interface {}' for yaml key cluster.initial_master_nodes

Description

My Elasticsearch config file has these lines:

discovery.seed_hosts:
  - ip-address1
  - ip-address2

cluster.initial_master_nodes:
  - node-name1
  - node-name2

Your Environment

Simple 2-node cluster

I've checked the Elastic docs for typos, etc., and the config appears to be fine.
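The debug messages suggest the integration's inventory parser only handles scalar YAML values and skips lists. Below is a stand-alone sketch of how list values could be flattened instead of skipped; `flattenValue` is a hypothetical helper, not the integration's actual code, and the map simulates what a YAML parser would return for the config keys above:

```go
package main

import (
	"fmt"
	"strings"
)

// flattenValue is a hypothetical helper: instead of skipping
// []interface{} values (triggering the DEBUG message above), it joins
// list items into a single comma-separated string for inventory.
func flattenValue(v interface{}) (string, bool) {
	switch val := v.(type) {
	case string:
		return val, true
	case []interface{}:
		items := make([]string, 0, len(val))
		for _, item := range val {
			items = append(items, fmt.Sprintf("%v", item))
		}
		return strings.Join(items, ","), true
	default:
		return "", false
	}
}

func main() {
	// Simulates what a YAML parser produces for the keys above.
	parsed := map[string]interface{}{
		"discovery.seed_hosts":         []interface{}{"ip-address1", "ip-address2"},
		"cluster.initial_master_nodes": []interface{}{"node-name1", "node-name2"},
	}
	for key, raw := range parsed {
		if s, ok := flattenValue(raw); ok {
			fmt.Printf("%s=%s\n", key, s)
		}
	}
}
```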

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

| ❗ Error | ❌ Fail | ⚠️ Warn | ✅ Pass | Ignored | Total |
| --- | --- | --- | --- | --- | --- |
| 0 | 0 | 0 | 7 | 0 | 7 |

Passed #

Click to see rules

license-file-exists #

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists #

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-starts-with-community-plus-header #

The first 5 lines contain all of the requested patterns. (README.md). The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/.

readme-contains-link-to-security-policy #

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-forum-topic #

Contains a link to the appropriate forum.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate forum.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

code-of-conduct-should-not-exist-here #

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. Note that you will need to adjust any links to the local CODE_OF_CONDUCT file in your documentation to point to the central file (README and CONTRIBUTING will probably have links that need updating). For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Did not find a file matching the specified patterns. All files passed this test.

third-party-notices-file-exists #

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

Metrics are all exposed as gauges so monitoring is extremely difficult, such as indices.indexingOperationsFailed

Description

Some metrics, such as indices.indexingOperationsFailed, are useful for monitoring but are treated as gauges, so their values increase forever. For example, after an incident the metric may stand at 12000 and stay at that value for a long time with everything working fine. Because the raw value of 12000 is what gets stored in New Relic One, it's not something you can alert on: you cannot say, "Alert if the value increases again".

Expected Behavior

Usually these metrics would be reported as deltas, so you can alert on a delta value, graph the failures over time, and see 0s where everything is working fine.

NR Diag results

N/A

Steps to Reproduce

Install and look at the failed indexing operations graph. Trigger some manual indexing that fails and observe that the graph jumps to a large value, flatlines there, and never returns to 0.

Your Environment

Elasticsearch 7.x

Additional context

N/A

Integration dashboard charts missing temporal units in axis labels

Description

In the New Relic integration dashboard created for this integration, the charts for "Time Spent Indexing Documents by Node" and "Time Spent Deleting Documents by Node" are missing units in the Y-axis labels.

Expected Behavior

As a user I expect to see units in the chart labels so that I can understand what the charts are telling me.

Steps to Reproduce

  1. Install nri-elasticsearch integration.
  2. Visit the dashboard in New Relic > Third Party Services > Elasticsearch.
  3. Note that the two time-based charts do not show units (the unit seems to be milliseconds in both cases, as seen in https://github.com/newrelic/nri-elasticsearch/blob/master/spec.csv#L49).

Your Environment

  • nri-elasticsearch 4.5.1
  • newrelic-infra 1.20.7

Additional context

SC 23209

Number of index metrics

Description

The metric indices.numberIndices is misleading, since according to the API docs it currently sends:

(integer) The number of documents as reported by Lucene. This excludes deleted documents and counts any nested documents separately from their parents. It also excludes documents which were indexed recently and do not yet belong to a segment.

and not the number of indices, as documented.

Expected Behavior

TBD

NR Diag results

Steps to Reproduce

Your Environment

Additional context

Node Naming

Description of the problem

Investigate using the name field rather than the hostname field for naming nodes.

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Metric naming case inconsistency - The fs.unallocatedBytesInBYtes metric has a capital Y in Bytes while the other related metrics do not

Description

The metric fs.unallocatedBytesInBYtes has a capital Y in Bytes, while the other related metrics like fs.totalSizeInBytes do not.

Our documentation currently shows it with a lowercase Y, causing additional confusion:

https://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/elasticsearch-monitoring-integration/#elasticsearch-node-metrics

Expected Behavior

Metric naming/case should be consistent

NR Diag results

N/A

Steps to Reproduce

You can see it in the code:

https://github.com/newrelic/nri-elasticsearch/search?q=unallocatedBytesInBYtes

Your Environment

N/A

Additional context

N/A

used_disk_percent metric

Description of the problem

Missing used_disk_percent metric

The used_disk_percent metric is missing from nri-elasticsearch. This is found in the fs parameter, an example from nodeStatsMetricsResult.json:

	"fs": {
		"timestamp": 1533662802396,
		"total": {
			"total_in_bytes": 164705353728,
			"free_in_bytes": 162707742720,
			"available_in_bytes": 162707742720,
			"operations": 10,
			"read_kilobytes": 39485,
			"read_operations": 39486,
			"write_kilobytes": 298341,
			"write_operations": 2384089
		},
		"least_usage_estimate": {
			"path": "/var/lib/elasticsearch/nodes/0",
			"total_in_bytes": 164705353728,
			"available_in_bytes": 162707742720,
			"used_disk_percent": 1.2128391474747815
		},
		"most_usage_estimate": {
			"path": "/var/lib/elasticsearch/nodes/0",
			"total_in_bytes": 164705353728,
			"available_in_bytes": 162707742720,
			"used_disk_percent": 1.2128391474747815
		}
	}

Is it possible this could be added?
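The value can also be derived from fields the integration already collects (fs.total.total_in_bytes and available_in_bytes). A minimal sketch, where `usedDiskPercent` is a hypothetical helper, not the integration's code:

```go
package main

import "fmt"

// usedDiskPercent derives the missing metric from totals the node stats
// endpoint already reports: used = total - available, as a percentage.
func usedDiskPercent(totalBytes, availableBytes int64) float64 {
	if totalBytes == 0 {
		return 0
	}
	return 100 * float64(totalBytes-availableBytes) / float64(totalBytes)
}

func main() {
	// Values taken from the nodeStatsMetricsResult.json excerpt above;
	// the result matches its used_disk_percent (~1.2128).
	fmt.Printf("%.4f\n", usedDiskPercent(164705353728, 162707742720))
}
```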

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Broken entity explorer charts

Issue

Even if the integration correctly reports data, the dashboards reachable through the entity explorer are not populated correctly.

However, those under the third-party dashboards and the data explorer work.

Description

We have found an issue with the integration. The entities are shown in the NR1 Explorer, but although they are the same as the ones in the 3rd Party dashboard, they are being indexed with different IDs, and that's why the NR1 Explorer data is not showing up.

It is still possible to see all the data in NR1 through the 3rd Party dashboard or the Data Explorer, just not directly through the Entity Explorer.

Expected Behavior

Charts should work in both UIs.

How to reproduce it

https://newrelic.atlassian.net/browse/GTSE-7910

lack of index _stats

Would it be possible to add index/_stats metrics in order to have deeper stats for each index?

thanks

Metrics and Inventory being sent in "different" entities

Description

Metrics and inventory data are being sent as two "entities" in the same payload. Because they have different attributes, the backend generates two identifiers.
The entity that carries the inventory gets tagged with the correct nr.entityType, so it gets indexed and is visible in NR1.
The entity that carries the metrics does not get tagged with an nr.entityType attribute, so it does NOT get indexed and is not visible in NR1.

This leads to the entities shown in the Explorer having no metrics associated with them, while the metrics shown in the 3rd Party dashboard in the Infrastructure view correspond to the entities that are not shown in the Explorer.

Expected Behavior

I would expect the entities shown in the Explorer to have the same metrics as shown in the 3rd Party dashboard in Infrastructure.

Additional context

This seems to be caused by the fact that metrics and inventory are being set in different entities created in different ways:

Metrics:

entity, err := i.EntityReportedVia(fmt.Sprintf("%s:%d", args.Hostname, args.Port), name, namespace, entityIDAttrs...)

Inventory:

localNodeEntity, err := i.Entity(localNodeName, "node")

Crashes on "closed" indices

If a monitored Elasticsearch cluster has a "closed" index, then the integration command fails.

Description

$ /var/db/newrelic-infra/custom-integrations/bin/nri-elasticsearch [...]
[INFO] Collecting cluster metrics.
[INFO] Collecting node metrics
[INFO] Collecting common metrics.
[ERR] There was an error populating metrics for common metrics: status code 400 - received error of type 'index_closed_exception' from Elasticsearch: closed
[INFO] Collecting indices metrics
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x6b9c4f]

goroutine 1 [running]:
main.setIndicesStatsMetricsResponse(0xc0001282c0, 0xc000534000, 0x31, 0x3f, 0x0, 0xc0003fb0c0, 0x18, 0x0)
	/go/src/github.com/newrelic/nri-elasticsearch/metrics.go:164 +0x18f
main.populateIndicesMetrics(0xc0001282c0, 0x7ad960, 0xc000116bc0, 0x0, 0xc0003fb0c0, 0x18, 0x1, 0x1)
	/go/src/github.com/newrelic/nri-elasticsearch/metrics.go:131 +0x192
main.populateMetrics(0xc0001282c0, 0x7ad960, 0xc000116bc0, 0x704909, 0x0)
	/go/src/github.com/newrelic/nri-elasticsearch/metrics.go:33 +0x311
main.main()
	/go/src/github.com/newrelic/nri-elasticsearch/elasticsearch.go:50 +0x14c
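The panic trace above points at a nil pointer dereference while populating index stats; for a closed index, Elasticsearch omits most stats fields. A defensive stand-alone sketch (the struct and field names are illustrative, not the integration's actual types):

```go
package main

import "fmt"

// indexStats loosely mirrors an entry from the indices stats response;
// for a closed index most fields are absent, so the pointers stay nil.
type indexStats struct {
	Name      *string
	DocsCount *int
}

// collectIndex skips entries with missing stats instead of
// dereferencing a nil pointer (the panic at metrics.go:164).
func collectIndex(s *indexStats) (string, bool) {
	if s == nil || s.Name == nil || s.DocsCount == nil {
		return "", false
	}
	return fmt.Sprintf("%s docs=%d", *s.Name, *s.DocsCount), true
}

func main() {
	name, docs := "my-index", 42
	open := &indexStats{Name: &name, DocsCount: &docs}
	closed := &indexStats{Name: &name} // closed index: stats missing
	for _, s := range []*indexStats{open, closed} {
		if line, ok := collectIndex(s); ok {
			fmt.Println(line) // only the open index is reported
		}
	}
}
```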

Expected Behavior

Closed indices don't prevent metrics collection.

Steps to Reproduce

Run nri-elasticsearch with an Elasticsearch cluster having a closed index.

Your Environment

nri-elasticsearch v4.3.3 with Elasticsearch v7.9.0

SearchGuard compatibility

Description of the problem

When using SearchGuard, all of the cluster listeners use HTTPS. This works when the hostname matches the certificate, but in HA/cloud environments there is often a load balancer (or DNS alias) in front of a pool of nodes instead. This means that the request goes to localhost (or even to the internal name) but the certificate is for foo.bar.baz.

Without a flag to "accept any certificate", the integration cannot connect to the node.

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Elasticsearch DB infra metrics are not getting captured in Kubernetes environment

The Elasticsearch agent is not able to capture cluster- and node-level metrics in a k8s environment, but in a local system or Docker environment it is able to capture all the metrics.

Env - Robin kubernetes environment.

APM Agent Language - nri-elasticsearch, nri-couchbase - Elasticsearch and Couchbase Infra Agent
APM Agent Version - v4.3.4
Operating System - kubernetes
Operating System Version
Frameworks your app is using - Collector is in Java and database storage is elasticsearch.

indices.numberIndices metric has a misleading name

Description

The NR docs say that indices.numberIndices is "The number of documents across all primary shards assigned to the node".
The metric name is wrong and misleading.

Expected Behavior

Rename indices.numberIndices to indices.numberDocs or something similar to reflect the proper name.

[Repolinter] Open Source Policy Issues

Repolinter Report

🤖This issue was automatically generated by repolinter-action, developed by the Open Source and Developer Advocacy team at New Relic. This issue will be automatically updated or closed when changes are pushed. If you have any problems with this tool, please feel free to open a GitHub issue or give us a ping in #help-opensource.

This Repolinter run generated the following results:

| ❗ Error | ❌ Fail | ⚠️ Warn | ✅ Pass | Ignored | Total |
| --- | --- | --- | --- | --- | --- |
| 0 | 2 | 0 | 5 | 0 | 7 |

Fail #

readme-starts-with-community-plus-header #

The README of a community plus project should have a community plus header at the start of the README. If you already have a community plus header and this rule is failing, your header may be out of date, and you should update your header with the suggested one below. For more information please visit https://opensource.newrelic.com/oss-category/. Below is a list of files or patterns that failed:

  • README.md: The first 5 lines do not contain the pattern(s): Open source Community Plus header (see https://opensource.newrelic.com/oss-category).
    • 🔨 Suggested Fix: prepend [![Community Plus header](https://github.com/newrelic/opensource-website/raw/master/src/images/categories/Community_Plus.png)](https://opensource.newrelic.com/oss-category/#community-plus) to file

code-of-conduct-file-does-not-exist #

New Relic has moved the CODE_OF_CONDUCT file to a centralized location where it is referenced automatically by every repository in the New Relic organization. Because of this change, any other CODE_OF_CONDUCT file in a repository is now redundant and should be removed. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view. Found files. Below is a list of files or patterns that failed:

  • CODE_OF_CONDUCT.md
    • 🔨 Suggested Fix: Remove file

Passed #

Click to see rules

license-file-exists #

Found file (LICENSE). New Relic requires that all open source projects have an associated license contained within the project. This license must be permissive (e.g. non-viral or copyleft), and we recommend Apache 2.0 for most use cases. For more information please visit https://docs.google.com/document/d/1vML4aY_czsY0URu2yiP3xLAKYufNrKsc7o4kjuegpDw/edit.

readme-file-exists #

Found file (README.md). New Relic requires a README file in all projects. This README should give a general overview of the project, and should point to additional resources (security, contributing, etc.) where developers and users can learn further. For more information please visit https://github.com/newrelic/open-by-default.

readme-contains-link-to-security-policy #

Contains a link to the security policy for this repository (README.md). New Relic recommends putting a link to the open source security policy for your project (https://github.com/newrelic/<repo-name>/security/policy or ../../security/policy) in the README. For an example of this, please see the "a note about vulnerabilities" section of the Open By Default repository. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

readme-contains-discuss-topic #

Contains a link to the appropriate discuss.newrelic.com topic (README.md). New Relic recommends directly linking the your appropriate discuss.newrelic.com topic in the README, allowing developer an alternate method of getting support. For more information please visit https://nerdlife.datanerd.us/new-relic/security-guidelines-for-publishing-source-code.

third-party-notices-file-exists #

Found file (THIRD_PARTY_NOTICES.md). A THIRD_PARTY_NOTICES.md file can be present in your repository to grant attribution to all dependencies being used by this project. This document is necessary if you are using third-party source code in your project, with the exception of code referenced outside the project's compiled/bundled binary (ex. some Java projects require modules to be pre-installed in the classpath, outside the project binary and therefore outside the scope of the THIRD_PARTY_NOTICES). Please review your project's dependencies and create a THIRD_PARTY_NOTICES.md file if necessary. For JavaScript projects, you can generate this file using the oss-cli. For more information please visit https://docs.google.com/document/d/1y644Pwi82kasNP5VPVjDV8rsmkBKclQVHFkz8pwRUtE/view.

Elasticsearch Integration failed with this error

Elasticsearch Integration failed with this error:

install error encountered: encountered an error while validating receipt of data for elasticsearch-open-source-integration: timed out waiting for validation to succeed

Description

I tried to install the New Relic agent and the Elasticsearch integration on an Ubuntu machine. It installed the 'Infrastructure Agent' and 'Logs Integration' but failed to install the 'Elasticsearch Integration'.

Expected Behavior

It should install the 'Elasticsearch Integration' agent on the machine

Troubleshooting or NR Diag results

Failure - Infra/Config/IntegrationsMatch
Found matching integration files with some errors:
Integration Configuration File '/etc/newrelic-infra/integrations.d/elasticsearch-config.yml' is missing key 'integration_name'
See https://docs.newrelic.com/docs/integrations/integrations-sdk/getting-started/integration-file-structure-activation for more information.

Steps to Reproduce

Run this command:
curl -Ls https://download.newrelic.com/install/newrelic-cli/scripts/install.sh | bash && sudo NEW_RELIC_API_KEY=<KEY> NEW_RELIC_ACCOUNT_ID=<ACCOUNT_ID> /usr/local/bin/newrelic install

Your Environment

20.04.2-Ubuntu

Increase Index Limit

Description of the problem

Increase index limit to 1000 or 1500.

We've reached a limit in the number of indices we can send to NR through the ES integration. We have compiled a version with the limit increased to 1000 indices and it seems to be working fine.
I'm not sure if this limit exists because of message-size limitations in the call to NR, but would it be possible to get it increased? I can open a PR if needed.

OS
  • All of them
  • Amazon Linux, all versions
  • CentOS, version 6 or higher
  • Debian, version 7 ("Wheezy") or higher
  • Red Hat Enterprise Linux (RHEL), version 6 or higher
  • Ubuntu, versions 12.04, 14.04, and 16.04 (LTS versions)
  • Windows Server, 2008 and 2012 and their service packs

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Pending Branch Automerge

These updates await pending status checks before automerging. Click on a checkbox to abort the branch automerge, and create a PR instead.

  • fix(deps): update module github.com/stretchr/testify to v1.9.0

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

docker-compose
tests/integration/docker-compose.yml
  • elasticsearch 8.12.2
  • elasticsearch 8.12.2
dockerfile
build/Dockerfile
  • golang 1.21.3-bookworm
tests/integration/Dockerfile
  • golang 1.21.3-bookworm
github-actions
.github/workflows/on_prerelease.yaml
  • newrelic/coreint-automation v2
.github/workflows/on_push_pr.yaml
  • newrelic/coreint-automation v2
.github/workflows/on_release.yaml
  • newrelic/coreint-automation v2
.github/workflows/repolinter.yml
  • newrelic/coreint-automation v2
.github/workflows/security.yaml
  • newrelic/coreint-automation v2
.github/workflows/trigger_prerelease.yaml
  • newrelic/coreint-automation v2
gomod
go.mod
  • go 1.21
  • github.com/newrelic/infra-integrations-sdk v3.8.2+incompatible
  • github.com/stretchr/testify v1.8.4
  • github.com/xeipuuv/gojsonschema v1.2.0
  • gopkg.in/yaml.v2 v2.4.0

  • Check this box to trigger a request for Renovate to run again on this repository
