
gnmic's Introduction



gnmic (pronounced: gee·en·em·eye·see) is a gNMI CLI client that provides full support for the Capabilities, Get, Set and Subscribe RPCs, along with collector capabilities.

Documentation available at https://gnmic.openconfig.net

Features

  • Full support for gNMI RPCs
    Every gNMI RPC has a corresponding command, with all of the RPC options configurable via local and global flags.
  • Flexible collector deployment
    gnmic can be deployed as a gNMI collector that supports multiple output types (NATS, Kafka, Prometheus, InfluxDB, ...).
    The collector can run as a single instance, as part of a cluster, or be chained to form data pipelines (see the example configuration after this list).
  • Support for gRPC tunnel-based dial-out telemetry
    gnmic can be deployed as a gNMI collector with an embedded tunnel server.
  • gNMI data manipulation
    The gnmic collector has data transformation capabilities that can be used to adapt the collected data to your specific use case.
  • Dynamic targets loading
    gnmic supports loading targets at runtime based on input from external systems.
  • YANG-based path suggestions
    Your CLI magically becomes a YANG browser when gnmic is executed in prompt mode. In this mode, flags that take XPath values get auto-suggestions based on the provided YANG modules. In other words - voodoo magic 🤯
  • Multi-target operations
    Commands can operate on multiple gNMI targets for bulk configuration/retrieval/subscription.
  • Multiple configuration sources
    gnmic supports flags and environment variables, as well as file-based configuration.
  • Inspect raw gNMI messages
    With the prototext output format you can see the actual gNMI messages being sent/received. It's like having a gNMI looking glass!
  • (In)secure gRPC connection
    The gNMI client supports both TLS and non-TLS transports, so you can start using it in a lab environment without having to care about PKI.
  • Dial-out telemetry
    The dial-out telemetry server is provided for Nokia SR OS.
  • Pre-built multi-platform binaries
    Statically linked binaries built in our release pipeline are available for the major operating systems and architectures, making installation a breeze!
  • Extensive and friendly documentation
    You won't need to dive into the source code to understand how gnmic works; our documentation site has you covered.
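
As a small illustration of the collector deployment mentioned above, here is a minimal configuration sketch. The target address, credentials, subscription path and output are placeholders; the layout follows the configuration snippets shown further down this page.

targets:
  10.1.0.11:57400:
    username: admin
    password: admin
    insecure: true
    subscriptions:
      - port-stats
    outputs:
      - stdout

subscriptions:
  port-stats:
    paths:
      - "/state/port[port-id=1/1/c1/1]/statistics/in-packets"
    stream-mode: sample
    sample-interval: 10s

outputs:
  stdout:
    type: file
    file-type: stdout
    format: event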

Quick start guide

Installation

bash -c "$(curl -sL https://get-gnmic.openconfig.net)"

Capabilities request

gnmic -a 10.1.0.11:57400 -u admin -p admin --insecure capabilities

Get request

gnmic -a 10.1.0.11:57400 -u admin -p admin --insecure \
      get --path /state/system/platform

Set request

gnmic -a 10.1.0.11:57400 -u admin -p admin --insecure \
      set --update-path /configure/system/name \
          --update-value gnmic_demo

Subscribe request

gnmic -a 10.1.0.11:57400 -u admin -p admin --insecure \
      sub --path "/state/port[port-id=1/1/c1/1]/statistics/in-packets"

Prompt mode

Prompt mode is an interactive mode of the gnmic CLI client, provided for user convenience.

# clone repository with YANG models (OpenConfig example)
git clone https://github.com/openconfig/public
cd public

# Start gnmic in prompt mode and read in all the modules:

gnmic --file release/models \
      --dir third_party \
      --exclude ietf-interfaces \
      prompt


gnmic's Issues

Listen mode / inconsistent values

Hi,

I am using gnmic in listen mode with Kafka outputs. I have been able to get quite a lot of data from Nokia SR OS routers, and it works great. However, there is one type of data whose behavior I cannot understand.

If I configure the following XPath to be pushed to gnmic, it works great:
path "/state/service/ies[service-name=*]/interface[interface-name=*]/sap[sap-id=*]/ingress/qos/sap-ingress/queue[queue-id=*]/statistics/unicast-priority/in-profile-forwarded-octets" { }

Examples of the values:
23613948018273006
23613982742093735
23614007805662369
23614023078204906
23614056403821302
23614080486235529
23614094566975779
23614129159760329
23614153893683102

But if I add the following second XPath, some of the values become incorrect, and it looks like the incorrect values are double a correct one:
path "/state/service/ies[service-name=*]/interface[interface-name=*]/sap[sap-id=*]/ingress/qos/sap-ingress/queue[queue-id=*]/statistics/unicast-priority/out-profile-forwarded-octets" { }

Examples of the values:
23613732301932382
47227511020932088
47227539973052120
23613802485665836
23613826566537386

The XPath I have configured here has the longest values of all the ones I have configured; I was wondering if that could be related.
I have spent quite some time trying to find the reason for these inconsistent values, so let me know if you have any ideas.

Feature Request: using "/" in the subscription-name

It seems that it is not possible to use "/" in the subscription names.
If I configure something like this:

subscriptions:  # container for subscriptions
  /system/state:
    paths:
       - "/system/state/hostname"
       - "/system/state/boot-time"
    stream-mode: sample
    sample-interval: 10s

this fails with the following error in the logs:

2022/11/25 15:23:37.160529 [gnmic] failed to subscribe: missing path(s) in subscription ''

`gnmic diff` for comparing SetRequests, SetRequest & Notifications

I'm wondering if gnmic would be open to incorporating this functionality that's currently implemented in ygot here (README has examples): https://github.com/openconfig/ygot/tree/master/gnmidiff

Proposed commands (open to changing command names):

Compares intent between two SetRequests

gnmic diff setrequest cmd/demo/setrequest2.textproto cmd/demo/setrequest.textproto

Compares whether, after a SetRequest has been applied, the notifications you get back from the device are consistent with the input SetRequest

gnmic diff setrequest-to-notifs cmd/demo/setrequest.textproto cmd/demo/notifs.textproto

Any suggestions on how to improve usability are appreciated. I think the meaning of these commands might be fuzzy at first, especially the last one. I'm thinking of just making it clear in the command description.

Be able to set Kafka event key

I would like a config parameter so that gnmic does not set the Kafka key to null.
The config parameter could either be a boolean that uses source + subscription_name as the event key, or let me specify which fields to use as the key.

I want this so that I can guarantee message ordering when I have a Kafka topic with more than one partition.
The issue I get today with remote_write towards Cortex is err: out of order sample.
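
As a purely illustrative sketch of what is being requested here (the msg-key-fields option below is hypothetical and does not exist in gnmic; the broker address and topic are placeholders, and the output field names are assumptions):

outputs:
  kafka-output:
    type: kafka
    address: kafka-broker:9092
    topic: telemetry
    format: event
    # hypothetical option illustrating this request:
    # build the Kafka message key from these event fields
    msg-key-fields:
      - source
      - subscription-name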

drop or allow processors produce empty event slice

When using a processor such as drop or allow, gnmic emits empty slices for the events that match the condition.

Expected behaviour: Events which are empty shouldn't be sent to outputs

I cloned the repo, did a go mod tidy; go build from main, and tested to ensure #61 was included. I was able to confirm that empty maps no longer show up with this latest build; however, the empty slices remain.

Let me know if you require more info, I am happy to build from a dev branch to test.

With the allow processor commented out:

gnmic --config reproduce.yml sub

---
port: 50051
targets:
  10.232.185.12:
    name: example_junos_device
    insecure: true
    skip-verify: true
    subscriptions:
      - physical_interfaces
    outputs:
      - stdout
outputs:
  stdout:
    type: file
    file-type: stdout
    format: event
    event-processors:
      - interface-group
    #!   - allow-example

processors:
  interface-group:
    event-group-by:
      tags:
        - source
        - subscription-name
        - interface_name
  allow-example:
    event-allow:
      condition: '.tags.interface_name == "xe-4/1/0"'

subscriptions:
  physical_interfaces:
    paths:
      - /interfaces/interface[name=xe-4/1/0]/state/counters
      - /interfaces/interface[name=xe-4/1/1]/state/counters
    stream-mode: SAMPLE
    mode: stream
    encoding: PROTO
[
  {
    "name": "physical_interfaces",
    "timestamp": 1678825121253000000,
    "tags": {
      "interface_name": "xe-4/1/0",
      "source": "example_junos_device",
      "subscription-name": "physical_interfaces"
    },
    "values": {
      "/interfaces/interface/init_time": 1655218074,
      "/interfaces/interface/state/counters/carrier-transitions": 3,
      "/interfaces/interface/state/counters/in-broadcast-pkts": 15979531,
      "/interfaces/interface/state/counters/in-multicast-pkts": 22770053,
      "/interfaces/interface/state/counters/in-octets": 7929788976078,
      "/interfaces/interface/state/counters/in-pkts": 18958486714,
      "/interfaces/interface/state/counters/in-unicast-pkts": 18922354198,
      "/interfaces/interface/state/counters/out-broadcast-pkts": 9787761,
      "/interfaces/interface/state/counters/out-multicast-pkts": 76305229,
      "/interfaces/interface/state/counters/out-octets": 10629111199515,
      "/interfaces/interface/state/counters/out-pkts": 22230775663,
      "/interfaces/interface/state/counters/out-unicast-pkts": 22014540064,
      "/interfaces/interface/state/high-speed": 10000,
      "/interfaces/interface/state/parent_ae_name": ""
    }
  }
]
[
  {
    "name": "physical_interfaces",
    "timestamp": 1678825123270000000,
    "tags": {
      "interface_name": "xe-4/1/1",
      "source": "example_junos_device",
      "subscription-name": "physical_interfaces"
    },
    "values": {
      "/interfaces/interface/init_time": 1655218075,
      "/interfaces/interface/state/counters/carrier-transitions": 1,
      "/interfaces/interface/state/counters/in-broadcast-pkts": 31381,
      "/interfaces/interface/state/counters/in-multicast-pkts": 9076873,
      "/interfaces/interface/state/counters/in-octets": 4336832372284,
      "/interfaces/interface/state/counters/in-pkts": 9474114045,
      "/interfaces/interface/state/counters/in-unicast-pkts": 9465006085,
      "/interfaces/interface/state/counters/out-broadcast-pkts": 15098464,
      "/interfaces/interface/state/counters/out-multicast-pkts": 9076245,
      "/interfaces/interface/state/counters/out-octets": 2423800201161,
      "/interfaces/interface/state/counters/out-pkts": 7871520067,
      "/interfaces/interface/state/counters/out-unicast-pkts": 7832503512,
      "/interfaces/interface/state/high-speed": 10000,
      "/interfaces/interface/state/parent_ae_name": ""
    }
  }
]

With the allow processor enabled:

gnmic --config reproduce.yml sub

---
port: 50051
targets:
  10.232.185.12:
    name: example_junos_device
    insecure: true
    skip-verify: true
    subscriptions:
      - physical_interfaces
    outputs:
      - stdout
outputs:
  stdout:
    type: file
    file-type: stdout
    format: event
    event-processors:
      - interface-group
      - allow-example

processors:
  interface-group:
    event-group-by:
      tags:
        - source
        - subscription-name
        - interface_name
  allow-example:
    event-allow:
      condition: '.tags.interface_name == "xe-4/1/0"'

subscriptions:
  physical_interfaces:
    paths:
      - /interfaces/interface[name=xe-4/1/0]/state/counters
      - /interfaces/interface[name=xe-4/1/1]/state/counters
    stream-mode: SAMPLE
    mode: stream
    encoding: PROTO
[
  {
    "name": "physical_interfaces",
    "timestamp": 1678825389160000000,
    "tags": {
      "interface_name": "xe-4/1/0",
      "source": "example_junos_device",
      "subscription-name": "physical_interfaces"
    },
    "values": {
      "/interfaces/interface/init_time": 1655218074,
      "/interfaces/interface/state/counters/carrier-transitions": 3,
      "/interfaces/interface/state/counters/in-broadcast-pkts": 15979656,
      "/interfaces/interface/state/counters/in-multicast-pkts": 22770390,
      "/interfaces/interface/state/counters/in-octets": 7929854166625,
      "/interfaces/interface/state/counters/in-pkts": 18958678574,
      "/interfaces/interface/state/counters/in-unicast-pkts": 18922545404,
      "/interfaces/interface/state/counters/out-broadcast-pkts": 9787890,
      "/interfaces/interface/state/counters/out-multicast-pkts": 76306097,
      "/interfaces/interface/state/counters/out-octets": 10629239524576,
      "/interfaces/interface/state/counters/out-pkts": 22231104460,
      "/interfaces/interface/state/counters/out-unicast-pkts": 22014866110,
      "/interfaces/interface/state/high-speed": 10000,
      "/interfaces/interface/state/parent_ae_name": ""
    }
  }
]
[] <------ this 

On devices with many interfaces this becomes very noisy:

[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[]
[
  {
    "name": "physical_interfaces",
    "timestamp": 1678826366707000000,
    "tags": {
      "interface_name": "xe-4/1/0",
      "source": "example_junos_device",
      "subscription-name": "physical_interfaces"
    },
    "values": {
      "/interfaces/interface/init_time": 1655218074,
      "/interfaces/interface/state/counters/carrier-transitions": 3,
      "/interfaces/interface/state/counters/in-broadcast-pkts": 15980213,
      "/interfaces/interface/state/counters/in-multicast-pkts": 22771626,
      "/interfaces/interface/state/counters/in-octets": 7930092097044,
      "/interfaces/interface/state/counters/in-pkts": 18959410297,
      "/interfaces/interface/state/counters/in-unicast-pkts": 18923274892,
      "/interfaces/interface/state/counters/out-broadcast-pkts": 9788335,
      "/interfaces/interface/state/counters/out-multicast-pkts": 76309254,
      "/interfaces/interface/state/counters/out-octets": 10629720929160,
      "/interfaces/interface/state/counters/out-pkts": 22232311635,
      "/interfaces/interface/state/counters/out-unicast-pkts": 22016064341,
      "/interfaces/interface/state/high-speed": 10000,
      "/interfaces/interface/state/parent_ae_name": ""
    }
  }
]
[]
[]
[]
[]
[]
[]
[]
[]

Issue: Merge processor does not work as expected

In my setup the merge processor merges only two values; the rest remain separate.

My configuration is the following:

subscriptions:  # container for subscriptions
  system_cpus_cpu_state_total:
    paths:
      - "/system/cpus/cpu[index=ALL]/state/total"
    stream-mode: sample
    sample-interval: 10s

processors:
  set-timestamp-processor:
    event-override-ts:
      precision: s
  event-merge-processor:
    event-merge:
      # if always is set to true, 
      # the updates are merged regardless of the timestamp values
      always: true
      debug: true

outputs:
  output-file:
    # required
    type: file 
    # filename to write telemetry data to.
    # will be ignored if `file-type` is set
    filename: /output/output.txt
    #file-type: event
    format: event
    event-processors:
      - set-timestamp-processor
      - event-merge-processor

The results in the file are shown below. Only the first record has two metrics merged; the others are still separate.

[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":
{"/system/cpus/cpu/state/total/max":8,"/system/cpus/cpu/state/total/max-time":3338625909619632128}}]
[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":{"/system/cpus/cpu/state/total/avg":2}}]
[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":{"/system/cpus/cpu/state/total/interval":1000000000000}}]
[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":{"/system/cpus/cpu/state/total/min-time":3338625979619226624}}]
[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":{"/system/cpus/cpu/state/total/instant":4}}]
[{"name":"system_cpus_cpu_state_total","timestamp":1669390794,"tags":{"cpu_index":"ALL","source":"clab-srlceos01-ceos","subscription-name":"system_cpus_cpu_state_total"},"values":{"/system/cpus/cpu/state/total/min":1}}]

Is this expected behavior?

Limiting verbosity of loader results

I'm using file-based discovery to load a growing number of subscription targets pulled from an inventory system. Each time the loader runs, it emits a log message like

2023/02/01 21:49:48.721392 [file_loader] result: map[targeta:{"name":"targeta","address":"192.0.2.1:1234","username":"gnmic","password":"****","timeout":2000000000,"skip-verify":true,"tags":[]} targetb:{"name":"targetb","address":"192.0.2.2:1234","username":"gnmic","password":"****","timeout":2000000000,"skip-verify":true,"tags":[]}]

containing the full list of loaded targets every time the file is checked. This will grow to an unmanageable length pretty quickly. I'd like the ability to change the verbosity of these messages (maybe "loaded N targets") or to turn them off entirely. Alternatively, logging only the targets that have changed would be nice.
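
For reference, the file loader being described is configured along these lines. This is a sketch: the path and interval values are placeholders, and the exact field names are assumptions that should be checked against the loader documentation.

loader:
  type: file
  # file the loader watches for target definitions (placeholder path)
  path: ./targets.yaml
  # how often the file is re-read (assumed option name)
  interval: 30s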

Junos VMX set error on YAML file format but not JSON format

I seem to have hit a strange issue: if I apply some simple Juniper OSPF configuration using a JSON file, everything works as expected, but when the JSON is converted to YAML it doesn't. I've figured out the root cause of the issue, since it's visible when checking what is being applied in the gnmic logs.

Here is what is sent from a working JSON file:

val:{json_ietf_val:"{\"ospf\":{\"area\":[{\"name\":\"0.0.0.0\",\"interface\":[{\"name\":\"ge-0/0/1.0\"}]}]}}"}]'

{
  "source": "172.21.20.7:57400",
  "timestamp": 1672749977491496118,
  "time": "2023-01-03T12:46:17.491496118Z",
  "results": [
    {
      "operation": "UPDATE",
      "path": "juniper:configuration/protocols"
    }
  ]
}

Now observe how, when using YAML, the format is slightly different once the YAML is serialized back to JSON. We can see that name:0.0.0.0 is in a different place or order:

val:{json_ietf_val:"{\"ospf\":{\"area\":[{\"interface\":[{\"name\":\"ge-0/0/1.0\"}],\"name\":\"0.0.0.0\"}]}}"}]

2023/01/03 12:46:24.795500 [gnmic] target "172.21.20.7:57400" set request failed: target "172.21.20.7:57400" SetRequest failed: rpc error: code = InvalidArgument desc = syntax error

I can see that the difference in how the YAML is serialized back to JSON is causing the difference in the set operation. However, I don't know how I can control the YAML-to-JSON serialization in gnmic.

JSON file (working):

{
  "ospf": {
    "area": [
      {
        "name": "0.0.0.0",
        "interface": [
          {
            "name": "ge-0/0/1.0"
          }
        ]
      }
    ]
  }
}

YAML File which is an exact representation of the same JSON file (not working):

---
ospf:
  area:
  - name: 0.0.0.0
    interface:
    - name: ge-0/0/1.0

Feature Request: make InfluxDB health check optional

Would it be possible to make the InfluxDB health check optional?
The use case for this is writing to InfluxDB or other destinations through a Telegraf instance. Telegraf's input plugins such as influxdb_listener, influxdb_v2_listener or http_v2_listener do not expose a "/health" endpoint for this purpose.
The reason for using Telegraf is post-processing of the submitted measurements, or writing to multiple outputs.

Listen mode: source/hostname

Hello,

I am using gnmic in listen mode for dial-out telemetry with Nokia SR OS routers. I noticed that the node hostname is not streamed with the data; instead, I get the source in the format ip_address:port.
Is there a way to associate the hostname of the node instead of the ip_address? I am aware it is possible to stream the hostname, but what I would like is to have the hostname directly associated with the data.

Question: Templated Set Request for CLI origin

Hi,
I've been trying to find a way to use a request-file with the CLI origin on a Juniper device, and gnmic encloses the payload in double quotes.
This ascii_val does not seem to be accepted as valid by the device.
Is there a way to work around this double-quoting of ascii_val?

>gnmic -a bpchoi-vptx0-vmm:50051 -u xxxx -p xxxx --insecure --timeout 120s --log set --request-file intf_1_native_cli_request.yaml
2023/04/14 12:12:19.670919 [gnmic] version=0.29.0, commit=08d41dd, date=2023-02-21T09:10:29Z, gitURL=https://github.com/openconfig/gnmic, docs=https://gnmic.openconfig.net/
2023/04/14 12:12:19.670943 [gnmic] using config file ""
2023/04/14 12:12:19.671292 [gnmic] adding target {"name":"bpchoi-vptx0-vmm:50051","address":"bpchoi-vptx0-vmm:50051","username":"xxxx","password":"****","timeout":120000000000,"insecure":true,"skip-verify":false,"buffer-size":100,"retry-timer":10000000000,"log-tls-secret":false,"gzip":false,"token":""}
2023/04/14 12:12:19.671665 [config] trying to find variable file "/home/jnpr/intf_1_native_cli_request_vars.yaml"
2023/04/14 12:12:19.672782 [gnmic] sending gNMI SetRequest: prefix='', delete='[]', replace='[]', update='[path:{origin:"cli"} val:{ascii_val:""interfaces { et-0/0/1 { disable; speed 100g; } }""}]', extension='[]' to bpchoi-vptx0-vmm:50051
2023/04/14 12:12:19.672807 [gnmic] creating gRPC client for target "bpchoi-vptx0-vmm:50051"
2023/04/14 12:12:21.462737 [gnmic] target "bpchoi-vptx0-vmm:50051" set request failed: target "bpchoi-vptx0-vmm:50051" SetRequest failed: rpc error: code = InvalidArgument desc = syntax error;
target "bpchoi-vptx0-vmm:50051" set request failed: target "bpchoi-vptx0-vmm:50051" SetRequest failed: rpc error: code = InvalidArgument desc = syntax error;
Error: one or more requests failed

Thank you in advance,
Phil

rpc error: code = InvalidArgument desc = request must contain a prefix

Hi,

Can someone help me understand why I'm getting the following errors?

I can get the output from the capabilities request, but when I try to stream data for all counters I get an invalid prefix error.

gmckee@lab-dev-vm1:~/Projects/scripts/public$ gnmic -a 10.127.5.15 --port 9339 --skip-verify capabilities
gNMI version: 0.7.0
supported models:
  - nvidia-wjh, NVIDIA, 1.0.1
  - nvidia-if-ethernet-counters-ext, NVIDIA, 1.0.0
  - openconfig-interfaces, OpenConfig, 2.3.2
  - openconfig-if-ethernet, OpenConfig, 2.9.0
  - openconfig-if-ethernet-ext, OpenConfig, 0.1.1
  - openconfig-system, OpenConfig, 0.5.0
  - openconfig-lldp, OpenConfig, 0.2.1
  - openconfig-platform, OpenConfig, 0.13.0
supported encodings:
  - JSON

gmckee@lab-dev-vm1:~/Projects/scripts/public$ gnmic -a 10.127.5.15 --port 9339 --skip-verify subscribe --path "openconfig-if-ethernet:/ethernet/interface/state/counters" --stream-mode on-change --heartbeat-interval 1m -d
2023/02/06 22:53:50.939535 /home/runner/work/gnmic/gnmic/app/app.go:216: [gnmic] version=0.28.0, commit=8315400, date=2022-12-07T17:02:16Z, gitURL=https://github.com/openconfig/gnmic, docs=https://gnmic.openconfig.net
2023/02/06 22:53:50.940112 /home/runner/work/gnmic/gnmic/app/app.go:221: [gnmic] using config file ""
2023/02/06 22:53:50.941067 /home/runner/work/gnmic/gnmic/app/app.go:259: [gnmic] set flags/config:
address:
- 10.127.5.15
api: ""
capabilities-version: false
cluster-name: default-cluster
config: ""
debug: true
diff-compare: []
diff-model: []
diff-path: []
diff-prefix: ""
diff-qos: "0"
diff-ref: ""
diff-sub: false
diff-target: ""
diff-type: ALL
dir: []
encoding: json
exclude: []
file: []
format: ""
generate-camel-case: false
generate-config-only: false
generate-path: ""
generate-snake-case: false
get-model: []
get-path: []
get-prefix: ""
get-processor: []
get-target: ""
get-type: ALL
get-values-only: false
getset-condition: any([true])
getset-delete: ""
getset-get: ""
getset-model: []
getset-prefix: ""
getset-replace: ""
getset-target: ""
getset-type: ALL
getset-update: ""
getset-value: ""
gzip: false
insecure: false
instance-name: ""
listen-max-concurrent-streams: "256"
listen-prometheus-address: ""
log: true
log-file: ""
log-tls-secret: false
max-msg-size: 536870912
no-prefix: false
password: ""
path-config-only: false
path-descr: false
path-path-type: xpath
path-search: false
path-state-only: false
path-types: false
path-with-non-leaves: false
path-with-prefix: false
port: "9339"
print-request: false
prompt-description-bg-color: dark_gray
prompt-description-with-prefix: false
prompt-description-with-types: false
prompt-max-suggestions: "10"
prompt-prefix-color: dark_blue
prompt-suggest-all-flags: false
prompt-suggest-with-origin: false
prompt-suggestions-bg-color: dark_blue
proto-dir: []
proto-file: []
proxy-from-env: false
retry: 10s
set-delete: []
set-delimiter: ':::'
set-dry-run: false
set-prefix: ""
set-replace: []
set-replace-file: []
set-replace-path: []
set-replace-value: []
set-request-file: []
set-request-replace: []
set-request-update: []
set-request-vars: ""
set-target: ""
set-update: []
set-update-file: []
set-update-path: []
set-update-value: []
skip-verify: true
subscribe-backoff: 0s
subscribe-heartbeat-interval: 1m0s
subscribe-history-end: ""
subscribe-history-snapshot: ""
subscribe-history-start: ""
subscribe-lock-retry: 5s
subscribe-mode: stream
subscribe-model: []
subscribe-name: []
subscribe-output: []
subscribe-path:
- openconfig-if-ethernet:/ethernet/interface/state/counters
subscribe-prefix: ""
subscribe-qos: "0"
subscribe-quiet: false
subscribe-sample-interval: 0s
subscribe-set-target: false
subscribe-stream-mode: on-change
subscribe-suppress-redundant: false
subscribe-target: ""
subscribe-updates-only: false
subscribe-watch-config: false
targets-file: ""
timeout: 10s
tls-ca: ""
tls-cert: ""
tls-key: ""
tls-max-version: ""
tls-min-version: ""
tls-version: ""
token: ""
upgrade-use-pkg: false
use-tunnel-server: false
username: ""

2023/02/06 22:53:50.949789 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] address='[10.127.5.15]'([]string)
2023/02/06 22:53:50.949815 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] debug='true'(bool)
2023/02/06 22:53:50.949864 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] port='9339'(string)
2023/02/06 22:53:50.949895 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] skip-verify='true'(bool)
2023/02/06 22:53:50.949904 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] subscribe-heartbeat-interval='1m0s'(string)
2023/02/06 22:53:50.949928 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] subscribe-path='[openconfig-if-ethernet:/ethernet/interface/state/counters]'([]string)
2023/02/06 22:53:50.949940 /home/runner/work/gnmic/gnmic/app/app.go:269: [gnmic] subscribe-stream-mode='on-change'(string)
2023/02/06 22:53:50.949993 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=backoff, changed=false, isSetInFile=false
2023/02/06 22:53:50.950004 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=heartbeat-interval, changed=true, isSetInFile=true
2023/02/06 22:53:50.950013 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=help, changed=false, isSetInFile=false
2023/02/06 22:53:50.950021 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=history-end, changed=false, isSetInFile=false
2023/02/06 22:53:50.950029 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=history-snapshot, changed=false, isSetInFile=false
2023/02/06 22:53:50.950038 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=history-start, changed=false, isSetInFile=false
2023/02/06 22:53:50.950047 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=lock-retry, changed=false, isSetInFile=false
2023/02/06 22:53:50.950054 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=mode, changed=false, isSetInFile=false
2023/02/06 22:53:50.950063 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=model, changed=false, isSetInFile=false
2023/02/06 22:53:50.950071 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=name, changed=false, isSetInFile=false
2023/02/06 22:53:50.950079 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=output, changed=false, isSetInFile=false
2023/02/06 22:53:50.950089 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=path, changed=true, isSetInFile=true
2023/02/06 22:53:50.950096 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=prefix, changed=false, isSetInFile=false
2023/02/06 22:53:50.950104 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=qos, changed=false, isSetInFile=false
2023/02/06 22:53:50.950112 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=quiet, changed=false, isSetInFile=false
2023/02/06 22:53:50.950120 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=sample-interval, changed=false, isSetInFile=false
2023/02/06 22:53:50.950128 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=set-target, changed=false, isSetInFile=false
2023/02/06 22:53:50.950136 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=stream-mode, changed=true, isSetInFile=true
2023/02/06 22:53:50.950144 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=suppress-redundant, changed=false, isSetInFile=false
2023/02/06 22:53:50.950152 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=target, changed=false, isSetInFile=false
2023/02/06 22:53:50.950160 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=updates-only, changed=false, isSetInFile=false
2023/02/06 22:53:50.950168 /home/runner/work/gnmic/gnmic/config/config.go:364: [config] cmd=subscribe, flagName=watch-config, changed=false, isSetInFile=false
2023/02/06 22:53:50.950254 /home/runner/work/gnmic/gnmic/config/subscriptions.go:72: [config] subscriptions: map[default-1675724030:{"name":"default-1675724030","paths":["openconfig-if-ethernet:/ethernet/interface/state/counters"],"mode":"stream","stream-mode":"on-change","encoding":"json","heartbeat-interval":60000000000}]
2023/02/06 22:53:50.950281 /home/runner/work/gnmic/gnmic/config/outputs.go:60: [config] outputs: map[default-stdout:map[file-type:stdout format: type:file]]
2023/02/06 22:53:50.950290 /home/runner/work/gnmic/gnmic/config/inputs.go:57: [config] inputs: map[]
2023/02/06 22:53:50.950298 /home/runner/work/gnmic/gnmic/config/actions.go:49: [config] actions: map[]
2023/02/06 22:53:50.950307 /home/runner/work/gnmic/gnmic/config/processors.go:45: [config] processors: map[]
2023/02/06 22:53:50.950378 /home/runner/work/gnmic/gnmic/config/targets.go:45: [config] targets: map[10.127.5.15:{"name":"10.127.5.15","address":"10.127.5.15:9339","username":"","password":"****","timeout":10000000000,"insecure":false,"tls-cert":"","tls-key":"","skip-verify":true,"buffer-size":100,"retry-timer":10000000000,"log-tls-secret":false,"gzip":false,"token":""}]
2023/02/06 22:53:50.950402 /home/runner/work/gnmic/gnmic/app/outputs.go:27: [gnmic] starting output type file
2023/02/06 22:53:50.950446 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:73: [gnmic] queuing target "10.127.5.15"
2023/02/06 22:53:50.950453 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:75: [gnmic] subscribing to target: "10.127.5.15"
2023/02/06 22:53:50.950507 /home/runner/work/gnmic/gnmic/app/collector.go:41: [gnmic] starting target &{Config:{"name":"10.127.5.15","address":"10.127.5.15:9339","username":"","password":"****","timeout":10000000000,"insecure":false,"tls-cert":"","tls-key":"","skip-verify":true,"buffer-size":100,"retry-timer":10000000000,"log-tls-secret":false,"gzip":false,"token":""} Subscriptions:map[default-1675724030:{"name":"default-1675724030","paths":["openconfig-if-ethernet:/ethernet/interface/state/counters"],"mode":"stream","stream-mode":"on-change","encoding":"json","heartbeat-interval":60000000000}] m:0xc000302340 conn:<nil> Client:<nil> SubscribeClients:map[] subscribeCancelFn:map[] pollChan:0xc000aba120 subscribeResponses:0xc000118cc0 errors:0xc000118d20 stopped:false StopChan:0xc000aba180 Cfn:<nil> RootDesc:<nil>}
2023/02/06 22:53:50.950517 /home/runner/work/gnmic/gnmic/app/collector.go:59: [gnmic] starting target "10.127.5.15" listener
2023/02/06 22:53:50.950561 /home/runner/work/gnmic/gnmic/outputs/file/file_output.go:203: [file_output:default-stdout] initialized file output: {"Cfg":{"FileName":"","FileType":"stdout","Format":"json","Multiline":true,"Indent":"  ","Separator":"\n","OverrideTimestamps":false,"AddTarget":"","TargetTemplate":"","EventProcessors":null,"MsgTemplate":"","ConcurrencyLimit":1000,"EnableMetrics":false,"Debug":false}}
2023/02/06 22:53:50.950612 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Channel created
2023/02/06 22:53:50.950630 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] original dial target is: "10.127.5.15:9339"
2023/02/06 22:53:50.950643 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] dial target "10.127.5.15:9339" parse failed: parse "10.127.5.15:9339": first path segment in URL cannot contain colon
2023/02/06 22:53:50.950651 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] fallback to scheme "passthrough"
2023/02/06 22:53:50.950671 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] parsed dial target is: {Scheme:passthrough Authority: Endpoint:10.127.5.15:9339 URL:{Scheme:passthrough Opaque: User: Host: Path:/10.127.5.15:9339 RawPath: ForceQuery:false RawQuery: Fragment: RawFragment:}}
2023/02/06 22:53:50.950680 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Channel authority set to "10.127.5.15:9339"
2023/02/06 22:53:50.950794 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Resolver state updated: {
  "Addresses": [
    {
      "Addr": "10.127.5.15:9339",
      "ServerName": "",
      "Attributes": null,
      "BalancerAttributes": null,
      "Type": 0,
      "Metadata": null
    }
  ],
  "ServiceConfig": null,
  "Attributes": null
} (resolver returned new addresses)
2023/02/06 22:53:50.950820 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Channel switches to new LB policy "pick_first"
2023/02/06 22:53:50.950836 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1 SubChannel #2] Subchannel created
2023/02/06 22:53:50.950861 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to CONNECTING
2023/02/06 22:53:50.950875 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1 SubChannel #2] Subchannel picks a new address "10.127.5.15:9339" to connect
2023/02/06 22:53:50.950890 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] pickfirstBalancer: UpdateSubConnState: 0xc0006d8040, {CONNECTING <nil>}
2023/02/06 22:53:50.950909 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Channel Connectivity change to CONNECTING
2023/02/06 22:53:50.976621 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY
2023/02/06 22:53:50.976639 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] pickfirstBalancer: UpdateSubConnState: 0xc0006d8040, {READY <nil>}
2023/02/06 22:53:50.976644 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Channel #1] Channel Connectivity change to READY
2023/02/06 22:53:50.976654 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:215: [gnmic] target "10.127.5.15" gNMI client created
2023/02/06 22:53:50.977046 /home/runner/work/gnmic/gnmic/app/gnmi_client_subscribe.go:218: [gnmic] sending gNMI SubscribeRequest: subscribe='subscribe:{subscription:{path:{origin:"openconfig-if-ethernet" elem:{name:"ethernet"} elem:{name:"interface"} elem:{name:"state"} elem:{name:"counters"}} mode:ON_CHANGE heartbeat_interval:60000000000}}', mode='STREAM', encoding='JSON', to 10.127.5.15
2023/02/06 22:53:50.978018 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: rpc error: code = InvalidArgument desc = request must contain a prefix &gnmi.SubscribeRequest{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(0xc000057a88)}, sizeCache:0, unknownFields:[]uint8(nil), Request:(*gnmi.SubscribeRequest_Subscribe)(0xc0015ab018), Extension:[]*gnmi_ext.Extension(nil)}
2023/02/06 22:53:50.978034 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: retrying in 10s
2023/02/06 22:54:00.980265 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: rpc error: code = InvalidArgument desc = request must contain a prefix &gnmi.SubscribeRequest{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(0xc000057a88)}, sizeCache:0, unknownFields:[]uint8(nil), Request:(*gnmi.SubscribeRequest_Subscribe)(0xc002c68010), Extension:[]*gnmi_ext.Extension(nil)}
2023/02/06 22:54:00.980319 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: retrying in 10s
2023/02/06 22:54:10.981806 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: rpc error: code = InvalidArgument desc = request must contain a prefix &gnmi.SubscribeRequest{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(0xc000057a88)}, sizeCache:0, unknownFields:[]uint8(nil), Request:(*gnmi.SubscribeRequest_Subscribe)(0xc002c687f8), Extension:[]*gnmi_ext.Extension(nil)}
2023/02/06 22:54:10.981851 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: retrying in 10s
2023/02/06 22:54:20.983900 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: rpc error: code = InvalidArgument desc = request must contain a prefix &gnmi.SubscribeRequest{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(0xc000057a88)}, sizeCache:0, unknownFields:[]uint8(nil), Request:(*gnmi.SubscribeRequest_Subscribe)(0xc0014cd890), Extension:[]*gnmi_ext.Extension(nil)}
2023/02/06 22:54:20.983947 /home/runner/work/gnmic/gnmic/app/collector.go:111: [gnmic] target "10.127.5.15": subscription default-1675724030 rcv error: retrying in 10s

Feature Request: configurable precision for export to InfluxDB

The current configuration assumes ns precision when writing to InfluxDB. However, if the "override ts" processor is used with a different precision, for example s, this results in wrong timestamps in InfluxDB, because seconds are passed as nanoseconds.

Ideally there would be a configurable option for the timestamp precision of the InfluxDB export, with normalization of the event timestamp before it is passed to the InfluxDB client library.
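
To illustrate, the pieces involved look roughly like this. The event-override-ts processor follows the config shown in the merge-processor issue above; the InfluxDB output field names (url, bucket) are assumptions, the values are placeholders, and timestamp-precision is the hypothetical option being requested.

processors:
  set-timestamp-processor:
    event-override-ts:
      # rewrite event timestamps with second precision
      precision: s

outputs:
  influxdb-output:
    type: influxdb
    url: http://influxdb:8086
    bucket: telemetry
    event-processors:
      - set-timestamp-processor
    # hypothetical option illustrating this request:
    # timestamp-precision: s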

if-feature corner case

Hi,

The following scenario doesn't generate the right output for if-feature.

At the end of this file, there is an augment statement that contains an if-feature field. This if-feature should be applied to the paths present in that section, but it is currently not taken into account when generating paths.

    augment "/srl_nokia-netinst:network-instance" {
        uses segment-routing-top {
            if-feature srl-feat:segment-routing;
        }
    }
  {
    "path": "/network-instance[name=*]/segment-routing/mpls/local-prefix-sid[prefix-sid-index=*]/node-sid",
    "path-with-prefix": "/srl_nokia-netinst:network-instance[name=*]/srl_nokia-sr:segment-routing/mpls/local-prefix-sid[prefix-sid-index=*]/node-sid",
    "type": "boolean",
    "description": "If set, the prefix SID(s) identity the router as a whole.",
    "default": "true",
    "namespace": "urn:srl_nokia/segment-routing",
    "if-features": [
      "srl-feat:segment-routing-shared-sid" # srl-feat:segment-routing is also expected here
    ]
  },

subscription name per xpath

I was wondering if it is possible to set the event message name or subscription_name on a per-XPath basis.

I know I can do something like this...

subscriptions:
 ipv4-bgp-routes:
    paths:
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/bgp"
 ipv4-bgp-vpn-routes:
    paths:
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/bgp-vpn"
 ipv4-direct-routes:
    paths:
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/direct"

rather than this...

subscriptions:
  route-counts:
    paths:
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/bgp"
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/bgp-vpn"
      - "/state/service/vprn[service-name=*]/route-table/unicast/ipv4/statistics/direct"

but I am wondering if there are negative consequences to this, as the number of XPaths per router could be large. For example, I can see that the active RPC count on the SR OS router increases.

Any suggestions? Basically I'm looking for a way to easily identify a metric name (perhaps in addition to using the path-base processor to trim event value keys in the response).

thanks!

Question: Ability to customise subscriptions with targets loaded via consul

Is it possible to customise the subscriptions applied to targets that are loaded via consul?

E.g. For my Junos devices, I want to subscribe to BGP metrics and Interfaces, but for my Cisco devices I only want to subscribe to interfaces.

At present, it appears that all devices loaded via consul are automatically subscribed to all configured subscriptions. Is it possible, via tags or otherwise, to dynamically select which subscriptions a device gets?
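
For statically defined targets, per-target subscription lists look like the sketch below (device names, addresses and subscription names are placeholders; the layout follows the target configs shown elsewhere on this page). The question is whether the same selectivity can be achieved for targets loaded via consul.

targets:
  junos-router1:
    address: 192.0.2.1:57400
    subscriptions:
      - bgp-metrics
      - interfaces
  cisco-router1:
    address: 192.0.2.2:57400
    subscriptions:
      - interfaces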

Incorrect IP registered in Consul for Prometheus Output

I'm running consul and gnmic in Docker, across two VMs. gnmic is configured to use consul for clustering and to register its prometheus output too. The clustering component works great; however, since I'm running in Docker, my listen statements are :7890 and :9804 for the API and prometheus output endpoints, and I have to define the service-address in the clustering configuration to get the right IP registered in consul. There is no such config option in the prometheus output section, and gnmic appears to use the IP of the consul server when registering the prometheus output.

How can I define the service-address for the prometheus output in the same way I can for the clustering?

# clustering config
clustering:
  cluster-name: cluster
  instance-name: gnmic01
  service-address: "10.249.1.215"
  targets-watch-timer: 30s
  leader-wait-timer: 60s
  locker:
    type: consul
    address: 10.249.0.250:8500

# Configure outputs
outputs:
  prometheus:
    type: prometheus
    listen: :9804 
    path: /metrics 
    event-processors: 
      - interface-descriptions
    service-registration:
      address: 10.249.0.250:8500

Removing the full path from the field in InfluxDB output

How do I remove the full path from the field name in the InfluxDB output? I just want the leaf name, e.g. in-broadcast-pkts, rather than the entire /interfaces/interface/state/counters/in-broadcast-pkts. Is this possible?

from(bucket: "network-counters")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "interface_counters")
  |> filter(fn: (r) => r["source"] == "10.127.5.11")
  |> filter(fn: (r) => r["_field"] == "/interfaces/interface/state/counters/in-broadcast-pkts")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")

gnmic does not support use of keys on subscription path when using a config file

The following command via CLI works:

gnmic -a <host>:<port> -u <user> -p <password> --insecure --path "/components/component[name=PowerSupply2]" -e proto subscribe --mode once

If the same subscription above is attempted via config file, it does not work:

insecure: true
encoding: proto
log: true

targets:
  <host>:<port>:
    username: <username>
    password: <password>
    subscriptions:
      - power_supply_sub_once:
    outputs:
      - nats_output

subscriptions:
  power_supply_sub_once:
    paths:
      - /components/component[name=PowerSupply2]
    mode: once

outputs:
  nats_output:
    type: nats
    address: <host>:<port>
    subject: <subject>

The error presented is:

user@server> gnmic subscribe --config gnmic-config-small-subscription.yaml
failed loading config file: 1 error(s) decoding:

* 'targets[<host>:<port>].subscriptions[0]' expected type 'string', got unconvertible type 'map[interface {}]interface {}', value: 'map[power_supply_sub_once:<nil>]'
2023/01/22 18:07:10.984152 [gnmic] version=0.28.0, commit=8315400, date=2022-12-07T17:02:16Z, gitURL=https://github.com/openconfig/gnmic, docs=https://gnmic.openconfig.net
2023/01/22 18:07:10.984156 [gnmic] using config file "gnmic-config-small-subscription.yaml"
Error: failed reading targets config: 1 error(s) decoding:

* 'subscriptions[0]' expected type 'string', got unconvertible type 'map[interface {}]interface {}', value: 'map[power_supply_sub_once:<nil>]'

Target-specific options aren't updated when loaded dynamically

I'm not sure if this is a bug or intended behavior, but I noticed that loaded targets (at least for the file loader) aren't having their target-specific options updated. For example, if I have a target file whose contents are

targeta:
  address: 192.0.2.1:1234
  username: gnmic
  password: secret
  timeout: 2s
  skip-verify: true

and whose subscriptions are up and working, a change to the options there will be logged by the loader but never acted upon. I've tested this with the address, username, and password options. I would expect that the subscriptions to the target would be restarted with the new options. (At least where the change would require a new session.)

The updates don't seem to make it into the target configuration at all, because even after closing the connection from the other end and letting gNMIc reconnect, it still connects with the original values. Removing the target entirely, reloading the file, and adding the target back with the new values does work.

Is it possible to add tags to the JSON output format?

My requirement is as follows:

I am able to successfully receive telemetry metrics from a router using gNMIc but I would like to be able to add tags to the JSON output so that I can add a tag to signify what the interface_name is for the metric I just received. I realize that the event output format does that for me automatically, but the client that's getting the data needs the schema to be in the format that's outputted by the json output format.

It seems like my only option under these conditions is to use Processors to convert the model schema of my telemetry data from the event schema to something that looks closer to the json schema. Is this my only option? Or is there a way to add tags to a json output format?

Thank you.

Empty "add-target" output param

Hi,
Shouldn't it return a response instead of nil? Otherwise it seems to break outputs when "add-target" is an empty string.

return nil, nil

e.g. with the file output:
rsp, err = outputs.AddSubscriptionTarget(rsp, meta, f.Cfg.AddTarget, f.targetTpl)

it overwrites rsp with nil, which then just marshals/writes nothing, and I get only empty line breaks in my terminal.
My current config:

targets:
  <redacted>:50051:
subscriptions:
  sub1:
    paths:
      - /components/component/cpu
    mode: stream
    stream-mode: sample
    sample-interval: 10s
outputs:
  file:
    type: file
    file-type: stdout

/app/gnmic --debug --config /test/gncfg.yaml subscribe logs:

2022/11/07 09:47:30.503150 /home/runner/work/gnmic/gnmic/app/collector.go:70: [gnmic] target "<redacted>:50051": gNMI Subscribe Response: &{SubscriptionName:sub1 SubscriptionConfig:{"name":"sub1","paths":["/components/component/cpu"],"mode":"stream","stream-mode":"sample","encoding":"json","sample-interval":10000000000} Response:update:{timestamp:1667814449797670136 prefix:{} update:{path:{origin:"openconfig" elem:{name:"components"}} val:{json_val:"{\"component\":[{\"name\":\"cpu2\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":18,\"instant\":15,\"max\":42,\"min\":3,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu0\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":17,\"instant\":6,\"max\":35,\"min\":5,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu1\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":14,\"instant\":8,\"max\":32,\"min\":2,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu7\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":3,\"instant\":2,\"max\":9,\"min\":0,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"ALL\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":10,\"instant\":6,\"max\":23,\"min\":2,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu6\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":3,\"instant\":4,\"max\":11,\"min\":0,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu5\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":9,\"instant\":6,\"max\":26,\"min\":1,\"max-time\":\"1667814434\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu3\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":17,\"instant\":8,\"max\":34,\"min\":4,\"max-time\":\"1667814422\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu4\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":1,\"instant\":3,\"max\":3,\"min\":0,\"max-time\":\"1667814446\",\"min-time\":\"1667814416\"}}}}]}"}}}}
2022/11/07 09:47:30.503200 /home/runner/work/gnmic/gnmic/app/collector.go:70: [gnmic] target "<redacted>:50051": gNMI Subscribe Response: &{SubscriptionName:sub1 SubscriptionConfig:{"name":"sub1","paths":["/components/component/cpu"],"mode":"stream","stream-mode":"sample","encoding":"json","sample-interval":10000000000} Response:sync_response:true}


<-- empty line breaks appear every sample interval

version 0.26.0 of gnmic works fine:

2022/11/07 09:47:53.158810 /home/runner/work/gnmic/gnmic/app/collector.go:62: [gnmic] target "<redacted>:50051": gNMI Subscribe Response: &{SubscriptionName:sub1 SubscriptionConfig:{"name":"sub1","paths":["/components/component/cpu"],"mode":"stream","stream-mode":"sample","encoding":"json","sample-interval":10000000000} Response:update:{timestamp:1667814472498353137  prefix:{}  update:{path:{origin:"openconfig"  elem:{name:"components"}}  val:{json_val:"{\"component\":[{\"name\":\"cpu2\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":18,\"instant\":15,\"max\":42,\"min\":3,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu0\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":17,\"instant\":6,\"max\":35,\"min\":5,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu1\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":14,\"instant\":8,\"max\":32,\"min\":2,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu7\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":3,\"instant\":2,\"max\":9,\"min\":0,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"ALL\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":10,\"instant\":6,\"max\":23,\"min\":2,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu6\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":3,\"instant\":4,\"max\":11,\"min\":0,\"max-time\":\"1667814428\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu5\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":9,\"instant\":6,\"max\":26,\"min\":1,\"max-time\":\"1667814434\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu3\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":17,\"instant\":8,\"max\":34,\"min\":4,\"max-time\":\"1667814422\",\"min-time\":\"1667814440\"}}}},{\"name\":\"cpu4\",\"cpu\":{\"utilization\":{\"state\":{\"avg\":1,\"instant\":3,\"max\":3,\"min\":0,\"max-time\":\"1667814446\",\"min-time\":\"1667814416\"}}}}]}"}}}}
2022/11/07 09:47:53.158862 /home/runner/work/gnmic/gnmic/app/collector.go:62: [gnmic] target "<redacted>:50051": gNMI Subscribe Response: &{SubscriptionName:sub1 SubscriptionConfig:{"name":"sub1","paths":["/components/component/cpu"],"mode":"stream","stream-mode":"sample","encoding":"json","sample-interval":10000000000} Response:sync_response:true}

{
  "source": "<redacted>:50051",
  "subscription-name": "sub1",
  "timestamp": 1667814472498353137,
  "time": "2022-11-07T09:47:52.498353137Z",
  "updates": [
... <-- correct output appears every sampling interval

mTLS not working (certificate signed by unknown authority error)

Hello,

I created a new CA cert, which then was used for signing a user cert-key pair using the XCA tool.

  • The gnmic tunnel server always fails with "invalid CA authority error".
  • gNMIC version : 0.30.0
  • Same certs are working fine with another TLS client-server with mTLS.
  • Sharing the Wireshark output, config file & client error below.
    Is this a known issue, or am I not using the config options correctly?

1. Config file

insecure: true
username: ADMIN
password: ADMIN
log: true

tunnel-server:
  address: ":50051"
  tls:
    ca-file: /home/shikha/Client/rootca_2K.crt
    cert-file: /home/shikha/secure-tunnel/gnmi/cmd/gnmi_collector/certs1/cert.pem
    key-file: /home/shikha/secure-tunnel/gnmi/cmd/gnmi_collector/certs1/key.pem
    client-auth: "require-verify"
  target-wait-time: 20s
  enable-metrics: false
  debug: false

2. Client output

2023/04/25 14:23:24.899447 /home/runner/work/gnmic/gnmic/app/outputs.go:27: [gnmic] starting output type file
2023/04/25 14:23:24.900211 /home/runner/work/gnmic/gnmic/outputs/file/file_output.go:204: [file_output:default-stdout] initialized file output: {"Cfg":{"FileName":"","FileType":"stdout","Format":"json","Multiline":true,"Indent":"  ","Separator":"\n","SplitEvents":false,"OverrideTimestamps":false,"AddTarget":"","TargetTemplate":"","EventProcessors":null,"MsgTemplate":"","ConcurrencyLimit":1000,"EnableMetrics":false,"Debug":false}}
2023/04/25 14:23:24.912009 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Server #1] Server created
2023/04/25 14:23:24.919712 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Server #1 ListenSocket #2] ListenSocket created
2023/04/25 14:23:52.364434 /home/runner/go/pkg/mod/google.golang.org/[email protected]/grpclog/logger.go:53: [gnmic] [core] [Server #1] grpc: Server.Serve failed to create ServerTransport: connection error: desc = "ServerHandshake(\"192.168.0.15:62423\") failed: tls: failed to verify client certificate: x509: certificate signed by unknown authority"

3. Wireshark snapshot

  • Always failing with a bad cert error due to an invalid CA authority.
    mTLS_fail
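As a quick sanity check, the certificate presented by the tunnel client can be verified against the CA configured in ca-file with openssl; client.crt below is an illustrative path for the client's certificate:

# confirm the client certificate chains to the CA configured in ca-file
# (client.crt is a placeholder for the certificate the tunnel client presents)
openssl verify -CAfile /home/shikha/Client/rootca_2K.crt client.crt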

unclear usage of proto encoding

Hi there,

This may be more of a documentation / noob question than anything else. I have a Nokia running the following, as described by the capabilities request:

gNMI version: 0.7.0
supported models:
  - nokia-conf, Nokia, 20.10.R8
  - nokia-state, Nokia, 20.10.R8
  - nokia-li-state, Nokia, 20.10.R8
  - nokia-li-conf, Nokia, 20.10.R8
supported encodings:
  - JSON
  - BYTES
  - PROTO

The tool has worked great with --encoding json or --encoding bytes, and I am able to get everything I need working just fine that way. I am now trying to test any efficiency/performance differences between those and the proto encoding, and I am running into a wall.

After some reading I grabbed the Nokia proto files from https://github.com/nokia/protobufs. Looking at the contents of the file, I can see '/state/system/cpu' inside it. However, if I run a command like this...

gnmic -d --proto-dir 7x50_protobufs/latest_sros_22.10 --proto-file nokia-sros-combined-model.proto -a $host:$port -u $user -p $password --path '/state/system/cpu[sample-period=1]/summary' --mode once --encoding proto

It shows that it is loading the proto files; however, I end up with output like this:

{
  "source": "$host:$ip",
  "subscription-name": "default-1671720379",
  "timestamp": 1671720414861904391,
  "time": "2022-12-22T06:46:54.861904391-08:00",
  "updates": [
    {
      "Path": "",
      "values": {
        "": "Qh2iAhryARcKAggBkgMQCg4KBQipkM4HEgUIkE4QAg=="
      }
    }
  ]
}

I'm not really sure how to go about troubleshooting this. If I try other proto files in that GitHub repo (or don't specify one at all) I get similar results, so maybe that means the proto files are not correct, or maybe my usage is incorrect? There isn't specifically a "22.10.r8" tag in that repo, only a ".r1", but it isn't clear to me whether that matters. If it is an invalid file, some sort of warning about being unable to decode would be helpful as an indicator that something is up.
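One rough way to check whether the returned blob is at least well-formed protobuf, assuming protoc is installed, is to base64-decode it and let protoc dump the raw fields without any schema:

# decode the value from the output above without a schema;
# field numbers and wire types are printed if the bytes are valid protobuf
echo 'Qh2iAhryARcKAggBkgMQCg4KBQipkM4HEgUIkE4QAg==' | base64 -d | protoc --decode_raw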

Apologies if this is a completely newbie question or I'm off entirely in the wrong direction. I haven't been able to find much in the way of examples of people using protobufs successfully so I thought I would try here.

Thanks.

Golang 1.19 support

Attempting to use the API with Golang 1.19 results in another unsafe GC panic

panic: Something in this program imports go4.org/unsafe/assume-no-moving-gc to declare that it assumes a non-moving garbage collector, but your version of go4.org/unsafe/assume-no-moving-gc hasn't been updated to assert that it's safe against the go1.19 runtime. If you want to risk it, run with environment variable ASSUME_NO_MOVING_GC_UNSAFE_RISK_IT_WITH=go1.19 set. Notably, if go1.19 adds a moving garbage collector, this program is unsafe to use.

goroutine 1 [running]:
go4.org/unsafe/assume-no-moving-gc.init.0()
        /home/bewing/go/pkg/mod/go4.org/unsafe/[email protected]/untested.go:25 +0x1f4
exit status 2

Based on the previous issue with 1.18, it again appears related to gomplate: hairyhenderson/gomplate#1467

The upstream fix is first in 3.11.2: https://github.com/hairyhenderson/gomplate/releases/tag/v3.11.2
3.11.0 has some deprecation notes that probably need to be checked: https://github.com/hairyhenderson/gomplate/releases/tag/v3.11.0
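Until the gomplate dependency is bumped, the interim workaround suggested by the panic message itself is to set the environment variable it mentions when building or running a program against the API, e.g.:

# stop-gap only: acknowledge the assume-no-moving-gc check for go1.19
ASSUME_NO_MOVING_GC_UNSAFE_RISK_IT_WITH=go1.19 go run .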

YANG model to Influx DB data type mappings

Hi,

I'm testing gNMI subscriptions to a Nokia SROS device in order to collect counter information and ultimately send this to Influx DB.

I've encountered a few issues with the gnmic Influx output data types and would be grateful for any feedback on the points below.

  1. With gNMI JSON encoding, I observed 64-bit YANG numeric types are mapped to strings in the InfluxDB output. RFC7951 ("JSON Encoding of Data Modeled with YANG") states 64-bit numeric values should be represented as JSON strings and I assume this is why the corresponding InfluxDB output is also a string. I also assume there is nothing gnmic can do about this as it doesn't know the original YANG type and this is a limitation of JSON encoding.

  2. With gNMI BYTES encoding, 64-bit YANG numeric values are mapped to numeric types in the Influx output as desired. However, I observed some errors in the gnmic Influx output logs, and this appears to be due to the Nokia YANG model using "type union" for some counters. The Nokia documentation says the type sent for a particular counter depends on the value at that sample. In the example below, the "operational_pir" counter is first sent as an int64, which sets the data type for the counter on the Influx server, but the next sample is of type string, and this results in an Influx "field type conflict" error because the data type has changed. Again, I assume there is nothing gnmic can do about this as it doesn't know the original YANG type is a union.

Example

gnmic Influx output for counter "operational_pir" (type int) - sample 1
> myport,path=/state/port/network/egress/queue/hardware-queue,port_id=1/x1/1/c18/1,queue_id=8,source=10.125.7.99 <more fields>,network/egress/queue/hardware_queue/operational_pir=1000000i,<more fields> 1676273979998510780

gnmic Influx output for counter "operational_pir" (type string) - sample 2
myport,path=/state/port/network/egress/queue/hardware-queue,port_id=1/x1/1/c18/2,queue_id=1,source=10.125.7.99 <more fields>,network/egress/queue/hardware_queue/operational_pir="max",<more fields> 1676273980016682122

gnmic Influx output log
2023-02-13T07:39:48Z E! [outputs.influxdb_v2] Failed to write metric to sros (will be dropped: 422 Unprocessable Entity): unprocessable entity: failure writing points to database: partial write: field type conflict: input field

Nokia YANG model type for the above counter

	typedef int-max {
		type union {
			type enumeration {
				enum "max"                          { value -1; }
			}
			type int64;
		}
		description "Maximum 64-bit value with max as -1";
	}
  3. Lastly, I tried gNMI PROTO encoding and the Influx output appears to have all fields defined as strings. In theory, I assume gnmic has full access to YANG type information via the proto file, but there is no existing gnmic functionality to set the Influx data types to anything other than string (or to deal with the union type scenario described in item 2).

gnmic Influx output log
default-1676519390,source=10.125.7.99:57400 //state/port.0/portId/value="B/1",//state/port.0/statistics/inBroadcastPackets/value="3968555",//state/port.0/statistics/inOctets/value="304900821",//state/port.0/statistics/inPackets/value="4003444",//state/port.0/statistics/inUnicastPackets/value="34889",//state/port.0/statistics/outBroadcastPackets/value="20",//state/port.0/statistics/outOctets/value="6473925",//state/port.0/statistics/outPackets/value="46584",//state/port.0/statistics/outUnicastPackets/value="46564" 1676519424779419247

In an ideal world, I would like the YANG model and Influx data types to be aligned as much as possible, and it seems BYTES encoding is the best way to achieve this at the moment. To work around the union quirk, I'm also looking at the OpenConfig YANG models rather than the Nokia models, as on the surface they appear to make less use of the problematic union types.
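One possible mitigation on the gnmic side, sketched under the assumption that the event-convert processor (and the option names shown below) exist in the version in use, is to coerce the offending field to a single type before it reaches the Influx output. Values such as "max" that cannot be converted to an integer would still need separate handling (for example, dropping or overriding them with an earlier processor).

processors:
  # force operational_pir to an integer type before writing to InfluxDB
  convert-pir:
    event-convert:
      value-names:
        - ".*operational_pir$"
      type: int

outputs:
  influx-out:
    type: influxdb
    # ... existing InfluxDB settings ...
    event-processors:
      - convert-pir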

Thanks for any feedback.

Possible to suppress empty messages?

I am dealing with a system that maps an internal database to arbitrary path elements in gNMI, where keys are part of the elem path string instead of actually being filterable keys. As a result, I am doing a LOT of filtering on events, and in some cases I end up with messages (lists of Events) that consist of just empty events after using the event-drop filter to delete unwanted data. The file output looks like this:

[
  {}
]
[
  {},
  {}
]
[
  {}
]
[
  {},
  {}
]
[
  {},
  {}
]
[
  {},
  {}
]
[
  {}
]
[
  {},
  {}
]

Is it possible to completely remove empty events from the message, and drop/suppress empty messages instead of writing them to outputs further downstream?

Docker compose container not accepting valid command string

Using the docker-compose container, the command to kick off a subscription appears to be invalid, causing the container to fail.

Running the same command ad-hoc with gnmic outside of a container works as expected.

 gnmic:
    container_name: gnmic
    image: gnmic/gnmic:latest
    ports: 
      - 9804:9804
    volumes: 
      - '/etc/gnmic/gnmic.yaml:/app/gnmic.yaml'
    command: 
      - "subscribe --config /app/gnmic.yaml"
    restart: always

Docker log output:

Error: unknown command "subscribe --config /app/gnmic.yaml" for "gnmic"
Run 'gnmic --help' for usage.
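The cause appears to be that the whole string is passed to gnmic as a single argument. In compose, a list-form command needs one list item per argument (alternatively, the command can be given as a plain, unquoted string); keeping the rest of the service definition above, the corrected part would look like:

    command:
      - subscribe
      - --config
      - /app/gnmic.yaml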

Processors help needed

Hi ,

Is it possible to take the source (usually this holds the hostname of the gNMI device) and split it into fields that I can write as tags?

Example hostname:
region-location-rack-role-foo-01

I want tags:

region=region
location=location
rack=rack
role=role
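A rough sketch of how this might be approached with the event-extract-tags processor, under the assumption that it can be pointed at the source tag through a tag-names-style option (the exact option name should be verified against the processor documentation) and that the hostname fields are dash-separated:

processors:
  split-source:
    event-extract-tags:
      # tag-names is an assumed option name; check the event-extract-tags docs.
      # Each named capture group becomes a tag on the event.
      tag-names:
        - '(?P<region>\w+)-(?P<location>\w+)-(?P<rack>\w+)-(?P<role>\w+)-.*'

The processor would then be referenced from the output's event-processors list so it runs before the data is written.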

gnmic api metrics

gnmic_http_loader_number_of_loaded_targets
is always 0, even though I have successful gRPC subscriptions ongoing for targets that I loaded via the HTTP loader.

If possible, I would also like to get metrics on when I am unable to subscribe or connect.
Maybe this could be pushed to each output as an up metric as well, following the Prometheus convention for default generated metrics,
e.g. for the following events:

[gnmic] failed to initialize target "switch1": 10.10.10.50:50051: context deadline exceeded
[gnmic] retrying target "switch1" in 10s

Could the metric grpc_client_handled_total be made more detailed, to also include which target and which subscription it relates to?
grpc_client_handled_total{grpc_code="OutOfRange",grpc_method="Subscribe",grpc_service="gnmi.gNMI",grpc_type="bidi_stream"} 5776

[gnmic] target "switch2": subscription system_resources rcv error: rpc error: code = OutOfRange desc = Maximum number of Subscribe requests reached
[gnmic] target "switch2": subscription system_resources rcv error: retrying in 10s

Troubleshooting RAM Usage

Hi,

Great work on gNMIc, we've been using it in dial-out mode with processors for almost a year now!
Recently, we noticed an issue where the gnmic process grabs a lot of RAM and eventually fills up the VM (in 15-20 minutes).
I think it started once we deployed more than 230 7250 IXR-e nodes. The forecast for this year is to double this number.
This is the currently used node template: https://github.com/ebinachan/telemetry/blob/master/sros/ixr_dialout_telemetry_template.txt

What would be the best way to troubleshoot this?

Many thanks,
EC
20230326_gnmic_RAM_tshoot_32gb

Feature request: Make username,password optional parameter

If some software follows the gNMI/gNOI spec to implement certain services (e.g. certificate management), there is no need to add username-based authentication to that service, as authentication is ensured by other means.

In such a case, it becomes cumbersome to use gnmic, as it always prompts for a username and password.

It would be nicer to make the username/password prompt optional, or to introduce a flag that removes the need for a username and password.

Bump openconfig/gnmi dep and get rid of Path Alias

As per the change in upstream openconfig/gnmi, path aliases have been removed (gnmi 0.8.0) - openconfig/reference#148

The latest Go package of openconfig/gnmi already lacks those alias fields (https://pkg.go.dev/github.com/openconfig/[email protected]/proto/gnmi#SubscriptionList), which makes openconfig/gnmic/api unusable for programs that depend on the latest openconfig/gnmi Go package.

package main

import (
	_ "github.com/openconfig/gnmic/api"
)

func main() {
}

yields

❯ go run .
# github.com/openconfig/gnmic/api
/root/openconfig/gnmic/api/gnmi_msgs.go:1025:19: msg.Subscribe.UseAliases undefined (type *gnmi.SubscriptionList has no field or method UseAliases)
/root/openconfig/gnmic/api/gnmi_msgs.go:1257:8: msg.Alias undefined (type *gnmi.Notification has no field or method Alias)

This is with the github.com/openconfig/gnmi v0.0.0-20220920173703-480bf53a74d2 dependency.

gnmic -> aws msk (kafka) -> apache druid

Fantastic project!

Having some issues getting data through into Apache Druid. The data arrives just fine, but Druid cannot handle batched messages, or any JSON that is not an object. Sending gnmic data in json format allows me to process the messages in Druid, but I lose all the event-format goodness of gnmic (like jq, event processors, etc.). Is there a way with gnmic to de-batch messages and send exactly one event per message? I am aware message transformations can be done on the Kafka side, but this is not an option at this stage.

An example of an event message generated by gnmic (it's an array of events in one message):

[
  {
    "timestamp": 1680417104877454555,
    "tags": {
      "interface_name": "Ethernet34/1",
      "source": "node1:6030"
    },
    "values": {
      "/interfaces/interface/state/counters/in-octets": 105458054050005
    }
  },
  {
    "timestamp": 1680417104877454555,
    "tags": {
      "interface_name": "Ethernet34/1",
      "source": "node1:6030"
    },
    "values": {
      "/interfaces/interface/state/counters/in-unicast-pkts": 135224098913
    }
  }
]

What I need is the array exploded into one message per event:

message 1:
  {
    "timestamp": 1680417104877454555,
    "tags": {
      "interface_name": "Ethernet34/1",
      "source": "node1:6030"
    },
    "values": {
      "/interfaces/interface/state/counters/in-octets": 105458054050005
    }
  }

message 2:
  {
    "timestamp": 1680417104877454555,
    "tags": {
      "interface_name": "Ethernet34/1",
      "source": "node1:6030"
    },
    "values": {
      "/interfaces/interface/state/counters/in-unicast-pkts": 135224098913
    }
  }

Ideally the "values" should be exploded too, but when using event output format I have not seen multiple values per event (although the key is "values" it's an object, not an array so it should hopefully be ok).

FR: add `with-module-names` to `gnmic path` command.

The gnmic path command allows extracting paths from YANG models in different formats. Currently, the following flavours are available:

[
  {
    "path": "/acl/capture-filter/ipv4-filter/entry[sequence-id=*]/description",
    "path-with-prefix": "/srl_nokia-acl:acl/capture-filter/ipv4-filter/entry[sequence-id=*]/description",
    "type": "description",
    "description": "Description string for the filter entry",
    "namespace": "urn:srl_nokia/acl"
  }
]

Unfortunately, path-with-prefix is not the fully qualified path, as it needs to contain module names rather than prefixes.

E.g. srl_nokia-if is a prefix, whereas srl_nokia-interfaces is a module name.

No TLS options for Kafka input

There are no TLS configuration options for Kafka as an input, which leaves my gnmic consumer unable to connect to my Kafka cluster.

gnmic logs
2023/03/07 03:07:49.521966 /home/runner/go/pkg/mod/github.com/!shopify/[email protected]/broker.go:1285: [kafka_input] Failed to read SASL handshake header : unexpected EOF

Kafka logs
[2023-03-07 03:04:30,713] INFO [SocketServer brokerId=1] Failed authentication with hostname1.example.com/10.10.10.10 (SSL handshake failed) (org.apache.kafka.common.network.Selector)

Feature request: SNMP trap output

Some deployments still use SNMP traps for monitoring/alerting purposes. An 'SNMP' output would be similar to the existing 'UDP' output, but formatted as an SNMP trap packet (sent over UDP to port 162 by default).

See https://www.rfc-editor.org/rfc/rfc3416#section-4.2.6 and https://www.dpstele.com/snmp/snmpv3-trap-format.php

A basic inform PDU with a configurable SNMP OID would suffice, or even a limited hardcoded set of well-known traps such as 'port down' (.1.3.6.1.6.3.1.1.5.3) and 'port up' (.1.3.6.1.6.3.1.1.5.4), as found at https://www.arubanetworks.com/techdocs/ClearPass/6.9/PolicyManager/Content/CPPM_UserGuide/SNMP_MIB_Events_Errors/Network_Interface_Status_Traps.htm

gnmic not re-attempting targets when unavailable

I've noticed that when devices become unavailable, or an unexpected RPC error occurs, there is no attempt to continually retry the subscriptions. This is particularly concerning because gnmic can't recover in a situation where a device is down for planned maintenance, or where an individual RPC call simply fails for some reason.

Error messages where this occurs can be seen below:

2023/03/27 15:04:52.876435 [gnmic] target "Switch01": subscription intf_counters rcv error: retrying in 10s
2023/03/27 15:04:52.876469 [gnmic] target "Switch01": subscription cpu rcv error: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: ENHANCE_YOUR_CALM, debug data: "too_many_pings"

The configuration appears as follows:

  Switch01:
    address: Switch01.example.com:6030
    timeout: 120s
subscriptions:
  cpu:
    paths:
      - /components/component/cpu/
    mode: STREAM
    stream-mode: sample
    sample-interval: 10s
    encoding: bytes
  intf_counters:
    paths:
      - /interfaces/interface/state/
    mode: STREAM
    stream-mode: sample
    sample-interval: 10s
    encoding: bytes
  bgp_stats:
    paths:
      - /network-instances/network-instance[name=*]/protocols/protocol[identifier=BGP][name=BGP]/bgp/neighbors/neighbor[neighbor-address=*]/afi-safis/afi-safi[afi-safi-name=IPV4_UNICAST]
    mode: STREAM
    stream-mode: sample
    sample-interval: 10s
outputs:
  prometheus:
    type: prometheus
    listen: 0.0.0.0:9804
    path: /metrics
    expiration: 60s
    timeout: 30s

Documentation issue

The extract-tags processor example in the docs does not work: https://gnmic.openconfig.net/user_guide/event_processors/event_extract_tags/

processors:
  # processor name
  sample-processor:
    # processor type
    event-extract-tags:
      value-names:
        - `/(\w+)/(?P<group>\w+)/(\w+)`

The regex is wrapped in backticks, which throws an error:

failed loading config file: While parsing config: yaml: line 48: found character that cannot start any token
Error: no subscriptions or inputs configuration found

Remove the backticks and it works.
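For reference, the same snippet with the backticks removed and the regex single-quoted so that YAML parses it:

processors:
  # processor name
  sample-processor:
    # processor type
    event-extract-tags:
      value-names:
        - '/(\w+)/(?P<group>\w+)/(\w+)'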

Also, can I suggest more examples for this feature? I am struggling to understand how it works.

Support of mixed schema requests in Set

It is not easy to figure out from the examples how to do a mixed-schema Set request.

Since the origin:cli portion is a byte blob, I was thinking it might be nice to have an option to say something like:

./gnmic set -a 192.168.16.50:6030 --skip-verify -u admin -p admin --replace / --replace-file /tmp/setrequest.json

With this approach you would need to properly escape the "cli" config into a JSON string to put into the request file.
It would be nicer to have the OC portion in this file but a separate file for the CLI config:

./gnmic set -a 192.168.16.50:6030 --skip-verify -u admin -p admin --replace / --replace-file /tmp/setrequest.json --replace-cli-origin "cli" --replace-cli-file /tmp/vendor-config-file.txt

Proposal for plugin support.

Background

I was listening to Roman's presentation at DKNOG 2023 and was smitten by gNMIc. My employer (Netnod) has been looking for a gNMI system that we can utilize for our ST needs, and gNMIc fit our purpose fast.

Some context

My first task was to reimplement our statistics solution, which currently uses an SNMP exporter for Prometheus, to use gNMIc's Prometheus output. Basic Prometheus output works really well, but we have one more feature that would be good to have: appending a customer-specific identifier to a metric as a Prometheus label. This identifier has to be fetched from a GraphQL API that contains the device + interface mapping to customer ID. This is where I got stuck.

To get around this issue I wrote a custom formatter which does what I wanted, and that was not very complex. But then we run into the issue of maintaining our own fork of gNMIc, which is something we don't really like to do if we don't have to; we are a small company with few devs and SREs. So that's why I'm proposing this.

I was initially looking at the Go plugin package (https://pkg.go.dev/plugin), but it has its drawbacks: it can get a bit wonky when you run a plugin compiled with a different version of Go, etc.

After talking quickly with Roman on Slack about this, he pointed me to https://github.com/hashicorp/go-plugin, which does the same thing but uses RPC calls over local sockets.

The idea I have is to keep the current formatters, actions, inputs and outputs as they are and keep them included in the current code base, but also allow users of gNMIc to write their own modules to be loaded at runtime. This way one could keep highly specific implementations in their own modules without maintaining a separate fork, with all of the issues that brings.

I have not started writing any code for plugin support, therefore it is a bit vague how it would be implemented. I'm very happy to develop a working example if the maintainers deem this a worthy path to pursue and think it has any chance of being accepted into mainline. I don't want to waste my time implementing this if there is no interest from the maintainers in this feature.

Null value returned when value is set using double_val

ISSUE

As per the gNMI specification, section 2.2.3 (Node Values), float values should be stored in the double_val field.
However, gNMIc reports null values in that case.
This is a recent change in the specification and needs to be implemented in gNMIc.

Possible Solution

func getValue(updValue *gnmi.TypedValue) (interface{}, error) {

This function doesn't have a mapping for double_val; adding one should resolve the problem.
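For illustration, a minimal standalone sketch (not the actual gnmic code) of handling the double_val case with the openconfig/gnmi Go types, assuming a gnmi package version recent enough to include the double_val field:

package main

import (
	"fmt"

	"github.com/openconfig/gnmi/proto/gnmi"
)

// getDouble shows the missing mapping: when the TypedValue carries a
// double_val, return the float64 instead of falling through to nil.
func getDouble(tv *gnmi.TypedValue) (interface{}, error) {
	switch tv.GetValue().(type) {
	case *gnmi.TypedValue_DoubleVal:
		return tv.GetDoubleVal(), nil
	default:
		return nil, fmt.Errorf("unhandled TypedValue type %T", tv.GetValue())
	}
}

func main() {
	v, err := getDouble(&gnmi.TypedValue{Value: &gnmi.TypedValue_DoubleVal{DoubleVal: 2.5}})
	fmt.Println(v, err)
}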

General Question Tunnel_Server.MD

Learning the ropes here; I am not a developer, but I am interested in setting up a local environment to learn. I have a CentOS 7 environment and I downloaded gnmic. I have read through parts of the documentation and I am having a difficult time figuring out how to configure a tunnel-server gNMI instance. Where can I configure these fields? Do I need to create a new tunnel.server.config.yaml file, or does a default already exist? Where can I find it? Sorry if these are elementary questions.

insecure: true
username: admin
password: admin

subscriptions:
  sub1:
    paths:
      - /state/port
    sample-interval: 10s

gnmi-server:
  address: :57400

tunnel-server:
  address: :57401
  targets:
    - id: .*
      type: GNMI_GNOI
      config:
        subscriptions:
          - sub1
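For what it's worth, the block above is an ordinary gnmic configuration file: it can be saved under any name (tunnel.server.config.yaml is just an example) and passed to gnmic via the global --config flag when starting the subscribe command, roughly like this:

# file name is arbitrary; point gnmic at it explicitly
gnmic --config tunnel.server.config.yaml subscribe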

Using format proto to stdout causes panic

According to the docs this should not be allowed, but it causes a panic if you try:

outputs:
  output1:
    type: file # output type
    file-type: stdout # or stderr
    format: proto

Panic caused:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x2cede70]

goroutine 62 [running]:
golang.org/x/sync/semaphore.(*Weighted).Acquire(0x0, {0x42276b8, 0xc0002d7940}, 0x1)
        /home/runner/go/pkg/mod/golang.org/x/[email protected]/semaphore/semaphore.go:41 +0x50
github.com/openconfig/gnmic/outputs/file.(*File).Write(0xc0000a0d20, {0x42276b8?, 0xc0002d7940?}, {0x41fb460?, 0xc0001f3f40}, 0x0?)
        /home/runner/work/gnmic/gnmic/outputs/file/file_output.go:216 +0x9a
github.com/openconfig/gnmic/app.(*App).Export.func2({0x423d4a0?, 0xc0000a0d20?})
        /home/runner/work/gnmic/gnmic/app/collector.go:171 +0x82
created by github.com/openconfig/gnmic/app.(*App).Export
        /home/runner/work/gnmic/gnmic/app/collector.go:169 +0x38d
