ibm / ibm-spectrum-scale-bridge-for-grafana

This tool allows the IBM Storage Scale users to perform performance monitoring for IBM Storage Scale devices using third-party applications such as Grafana or Prometheus software.

License: Apache License 2.0

Python 97.29% Dockerfile 2.71%
grafana performance-monitoring spectrum-scale gpfs cloudnative grafana-datasource k8s-integration openshift-integration data-visualization ibm-cnsa

ibm-spectrum-scale-bridge-for-grafana's Introduction


The IBM Storage Scale bridge for Grafana can be used to explore IBM Storage Scale performance data on Grafana dashboards.

Grafana Bridge is a standalone Python application. It translates the IBM Storage Scale metadata and performance data collected by the IBM Storage Scale performance monitoring tool (ZiMon) into query requests accepted by Grafana's integrated OpenTSDB plugin.

Getting started

Installation guides:

The latest article:

Find more helpful information about the bridge usage in the project Wiki

Contributing

At this time, third party contributions to this code will not be accepted.

License

IBM Storage Scale bridge for Grafana is licensed under version 2.0 of the Apache License. See the LICENSE file for details.

ibm-spectrum-scale-bridge-for-grafana's People

Contributors

dependabot[bot] avatar dunnevan avatar helene avatar imgbotapp avatar stevemar avatar


ibm-spectrum-scale-bridge-for-grafana's Issues

SERVER default not working in container

The documentation states that the bridge should run on a collector node. So I tried to run the container there, and all I got was: WARNING - Perfmon RESTcall error: Server responded: 503 Connection refused from server

After searching for quite some time I finally found the reason: the container needs to reach the server's IP from inside the container. It does not work with the default of 0.0.0.0, nor (of course) with any of the server's own IPs, because from inside the container the host is reachable at a different address. I finally got it working with the internal address automatically provided by podman:

podman run -dt -p 4242:4242 -e SERVER=host.containers.internal -e APIKEYVALUE=... --name grafana_bridge localhost/bridge_image:latest

host.containers.internal is the magic here. So I propose changing the default in the Dockerfile to that address and also adding a comment about this to the config file.
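A quick way to confirm which address the bridge is reachable at from inside the container is a small TCP probe. This is a hypothetical helper, not part of the bridge; host and port must match your setup:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run inside the container: the host must be addressed by the name
# podman injects, not by 0.0.0.0 or the host's external IP, e.g.:
# can_reach("host.containers.internal", 4242)
```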

dashboards use random walk input

As a newbie Grafana user, it's been very difficult to understand why the provided dashboard set doesn't work. The dashboards all seem to use random-walk data. None of the "default dashboards" prompt for a data source; others prompt, but still use random walk.

Importing, for example, the

example_dashboards_bundle/Predefined\ Basic\ Dashboards/File System Capacity View-1554293302658.json

prompts me for a data source, but when checking the resulting graphs, they still list random walk.

This is with Grafana v7.0.3

cannot get metadata from pmcollector

When starting the zimonGrafanaIntf.py application, it reports the following and exits after retrying:
"QueryHandler: getTopology returns no data"; the server response is "503 Connection refused from server".

I checked that the GPFS GUI and the pmcollectors are active and running ("systemctl status gpfsgui" and "systemctl status pmcollector"), and zimonGrafanaIntf.py is started on the same host as the GPFS GUI and pmcollector.

Is there something I missed with the configuration?
What is the serverPort in the "GPFS Server" section of config.ini? Is there any CLI command I can use to check which port the pmcollector listens on for queries?

I also found that the query REST API is /sysmon/v1/perfmon/topo, but this API can't be found in the GPFS doc https://www.ibm.com/docs/en/storage-scale/5.1.9?topic=endpoints-performance-monitoring

Various grafana-bridge Dockerfile updates

  • Lock base UBI8.4 image to ubi8.4-206
  • Produce requirements.txt with versioned packages that can be used for python package installation during image build.
  • Copy apache2 license into grafana-bridge container

Grafana operator and supported Grafana version

As discussed:

The YAML example grafana-operator-subscription.yaml points to the alpha channel, which is version 3.x of the operator,
while the operator text recommends using v4.

In the support matrix we state support for Grafana 8 and above.
However, the current Grafana version in OpenShift is v7.5.11.
Please separate the support matrix for CNSA and the standalone/classic setup.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator-subscription
  namespace: grafana-for-cnsa
spec:
  channel: **alpha => v4**
  name: grafana-operator 
  source: community-operators 
  sourceNamespace: openshift-marketplace 

Create and deploy a default dashboard automatically in Grafana instance running for CNSA.

The example dashboard bundle was developed for a classic Spectrum Scale cluster, so most of the dashboards do not really make sense in a cloud-native environment.

So it would be great if we could create cloud-native examples.
At the moment we need to import the JSONs manually; would it be possible to offer the 'default dashboard' deployment together with the Grafana instance deployment?

Mount a volume with the mmsdrfs/ZIMonsensors.cfg file

As discussed in issue #169, it would be useful to allow the container to mount a volume with the mmsdrfs/ZIMonsensors.cfg file rather than having to integrate the file, manually or automatically, before building the container, especially since this file can be updated.

Failed to connection to pmcollector: name 'request' is not defined, quitting

Dear IBM support team, I am running into an issue when trying to use the Scale bridge for Grafana. The Python script returns the error "failed to connect to pmcollector". I wonder what the meaning of apikeyname and apikeyvalue is. How can I get the correct values to connect to the pmcollector? My GPFS version is 5.0.1.1.

Restricted port number prohibits running multiple instances of the bridge

Description
The port number argument is now limited to only two choices, 4242 and 8443, where the 8443 port selects a TLS connection.
This makes it impossible to run multiple instances of the bridge on one node, because every instance needs its own port.

To Reproduce
Steps to reproduce the behavior:
call zimonGrafanaIntf.py with -p 4243

Expected behavior
Being able to run the bridge with an arbitrary port, as it was possible in old versions.
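The restriction and a possible relaxation can be sketched with argparse. This is a hypothetical reconstruction; the bridge's actual argument parsing may differ:

```python
import argparse

# Hypothetical reconstruction of the current restriction: a fixed
# "choices" list rejects every port except 4242 and 8443.
strict = argparse.ArgumentParser()
strict.add_argument("-p", "--port", type=int, choices=[4242, 8443], default=4242)

# Replacing the fixed list with a range check would restore the old
# behavior of accepting an arbitrary (non-privileged) port.
def port_number(value: str) -> int:
    p = int(value)
    if not 1024 <= p <= 65535:
        raise argparse.ArgumentTypeError("port out of range: %d" % p)
    return p

flexible = argparse.ArgumentParser()
flexible.add_argument("-p", "--port", type=port_number, default=4242)

print(flexible.parse_args(["-p", "4243"]).port)  # prints 4243
```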

Desktop (please complete the following information):
We run several instances of the Bridge for our various clusters, most of them on one node.

  • IBM Spectrum Scale system type:
    • [*] IBM Spectrum Scale classic cluster.
    • IBM Spectrum Scale Container Native Storage Access (CNSA).
    • [*] IBM Elastic Storage Server (ESS).
  • IBM Spectrum Scale cluster version: 5.0.5.1
  • IBM Spectrum Scale bridge for Grafana version: 6.1.3
  • Grafana version: 8.0.3
  • OS Grafana is installed on: CentOS 7

Additional context
Add any other context about the problem here.

How to increase GUI metrics refresh frequency?

Hello, IBM support team. Is there any way to increase the metrics data refresh frequency? We have a problem where the timestamp of the data displayed in the GUI is 5 minutes behind the system time.

initialization failures with 7.0.4

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • IBM Spectrum Scale system type:
    • IBM Spectrum Scale classic cluster.
    • IBM Spectrum Scale Container Native Storage Access (CNSA).
    • IBM Elastic Storage Server (ESS).
  • IBM Spectrum Scale cluster version:
  • IBM Spectrum Scale bridge for Grafana version:
  • Grafana version:
  • OS Grafana is installed on:

Additional context
Add any other context about the problem here.

sensor for some metrics is disabled

Hi, IBM support team. I have hit an issue where, when I query some metrics, the bridge server returns a 400 code. The log prints that the xxx metrics are disabled. I have modified pmcollector.cfg to enable the metric, but this still does not solve the issue.

Import example dashboards instructions

Hi, it is impractical to import each of the many example dashboards individually, yet this is the workflow the README links to. I therefore suggest a pointer to the file /etc/grafana/provisioning/dashboards/sample.yaml, which gives an example of adding a folder of dashboards for automatic provisioning. Cheers, Leo
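For reference, a dashboard provider in that sample.yaml format looks roughly like the following sketch; the provider name, folder, and path are assumptions to adapt locally:

```yaml
apiVersion: 1

providers:
  - name: 'scale-example-dashboards'   # arbitrary provider name
    folder: 'IBM Storage Scale'        # Grafana folder to create
    type: file
    allowUiUpdates: true
    options:
      # Directory into which the example JSON dashboards were copied
      path: /var/lib/grafana/dashboards/scale
```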

Error starting grafana bridge v7.1.0

Describe the bug
Grafana-bridge v7.1.0 runs into the following error during start:

# python3 zimonGrafanaIntf.py -c 10
Traceback (most recent call last):
  File "zimonGrafanaIntf.py", line 37, in <module>
    from opentsdb import OpenTsdbApi
  File "/root/ibm-spectrum-scale-bridge-for-grafana/source/opentsdb.py", line 27, in <module>
    from collector import SensorCollector, QueryPolicy
  File "/root/ibm-spectrum-scale-bridge-for-grafana/source/collector.py", line 121, in <module>
    class SensorTimeSeries(object):
  File "/root/ibm-spectrum-scale-bridge-for-grafana/source/collector.py", line 153, in SensorTimeSeries
    def _setup_static_metrics_data(self, metric_names: list[str]):
TypeError: 'type' object is not subscriptable
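The trigger here is most likely the `list[str]` annotation: subscripting the builtin `list` type only works on Python 3.9+ (PEP 585), so on Python 3.6-3.8 the class body fails exactly like this at import time. A minimal sketch of a portable spelling follows; the function name mirrors the one in the traceback, but this is an illustration, not the bridge's actual code:

```python
from typing import List

# On Python < 3.9 this annotation raises at definition time:
#   def f(names: list[str]): ...
#   TypeError: 'type' object is not subscriptable

# Portable spelling via typing.List works on 3.6+:
def setup_static_metrics_data(metric_names: List[str]) -> int:
    return len(metric_names)

# Alternative per-module fix: make "from __future__ import annotations"
# the first statement, which defers evaluation of all annotations.

print(setup_static_metrics_data(["gpfs_ns_bytes_read", "cpu_user"]))  # prints 2
```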

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repository # git clone https://github.com/IBM/ibm-spectrum-scale-bridge-for-grafana.git bridge-71
  2. Switch to branch v7.1 git switch v7.1
  3. Start bridge python3 zimonGrafanaIntf.py -c 10

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • IBM Storage Scale system type:
    • IBM Storage Scale classic cluster.
    • IBM Storage Scale Container Native Storage Access (CNSA).
    • IBM Elastic Storage Server (ESS).
  • IBM Storage Scale cluster version: 5.1.8.1
  • IBM Storage Scale bridge for Grafana version: v7.1.0
  • Grafana version: 9.5
  • OS Grafana is installed on: RHEL85

Additional context
Add any other context about the problem here.

GrafanaBridge has no systemd unit files shipped

Is your feature request related to a problem? Please describe.
On Linux systems using systemd, we are missing a deployed systemd unit file for the zimonGrafana bridge.

Describe the solution you'd like
Ship a predeployed systemd unit file under /usr/lib/systemd/system/zimonGrafana.service.

Describe alternatives you've considered

Additional context

  • Please generate a unit file that restarts automatically and depends on the necessary services (gpfs, pmcollector)
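A starting point could look like the following sketch; the description, paths, arguments, and dependencies are assumptions and need to be adapted to the local installation:

```ini
# /usr/lib/systemd/system/zimonGrafana.service (hypothetical sketch)
[Unit]
Description=IBM Storage Scale bridge for Grafana
After=network-online.target pmcollector.service
Wants=network-online.target
Requires=pmcollector.service

[Service]
Type=simple
# Adjust to wherever the bridge sources are installed.
WorkingDirectory=/opt/ibm-spectrum-scale-bridge-for-grafana/source
ExecStart=/usr/bin/python3 zimonGrafanaIntf.py
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```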

Bridge fails to start with " name 'requests' is not defined"

When I try and start the bridge on my collector node, I see the following error:

[root@nrg1-zimon1 source]# python3 zimonGrafanaIntf.py
2021-10-05 12:02 - INFO     -  *** IBM Spectrum Scale bridge for Grafana - Version: 7.0.4 ***
2021-10-05 12:02 - ERROR    - Failed to initialize connection to pmcollector: name 'requests' is not defined, quitting
[root@nrg1-zimon1 source]#
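A NameError (rather than an ImportError) at first use usually means the import was wrapped in a try/except that swallowed the failure at startup. Below is a minimal reconstruction of that failure mode; the module name is deliberately fictitious. In the bridge's case the missing package is `requests`, so installing it for the interpreter that actually runs the bridge (e.g. `pip3 install requests`) is the usual fix:

```python
# A guarded import hides a missing dependency until its first use.
try:
    import nonexistent_dependency as client  # stands in for "requests"
except ImportError:
    pass  # swallowed -- startup continues without the module

def query_pmcollector(url):
    # "client" was never bound, so this raises NameError, not ImportError.
    return client.get(url)

try:
    query_pmcollector("http://localhost:4242/api/suggest")
except NameError as err:
    print(err)  # prints: name 'client' is not defined
```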

GPFS AFM Metrics.

Hi,

As agreed, I have tried to come up with the metrics we need in order to get AFM into the Grafana dashboard.

I'm not sure which to choose exactly; maybe you have some information about that?

GPFSAFM, GPFSAFMFS, GPFSAFMFSET?

Which is best to use for monitoring AFM in grafana?

We usually use the command "mmafmctl getstate" to see the state, queue, and filesets used for AFM. Sometimes we use mmfsadm dump afm to see more detailed information, as it also displays split writes, current files in queue/in flight (per node), and much more.

My best guess would be these are required as a minimum, but not 100% sure:

gpfs_afm_queued
gpfs_afm_inflight
gpfs_afm_complete
gpfs_afm_errors

I think the above is a good pointer for checking the health of AFM on the different filesets.

Here is my initial idea of metrics to use for basic performance monitoring:

gpfs_afm_bytes_read
gpfs_afm_bytes_written
gpfs_afm_ops_sync
gpfs_afm_ops_sent
gpfs_afm_bytes_pending
gpfs_afm_avg_time
gpfs_afm_tot_write_time
gpfs_afm_conn_broken
gpfs_afm_used_q_memory
gpfs_afm_num_queued_msgs

If you have better ideas, I'm open to them; we just need to make sure that we can see the current state of AFM in Grafana as well as monitor the queue, performance, etc.

/Andi Christiansen

pmcollector version 5.1.4-0 uses unix sockets

Hello,
After upgrading to the 5.1.4 version of Spectrum Scale, the pmcollector no longer provides a query interface, only a Unix socket.
Is it possible to use the bridge with this Unix socket (/var/run/perfmon/pmcollector.socket), or is this version not supported yet?
Thanks!
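Whether the bridge can be adapted is for the maintainers to answer. Purely to illustrate the mechanics, Python's http.client can be pointed at a Unix domain socket as sketched below; it is an assumption, unverified here, that the pmcollector socket speaks HTTP at all:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects over a Unix domain socket."""

    def __init__(self, socket_path: str, timeout: float = 10.0):
        # The host argument is only used for the Host: header.
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.settimeout(self.timeout)
        self.sock.connect(self.socket_path)

# Usage on a node where the socket exists:
# conn = UnixHTTPConnection("/var/run/perfmon/pmcollector.socket")
# conn.request("GET", "/")
# print(conn.getresponse().status)
```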

ZimonGrafana bridge doesn't start on rhel8.8 with python3.8 installed

Describe the bug
We installed python3.8 on rhel8.8 and see that systemctl start zimonGrafana does not start the service.

To Reproduce
Steps to reproduce the behavior:

  1. install python3.8 with dnf install python38
  2. Start the systemctl unitfile zimonGrafana.service
  3. display the status with systemctl status zimonGrafana.service
  4. See error on journalctl -e

Expected behavior
Active zimonGrafana.service

Screenshots
Feb 29 22:28:38 sap00l11.lan.huk-coburg.de zimonGrafana[2802356]: 2024-02-29 22:28 - MainThread - INFO - *** IBM Storage Scale bridge for Grafana - Version: 7.1.2 ***
Feb 29 22:28:38 sap00l11.lan.huk-coburg.de zimonGrafana[2802356]: 2024-02-29 22:28 - MainThread - ERROR - Failed to initialize connection to pmcollector: name 'requests' is not defined, quitting

Desktop (please complete the following information):

  • IBM Storage Scale system type:
    • [x ] IBM Storage Scale classic cluster.
    • IBM Storage Scale Container Native Storage Access (CNSA).
    • IBM Elastic Storage Server (ESS).
  • IBM Storage Scale cluster version: 5.1.9.2
  • IBM Storage Scale bridge for Grafana version: 7.1.2
  • Grafana version: 10.1.1
  • OS Grafana is installed on: rhel8.8
  • OS Bridge is installed on: rhel8.8 EUS

Additional context
Add any other context about the problem here.

Where can I find the meaning of all metrics?

Hi, IBM support team. I have installed scale-bridge-for-grafana in my environment, and I send an /api/suggest request to get all metrics. I wonder where I can look up what those metrics mean.

Feedback on Bridge

Hi.

Great work with the Grafana Bridge!

A couple of places maybe need to be fixed:

    • [ ] The term CNSS is used in several places, in code and documentation. Shouldn't this be CNSA (Container Native Storage Access)?
    • [ ] In the documentation, there is no clear guide for how to build the container and run it on OCP.
    • [ ] The version for the datasource in Grafana-bridge is 2.1. Maybe add to the documentation that this should be changed to ==2.3?
    • [x] Information that the OC namespace needs to change; this is already updated.

Issue with gpfs_filesetquota metrics collect

Hello,

I use metrics_gpfs_filesetquota and I have a strange issue I can't explain: there are holes in my metrics collection (always 5 minutes), something I don't see with the other endpoints.


  - job_name: 'gpfs_filesetquota'
    scrape_interval: 5m
    scrape_timeout: 2m
    metrics_path: '/metrics_gpfs_filesetquota'
    static_configs:
      - targets: ['localhost:9250']
        labels:
          env: qualif
          cluster: gpfs
    honor_timestamps: True
    scheme: https
    tls_config:
      cert_file: /etc/bridge_ssl/certs/cert_brige_ibm.pem
      key_file: /etc/bridge_ssl/certs/privkey_brige_ibm.pem
      insecure_skip_verify: True

I don't think I made a configuration error, but if you have an idea, I'll take it.

openshift-marketplace is not available on a KVM-based installation

Describe the bug
Trying to use Grafana/Grafana Bridge on a bare-metal/KVM-based OCP environment, but openshift-marketplace is not available to install the grafana-operator.

Expected behavior
openshift-marketplace available or grafana-operator installed

Screenshots
[root@helper ~]# oc get packagemanifests -n openshift-marketplace |grep grafana
No resources found in openshift-marketplace namespace.
[root@helper ~]# oc get packagemanifests -n openshift-marketplace
No resources found in openshift-marketplace namespace.

Desktop (please complete the following information):

  • IBM Spectrum Scale system type:
    • IBM Spectrum Scale classic cluster.
    • IBM Spectrum Scale Container Native Storage Access (CNSA).
    • IBM Elastic Storage Server (ESS).
  • IBM Spectrum Scale cluster version: 5.1.6
  • IBM Spectrum Scale bridge for Grafana version:
  • Grafana version:
  • OS Grafana is installed on:

Additional context
Add any other context about the problem here.

Grafana dashboard update.

Hi Helene,

As agreed, here is a ticket to track the updates for the grafana dashboard.

/Andi Christiansen

No sensor configuration data parsed

Hi,
I am attempting to test "classic IBM Spectrum Scale devices", but an error occurred. Here is some info:

grafana_server # cat  /var/log/ibm_bridge_for_grafana/zserver.log 
2023-03-04 10:32:00,015 - bridgeLogger - INFO     -  *** IBM Spectrum Scale bridge for Grafana - Version: 7.0.9-dev ***
2023-03-04 10:32:00,015 - bridgeLogger - MOREINFO - zimonGrafanaItf invoked with parameters:
 port=4242
protocol=http
server=10.X.X.1
serverPort=4739
retryDelay=60
apiKeyName=scale_grafana
caCertPath=False
includeDiskData=False
logPath=/var/log/ibm_bridge_for_grafana
logLevel=10
logFile=zserver.log
2023-03-04 10:32:00,016 - bridgeLogger - DEBUG    - readSensorsConfigFromMMSDRFS attempt to read /var/mmfs/gen/mmsdrfs
2023-03-04 10:32:00,016 - bridgeLogger - DEBUG    - invoke parseSensorsConfig
2023-03-04 10:32:00,016 - bridgeLogger - MOREINFO - Server internal error occurred. Reason: No sensor configuration data parsed
2023-03-04 10:32:00,016 - bridgeLogger - ERROR    - Metadata could not be retrieved. Check log file for more details, quitting

storage node info

# netstat -ntlp | grep pmcollector
tcp        0      0 0.0.0.0:9085            0.0.0.0:*               LISTEN      782468/pmcollector  
tcp        0      0 0.0.0.0:4739            0.0.0.0:*               LISTEN      782468/pmcollector  

mmperfmon automanaged 

How do I need to adjust this? I hope to get your help. Thanks!

new dashboard for ESS NVME and Hybrid models

common metrics for ESS (pair or shared recoverygroup)

block:
-- all or selected pdisk IOPS, bandwidth
-- Declustered Array (DA)-wise IOPS, bandwidth == total sum of IOPS/BW of all (or selected) vdisks (NSDs) from a DA
-- all vdisk of an ESS (building block)== total IOPS, bandwidth

network:
-- read /write .. in /out pkts and bandwidth - Ethernet
-- read/write .. in/out pkts and bandwidth - RoCE (RDMA over Converged Ethernet)
-- read /write .. in /out pkts and bandwidth - Infiniband

How does the Prometheus integration work?

Hello team,

I see that the IBM bridge for Grafana supports Prometheus.

Is it possible to explain how it works (maybe update the README)? And how to use it?

And can you update your Dockerfile to use the Prometheus exporter API?

Regards,

GPFSNSDFS sensor metrics are not found by the bridge

When I do a query for some metric, e.g.
mmperfmon query 'mynode|GPFSNSDFS|exfld|gpfs_nsdfs_bytes_read'
I get reasonable output:
Legend:
1: mynode|GPFSNSDFS|exfld|gpfs_nsdfs_bytes_read
Row Timestamp gpfs_nsdfs_bytes_read
1 2020-08-04-16:14:47 0
2 2020-08-04-16:14:48 0
3 2020-08-04-16:14:49 0
4 2020-08-04-16:14:50 0
5 2020-08-04-16:14:51 0
6 2020-08-04-16:14:52 0
7 2020-08-04-16:14:53 0
8 2020-08-04-16:14:54 2097152
9 2020-08-04-16:14:55 0
10 2020-08-04-16:14:56 0
If I try to read that through the Grafana bridge, I get (in the zserver.log):
2020-08-04 16:21:54,470 - zimonGrafanaIntf - ERROR - Metric mynode|GPFSNSDFS|exfld|gpfs_nsdfs_bytes_read cannot be found. Please check if the corresponding sensor is configured
If I try to just read all the gpfs_nsdfs_bytes_read sensors (with or without a filter), I get:
2020-08-04 16:21:13,834 - zimonGrafanaIntf - ERROR - Sensor for metric gpfs_nsdfs_bytes_read is disabled
Where does this discrepancy come from? I am pretty sure that a few updates ago the metrics were visible in the Grafana bridge.

Version 5

Hi,

Can you help me find the download URL for version 5?

Grafana dashboard shows HTTP 500 error when using the grafana bridge with the CNSA 5.1.5 prerelease version

Describe the bug
GrafanaBridge pods are successfully deployed and are in the running state. Nevertheless, the metric graph in the Grafana dashboard shows an HTTP 500 error.

Expected behavior
The metric graph should show data.

Desktop (please complete the following information):

  • IBM Spectrum Scale system type:
    • IBM Spectrum Scale classic cluster.
    • IBM Spectrum Scale Container Native Storage Access (CNSA).
    • IBM Elastic Storage Server (ESS).
  • IBM Spectrum Scale cluster version: 5.1.5
  • IBM Spectrum Scale bridge for Grafana version: 7.0.6
  • Grafana version: 7.5.16
  • OS Grafana is installed on: OCP 4.10

Additional context
[root@myocp ~]# oc rsh ibm-spectrum-scale-grafana-bridge-6844d54c6-cwf6f
Defaulted container "grafanabridge" out of: grafanabridge, initservice (init)
sh-4.4$ cd /var/log/ibm_bridge_for_grafana/
sh-4.4$ ls
cherrypy_access.log cherrypy_error.log
sh-4.4$ cat cherrypy_access.log
10.254.21.111 - - [21/Sep/2022:10:36:53] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:36:56] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:00] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:01] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:06] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:11] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:16] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:21] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:26] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:31] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:36] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:41] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:46] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:37:51] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:38:44] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:41:04] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:41:27] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:41:28] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
10.254.21.111 - - [21/Sep/2022:10:41:33] "POST /api/query HTTP/1.1" 500 824 "" "Grafana/7.5.16"
sh-4.4$ cat cherrypy_error.log
[21/Sep/2022:10:35:29] ENGINE Bus STARTING
[21/Sep/2022:10:35:29] ENGINE Started monitor thread 'Autoreloader'.
[21/Sep/2022:10:35:29] ENGINE Serving on https://0.0.0.0:8443/
[21/Sep/2022:10:35:29] ENGINE Bus STARTED
[21/Sep/2022:10:36:53] HTTP
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/cherrypy/_cprequest.py", line 638, in respond
self._do_respond(path_info)
File "/usr/local/lib/python3.6/site-packages/cherrypy/_cprequest.py", line 697, in _do_respond
response.body = self.handler()
File "/usr/local/lib/python3.6/site-packages/cherrypy/lib/encoding.py", line 223, in __call__
self.body = self.oldhandler(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/cherrypy/lib/jsontools.py", line 59, in json_handler
value = cherrypy.serving.request._json_inner_handler(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
return self.callable(*self.args, **self.kwargs)
File "zimonGrafanaIntf.py", line 460, in POST
query, dsOp, dsInterval = self._createZimonQuery(q, jreq.get('start'), jreq.get('end'))
File "zimonGrafanaIntf.py", line 354, in _createZimonQuery
bucketSize = self._getSensorPeriod(inMetric)
File "zimonGrafanaIntf.py", line 434, in _getSensorPeriod
if sensorAttr['name'] == str('"%s"' % sensor):
KeyError: 'name'
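The crash comes from indexing the sensor attributes with sensorAttr['name'] while at least one parsed sensor entry has no 'name' field. A defensive lookup with dict.get() would skip such entries instead of surfacing as HTTP 500. The sketch below uses made-up sensor data, not the bridge's actual structures:

```python
def find_sensor_period(sensor_attrs, sensor):
    """Return the period of the named sensor, or None if not found."""
    wanted = '"%s"' % sensor  # sensor names are stored quoted
    for attr in sensor_attrs:
        # attr.get("name") tolerates entries without a 'name' field,
        # where attr["name"] would raise KeyError.
        if attr.get("name") == wanted:
            return attr.get("period")
    return None

sensor_attrs = [
    {"name": '"GPFSFilesystem"', "period": 10},
    {"restrict": "nodeclass"},  # no 'name' -> KeyError with attr['name']
]
print(find_sensor_period(sensor_attrs, "GPFSFilesystem"))  # prints 10
```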
