panodata / grafana-wtf
Grep through all Grafana entities in the spirit of git-wtf.
License: GNU Affero General Public License v3.0
Just found this project and am very excited about it!
I ran into this problem when running the following:
grafana-wtf --grafana-url=MYGRAFANA --grafana-token=MYTOKEN explore dashboards --format=yaml
Stack trace:
Traceback (most recent call last):
File "/Users/taylorm/.pyvenv/grafana-wtf/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/commands.py", line 243, in run
results = engine.explore_dashboards()
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/core.py", line 438, in explore_dashboards
ix = Indexer(engine=self)
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/core.py", line 492, in __init__
self.index()
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/core.py", line 495, in index
self.index_dashboards()
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/core.py", line 528, in index_dashboards
ds_annotations = self.collect_datasource_items(dbdetails.annotations)
File "/Users/taylorm/.pyvenv/grafana-wtf/lib/python3.7/site-packages/grafana_wtf/core.py", line 501, in collect_datasource_items
for node in element:
TypeError: 'NoneType' object is not iterable
I see that 8.5 is currently not in the test matrix. I forked this repo and ran the tests against Grafana 8.5.5 on all listed Python versions, and they pass, so I'm not sure what the issue might be. Could a maintainer have a look? I can spend a bit of time digging into this as well.
Version: 0.15.0
Platform: MacOS (installed via homebrew)
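The traceback above points at collect_datasource_items iterating over a dashboard section that is None. A minimal defensive sketch of the idea, using plain dicts; names and shapes are assumptions, not the shipped grafana-wtf code:

```python
def collect_datasource_items(elements):
    """Collect datasource references from dashboard sub-elements
    (panels, annotations, templating), tolerating missing sections."""
    items = []
    # Some dashboards omit `annotations` or `templating` entirely,
    # so the section may arrive as None rather than an empty list.
    if elements is None:
        return items
    for node in elements:
        datasource = node.get("datasource")
        if datasource:
            items.append(datasource)
    return items
```

Guarding the None case before the for loop would avoid the TypeError regardless of which Grafana version produced the payload.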
Run grafana-wtf find alerts
...
Dashboard »...«
=================================
Title <non-empty>
Folder General
UID h...
Created at 2022-05-19T19:16:28Z by Anonymous
Updated at 2023-03-16T20:45:22Z by Anonymous
Dashboard ...
Variables ...
Global
------
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'title'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/munch/__init__.py", line 106, in __getattr__
return self[k]
KeyError: 'title'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/homebrew/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
File "/opt/homebrew/lib/python3.9/site-packages/grafana_wtf/commands.py", line 243, in run
report.display(options.search_expression, result)
File "/opt/homebrew/lib/python3.9/site-packages/grafana_wtf/report/textual.py", line 25, in display
self.output_items("Dashboards", result.dashboards, self.compute_url_dashboard)
File "/opt/homebrew/lib/python3.9/site-packages/grafana_wtf/report/textual.py", line 90, in output_items
title = panel.title
File "/opt/homebrew/lib/python3.9/site-packages/munch/__init__.py", line 108, in __getattr__
raise AttributeError(k)
AttributeError: title
Notably, this does not happen for every query, which suggests it's triggered by a specific entity being parsed in the response.
In our case there is an abundance of panels, and many dashboards.
When we supply a query as the find command argument, we can discover dashboards. This is better than nothing, but not very helpful, especially since users usually already have a good idea which dashboard to look at. The biggest pain is finding the panels where a given query is defined.
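The AttributeError above comes from panels that carry no title key (rows and library panels often don't), which makes attribute access on a Munch object raise. A tolerant lookup could look like this sketch, using a plain dict and a hypothetical helper name:

```python
def panel_title(panel: dict) -> str:
    # Rows and library panels may lack a `title` key entirely, which
    # is what makes `panel.title` raise AttributeError on a Munch;
    # fall back to a placeholder instead of crashing the report.
    return panel.get("title") or "<untitled>"
```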
Env
Both the "explore datasources" and "explore dashboards" commands fail:
[grafana_wtf.core ] INFO : Found 185 data source(s)
Traceback (most recent call last):
File "~/grafana_wtf/commands.py", line 237, in run
results = engine.explore_datasources()
File "~/grafana_wtf/core.py", line 406, in explore_datasources
ix = Indexer(engine=self)
File "~/grafana_wtf/core.py", line 492, in __init__
self.index()
File "~/grafana_wtf/core.py", line 496, in index
self.index_dashboards()
File "~/grafana_wtf/core.py", line 543, in index_dashboards
ds_templating = self.collect_datasource_items(dbdetails.templating)
File "~/grafana_wtf/core.py", line 514, in collect_datasource_items
datasource = self.datasource_by_name.get(ds_name, {})
TypeError: unhashable type: 'list'
As I see in #44, Grafana 9.3.1 is not officially supported yet, so I assume this has something to do with Grafana compatibility.
Please let me know if there is something I can do to help more.
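The unhashable-list error suggests the datasource field no longer holds a plain name string: since Grafana 8.3 it can be a {"type": ..., "uid": ...} object, and some templating payloads carry lists of such references. A hedged normalization sketch (normalize_datasource is a hypothetical helper, not the actual grafana-wtf fix):

```python
def normalize_datasource(ref):
    """Reduce the various shapes of a `datasource` reference to a
    flat list of name/uid strings. Grafana < 8.3 used a plain string;
    newer versions use {"type": ..., "uid": ...}; some payloads
    carry lists of such references."""
    if ref is None:
        return []
    if isinstance(ref, str):
        return [ref]
    if isinstance(ref, dict):
        value = ref.get("uid") or ref.get("type")
        return [value] if value else []
    if isinstance(ref, list):
        # Flatten nested references recursively.
        return [name for item in ref for name in normalize_datasource(item)]
    return []
```

Normalizing before any dict lookup would keep `datasource_by_name.get(ds_name, {})` from ever seeing a list.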
Hi there,
within our conversation at [1], @chenlujjj asked for another feature:
Another feature request is to find invalid data sources. By "invalid" I mean a data source that fails the test when pressing the "Save & Test" button on its page. I have written a tiny Go script to do this. Maybe it can be added to grafana-wtf.
With kind regards,
Andreas.
[1] https://community.grafana.com/t/how-to-find-out-unused-datasources/56920/7
Hi @matschaffer and @robouden,
as you appear to be avid Grafana users (thank you for your contributions to Panodata Map Panel 12), I thought it would be a good idea to bring this Python program to your attention. It allows you to search for arbitrary labels within the datasources and dashboards of your Grafana instance and, while still undocumented, even replace them.
This is tremendously helpful when running large instances (in terms of number of deployed datasources/dashboards) with respect to maintenance and migration tasks.
With kind regards,
Andreas.
When I try to explore datasources, it always fails with a TypeError. No idea why this happens.
I have tried versions 0.15.0 through 0.17.0.
This is what happens:
➜ ~ export GRAFANA_URL=https://grafana.example.com
➜ ~ export GRAFANA_TOKEN=*************
➜ ~ grafana-wtf explore datasources --format=yaml --cache-ttl=0 --drop-cache
2023-11-14 12:57:30,811 [grafana_wtf.commands ] INFO : Using Grafana at https://grafana.example.com
2023-11-14 12:57:30,814 [grafana_wtf.core ] INFO : Response cache will expire immediately (expire_after=0)
2023-11-14 12:57:30,815 [grafana_wtf.core ] INFO : Response cache database location is /home/user/.cache/grafana-wtf.sqlite
2023-11-14 12:57:30,815 [grafana_wtf.core ] INFO : Dropping response cache
2023-11-14 12:57:30,815 [grafana_wtf.core ] INFO : Clearing cache
2023-11-14 12:57:30,815 [requests_cache.backends.base ] INFO : Clearing all items from the cache
2023-11-14 12:57:30,933 [grafana_wtf.core ] INFO : Scanning dashboards
2023-11-14 12:57:31,421 [grafana_wtf.core ] INFO : Found 172 dashboard(s)
0it [00:00, ?it/s]
2023-11-14 12:57:31,429 [grafana_wtf.core ] INFO : Fetching dashboards in parallel with 5 concurrent requests | 0/172 [00:00<?, ?it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 172/172 [00:02<00:00, 69.62it/s]
2023-11-14 12:57:34,122 [grafana_wtf.core ] INFO : Scanning datasources██████████████████████████████▉ | 169/172 [00:02<00:00, 75.00it/s]
2023-11-14 12:57:34,177 [grafana_wtf.core ] INFO : Found 35 data source(s)
Traceback (most recent call last):
File "/usr/local/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
^^^^^
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/commands.py", line 309, in run
results = engine.explore_datasources()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/core.py", line 420, in explore_datasources
ix = Indexer(engine=self)
^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/core.py", line 560, in __init__
self.index()
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/core.py", line 564, in index
self.index_dashboards()
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/core.py", line 627, in index_dashboards
item = DatasourceItem.from_payload(item)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/grafana_wtf/model.py", line 171, in from_payload
return cls(**payload)
^^^^^^^^^^^^^^
TypeError: DatasourceItem.__init__() got an unexpected keyword argument 'datasource'
So far everything else seems to work, and grafana-wtf has been a great help to me. I would really appreciate it if you could help me with this.
Thanks!
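The unexpected-keyword error happens because newer Grafana payloads nest keys (such as `datasource`) that the DatasourceItem dataclass does not declare. One tolerant pattern is to filter the payload against the declared fields before construction; this sketch uses a minimal stand-in class, not the real grafana_wtf.model:

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class DatasourceItem:
    # Minimal stand-in; the real model lives in grafana_wtf.model.
    type: Optional[str] = None
    uid: Optional[str] = None
    name: Optional[str] = None

    @classmethod
    def from_payload(cls, payload: dict) -> "DatasourceItem":
        # Keep only the keys the dataclass declares, so unexpected
        # keywords cannot raise TypeError in __init__.
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in payload.items() if k in known})
```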
It would be excellent to be able to simulate a replacement (a dry run) before performing it across a large number of dashboards, for example when migrating datasource IDs.
grafana-wtf 0.13.1
Grafana 8.4.3
grafana-wtf explore datasources --format=yaml
2022-03-06 09:33:33,459 [grafana_wtf.core ] INFO : Setting up response cache to expire after 300 seconds
2022-03-06 09:33:33,462 [grafana_wtf.core ] INFO : Scanning dashboards
2022-03-06 09:33:33,469 [grafana_wtf.core ] INFO : Found 133 dashboards
0%| | 0/133 [00:00<?, ?it/s]2022-03-06 09:33:33,475 [grafana_wtf.core ] INFO : Fetching dashboards in parallel with 5 concurrent requests
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 133/133 [00:00<00:00, 453.97it/s]2022-03-06 09:33:34,067 [grafana_wtf.core ] INFO : Scanning datasources
2022-03-06 09:33:34,069 [grafana_wtf.core ] INFO : Found 21 data sources
Traceback (most recent call last):
File "/usr/local/bin/grafana-wtf", line 10, in <module>
sys.exit(run())
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/commands.py", line 234, in run
results = engine.explore_datasources()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 390, in explore_datasources
ix = Indexer(engine=self)
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 478, in __init__
self.index()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 481, in index
self.index_dashboards()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 513, in index_dashboards
ds_panels = self.collect_datasource_items(dbdetails.panels)
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 494, in collect_datasource_items
return sorted(items)
TypeError: '<' not supported between instances of 'dict' and 'dict'
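`sorted(items)` fails here because dicts are not orderable in Python 3, and dict-shaped datasource references started appearing in the collected items. One workaround is sorting on a canonical JSON rendering; this is a sketch of the idea, not the shipped fix:

```python
import json


def sort_datasource_items(items):
    # dicts are not orderable in Python 3, so a bare sorted() raises
    # TypeError once Grafana emits dict-shaped datasource references.
    # Sorting on a canonical JSON rendering (sorted keys) gives a
    # stable, deterministic order for mixed strings and dicts.
    return sorted(items, key=lambda item: json.dumps(item, sort_keys=True))
```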
First off, thank you for this incredibly useful tool!
In cases where a datasource is specified via variable, that datasource may appear as unused in the output of grafana-wtf explore datasources.
How to reproduce
grafana-wtf explore datasources
Expected
The unused field of the output does not contain the new datasource.
Actual
The unused field contains the new datasource.
Notes
Since a datasource variable of a given type with no filter could in theory match every datasource of that type, I'm not sure there is an obvious fix that avoids excluding too much from the unused field for any Grafana instance that uses such datasource variables, which might itself be unexpected.
It might also be unexpected that the used field excludes datasources that can be selected via variable; though for the same reason as above, fixing this might result in dashboards that use a datasource variable appearing in every used entry, which may itself be unexpected.
This might just be a documentation fix to warn that these cases exist. Or perhaps a new argument could be added to explore datasources to control how such cases are treated?
Hi @amotl,
Thank you for this tool, it really helps me a lot.
Not sure if I should open this as an issue, but is there functionality to get information about notification channels? For example, I needed to see whether any of the dashboards use a specific notification channel for alerting, but I couldn't do it.
If there is no such functionality, do you think it would be possible and useful to implement?
Best regards,
Nikodemas
I might suggest this tweak for finding unused data sources:
grafana-wtf/grafana-wtf-venv/lib/python3.7/site-packages/grafana_wtf/core.py
if datasource_item.name in ["-- Grafana --", "-- Mixed --"] or datasource_item.type == "grafana" or datasource_item.uid in ["-- Grafana --", "-- Mixed --", "grafana", "-- Dashboard --"] :
Reason:
Some older dashboards are being flagged incorrectly. The name aspect has moved to uid, so the check needs to account for both.
Request:
I have been trying to play with the jq syntax to extract the SQL statements of the data sources, but have not been able to do so yet. Do you have any suggestions?
Also, a little note for those not that familiar with Python: versions > 0.13 require Python 3.7 or greater to run.
Thanks
@jangaraj just reported through #2 (comment):
According to the documentation, the user needs to use export GRAFANA_URL=https://daq.example.org/grafana/. But that trailing slash is a problem in my case (I use https://domain.org/). Could you handle that in the code, so users can supply the URL with or without a trailing slash, please?
Thanks!
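Handling both variants could be as simple as normalizing the URL once at startup; a sketch with a hypothetical helper name:

```python
def normalize_grafana_url(url: str) -> str:
    # Accept GRAFANA_URL with or without a trailing slash, so both
    # https://daq.example.org/grafana and .../grafana/ behave the same
    # when API paths are appended later.
    return url.rstrip("/") + "/"
```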
First off, this tool is great. I am trying to use it to get every MetricName associated with a specific datasource. I was wondering if this is possible through the current setup, but I can't find the right input combination. I'm looking for something like: if the datasource name is 'CloudWatch', list all of the MetricNames used in any panel in Grafana.
It would be helpful to ignore untrusted SSL certificates. Merely a quality of life enhancement.
As we can see at 1, running the tests on Python 3.12-dev currently croaks, while it has worked before. See also GH-57.
___________________________ test_find_textual_empty ____________________________
docker_grafana = 'http://127.0.0.1:33333/'
capsys = <_pytest.capture.CaptureFixture object at 0x7f099215e510>
def test_find_textual_empty(docker_grafana, capsys):
# Run command and capture output.
set_command("find foobar")
> grafana_wtf.commands.run()
tests/test_commands.py:46:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
grafana_wtf/commands.py:192: in run
engine.scan_common()
grafana_wtf/core.py:115: in scan_common
self.scan_dashboards()
grafana_wtf/core.py:196: in scan_dashboards
self.fetch_dashboards_parallel()
grafana_wtf/core.py:230: in fetch_dashboards_parallel
loop = asyncio.get_event_loop()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <asyncio.unix_events._UnixDefaultEventLoopPolicy object at 0x7f0991fc6f90>
def get_event_loop(self):
"""Get the event loop for the current context.
Returns an instance of EventLoop or raises an exception.
"""
if self._local._loop is None:
> raise RuntimeError('There is no current event loop in thread %r.'
% threading.current_thread().name)
E RuntimeError: There is no current event loop in thread 'MainThread'.
/opt/hostedtoolcache/Python/3.12.0-alpha.3/x64/lib/python3.12/asyncio/events.py:676: RuntimeError
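On Python 3.12, asyncio.get_event_loop() no longer creates a loop implicitly when called without a running loop, which is exactly what the traceback shows. A portable pattern is to let asyncio.run() manage the loop lifecycle; a sketch with hypothetical function names:

```python
import asyncio


async def _gather(coros):
    # Run all coroutines concurrently on the current loop.
    return await asyncio.gather(*coros)


def fetch_parallel(coros):
    # asyncio.get_event_loop() stopped creating a loop implicitly on
    # Python 3.12 when no loop is running; asyncio.run() creates and
    # closes the loop explicitly and works on all supported versions.
    return asyncio.run(_gather(coros))
```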
Grafana version 7.0.5
grafana-wtf version 0.9.0, python 3.8.5
When doing any replace command with grafana-wtf, the altered dashboard gets moved out of its current Grafana folder and into "General".
Running with --debug doesn't seem to provide more insight.
Thank you for the new datasource-breakdown feature. There seems to be a problem in my case (Grafana 6.1.x):
root@dee6c9aa6148:/# grafana-wtf --cache-ttl=1200 --concurrency=10 datasource-breakdown
2021-12-10 17:03:41,821 [grafana_wtf.core ] INFO : Setting up response cache to expire after 1200 seconds
2021-12-10 17:03:41,827 [grafana_wtf.core ] INFO : Scanning dashboards
2021-12-10 17:03:41,881 [grafana_wtf.core ] INFO : Found 2328 dashboards
0%| | 0/2328 [00:00<?, ?it/s]2021-12-10 17:03:41,904 [grafana_wtf.core ] INFO : Fetching dashboards in parallel with 10 concurrent requests
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2328/2328 [00:36<00:00, 63.30it/s]
2021-12-10 17:04:18,678 [grafana_wtf.core ] INFO : Scanning datasources
2021-12-10 17:04:18,741 [grafana_wtf.core ] INFO : Found 422 data sources
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'datasource'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/munch/__init__.py", line 106, in __getattr__
return self[k]
KeyError: 'datasource'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/commands.py", line 198, in run
results = engine.datasource_breakdown()
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 293, in datasource_breakdown
ix = Indexer(engine=self)
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 336, in __init__
self.index()
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 339, in index
self.index_dashboards()
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 360, in index_dashboards
ds_panels = self.collect_datasource_names(dashboard.dashboard.panels)
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 344, in collect_datasource_names
return list(set([item.datasource for item in root if item.datasource]))
File "/usr/local/lib/python3.9/site-packages/grafana_wtf/core.py", line 344, in <listcomp>
return list(set([item.datasource for item in root if item.datasource]))
File "/usr/local/lib/python3.9/site-packages/munch/__init__.py", line 108, in __getattr__
raise AttributeError(k)
AttributeError: datasource
root@dee6c9aa6148:/# pip list
Package Version
------------------ ---------
appdirs 1.4.4
attrs 21.2.0
cattrs 1.9.0
certifi 2021.10.8
charset-normalizer 2.0.9
colored 1.4.3
decorator 5.1.0
docopt 0.6.2
grafana-api 1.0.3
grafana-wtf 0.11.0
idna 3.3
jsonpath-rw 1.4.0
munch 2.5.0
pip 21.0.1
ply 3.11
Pygments 2.10.0
PyYAML 5.4.1
requests 2.26.0
requests-cache 0.8.1
setuptools 53.0.0
six 1.16.0
tabulate 0.8.9
tqdm 4.62.3
url-normalize 1.4.3
urllib3 1.26.7
wheel 0.36.2
WARNING: You are using pip version 21.0.1; however, version 21.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
It would be cool to support something like the jq query and transformation language for querying and filtering data requested from the Grafana API, like outlined within grafana/simple-json-datasource#111.
This is an excellent tool! Thank you!
It would be really handy to have the direct URL to the panel or variable editor in the find output.
This would make the panel-fixing workflow a lot smoother.
It looks like cryptography suddenly wants to build from source.
Downloading cryptography-42.0.5.tar.gz (671 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 671.0/671.0 kB 13.8 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'error'
error: subprocess-exited-with-error
[...]
error: command 'gcc' failed: No such file or directory
-- https://github.com/panodata/grafana-wtf/actions/runs/8482596249/job/23242113705?pr=125#step:10:755
Has cryptography stopped publishing binary wheel packages for Python 3.11? Why?
-- https://pypi.org/project/cryptography/#files
/cc @Ousret
Hello,
first of all, thank you for this great tool 👍🏽 .
I'm having an issue with every command. Basically I'm receiving this error every time:
2022-06-30 16:23:47,323 [grafana_wtf.core ] INFO : Setting up response cache to expire after 300 seconds
Traceback (most recent call last):
File "/home/tom/.local/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
File "/home/tom/.local/lib/python3.9/site-packages/grafana_wtf/commands.py", line 174, in run
engine.enable_cache(expire_after=cache_ttl, drop_cache=options["drop-cache"])
File "/home/tom/.local/lib/python3.9/site-packages/grafana_wtf/core.py", line 55, in enable_cache
requests_cache.install_cache(expire_after=expire_after)
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/patcher.py", line 48, in install_cache
backend = init_backend(cache_name, backend, **kwargs)
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/backends/__init__.py", line 97, in init_backend
return BACKEND_CLASSES[backend](cache_name, **kwargs)
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/backends/sqlite.py", line 100, in __init__
self.responses: SQLiteDict = SQLitePickleDict(db_path, table_name='responses', **kwargs)
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/backends/sqlite.py", line 155, in __init__
self.init_db()
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/backends/sqlite.py", line 160, in init_db
with self._lock, self.connection() as con:
File "/usr/lib/python3.9/contextlib.py", line 117, in __enter__
return next(self.gen)
File "/home/tom/.local/lib/python3.9/site-packages/requests_cache/backends/sqlite.py", line 168, in connection
self._local_context.con = sqlite3.connect(self.db_path, **self.connection_kwargs)
sqlite3.OperationalError: unable to open database file
Output above is from grafana-wtf --grafana-url=https://redacted.org/grafana/?verify=no --grafana-token=redacted info --format=yaml
Do you have some clue what can be wrong?
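"unable to open database file" from sqlite3 usually means the cache directory (e.g. ~/.cache) does not exist or is not writable for the current user. A defensive sketch (open_cache is a hypothetical helper, not grafana-wtf API):

```python
import os
import sqlite3


def open_cache(db_path: str) -> sqlite3.Connection:
    # sqlite3 reports "unable to open database file" when the parent
    # directory is missing or not writable; create it before handing
    # the path to sqlite3.connect().
    os.makedirs(os.path.dirname(db_path) or ".", exist_ok=True)
    return sqlite3.connect(db_path)
```

Checking permissions on the cache directory reported in the log output above would be the first thing to verify.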
1.) Dashboard searches run serially.
This is a slow approach for Grafana instances with many dashboards, e.g.:
2019-05-06 18:12:11,809 [grafana_wtf.core ] INFO : Found 1000 dashboards
2019-05-06 18:12:11,809 [grafana_wtf.core ] INFO : Fetching dashboards
20%|███████████████████████▏ | 198/1000 [00:56<05:12, 2.57it/s
Python multiprocessing (parallel runs) could increase search speed in this case. There should be a configuration option to limit the number of search processes, otherwise it could amount to a "DDoS attack".
2.) Additionally, the Grafana instance from the first example contains more than 1k dashboards, but there is a default 1k API limit. There should be a config/env variable to configure this limit.
Thanks.
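Bounding the parallelism, as later grafana-wtf versions do via the --concurrency option seen elsewhere in these reports, can be sketched with an asyncio semaphore (fetch_one and the function name are assumptions):

```python
import asyncio


async def fetch_dashboards(uids, fetch_one, concurrency=5):
    """Fetch dashboards in parallel, bounded by a semaphore so large
    instances are not hit with unlimited concurrent requests.
    `fetch_one` is an async callable taking a dashboard uid."""
    semaphore = asyncio.Semaphore(concurrency)

    async def bounded(uid):
        # At most `concurrency` fetches hold the semaphore at once.
        async with semaphore:
            return await fetch_one(uid)

    # gather() preserves input order, so results line up with uids.
    return await asyncio.gather(*(bounded(uid) for uid in uids))
```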
It looks like the request cache stopped working? When invoking grafana-wtf on play.grafana.org, the program requests data over and over again. That is especially hard on such a huge Grafana installation, where it takes almost one and a half minutes to acquire all the metadata.
export GRAFANA_URL=https://play.grafana.org
grafana-wtf info --format=yaml
Fetching dashboards in parallel with 5 concurrent request/s]
4%|█▏ | 67/1537 [00:03<01:26, 16.99it/s]
grafana:
version: 11.1.0-68793
url: https://play.grafana.org
statistics: {}
summary:
dashboard_panels: 9299
dashboard_annotations: 1501
dashboard_templating: 1349
Is it related to the switchover to Niquests, because it does not harmonize with requests-cache well, yet?
/cc @Ousret
This is a work in progress, but you seem to be a major whiz at JSON (I stink at it), and this is something I am working on as well. The output needs work, and the for-each loop over dashboards is not quite right yet either. You might be able to modify this.
Ideally, I would like to know whether any users or teams have access to a folder and/or its dashboards. I have NOT thought about doing this down at the datasource level (in my case that may be overkill, but others might like it).
#!/bin/bash
GRAFANA_API_URL="https://xxx/api"
API_KEY="xxxxx"
get_permissions() {
local uid="$1"
local endpoint="$2"
curl -s -H "Authorization: Bearer ${API_KEY}" "$GRAFANA_API_URL/$endpoint/$uid/permissions"
}
response=$(curl -s -H "Authorization: Bearer ${API_KEY}" "$GRAFANA_API_URL/search")
IFS=$'\n' dash_folders=($(echo "$response" | jq -r '.[] | select(.type=="dash-folder") | .uid'))
IFS=$'\n' dash_folder_titles=($(echo "$response" | jq -r '.[] | select(.type=="dash-folder") | .title'))
for index in "${!dash_folders[@]}"; do
folder_uid=${dash_folders[$index]}
folder_title=${dash_folder_titles[$index]}
echo "$folder_title, $folder_uid,"
permissions=$(get_permissions "$folder_uid" "folders")
length=$(echo "$permissions" | jq length)
for ((i=0; i<$length; i++)); do
team=$(echo "$permissions" | jq -r ".[$i].team // \"N/A\"")
user=$(echo "$permissions" | jq -r ".[$i].user // \"N/A\"")
permissionName=$(echo "$permissions" | jq -r ".[$i].permissionName // \"N/A\"")
if [[ "$team" != "N/A" ]]; then
echo "Team $team - $permissionName"
fi
if [[ "$user" != "N/A" ]]; then
echo "User $user - $permissionName"
fi
done
echo "-------Dashboards in Folder ---"
IFS=$'\n' dash_dbs_in_folder=($(echo "$response" | jq -r ".[] | select(.type==\"dash-db\" and .folderId == ${dash_folders[$index]}) | .title"))
IFS=$'\n' dash_dbs_uids=($(echo "$response" | jq -r ".[] | select(.type==\"dash-db\" and .folderId == ${dash_folders[$index]}) | .uid"))
for dash_index in "${!dash_dbs_in_folder[@]}"; do
dashboard_title="${dash_dbs_in_folder[$dash_index]}"
dashboard_uid="${dash_dbs_uids[$dash_index]}"
echo "$folder_title - $dashboard_title"
permissions=$(get_permissions "$dashboard_uid" "dashboards")
length=$(echo "$permissions" | jq length)
for ((i=0; i<$length; i++)); do
team=$(echo "$permissions" | jq -r ".[$i].team // \"N/A\"")
user=$(echo "$permissions" | jq -r ".[$i].user // \"N/A\"")
permissionName=$(echo "$permissions" | jq -r ".[$i].permissionName // \"N/A\"")
if [[ "$team" != "N/A" ]]; then
echo "Team $team - $permissionName"
fi
if [[ "$user" != "N/A" ]]; then
echo "User $user - $permissionName"
fi
done
done
echo "----------------------"
done
To reach beyond the 5000-results limit imposed by Grafana on the dashboard search API, grafana-wtf should apply appropriate paging; see also https://grafana.com/docs/http_api/folder_dashboard_search/.
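The paging loop itself is straightforward: request pages of a fixed size until a short page arrives. A transport-agnostic sketch, where fetch_page is a hypothetical callable wrapping one GET against /api/search with limit and page query parameters:

```python
def search_all_dashboards(fetch_page, page_size=5000):
    """Page through the Grafana search API until a page comes back
    short, collecting all results beyond the per-request cap.
    `fetch_page(limit=..., page=...)` performs one GET against
    /api/search and returns the decoded JSON list (an assumption
    made to keep the sketch transport-agnostic)."""
    results, page = [], 1
    while True:
        batch = fetch_page(limit=page_size, page=page)
        results.extend(batch)
        # A page shorter than page_size means we have hit the end.
        if len(batch) < page_size:
            return results
        page += 1
```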
Hi, I was wondering whether this tool can return the list of unique label names referenced across all graphs that use a Prometheus datasource. I have a scenario where I'd like to test the possibly far-reaching effects of dropping all labels except those matching a given regex, and was hoping to first get the complete list of label names referenced across all my Grafana objects. If this isn't available at this time, is it something that could potentially be added?
Dear @cronosnull,
I just found 740f6e4 and a6cb123 from your pen. Well done, appreciate it! Do you mind submitting a pull request?
With kind regards,
Andreas.
Hi, I am using this tool and needed to export the output to a plain text file to upload it to a Jira issue. The text was bloated with the escape sequences used to color terminal output. I am going to fork this project and work on that feature, if you don't mind.
Hello, replacing datasources in dashboards has been working great so far with replace.
Is it feasible to extend the functionality of replace to also update datasources for alerts managed by Grafana?
Example:
grafana-wtf replace alerts foo bar
I would like to see datasource statistics which answer these questions:
What is the most popular datasource type?
Datasources types stat - total 15:
cloudwatch 10
influxdb 5
Which datasources are unused by any dashboard, and how many dashboards/panels/variables use each one (= datasource popularity)? For example:
Datasource usage stat:
name dashboards panels variables
datasource-1 4 20 0
...
datasource-15 0 0 0
And similar stats also for panels:
Panel stat:
type total dashboards
graph 354 121
text 58 20
singlestat 134 30
blackmirror1-singlestat-math-panel 20 5
row 50 10
table 201 156
Thank you.
E TypeError: expected string or bytes-like object
-- https://github.com/panodata/grafana-wtf/actions/runs/8482124688/job/23240743427?pr=125#step:5:510
Maybe Niquests returns bytes, where Requests previously returned text?
/cc @Ousret
I'm using v0.17.0. When running grafana-wtf --drop-cache find xyz, I get a DB lock error:
2023-11-22 14:11:14,368 [grafana_wtf.core ] INFO : Fetching dashboards in parallel with 5 concurrent requests | 0/578 [00:00<?, ?it/s]
2023-11-22 14:11:19,952 [requests_cache.backends.sqlite ] WARNING: Database is locked in thread 6272069632; retrying (1/3)
2023-11-22 14:11:25,210 [requests_cache.backends.sqlite ] WARNING: Database is locked in thread 6204764160; retrying (1/3)
2023-11-22 14:11:30,478 [requests_cache.backends.sqlite ] WARNING: Database is locked in thread 6221590528; retrying (1/3)
2023-11-22 14:11:35,742 [requests_cache.backends.sqlite ] WARNING: Database is locked in thread 6238416896; retrying (1/3)
2023-11-22 14:11:41,009 [requests_cache.backends.sqlite ] WARNING: Database is locked in thread 6255243264; retrying (1/3)
Running on Apple M2 Max.
How can I recover from that?
Cheers
It's a very minor finding.
Issue:
The CLI usage mentions one of the commands as below:
grafana-wtf explore dashboards --format=json | jq 'select( .[] | .datasources | .[].type=="influxdb" )'
Fix:
The proper syntax should be as below. I have not looked at the others, but they may also have issues:
grafana-wtf explore dashboards --format=json | jq '.[] | select(.datasources | .[].type=="influxdb")'
Hello.
I run grafana-wtf explore datasources --format=yaml and it returns an error:
2022-01-10 20:13:57,670 [grafana_wtf.core ] INFO : Setting up response cache to expire after 300 seconds
2022-01-10 20:13:57,673 [grafana_wtf.core ] INFO : Scanning dashboards
2022-01-10 20:13:57,739 [grafana_wtf.core ] INFO : Found 116 dashboards
0%| | 0/116 [00:00<?, ?it/s]2022-01-10 20:13:57,745 [grafana_wtf.core ] INFO : Fetching dashboards in parallel with 5 concurrent requests
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 116/116 [00:01<00:00, 82.00it/s]
2022-01-10 20:13:59,159 [grafana_wtf.core ] INFO : Scanning datasources
2022-01-10 20:13:59,189 [grafana_wtf.core ] INFO : Found 21 data sources
Traceback (most recent call last):
File "/usr/local/bin/grafana-wtf", line 10, in <module>
sys.exit(run())
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/commands.py", line 228, in run
results = engine.explore_datasources()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 385, in explore_datasources
ix = Indexer(engine=self)
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 465, in __init__
self.index()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 468, in index
self.index_dashboards()
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 496, in index_dashboards
ds_panels = self.collect_datasource_names(dbdetails.panels)
File "/usr/local/lib/python3.7/dist-packages/grafana_wtf/core.py", line 477, in collect_datasource_names
return list(sorted(set(names)))
TypeError: unhashable type: 'Munch'
pip3 list
Package Version
------------------- ---------
appdirs 1.4.4
attrs 21.4.0
cattrs 1.10.0
certifi 2021.10.8
chardet 4.0.0
charset-normalizer 2.0.10
colored 1.4.3
decorator 5.1.1
distro-info 0.21
docker 5.0.0
docopt 0.6.2
grafana-api 1.0.3
grafana-wtf 0.12.0
idna 3.3
iotop 0.6
jsonpath-rw 1.4.0
munch 2.5.0
pip 18.1
ply 3.11
pycurl 7.43.0.2
Pygments 2.11.2
PyGObject 3.30.4
python-apt 1.8.4.3
PyYAML 5.4.1
requests 2.27.1
requests-cache 0.9.0
setuptools 40.8.0
six 1.16.0
tabulate 0.8.9
tqdm 4.62.3
typing-extensions 4.0.1
ufw 0.36
unattended-upgrades 0.1
url-normalize 1.4.3
urllib3 1.26.8
websocket-client 0.59.0
wheel 0.37.1
pip3 install grafana-wtf --force
...
Collecting munch<3,>=2.5.0 (from grafana-wtf)
Using cached https://files.pythonhosted.org/packages/cc/ab/85d8da5c9a45e072301beb37ad7f833cd344e04c817d97e0cc75681d248f/munch-2.5.0-py2.py3-none-any.whl
...
I've not put much effort into checking this; I've just followed the documented usage in the most simplistic way.
$ grafana-wtf info --format=yaml
2024-01-25 17:03:36,895 [grafana_wtf.commands ] INFO : Using Grafana at {Grafana endpoint URL}
2024-01-25 17:03:36,905 [grafana_wtf.core ] INFO : Response cache will expire after 3600 seconds
2024-01-25 17:03:36,908 [grafana_wtf.core ] INFO : Response cache database location is /home/{user}/.cache/grafana-wtf.sqlite
Traceback (most recent call last):
File "/home/{user}/.local/bin/grafana-wtf", line 8, in <module>
sys.exit(run())
File "/home/{user}/.local/lib/python3.8/site-packages/grafana_wtf/commands.py", line 322, in run
response = engine.info()
File "/home/{user}/.local/lib/python3.8/site-packages/grafana_wtf/core.py", line 268, in info
version=health.get("version"),
AttributeError: 'str' object has no attribute 'get'
The Grafana endpoint and access are proven to work with this:
$ curl -H "Authorization: Bearer {token}" {Grafana endpoint}/api/dashboards/home
$
Grafana v8.3.3 (30bb7a93ca)
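Since /api/health normally yields a JSON object, the 'str' object error suggests the response reached info() as raw text (for example via a reverse proxy or a client quirk). A defensive coercion sketch (parse_health is a hypothetical helper, not the grafana-wtf API):

```python
import json


def parse_health(payload):
    # /api/health normally yields a JSON object, but a proxy or a
    # client quirk can hand back a raw string; coerce it before any
    # .get() call so the info command cannot crash with AttributeError.
    if isinstance(payload, str):
        try:
            payload = json.loads(payload)
        except ValueError:
            # Not JSON at all: preserve the raw text for diagnostics.
            return {"version": "unknown", "raw": payload}
    return payload
```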