greptimeteam / dashboard
The dashboard UI for GreptimeDB
License: Apache License 2.0
Right now:
Cloud: Can only input.
Dev: Can only select.
As shown in our Slack community, string results containing the line break character `\n` are not rendered correctly. To optimise the display, we can use `<pre>` to render string fields so that pre-formatted text containing `\n` or `\t` is displayed correctly.
Run
explain analyze select * from numbers
on the dashboard. HTTP response body:
{
  "code": 0,
  "execution_time_ms": 1,
  "output": [
    {
      "records": {
        "rows": [
          [
            "Plan with Metrics",
            "CoalescePartitionsExec, metrics=[output_rows=100, elapsed_compute=19.447µs, spill_count=0, spilled_bytes=0, mem_used=0]\n ProjectionExec: expr=[number@0 as number], metrics=[output_rows=100, elapsed_compute=1.328µs, spill_count=0, spilled_bytes=0, mem_used=0]\n CoalesceBatchesExec: target_batch_size=4096, metrics=[output_rows=100, elapsed_compute=15.759µs, spill_count=0, spilled_bytes=0, mem_used=0]\n RepartitionExec: partitioning=RoundRobinBatch(16), metrics=[fetch_time=2.483886ms, repart_time=16ns, send_time=1.568µs]\n RepartitionExec: partitioning=RoundRobinBatch(16), metrics=[fetch_time=29.384µs, repart_time=1ns, send_time=1.783µs]\n ExecutionPlan(PlaceHolder), metrics=[]\n"
          ]
        ],
        "schema": {
          "column_schemas": [
            {
              "data_type": "String",
              "name": "plan_type"
            },
            {
              "data_type": "String",
              "name": "plan"
            }
          ]
        }
      }
    }
  ]
}
How the database CLI displays this text:
public=> EXPLAIN ANALYZE SELECT * FROM numbers;
plan_type | plan
-------------------+--------------------------------------------------------------------------------------------------------------------------------------------------
Plan with Metrics | CoalescePartitionsExec, metrics=[output_rows=100, elapsed_compute=18.244µs, spill_count=0, spilled_bytes=0, mem_used=0] +
| ProjectionExec: expr=[number@0 as number], metrics=[output_rows=100, elapsed_compute=1.528µs, spill_count=0, spilled_bytes=0, mem_used=0] +
| CoalesceBatchesExec: target_batch_size=4096, metrics=[output_rows=100, elapsed_compute=17.493µs, spill_count=0, spilled_bytes=0, mem_used=0]+
| RepartitionExec: partitioning=RoundRobinBatch(16), metrics=[fetch_time=1.406715ms, repart_time=16ns, send_time=386ns] +
| RepartitionExec: partitioning=RoundRobinBatch(16), metrics=[fetch_time=12.665µs, repart_time=1ns, send_time=1.032µs] +
| ExecutionPlan(PlaceHolder), metrics=[] +
|
(1 row)
If users paste a Unix timestamp into the time-picker input area, we want the component to convert it to a local time value automatically.
This value might need to be in the same format as Arco time-picker's `value-format: 'X'`.
We also need to consider the difference between `X` (Unix seconds) and `x` (Unix milliseconds), because we do not want users to lose precision.
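A minimal sketch of the normalization, assuming we guess the unit from the value's magnitude (the helper name `toMillis` is hypothetical, not part of Arco's API):

```typescript
// Hypothetical helper: normalize a pasted Unix timestamp to milliseconds.
// In dayjs/Arco format tokens, 'X' means Unix seconds and 'x' means Unix milliseconds.
function toMillis(input: string): number {
  const n = Number(input.trim());
  // Values below 1e12 are assumed to be seconds ('X'); larger ones milliseconds ('x').
  return Math.abs(n) < 1e12 ? n * 1000 : n;
}

// Both of these resolve to the same instant:
console.log(new Date(toMillis("1670000000")).toISOString());    // pasted as seconds
console.log(new Date(toMillis("1670000000000")).toISOString()); // pasted as milliseconds
```

The magnitude cutoff is only a heuristic; a UI toggle between `X` and `x` would be safer for ambiguous values.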
If the URL is localhost:4000/dashboard, localhost:4000/dashboard/, or localhost:4000/dashboard/query, it will not work. Not sure if this can be fixed. @MichaelScofield
DataFusion can convert `Plan` into a format that can be displayed by [graphviz](https://graphviz.org). Should we consider supporting generating query plan diagrams through graphviz, like [arrow-ballista](https://arrow.apache.org/ballista/user-guide/tuning-guide.html#viewing-query-plans-and-metrics)?
Originally posted by @francis-du in #70 (comment)
We can add another page that visualizes the DataFusion query plan as a tree/graph chart, providing as much information as possible.
As native PromQL query support is about to land on GreptimeDB, we will add PromQL edit and execute on dashboard as well. Changes include:
Notebooks are designed for interactive data exploration and experiments. There are best practices in the industry:
At the moment, a script can be saved without a name provided in the text field. This results in a server error, and the cause is hard to determine.
Suggested change:
Check the name field upon saving; abort and highlight the field if it's empty.
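A sketch of the check, with hypothetical names (the actual save flow lives in the dashboard's components):

```typescript
// Hypothetical validation run before calling the save-script API:
// returns an error message to show on the highlighted field, or null if valid.
function validateScriptName(name: string): string | null {
  if (name.trim().length === 0) {
    return "Script name is required";
  }
  return null;
}
```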
We need to release dashboard to let GreptimeDB download and embed it at GreptimeDB's build time.
Right now I've made a testing release, v0.0.1-test. It contains the 2 assets that GreptimeDB needs:
build.tar.gz: built by running npm run build:docker at the source code root path, then cd to the dist directory and run tar -czvf build.tar.gz *
sha256.txt: generated by shasum -a 256 build.tar.gz > sha256.txt
(You can also refer to the release script of influxdb-ui here.)
GreptimeDB will download these 2 assets, verify the sha256, and then extract the files in the tar.gz to embed them.
The release I made was just a test: all assets were built on my local Mac and uploaded manually. Can you use GitHub's features to formalize the release procedure?
As we have finished rendering data as table and chart, the next step is to visualize data type/table schema information.
As of https://github.com/GreptimeTeam/greptimedb/tree/v0.1.0-alpha-20221205-weekly, we have the following data types supported:
So the main types we are going to render are:
How we are rendering these types:
Also add env select for image build action.
Fix nginx refresh by changing the location of try_files index.html
Currently, when a user creates a new table in the dashboard, the list of tables on the left side of the dashboard does not update automatically, which means users have to manually click the refresh button to see their changes.
We can simply run getTables() if the SQL executed by runCode() contains strings like CREATE TABLE, ALTER TABLE, etc.
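A sketch of the check (getTables/runCode are the names used above; the DDL pattern is an assumption and may need extending):

```typescript
// Match statements that change the table list, case-insensitively.
const DDL_PATTERN = /\b(CREATE|ALTER|DROP)\s+TABLE\b/i;

// Hypothetical hook: decide whether to refresh the sidebar after running SQL.
function shouldRefreshTables(sql: string): boolean {
  return DDL_PATTERN.test(sql);
}
```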
Problem: Only clicking the small icon on the left would trigger the table to load more columns.
This is not user-friendly. So we want the whole row area to respond to clicking and then load more.
We want to show more info with fewer words. We might add icons.
Dashboard in Docker should be able to refresh.
Right now it can only redirect when the URL ends with /dashboard/, not with /dashboard/query.
This should be related to docker and nginx configuration.
To speed up loading of the dashboard on cloud, we need to add a build option that releases those assets to a CDN and uses the CDN address in the HTML. This allows browsers to cache the assets across different dashboard domains.
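One way to sketch this with Vite's `base` option (the `CDN_URL` environment variable is an assumption, not an existing project setting):

```typescript
// vite.config.ts sketch: emit asset URLs pointing at a CDN when CDN_URL is
// set at build time, falling back to the normal dashboard path otherwise.
import { defineConfig } from 'vite'

export default defineConfig({
  base: process.env.CDN_URL ?? '/dashboard/',
})
```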
I wrote this in the dashboard editor:
-- This is a comment
show tables;
and click "Run All", then get this error:
Error: Query engine output error: Failed to execute query: -- This is a comment show tables;, source: Invalid SQL, error: Currently executing multiple SQL queries are not supported.
Looks like the comment doesn't get ignored.
It should run correctly, since the comment is not actual query content.
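Whether the fix belongs in the dashboard or the server, the idea is to strip `--` line comments before checking for multiple statements. A naive sketch (it ignores the edge case of `--` appearing inside string literals):

```typescript
// Remove `--` line comments from each line, then trim the result.
function stripLineComments(sql: string): string {
  return sql
    .split("\n")
    .map((line) => line.replace(/--.*$/, ""))
    .join("\n")
    .trim();
}

// The failing input above reduces to a single statement:
console.log(stripLineComments("-- This is a comment\nshow tables;")); // → show tables;
```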
GreptimeDB has built-in support for Python scripts. We will add a script editor and manager to our dashboard UI.
These scripts are stored in the scripts table and can be executed via the /run-script REST API.
Currently we use a build image to run pnpm build; however, this is completely unnecessary and stops us from building an arm64 image. Because the output files of this project are platform-agnostic, we can simply build them in the GitHub Ubuntu image and copy them into the nginx base image.
We will keep the current Docker setup as a reproducible build image.
Refer to GreptimeTeam/greptimedb#941
We need to seek a further solution to inject `VITE_CLOUD_URL` at runtime so we can reuse the same image across different environments. Let me create an issue for this.
Originally posted by @sunng87 in #156 (review)
@alili Let's see what we can do about it.
It'd be cool if the dashboard were able to display information about cluster status, as well as info about nodes, their statuses, and connections. Perhaps it should connect directly to the meta server in this case?
Refer to https://docs.greptime.com/user-guide/python-coprocessor/io#input
For example, a script like:
@coprocessor(returns=['value'])
def add(**params) -> vector[i64]:
    a = params['a']
    b = params['b']
    return int(a) + int(b)
We can call this coprocessor with arguments a and b:
curl -XPOST \
"http://localhost:4000/v1/run-script?name=add&db=public&a=42&b=99"
The query parameters, except db and name, are collected into params for the coprocessor.
The dashboard can support these user input parameters for scripts.
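Building the request URL from user-entered parameters could look like this (the function name is illustrative; the endpoint shape matches the curl example above):

```typescript
// Assemble the /v1/run-script URL: name and db are fixed query params,
// everything else is forwarded to the coprocessor's `params`.
function runScriptUrl(
  base: string,
  name: string,
  db: string,
  params: Record<string, string>,
): string {
  const qs = new URLSearchParams({ name, db, ...params });
  return `${base}/v1/run-script?${qs}`;
}

console.log(runScriptUrl("http://localhost:4000", "add", "public", { a: "42", b: "99" }));
// → http://localhost:4000/v1/run-script?name=add&db=public&a=42&b=99
```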
Add icons in all the table columns whose data-type is timestamp.
Click the icon and convert all the timestamp values to locale time strings.
Problems will occur if we have new tabs on both Query and Scripts pages. @alili
Add a Docker image and integrate this project into our greptimedb-operator. This image contains:
In order to access the database without cross-origin issue, we have two options:
arco-vue should support auto-import, so there is no need to import it globally in main.ts. Suggest simplifying this to reduce the bundle size:

// vite.config.ts
import { ArcoResolver } from 'unplugin-vue-components/resolvers'

AutoImport({
  ...
  imports: [
    ...
    {
      '@arco-design/web-vue': ['Message']
    }
  ],
  resolvers: [ArcoResolver()],
}),
Components({
  ...
  resolvers: [
    ArcoResolver({
      resolveIcons: true
    })
  ]
}),

With auto-import enabled, the import statements for vue / vue-router / pinia / vue-i18n should also be simplified; there are currently many unnecessary imports and type definitions.
For pinia persistence, the pinia-plugin-persistedstate plugin is enough; there is no need to add vueuse on top of it.
Define pinia stores with the Composition API style, which better fits vue3 usage.
The locale strings are split up and placed inside components, so the i18n plugin cannot detect them; not sure how you solved this?
The v-html part of the social-link code.
Refer to GreptimeTeam/greptimedb#1581
We should use double-quoted strings for table and column names, since there may be special characters in them.
Might need new design.
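A minimal sketch of the quoting, doubling any embedded quote characters (the helper name is hypothetical):

```typescript
// Wrap an identifier in double quotes, escaping embedded `"` by doubling it.
function quoteIdent(name: string): string {
  return `"${name.replace(/"/g, '""')}"`;
}

console.log(quoteIdent("my table"));   // → "my table"
console.log(quoteIdent('weird"name')); // → "weird""name"
```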
Provide demo data for faster user onboarding. The demo data can be stored in the dashboard as CSV or JSON.
This will be better once our database has implemented the bulk import feature.
Looks like the dashboard hardcodes the server address; it would be better to make it configurable in the UI.