
sql_exporter's People

Contributors

arthurzenika, bobrik, daivikdave, dependabot[bot], dewey, dominikschulz, fusakla, ilantnt, jelmer, joacoc, lukas-mi, manojvivek, marevers, marthjod, mateiw, metalmatze, rubyalwaystaken, serik1256, simonfrey, stefreak, wilfriedroset, wojciech12, xxorde, zwopir


sql_exporter's Issues

Missing MSSQL Integration Example

While using the open-source SQL exporter, I noticed that there is no example provided for integrating with MSSQL, even though MSSQL is supported by the exporter. This omission makes it difficult for users to set up MSSQL integrations effectively.

To find the connection string format that needs to be used, I went to the MSSQL driver library and found this in its documentation:
sqlserver://username:password@host/instance?param1=value&param2=value

I suggest adding a dedicated example or documentation section specifically focused on setting up MSSQL integration within the exporter (and all other integrations that are not documented).
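For illustration, a minimal job definition built around that connection string format might look like the following sketch (host, credentials, and the query are placeholders, not taken from the exporter's documentation):

---
jobs:
  - name: "mssql_example"
    interval: '1m'
    connections:
      - 'sqlserver://username:password@host:1433?database=master'
    queries:
      - name: "active_sessions"
        help: "Number of active user sessions"
        values:
          - "active_sessions"
        query: "SELECT COUNT(*) AS active_sessions FROM sys.dm_exec_sessions WHERE is_user_process = 1"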

Thank you!

Clickhouse http connection support

Hello.

I am trying to run the exporter with ClickHouse; here is my simple config:

---
jobs:
  - name: "global"
    interval: '5s'
    connections:
      - 'clickhouse://admin:******@10.221.0.19:31673'
    queries:
      - name: "custom_cluster_nodes_count"
        help: "Amount of cluster nodes"
        labels:
          - "cluster"
          - "region"
        values:
          - "amount"
        query:  |
                SELECT 'cluster' as cluster, 'region' as region, COUNT(*) as amount FROM system.clusters

But in the logs I got:

{"caller":"job.go:190","err":"read tcp 10.0.2.100:60136->10.221.0.19:31673: i/o timeout","job":"global","level":"warn","msg":"Failed to connect","ts":"2023-06-19T08:42:30.770058746Z"}
{"caller":"job.go:190","err":"read tcp 10.0.2.100:59498->10.221.0.19:31673: i/o timeout","job":"global","level":"warn","msg":"Failed to connect","ts":"2023-06-19T08:42:32.334186815Z"}

I've also tried a DSN like clickhouse+http://admin:******@10.221.0.19:31673 (following the https://github.com/xo/dburl docs), but without success:

{"caller":"job.go:190","err":"sql: unknown driver \"clickhouse+http\" (forgotten import?)","job":"global","level":"warn","msg":"Failed to connect","ts":"2023-06-19T08:43:45.063663853Z"}

The official clickhouse-go driver does support HTTP, though.

How should I configure the connection to use HTTP instead of TCP?

Interval by query

Hello,

It would be a nice improvement to be able to set an interval per query.

=> Keep the job interval as the default and override it per query if the key is set.
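For example, the config could look something like this (purely a sketch of the proposal; a per-query interval key does not exist today):

jobs:
  - name: "example"
    interval: '5m'              # job default
    queries:
      - name: "fast_check"
        interval: '30s'         # hypothetical per-query override
        values:
          - "running"
        query: "SELECT COUNT(*) AS running FROM pg_stat_activity"
      - name: "slow_report"     # no interval key: inherits the job's 5m
        values:
          - "total"
        query: "SELECT SUM(n_live_tup) AS total FROM pg_stat_user_tables"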

Thank you!

The Makefile does not work

This happens when I try to use the Makefile:

oseibert:~/git$ git clone https://github.com/justwatchcom/sql_exporter.git
Cloning into 'sql_exporter'...
remote: Enumerating objects: 4092, done.
remote: Counting objects: 100% (381/381), done.
remote: Compressing objects: 100% (273/273), done.
remote: Total 4092 (delta 134), reused 275 (delta 92), pack-reused 3711
Receiving objects: 100% (4092/4092), 7.10 MiB | 7.08 MiB/s, done.
Resolving deltas: 100% (1487/1487), done.
oseibert:~/git$ cd sql_exporter/
oseibert:~/git/sql_exporter$ make
>> formatting code
go: added github.com/Masterminds/semver v1.5.0
go: upgraded github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d => v0.0.0-20211218093645-b94a6e3cc137
go: added github.com/google/go-github/v25 v25.1.3
go: added github.com/google/go-querystring v1.1.0
go: upgraded github.com/prometheus/client_golang v1.12.0 => v1.13.0
go: upgraded github.com/prometheus/common v0.32.1 => v0.37.0
go: upgraded github.com/prometheus/procfs v0.7.3 => v0.8.0
go: added github.com/prometheus/promu v0.13.0
go: added go.uber.org/atomic v1.9.0
go: upgraded golang.org/x/net v0.0.0-20211118161319-6a13c67c3ce4 => v0.0.0-20220809012201-f428fae20770
go: upgraded golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c => v0.0.0-20220808172628-8227340efae7
go: upgraded golang.org/x/sys v0.0.0-20220429233432-b5fbb4746d32 => v0.0.0-20220808155132-1c4a2a72c664
go: upgraded google.golang.org/appengine v1.6.6 => v1.6.7
go: upgraded google.golang.org/protobuf v1.27.1 => v1.28.1
>> building binaries
make: /bin/promu: No such file or directory
make: *** [build] Error 1
oseibert:~/git/sql_exporter$

That error happens because $GOPATH is not set. If I work around that (note that the Makefile's use of $GOPATH expects a single directory, not a colon-separated list):

oseibert:~/git/sql_exporter$ env GOPATH=$(go env GOPATH) make build
>> building binaries
 >   sql_exporter
go: inconsistent vendoring in /Users/oseibert/git/sql_exporter:
	github.com/prometheus/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/prometheus/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/Masterminds/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/alecthomas/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/alecthomas/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/google/go-github/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/google/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/pkg/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/prometheus/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/prometheus/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	go.uber.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	golang.org/x/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	google.golang.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	google.golang.org/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	gopkg.in/alecthomas/[email protected]: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt
	github.com/prometheus/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod
	github.com/prometheus/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod
	github.com/prometheus/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod
	golang.org/x/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod
	golang.org/x/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod
	google.golang.org/[email protected]: is marked as explicit in vendor/modules.txt, but not explicitly required in go.mod

	To ignore the vendor directory, use -mod=readonly or -mod=mod.
	To sync the vendor directory, run:
		go mod vendor
!! command failed: build -o /Users/oseibert/git/sql_exporter/sql_exporter -ldflags -X github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/common/version.Version=0.4.0 -X github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/common/version.Revision=a9da0f5d1b4e2092e30389e3eb8465d2eafd500c -X github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/common/version.Branch=master -X github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/common/version.BuildUser=oseibert@oseibert -X github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/common/version.BuildDate=20220809-12:23:07  -a -tags netgo github.com/justwatchcom/sql_exporter: exit status 1
make: *** [build] Error 1
oseibert:~/git/sql_exporter$

Add possibility to run a query on a given connection

Hi,

For our use case, it would be helpful to have the possibility to run some queries only on specific connections.

So we would probably need to extend the config to something like this:

  connections:
    db1: 'postgres://postgres@localhost/postgres?sslmode=disable'
    db2: 'postgres://postgres@localhost/postgres?sslmode=disable'

and add the possibility to reference those connections in the queries section:

  queries:
    # name is prefixed with sql_ and used as the metric name
    - name: "running_queries"
      connections:
        - db1

I'm willing to contribute. I guess that the most difficult part will be to keep the config backward compatible. My proposals are:

  1. Create a new section:

named_connections:
  db1: 'postgres://postgres@localhost/postgres?sslmode=disable'
  db2: 'postgres://postgres@localhost/postgres?sslmode=disable'

  2. Somehow add the name to the connection string:

connections:
  - 'db1##postgres://postgres@localhost/postgres?sslmode=disable'
  - 'db2##postgres://postgres@localhost/postgres?sslmode=disable'

Support multiple job files in config.yml

Hi

I would like to separate job files per context and have config.yml accept an array of job files to load, like in https://github.com/free/sql_exporter.
This would give the flexibility to add/remove/change the files containing the queries instead of having one huge file with all queries inside; it would also make it much easier to run jobs at different intervals. A sketch of the idea follows.
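A hypothetical layout for such a config.yml (the jobs_files key does not exist; this only illustrates the request, and the paths are placeholders):

---
jobs_files:
  - /etc/sql_exporter/jobs/billing.yml
  - /etc/sql_exporter/jobs/reporting.yml
  - /etc/sql_exporter/jobs/replication.yml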

Unknown driver "mssql+pyodbc"

Issue:

{"caller":"job.go:190","err":"sql: unknown driver \"mssql+pyodbc\" (forgotten import?)","job":"example","level":"warn","msg":"Failed to connect","ts":"2023-09-22T09:02:58.038302874Z"}

Is this driver indeed unsupported or am I using it wrongly?

Configuration in config.yml:

---
# jobs is a map of jobs, define any number but please keep the connection usage on the DBs in mind
jobs:
  # each job needs a unique name; it's used for logging and as a default label
- name: "example"
  interval: '30s'
  connections:
  - 'mssql+pyodbc://username:password@host:1433/dbatasks?driver=ODBC+Driver+17+for+SQL+Server&MARS_Connection=yes'
  startup_sql:
  - 'SET lock_timeout = 1000'
  - 'SET idle_in_transaction_session_timeout = 100'
  queries:
  - name: "exceptionlog_count"
    help: "Sum of log entries in ExceptionLog table"
    labels:
      - "db"
    values:
      - "exceptionlog_count"
    query:  EXEC [dbo].[usp_PrometheusExceptionLogMonitoring]
    allow_zero_rows: false

Docker run command:

docker run -d -p 9237:9237 \
-v /etc/query/config.yml:/config/config.yml \
-e CONFIG=/config/config.yml \
--name sql-exporter ghcr.io/justwatchcom/sql_exporter
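For what it's worth, mssql+pyodbc is a SQLAlchemy (Python) dialect name rather than a Go database/sql driver, so the Go-based exporter cannot load it. A connection string in the go-mssqldb URL form mentioned in the MSSQL example issue above should be closer to what the exporter expects; a hedged sketch with placeholder credentials:

  connections:
    - 'sqlserver://username:password@host:1433?database=dbatasks'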

Empty docker repository?

Hi,

It looks like there is no image available on Docker Hub.

I get the message: repository justwatchcom/sql_exporter not found: does not exist or no pull access.

Any plans to push a new image?

Cheers!

Francisco

README out-of-date

It states that only environment variables and no flags are available; however, version 0.2.0 does support flags.

ConfigMap with template strings

Hello, I would recommend updating README.md to add information that template strings can be used in the ConfigMap, for Kubernetes users, because at first I could not understand how to add a password to the ConfigMap in a secure way.

I had to dig into the source code to find out that environment variables can be used as template strings.

A template string in the ConfigMap such as {{YOUR_SQL_PASS}} should match a pod environment variable YOUR_SQL_PASS: "secure-password" (from a secret).
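A minimal sketch of how the pieces fit together, using the placeholder variable name from above (the exact template syntax should be checked against the exporter's source/README):

# ConfigMap (excerpt)
  connections:
    - 'postgres://exporter:{{YOUR_SQL_PASS}}@db-host:5432/postgres?sslmode=disable'

# Deployment pod spec (excerpt)
  containers:
    - name: sql-exporter
      env:
        - name: YOUR_SQL_PASS
          valueFrom:
            secretKeyRef:
              name: sql-exporter-secret
              key: password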

Seamless configuration reload like Prometheus

Hello,
I have one issue with this exporter, which we have been using in production for several months.
It seems we can't reload the configuration on the fly, like we can with Prometheus or Alertmanager, for instance.
This is actually a problem, because we must restart the container every time we change the configuration (we're using Docker); the restart also executes all SQL queries again, and new alerts are fired for every query even if there is no error.
I searched the documentation and did not find anything.

If such a mechanism does not exist yet, would it be hard to implement something like Prometheus does, where we simply send a HUP signal to the main process to reload the configuration?

Thanks in advance!

help needed: measure available disk space

I'm working at a big French hospital which uses Microsoft SQL Server to store patient data.

One of the most important things I need to monitor is available disk space. This is critical, because if there is no available disk space, the system completely fails and data is lost.

I've been trying to use sys.dm_os_volume_stats, which completely fails to return the actual available/used/total space for each volume (G:, C:, etc.).

I see many people use EXECUTE sys.xp_cmdshell 'wmic volume get name, freespace, capacity, label' to do this, but that outputs unstructured text.

Do you have any advice? How do you monitor and measure your disk space?
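One commonly used approach, for what it's worth, wraps sys.dm_os_volume_stats in an exporter query; it only covers volumes that actually host database files, and the query below is a sketch rather than a tested configuration:

  - name: "volume_free_bytes"
    help: "Available bytes per volume hosting database files"
    labels:
      - "volume"
    values:
      - "available_bytes"
    query: |
            SELECT DISTINCT
              vs.volume_mount_point AS volume,
              vs.available_bytes AS available_bytes
            FROM sys.master_files mf
            CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs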

Thanks a lot for any help you can provide.

Grafana graphs not populating

I am using your k8s examples and managed to get the sql-exporter to show up in the Prometheus targets, but for some reason nothing is populating in Grafana. Looking at the metrics endpoint of the container, I can see it is showing output. I am using the new Prometheus 2.0.0 beta, but I am not sure whether this is in any way related to that, as I have other graphs showing up fine. Is there anything I could have missed?

Query References

We should allow queries to be defined once and reused within the jobs.

Missing docker image for v0.4.8 in ghcr.io

The docker image for the newest release is currently missing in the Github Docker registry. Could you please provide the image there?

โฏ docker pull ghcr.io/justwatchcom/sql_exporter:v0.4.8
Error response from daemon: manifest unknown

โฏ docker pull ghcr.io/justwatchcom/sql_exporter:v0.4.7
v0.4.7: Pulling from justwatchcom/sql_exporter
Digest: sha256:167f9816c5d9a9abec050a527ee03e366626a86948b9af989b14e672a09c79c5
Status: Image is up to date for ghcr.io/justwatchcom/sql_exporter:v0.4.7
ghcr.io/justwatchcom/sql_exporter:v0.4.7

Support exporting histogram metrics

This issue is to track support for exporting histogram metrics. Here is a proposal for histogram metrics support: muxinc#3. Happy to package that as an upstream PR here if the config structure makes sense.

Panic: not implemented errors with MSSQL (Azure Managed Instance)

We are experiencing panic: not implemented errors (and a subsequent crash of the exporter) when using it with an Azure Managed Instance MSSQL database. We have had success with other Azure-hosted MSSQL databases before; the only difference is that the managed instance forces encrypted traffic, which could be the cause of the issue. We have tried bumping the github.com/denisenkom/go-mssqldb dependency to a higher version, but unfortunately this does not fix the issue.

Here's a stack trace of such an error happening:

panic: Not implemented

goroutine 45 [running]:
github.com/denisenkom/go-mssqldb.passthroughConn.SetWriteDeadline(...)
	/src/vendor/github.com/denisenkom/go-mssqldb/net.go:167
crypto/tls.(*Conn).SetWriteDeadline(...)
	/usr/local/go/src/crypto/tls/conn.go:151
crypto/tls.(*Conn).closeNotify(0xc000910700)
	/usr/local/go/src/crypto/tls/conn.go:1361 +0xdb
crypto/tls.(*Conn).Close(0xc0000cadb8)
	/usr/local/go/src/crypto/tls/conn.go:1331 +0x69
github.com/denisenkom/go-mssqldb.(*Conn).Close(0xc0000cadd0)
	/src/vendor/github.com/denisenkom/go-mssqldb/mssql.go:361 +0x28
database/sql.(*driverConn).finalClose.func2()
	/usr/local/go/src/database/sql/sql.go:646 +0x3c
database/sql.withLock({0x12eb6e8, 0xc000629e60}, 0xc0000cae88)
	/usr/local/go/src/database/sql/sql.go:3396 +0x8c
database/sql.(*driverConn).finalClose(0xc000629e60)
	/usr/local/go/src/database/sql/sql.go:644 +0x117

Replace leveled logger

This project currently uses a custom implementation of a level-filtering logger.

We want to replace it with the github.com/go-kit/kit/log/level package some day.

[question] how to get a row count?

Hello!

I have set up your exporter with Prometheus and Postgres, and I can see basic metrics being exported using the config provided in the repo. However, when I go to write my own queries, I don't understand how this is done. I started with a few simple examples:

  - name: "msg_count"
    help: "Number of messages in the posts table"
    values:
      - "msg_count"
    query: "SELECT COUNT(*) FROM posts;"
  - name: "connections"
    help: "Active Connections"
    values:
      - "connections_count"
    query: "SELECT sum(numbackends) FROM pg_stat_database;"

The logs look OK, and I can query Prometheus for the stats, though they come back as 0 (I can see the correct values with psql, so I know 0 is wrong).

I assume that I don't have the config correct, but I haven't been able to figure out what I'm doing wrong.
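One likely cause, judging from how the values key is matched against the column names of the result set, is that the queries don't return columns named msg_count and connections_count. A sketch of the same queries with matching column aliases (worth verifying against your setup):

  - name: "msg_count"
    help: "Number of messages in the posts table"
    values:
      - "msg_count"
    query: "SELECT COUNT(*) AS msg_count FROM posts;"
  - name: "connections"
    help: "Active Connections"
    values:
      - "connections_count"
    query: "SELECT SUM(numbackends) AS connections_count FROM pg_stat_database;"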

fatal error: concurrent map iteration and map write

Hello,

I am seeing many random crashes after long periods of running.

Version: 0.2.0
Environment: Linux 4.14/x86_64
go version: go1.8.3

The crash is:

fatal error: concurrent map iteration and map write

Anonymized logs of crash are here:


Jan 12 23:16:25 <HOST> prometheus-sql-exporter[<PID>]: {"caller":"level.go:84"... (a normal event being logged)}
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: fatal error: concurrent map iteration and map write
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 20829277 [running]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: runtime.throw(0x9198cb, 0x26)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/panic.go:596 +0x95 fp=0xc421023db0 sp=0xc421023d90
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: runtime.mapiternext(0xc421023f28)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/hashmap.go:737 +0x7ee fp=0xc421023e60 sp=0xc421023db0
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: runtime.mapiterinit(0x8866a0, 0xc420135ce0, 0xc421023f28)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/hashmap.go:727 +0x2b3 fp=0xc421023eb8 sp=0xc421023e60
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.(*Exporter).Collect(0xc4201350e0, 0xc4210b63c0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/exporter.go:77 +0x100 fp=0xc421023f98 sp=0xc421023eb8
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func2(0xc421249790, 0xc4210b63c0, 0xbcac60, 0xc4201350e0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:382 +0x61 fp=0xc421023fc0 sp=0xc421023f98
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: runtime.goexit()
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc421023fc8 sp=0xc421023fc0
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:383 +0x2ec
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 1 [IO wait, 1735 minutes]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.runtime_pollWait(0x7ffb89c2a700, 0x72, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/netpoll.go:164 +0x59
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).wait(0xc4202e7db8, 0x72, 0x0, 0xc421595d60)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).waitRead(0xc4202e7db8, 0xffffffffffffffff, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*netFD).accept(0xc4202e7d50, 0x0, 0xbc84e0, 0xc421595d60)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_unix.go:430 +0x1e5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*TCPListener).accept(0xc42012c3b0, 0xc4200ad2a0, 0x87e080, 0xffffffffffffffff)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/tcpsock_posix.go:136 +0x2e
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*TCPListener).AcceptTCP(0xc42012c3b0, 0xc42006fbc8, 0xc42006fbd0, 0xc42006fbc0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/tcpsock.go:215 +0x49
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.tcpKeepAliveListener.Accept(0xc42012c3b0, 0x924870, 0xc4200ad220, 0xbcde60, 0xc420345470)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:3044 +0x2f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*Server).Serve(0xc4203182c0, 0xbcd820, 0xc42012c3b0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2643 +0x228
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*Server).ListenAndServe(0xc4203182c0, 0xc4203182c0, 0xc4201286d0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2585 +0xb0
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.ListenAndServe(0x7fffe54dff20, 0x16, 0x0, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2787 +0x7f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.main()
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/main.go:80 +0xa7d
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 17 [syscall, 88007 minutes, locked to thread]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: runtime.goexit()
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/asm_amd64.s:2197 +0x1
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 19 [chan receive]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.(*Job).runOnce(0xc42013f800, 0x312c8110, 0xc01540)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/job.go:167 +0x10a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.(*Job).(main.runOnce)-fm(0xc4210c6000, 0xc420061d48)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/job.go:116 +0x2a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/cenkalti/backoff.RetryNotify(0xc420061ea8, 0xbca8a0, 0xc4210c6000, 0x0, 0xc1da20, 0xc4210c6000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/cenkalti/backoff/retry.go:32 +0x3f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/cenkalti/backoff.Retry(0xc420061ea8, 0xbca8a0, 0xc4210c6000, 0x4, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/cenkalti/backoff/retry.go:22 +0x48
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.(*Job).Run(0xc42013f800)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/job.go:116 +0x466
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by main.NewExporter
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/exporter.go:42 +0x20c
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 8 [chan receive, 88007 minutes]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: database/sql.(*DB).connectionOpener(0xc4200acfa0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/database/sql/sql.go:837 +0x4a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by database/sql.Open
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/database/sql/sql.go:582 +0x212
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 19552027 [IO wait]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.runtime_pollWait(0x7ffb89c2a280, 0x72, 0x8)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/netpoll.go:164 +0x59
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).wait(0xc4200ba148, 0x72, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).waitRead(0xc4200ba148, 0xc42074a000, 0x1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*netFD).Read(0xc4200ba0e0, 0xc42074a000, 0x1000, 0x1000, 0x0, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_unix.go:250 +0x1b7
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*conn).Read(0xc42118a000, 0xc42074a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/net.go:181 +0x70
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*connReader).Read(0xc4205ba040, 0xc42074a000, 0x1000, 0x1000, 0x1b80, 0x8c81c0, 0xc420648c01)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:754 +0x140
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).fill(0xc42108a060)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:97 +0x117
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadSlice(0xc42108a060, 0xa, 0xf, 0xe, 0xc4200719f8, 0x410096, 0x7ffb89cc6138)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:338 +0xbb
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadLine(0xc42108a060, 0xc420090e00, 0x100, 0xf8, 0x8f9520, 0xc420648c01, 0x17ffb89cc1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:367 +0x37
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).readLineSlice(0xc420a94420, 0xc420071ac8, 0xc420071ac8, 0x410df8, 0x100, 0x8f9520)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:55 +0x5f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).ReadLine(0xc420a94420, 0xc420090e00, 0x72, 0x8000000000000000, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:36 +0x2f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.readRequest(0xc42108a060, 0x0, 0xc420090e00, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/request.go:918 +0xa5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).readRequest(0xc4200ac3c0, 0xbcdda0, 0xc4205ba000, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:934 +0x213
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).serve(0xc4200ac3c0, 0xbcdda0, 0xc4205ba000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:1763 +0x49a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by net/http.(*Server).Serve
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2668 +0x2ce
Jan 12 23:16:26 <HOST> systemd[1]: prometheus-sql-exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 20829272 [IO wait]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.runtime_pollWait(0x7ffb89c2a580, 0x72, 0x7)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/netpoll.go:164 +0x59
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).wait(0xc4200ba0d8, 0x72, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).waitRead(0xc4200ba0d8, 0xc42084e611, 0x1)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*netFD).Read(0xc4200ba070, 0xc42084e611, 0x1, 0x1, 0x0, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_unix.go:250 +0x1b7
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*conn).Read(0xc42012c0b0, 0xc42084e611, 0x1, 0x1, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/net.go:181 +0x70
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*connReader).backgroundRead(0xc42084e600)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:656 +0x58
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by net/http.(*connReader).startBackgroundRead
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:652 +0xdf
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 20298495 [runnable]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.checkMetricConsistency(0xc420d6ab40, 0xc4213dfa40, 0xc420a94540, 0xc420a94570, 0xc42010bd01, 0x1)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:696 +0x337
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather(0xc420012940, 0x0, 0x0, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:494 +0xa8a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp.HandlerFor.func1(0xbcd4a0, 0xc4208280e0, 0xc42000b700)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/promhttp/http.go:82 +0x4c
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.HandlerFunc.ServeHTTP(0xc42013bd40, 0xbcd4a0, 0xc4208280e0, 0xc42000b700)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:1942 +0x44
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*ServeMux).ServeHTTP(0xc01280, 0xbcd4a0, 0xc4208280e0, 0xc42000b700)
Jan 12 23:16:26 <HOST> systemd[1]: prometheus-sql-exporter.service: Unit entered failed state.
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2238 +0x130
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.serverHandler.ServeHTTP(0xc4203182c0, 0xbcd4a0, 0xc4208280e0, 0xc42000b700)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2568 +0x92
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).serve(0xc4200ad220, 0xbcdda0, 0xc42084e5c0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:1825 +0x612
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by net/http.(*Server).Serve
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2668 +0x2ce
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 4546870 [select]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: database/sql.(*DB).connectionCleaner(0xc4200acfa0, 0x1f3305bc00)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/database/sql/sql.go:759 +0x59b
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by database/sql.(*DB).startCleanerLocked
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/database/sql/sql.go:746 +0xb5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 18957801 [IO wait]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.runtime_pollWait(0x7ffb89c2a1c0, 0x72, 0x6)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/netpoll.go:164 +0x59
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).wait(0xc4201380d8, 0x72, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).waitRead(0xc4201380d8, 0xc4211e5000, 0x1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*netFD).Read(0xc420138070, 0xc4211e5000, 0x1000, 0x1000, 0x0, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_unix.go:250 +0x1b7
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*conn).Read(0xc42012c000, 0xc4211e5000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/net.go:181 +0x70
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*connReader).Read(0xc4214ec140, 0xc4211e5000, 0x1000, 0x1000, 0x1b80, 0x8c81c0, 0xc420084401)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:754 +0x140
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).fill(0xc420506000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:97 +0x117
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadSlice(0xc420506000, 0xc42095a00a, 0xc420cd11e0, 0x924e98, 0xc420960840, 0x20f459c0, 0xc420f45a28)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:338 +0xbb
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadLine(0xc420506000, 0xc420091c00, 0x100, 0xf8, 0x8f9520, 0xc420084400, 0x7ffb89cc1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:367 +0x37
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).readLineSlice(0xc4205501b0, 0xc420f45ac8, 0xc420f45ac8, 0x410df8, 0x100, 0x8f9520)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:55 +0x5f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).ReadLine(0xc4205501b0, 0xc420091c00, 0x72, 0x8000000000000000, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:36 +0x2f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.readRequest(0xc420506000, 0x0, 0xc420091c00, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/request.go:918 +0xa5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).readRequest(0xc4201200a0, 0xbcdda0, 0xc4214ec100, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:934 +0x213
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).serve(0xc4201200a0, 0xbcdda0, 0xc4214ec100)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:1763 +0x49a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by net/http.(*Server).Serve
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2668 +0x2ce
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 19516641 [IO wait]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.runtime_pollWait(0x7ffb89c2a100, 0x72, 0x9)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/netpoll.go:164 +0x59
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).wait(0xc420138228, 0x72, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:75 +0x38
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*pollDesc).waitRead(0xc420138228, 0xc420a92000, 0x1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_poll_runtime.go:80 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*netFD).Read(0xc4201381c0, 0xc420a92000, 0x1000, 0x1000, 0x0, 0xbc9aa0, 0xbc5610)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/fd_unix.go:250 +0x1b7
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net.(*conn).Read(0xc42118a208, 0xc420a92000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/net.go:181 +0x70
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*connReader).Read(0xc420fc7440, 0xc420a92000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:754 +0x140
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).fill(0xc420adc060)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:97 +0x117
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadSlice(0xc420adc060, 0xa, 0x9, 0x8, 0xc42006d9f8, 0x410096, 0x7ffb89c67998)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:338 +0xbb
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: bufio.(*Reader).ReadLine(0xc420adc060, 0xc420492800, 0x100, 0xf8, 0x8f9520, 0xc420145401, 0x17ffb89cc1000)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/bufio/bufio.go:367 +0x37
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).readLineSlice(0xc420551920, 0xc42006dac8, 0xc42006dac8, 0x410df8, 0x100, 0x8f9520)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:55 +0x5f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/textproto.(*Reader).ReadLine(0xc420551920, 0xc420492800, 0x72, 0x8000000000000000, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/textproto/reader.go:36 +0x2f
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.readRequest(0xc420adc060, 0x0, 0xc420492800, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/request.go:918 +0xa5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).readRequest(0xc420120320, 0xbcdda0, 0xc420fc7400, 0x0, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:934 +0x213
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: net/http.(*conn).serve(0xc420120320, 0xbcdda0, 0xc420fc7400)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:1763 +0x49a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by net/http.(*Server).Serve
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/net/http/server.go:2668 +0x2ce
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 20828607 [runnable]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: encoding/json.(*Encoder).Encode(0xc420f49c80, 0x8886e0, 0xc4212005a0, 0x864180, 0xc420b27140)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/encoding/json/stream.go:188 +0x386
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log.(*jsonLogger).Log(0xc420128700, 0xc4201083c0, 0xc, 0xc, 0xc420108300, 0x8)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log/json_logger.go:34 +0x180
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log.(*context).Log(0xc4201347e0, 0xc420108300, 0x8, 0xc, 0xc4206ee0c0, 0xc420648c00)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log/log.go:124 +0x1ed
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log/level.(*logger).Log(0xc42013a480, 0xc420108300, 0x8, 0xc, 0xc42102a480, 0x2)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log/level/level.go:84 +0xa5
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log.(*context).Log(0xc421200570, 0xc42102a480, 0x2, 0x2, 0x0, 0x0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/go-kit/kit/log/log.go:124 +0x1ed
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: main.(*Job).runOnceConnection(0xc42013f800, 0xc420016690, 0xc4200ba1c0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/job.go:151 +0x652
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by main.(*Job).runOnce
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/job.go:161 +0x96
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: goroutine 20829273 [semacquire]:
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: sync.runtime_Semacquire(0xc42124979c)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/runtime/sema.go:47 +0x34
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: sync.(*WaitGroup).Wait(0xc421249790)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /usr/local/go/src/sync/waitgroup.go:131 +0x7a
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1(0xc421249790, 0xc4210b63c0)
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:376 +0x2b
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]: created by github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).Gather
Jan 12 23:16:26 <HOST> prometheus-sql-exporter[<PID>]:         /build/prometheus-sql-exporter/tmp/build/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:378 +0x244

Thanks!

I have one question about MySQL user query data in Prometheus.

Hi, this is BcKim.

I have one question about MySQL user query data in Prometheus.

I need to check processlist information on a timeline.

But PMM can't provide processlist information together with the SQL text data.

So I found this:

-- "mysqld_exporter/collector/perf_schema_events_statements"

This collector can gather MySQL user query data.

This is the gathering query:

-- "perf_schema_events_statements" gather query
const perfEventsStatementsQuery = "
SELECT
ifnull(SCHEMA_NAME, 'NONE') as SCHEMA_NAME,
DIGEST,
LEFT(DIGEST_TEXT, %d) as DIGEST_TEXT,
COUNT_STAR,
SUM_TIMER_WAIT,
SUM_ERRORS,
SUM_WARNINGS,
SUM_ROWS_AFFECTED,
SUM_ROWS_SENT,
SUM_ROWS_EXAMINED,
SUM_CREATED_TMP_DISK_TABLES,
SUM_CREATED_TMP_TABLES,
SUM_SORT_MERGE_PASSES,
SUM_SORT_ROWS,
SUM_NO_INDEX_USED
FROM (
SELECT *
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql', 'performance_schema', 'information_schema')
AND LAST_SEEN > DATE_SUB(NOW(), INTERVAL %d SECOND)
ORDER BY LAST_SEEN DESC
)Q
GROUP BY
Q.SCHEMA_NAME,
Q.DIGEST,
Q.DIGEST_TEXT,
Q.COUNT_STAR,
Q.SUM_TIMER_WAIT,
Q.SUM_ERRORS,
Q.SUM_WARNINGS,
Q.SUM_ROWS_AFFECTED,
Q.SUM_ROWS_SENT,
Q.SUM_ROWS_EXAMINED,
Q.SUM_CREATED_TMP_DISK_TABLES,
Q.SUM_CREATED_TMP_TABLES,
Q.SUM_SORT_MERGE_PASSES,
Q.SUM_SORT_ROWS,
Q.SUM_NO_INDEX_USED
ORDER BY SUM_TIMER_WAIT DESC
LIMIT %d
"

As you can see, this collector gathers the SQL text data.

But I couldn't find the MySQL user query data in Prometheus.

Please let me know how to find the SQL text data in Prometheus.

Thanks.

Exporter blocks drop-database

Hi,

as the exporter keeps its connection open between scrapes, it becomes harder to drop an unused database.

I feel that disconnecting once a scrape is finished serves multiple purposes. It becomes quite obvious to the monitoring once no more connections can be established, and since these connections are not permanent, it becomes more feasible to use this exporter in many-database and autodiscovery scenarios. Last but not least, DROP DATABASE becomes easier, as one is currently required to manually remove the monitoring connections first.

If required, I'll provide the patch. Would this approach be acceptable?

`allow_zero_rows` does not work

Regarding #39

Setting allow_zero_rows to true or false should help when a query returns no rows. I was using an old version, 0.4.0, that didn't include the patch mentioned above.
Now I am using:

https://apt.postgresql.org/pub/repos/apt/pool/main/p/prometheus-sql-exporter/prometheus-sql-exporter_0.4.5-1.pgdg22.04+1_amd64.deb

But I still don't see any difference in the behavior.

The error is still the same:
{"caller":"job.go:205","err":"zero rows returned","job":"lindeconnect_hr","level":"warn","msg":"Failed to run query","query":"corrupt_cus_portal","ts":"2022-11-21T14:36:25.857520723Z"}
It doesn't matter whether I set it to true or false. I would have expected the metric to be scraped with a value of zero.

An example query I've used looks like this:

  - name: "corrupt_cus_portal"
    help: "corrupt portal cus"
    values:
      - "portal"
    query:  |
            SELECT
              portal
            FROM
              corrupt_cu_statistic ccs
            WHERE
              dc::date = now()::date limit 1;
    allow_zero_rows: true

How should that work?

Support new postgres features

Hello!

Currently pq is no longer being developed and does not support any new Postgres features like new environment variables, parameters, and so on.

Right now I'm trying to deploy the exporter in Kubernetes and want to collect metrics from the primary, but I have 3 PG instances and Patroni, so I don't know which node is the primary and which are replicas, so I tried to use the target_session_attrs parameter. But pq doesn't support it, because the MR hasn't been merged for 5 years...

If I knew Go, I would create an MR to migrate to pgx, which is currently under active development and supports all the new features.

Thanks.

Missing /src/vendor/net/netip

This is the error I am getting while building:

>> formatting code
>> building binaries
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/azcore: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/azcore/policy: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
	imports github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/exported
	imports github.com/Azure/azure-sdk-for-go/sdk/internal/log: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror
package github.com/justwatchcom/sql_exporter
	imports github.com/snowflakedb/gosnowflake
	imports github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container: build constraints exclude all Go files in /src/vendor/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container
vendor/github.com/ClickHouse/ch-go/proto/ipv4.go:5:2: cannot find package "." in:
	/src/vendor/net/netip
make: *** [Makefile:43: build] Error 1

EDIT: Upgrade Go to at least 1.20 to fix this.

Feature Request: Detecting failed scrapes

I'm working on an application where we're using the SQL exporter to get a valuable metric for our use case: the time elapsed since a user inserted some data into a database table. It's very straightforward and we really like the exporter!

However, when the database goes down (as part of our tests), I noticed that the metrics get "stuck". The sql_exporter keeps reporting that the queries failed, but the metrics are still present.

I think that continuing to report the last value is good (consistent with the overall Prometheus philosophy), but it would be great to have a boolean metric indicating failures, plus the execution time.
Something similar to what is done by both mysqld_exporter and postgres_exporter.

They have two metrics:

  • pg/mysqld_exporter_last_scrape_error
  • pg/mysqld_exporter_last_scrape_duration_seconds

We would really like to have something similar with sql_exporter, such as:
sql_exporter_last_scrape_failed{database="my_database", driver="postgres", query="my_query_name",... } := (True/False)

What do you think it'd take to get there?

Thanks in advance for your help and any pointers you may provide.

Metric label collision

I want to select user-level stats from a database, and naturally I added user as a label for my metrics. Unfortunately, sql_exporter crashed immediately, because it adds a number of labels of its own:

panic: descriptor Desc{fqName: "sql_clickhouse_user_query_read_rows", help: "Rows read in queries", constLabels: {}, variableLabels: [user type driver host database user col]} is invalid: duplicate label names

goroutine 1 [running]:
github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc420044880, 0xc4202ac850, 0x1, 0x1)
	/Users/bobrik/projects/sql_exporter/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:353 +0x92
github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(0xc4202ac850, 0x1, 0x1)
	/Users/bobrik/projects/sql_exporter/src/github.com/justwatchcom/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus/registry.go:152 +0x53
main.main()
	/Users/bobrik/projects/sql_exporter/src/github.com/justwatchcom/sql_exporter/main.go:63 +0x7a5

Relabeling later in Prometheus seems cumbersome. I'm not sure what the cleanest way forward is here.
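One possible workaround (hedged; it sidesteps the collision rather than fixing it) is to alias the column so the user-defined label no longer clashes with the exporter's built-in user label, since labels are taken from result column names. A sketch against ClickHouse's system.query_log, whose exact columns should be double-checked:

  - name: "user_query_read_rows"
    help: "Rows read in queries"
    labels:
      - "ch_user"
    values:
      - "read_rows"
    query: |
            SELECT user AS ch_user, sum(read_rows) AS read_rows
            FROM system.query_log
            GROUP BY user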

Connection properties is not working without password

Hello Team,

I have configured the connection below in the ConfigMap, but it is not working:

  connections:
  - 'postgres://postgres_exporter@servername/postgres?sslmode=disable'

If I pass the password in the connection string, it works:

  connections:
  - 'postgres://postgres_exporter:**password**@servername/postgres?sslmode=disable'  

I have configured the secret and provided the correct password, but it is not working.

Please advise.

Is there any way to configure multiple server connections in different ConfigMaps?
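If the goal is to keep the password out of the ConfigMap, the template-string mechanism described in the "ConfigMap with template strings" issue above may help; a sketch with a placeholder variable name, assuming the secret is exposed to the pod as an environment variable:

  connections:
    - 'postgres://postgres_exporter:{{PG_EXPORTER_PASS}}@servername/postgres?sslmode=disable'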

Any chances to get a primary/standby label into the metrics?

I am running the SQL exporter on a PostgreSQL cluster, and by default my collected metrics get several labels added.
Among them is a hostname label, which makes it easy to work with the values of the primary PostgreSQL cluster node, but only if the primary role is always running on the same node. This works, of course, because I know that host XYZ is usually the primary host.

But during a failover or maintenance, the primary role could be on another host, so I would have to change all my queries in Alertmanager or Grafana to point to another hostname, which of course makes no sense.
For these cases, it would be awesome to have a label in the metrics that reflects whether the metric comes from a primary or a standby/potential primary node.

For example the query:

postgres=# select pg_is_in_recovery();
pg_is_in_recovery
-------------------
t
(1 row)

would identify whether this host is a primary or not. It returns true if recovery is still in progress (i.e. the server is running as a standby/replica).

Is there a way to get this label into a query definition?
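One pattern that may help, since metric values must be numeric while labels come from text columns in the result set, is to expose the recovery state as its own gauge and join on it in PromQL, or to select it as an additional text column and list it under labels. A sketch of the first variant:

  - name: "in_recovery"
    help: "1 if the instance is in recovery (standby), 0 if it is a primary"
    values:
      - "in_recovery"
    query: "SELECT (CASE WHEN pg_is_in_recovery() THEN 1 ELSE 0 END) AS in_recovery"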

SSL support

Does the SQL exporter support secured PostgreSQL connections?
If so, how do I add the required .crt?
Thanks,
Chanan
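For what it's worth, lib/pq accepts the usual SSL parameters directly in the connection URL, so a connection entry along these lines should work (host, credentials, and the certificate path are placeholders; check the lib/pq documentation for the full parameter list):

  connections:
    - 'postgres://exporter:password@db-host:5432/postgres?sslmode=verify-full&sslrootcert=/etc/ssl/certs/db-ca.crt'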

Requesting to add Vertica driver support

Hi

I added Vertica driver support locally and tests are in progress.
The only thing left to do is to put the Vertica Go driver under vendor/github.com.
It would be great for this to be part of the exporter.

Thank you

Cannot connect to clickhouse

Hi!
I'm trying to connect to my ClickHouse instance. I put this in the YAML:

jobs:
- name: "example"
  # ...
  connections:
  - 'clickhouse://default:password@host:9000'

I then run LOGLEVEL=debug sql_exporter and get this output:

...
{"caller":"job.go:174","job":"example","level":"debug","msg":"Starting","ts":"2022-05-19T15:53:31.482395619Z"}
{"caller":"job.go:190","err":"code: 516, message: default: Authentication failed: password is incorrect or there is no user with such na
me","job":"example","level":"warn","msg":"Failed to connect","ts":"2022-05-19T15:53:31.583273404Z"}                                     {"caller":"job.go:190","err":"code: 516, message: default: Authentication failed: password is incorrect or there is no user with such na
me","job":"example","level":"warn","msg":"Failed to connect","ts":"2022-05-19T15:53:32.227874497Z"}   
...

Is it possible for me to debug this somehow?
I checked, and it works with clickhouse-client:

clickhouse-client --host host --password password --user default --port 9000

What might be the problem here?

Support for multiple configuration files

It would be nice to be able to add multiple configuration files. Then it would be possible to store these configs together with the configuration of the database from which the metrics are collected.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql-exporter
spec:
...
  containers:
  - name: sql-exporter
    env:
    - name: CONFIG_FOLDER
      value: /config
...
    volumeMounts:
    - mountPath: /config
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: sql-exporter-config
 ...
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
...
configMapGenerator:
- name: sql-exporter-config
  files:
  - config1.yaml
  - config2.yaml
 ...

Consider supporting Counter type

We should consider supporting Counter type metrics. In general I'd like to keep the SQL Exporter as simple as possible, but it may make sense to support Counters.

Disable "level":"debug"

Is there any way to disable the debug-level logs and only enable warn and info?

{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"running_queries","ts":"2017-04-04T10:10:40.807686131Z","version":""}
{"caller":"job.go:130","commit":"","job":"global","level":"debug","msg":"Query finished","name":"prom-sql-exporter","query":"running_queries","ts":"2017-04-04T10:10:40.844363504Z","version":""}
{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"db_sizes","ts":"2017-04-04T10:10:40.844430868Z","version":""}
{"caller":"job.go:130","commit":"","job":"global","level":"debug","msg":"Query finished","name":"prom-sql-exporter","query":"db_sizes","ts":"2017-04-04T10:10:40.861593873Z","version":""}
{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"replication_lag","ts":"2017-04-04T10:10:40.861646604Z","version":""}
{"caller":"job.go:130","commit":"","job":"global","level":"debug","msg":"Query finished","name":"prom-sql-exporter","query":"replication_lag","ts":"2017-04-04T10:10:40.884981587Z","version":""}
{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"pg_stat_user_tables","ts":"2017-04-04T10:10:40.885030098Z","version":""}
{"caller":"job.go:130","commit":"","job":"global","level":"debug","msg":"Query finished","name":"prom-sql-exporter","query":"pg_stat_user_tables","ts":"2017-04-04T10:10:40.917357515Z","version":""}
{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"pg_statio_user_tables","ts":"2017-04-04T10:10:40.91742497Z","version":""}
{"caller":"job.go:130","commit":"","job":"global","level":"debug","msg":"Query finished","name":"prom-sql-exporter","query":"pg_statio_user_tables","ts":"2017-04-04T10:10:40.938580153Z","version":""}
{"caller":"job.go:124","commit":"","job":"global","level":"debug","msg":"Running Query","name":"prom-sql-exporter","query":"pg_number_of_slow_queries","ts":"2017-04-04T10:10:40.938621651Z","version":""}

Thanks for this cool repo.
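For what it's worth, other issues in this list start the exporter with LOGLEVEL=debug, so setting that environment variable to a higher level is likely what you want; a sketch for a container spec (the accepted level names should be verified against the README):

    env:
      - name: LOGLEVEL
        value: "info"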

0.3 mysql error: Failed to parse URL

Using a MySQL URL like mysql://user:password@tcp(db.query.consul:3306)/db causes an error:

{"caller":"level.go:63","err":"parse mysql://user:password@tcp(db.query.consul:3306)/db: invalid port \":3306)\" after host","job":"de_bulkimport_checks","level":"error","msg":"Failed to parse URL","ts":"2020-03-27T19:26:45.072425282Z","url":"mysql://user:password@tcp(db.query.consul:3306)/db"}

The previous version, 0.2, worked fine.
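Since the connection strings are parsed as URLs (the xo/dburl package is referenced elsewhere in these issues), the native go-sql-driver tcp(...) syntax no longer parses. A plain URL form like the following sketch is what a URL parser expects (credentials and host are placeholders):

  connections:
    - 'mysql://user:password@db.query.consul:3306/db'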

Improve test coverage

At least the internals should be covered.

I'm not sure about integration tests connecting to databases. They would be nice, but might be error-prone.

How to edit /metrics page.

When I open the /metrics page, I see many values about duration, memory, and so on, and the metrics related to the queries in config.yaml are located at the end of the page. But I want to see only the metrics related to these queries. Can you tell me how to edit what is shown on the metrics page?
