
mysqld_exporter's Introduction

MySQL Server Exporter

Prometheus exporter for MySQL server metrics.

Supported versions:

  • MySQL >= 5.6
  • MariaDB >= 10.3

NOTE: Not all collection methods are supported on MySQL/MariaDB < 5.6

Building and running

Required Grants

CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'XXXXXXXX' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';

NOTE: It is recommended to set a max connection limit for the user to avoid overloading the server with monitoring scrapes under heavy load. This is not supported on all MySQL/MariaDB versions; for example, MariaDB 10.1 (provided with Ubuntu 18.04) does not support this feature.

Build

make build

Running

Single exporter mode

Running using .my.cnf from the current directory:

./mysqld_exporter <flags>
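For reference, a minimal .my.cnf matching the exporter user created under "Required Grants" might look like the following (the host and port values shown are illustrative defaults, not mandatory keys):

```ini
[client]
user = exporter
password = XXXXXXXX
host = localhost
port = 3306
```
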
Multi-target support

This exporter supports the multi-target pattern. This allows running a single instance of this exporter for multiple MySQL targets.

To use the multi-target functionality, send an HTTP request to the endpoint /probe?target=foo:3306, where target is set to the DSN of the MySQL instance to scrape metrics from.

To avoid putting sensitive information like the username and password in the URL, you can define multiple configuration sections in the config.my-cnf file and select one by adding &auth_module=<section> to the request.
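As a sketch, the probe URL for a given target and auth module can be assembled like this (localhost:9104 is the exporter's default listen port; the server name and section name are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// probeURL builds the multi-target scrape URL for a given MySQL target and
// config section. Host names and section names here are illustrative.
func probeURL(exporter, target, authModule string) string {
	u := url.URL{Scheme: "http", Host: exporter, Path: "/probe"}
	q := url.Values{}
	q.Set("target", target)
	if authModule != "" {
		q.Set("auth_module", authModule)
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(probeURL("localhost:9104", "server1:3306", "client.servers"))
	// → http://localhost:9104/probe?auth_module=client.servers&target=server1%3A3306
}
```

Note that the colon in the target is percent-encoded by url.Values; the exporter decodes it back when parsing the query string.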

Sample config file for multiple configurations

    [client]
    user = foo
    password = foo123
    [client.servers]
    user = bar
    password = bar123

On the Prometheus side you can set up a scrape config as follows:

    - job_name: mysql # To get metrics about the mysql exporter’s targets
      params:
    # Not required. Matches the value to a section in the config file. Default value is `client`.
        auth_module: [client.servers]
      static_configs:
        - targets:
          # All mysql hostnames or unix sockets to monitor.
          - server1:3306
          - server2:3306
          - unix:///run/mysqld/mysqld.sock
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          # The mysqld_exporter host:port
          replacement: localhost:9104
Flag format

Example format for flags for version > 0.10.0:

--collect.auto_increment.columns
--no-collect.auto_increment.columns

Example format for flags for version <= 0.10.0:

-collect.auto_increment.columns
-collect.auto_increment.columns=[true|false]

Collector Flags

Name MySQL Version Description
collect.auto_increment.columns 5.1 Collect auto_increment columns and max values from information_schema.
collect.binlog_size 5.1 Collect the current size of all registered binlog files
collect.engine_innodb_status 5.1 Collect from SHOW ENGINE INNODB STATUS.
collect.engine_tokudb_status 5.6 Collect from SHOW ENGINE TOKUDB STATUS.
collect.global_status 5.1 Collect from SHOW GLOBAL STATUS (Enabled by default)
collect.global_variables 5.1 Collect from SHOW GLOBAL VARIABLES (Enabled by default)
collect.heartbeat 5.1 Collect from heartbeat.
collect.heartbeat.database 5.1 Database from where to collect heartbeat data. (default: heartbeat)
collect.heartbeat.table 5.1 Table from where to collect heartbeat data. (default: heartbeat)
collect.heartbeat.utc 5.1 Use UTC for timestamps of the current server (pt-heartbeat is called with --utc). (default: false)
collect.info_schema.clientstats 5.5 If running with userstat=1, set to true to collect client statistics.
collect.info_schema.innodb_metrics 5.6 Collect metrics from information_schema.innodb_metrics.
collect.info_schema.innodb_tablespaces 5.7 Collect metrics from information_schema.innodb_sys_tablespaces.
collect.info_schema.innodb_cmp 5.5 Collect InnoDB compressed tables metrics from information_schema.innodb_cmp.
collect.info_schema.innodb_cmpmem 5.5 Collect InnoDB buffer pool compression metrics from information_schema.innodb_cmpmem.
collect.info_schema.processlist 5.1 Collect thread state counts from information_schema.processlist.
collect.info_schema.processlist.min_time 5.1 Minimum time a thread must be in each state to be counted. (default: 0)
collect.info_schema.query_response_time 5.5 Collect query response time distribution if query_response_time_stats is ON.
collect.info_schema.replica_host 5.6 Collect metrics from information_schema.replica_host_status.
collect.info_schema.tables 5.1 Collect metrics from information_schema.tables.
collect.info_schema.tables.databases 5.1 The list of databases to collect table stats for, or '*' for all.
collect.info_schema.tablestats 5.1 If running with userstat=1, set to true to collect table statistics.
collect.info_schema.schemastats 5.1 If running with userstat=1, set to true to collect schema statistics.
collect.info_schema.userstats 5.1 If running with userstat=1, set to true to collect user statistics.
collect.mysql.user 5.5 Collect data from mysql.user table
collect.perf_schema.eventsstatements 5.6 Collect metrics from performance_schema.events_statements_summary_by_digest.
collect.perf_schema.eventsstatements.digest_text_limit 5.6 Maximum length of the normalized statement text. (default: 120)
collect.perf_schema.eventsstatements.limit 5.6 Limit the number of events statements digests by response time. (default: 250)
collect.perf_schema.eventsstatements.timelimit 5.6 Limit how old the 'last_seen' events statements can be, in seconds. (default: 86400)
collect.perf_schema.eventsstatementssum 5.7 Collect metrics from performance_schema.events_statements_summary_by_digest summed.
collect.perf_schema.eventswaits 5.5 Collect metrics from performance_schema.events_waits_summary_global_by_event_name.
collect.perf_schema.file_events 5.6 Collect metrics from performance_schema.file_summary_by_event_name.
collect.perf_schema.file_instances 5.5 Collect metrics from performance_schema.file_summary_by_instance.
collect.perf_schema.file_instances.remove_prefix 5.5 Remove path prefix in performance_schema.file_summary_by_instance.
collect.perf_schema.indexiowaits 5.6 Collect metrics from performance_schema.table_io_waits_summary_by_index_usage.
collect.perf_schema.memory_events 5.7 Collect metrics from performance_schema.memory_summary_global_by_event_name.
collect.perf_schema.memory_events.remove_prefix 5.7 Remove instrument prefix in performance_schema.memory_summary_global_by_event_name.
collect.perf_schema.tableiowaits 5.6 Collect metrics from performance_schema.table_io_waits_summary_by_table.
collect.perf_schema.tablelocks 5.6 Collect metrics from performance_schema.table_lock_waits_summary_by_table.
collect.perf_schema.replication_group_members 5.7 Collect metrics from performance_schema.replication_group_members.
collect.perf_schema.replication_group_member_stats 5.7 Collect metrics from performance_schema.replication_group_member_stats.
collect.perf_schema.replication_applier_status_by_worker 5.7 Collect metrics from performance_schema.replication_applier_status_by_worker.
collect.slave_status 5.1 Collect from SHOW SLAVE STATUS (Enabled by default)
collect.slave_hosts 5.1 Collect from SHOW SLAVE HOSTS
collect.sys.user_summary 5.7 Collect metrics from sys.x$user_summary (disabled by default).

General Flags

Name Description
mysqld.address Hostname and port used for connecting to MySQL server, format: host:port. (default: localhost:3306)
mysqld.username Username to be used for connecting to MySQL Server
config.my-cnf Path to .my.cnf file to read MySQL credentials from. (default: ~/.my.cnf)
log.level Logging verbosity (default: info)
exporter.lock_wait_timeout Set a lock_wait_timeout (in seconds) on the connection to avoid long metadata locking. (default: 2)
exporter.log_slow_filter Add a log_slow_filter to avoid slow query logging of scrapes. NOTE: Not supported by Oracle MySQL.
tls.insecure-skip-verify Ignore tls verification errors.
web.config.file Path to a web configuration file
web.listen-address Address to listen on for web interface and telemetry.
web.telemetry-path Path under which to expose metrics.
version Print the version information.

Environment Variables

Name Description
MYSQLD_EXPORTER_PASSWORD Password to be used for connecting to MySQL Server

Configuration precedence

If you configure the exporter with both mysqld.* command-line flags and a valid configuration file, the options in the configuration file's [client] section override the flags.
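The precedence can be sketched as a simple "file wins over flag" resolution (this is an illustration of the documented behavior, not the exporter's actual code; the option values are hypothetical):

```go
package main

import "fmt"

// resolve returns the effective option value: a value from the configuration
// file's [client] section wins over the corresponding command-line flag.
func resolve(flagValue, fileValue string) string {
	if fileValue != "" {
		return fileValue
	}
	return flagValue
}

func main() {
	// Flag says one host, the config file says another: the file wins.
	fmt.Println(resolve("flag-host:3306", "file-host:3306")) // → file-host:3306
	// With no file value, the flag applies.
	fmt.Println(resolve("flag-host:3306", "")) // → flag-host:3306
}
```
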

TLS and basic authentication

The mysqld_exporter supports TLS and basic authentication.

To use TLS and/or basic authentication, you need to pass a configuration file using the --web.config.file parameter. The format of the file is described in the exporter-toolkit repository.

Customizing Configuration for an SSL Connection

If the MySQL server supports SSL, you may need to specify a CA truststore to verify the server's chain of trust. You may also need to specify an SSL key pair for the client side of the SSL connection. To configure the mysqld_exporter to use a custom CA certificate, add the following to the MySQL cnf file:

ssl-ca=/path/to/ca/file

To specify the client SSL key pair, add the following to the cnf file:

ssl-key=/path/to/ssl/client/key
ssl-cert=/path/to/ssl/client/cert

Using Docker

You can deploy this exporter using the prom/mysqld-exporter Docker image.

For example:

docker network create my-mysql-network
docker pull prom/mysqld-exporter

docker run -d \
  -p 9104:9104 \
  --network my-mysql-network \
  prom/mysqld-exporter \
  --config.my-cnf=<path_to_cnf>

heartbeat

With collect.heartbeat enabled, mysqld_exporter will scrape replication delay measured by heartbeat mechanisms. pt-heartbeat is the reference heartbeat implementation supported.

Filtering enabled collectors

The mysqld_exporter will expose all metrics from enabled collectors by default. This is the recommended way to collect metrics to avoid errors when comparing metrics of different families.

For advanced use, the mysqld_exporter can be passed an optional list of collectors to filter metrics. The collect[] parameter may be used multiple times. In the Prometheus configuration you can use this syntax under the scrape config:

params:
  collect[]:
  - foo
  - bar

This can be useful for having different Prometheus servers collect specific metrics from targets.

Example Rules

There is a set of sample rules, alerts and dashboards available in the mysqld-mixin

mysqld_exporter's People

Contributors

adivinho, aleksi, arvenil, beorn7, blkperl, brian-brazil, dafydd-t, dependabot[bot], eugenechertikhin, fabxc, grobie, hateeyan, juliusv, maeserichar, mfouilleul, mmiller1, peterloeffler, prombot, rgeyer, roidelapluie, roman-vynar, rtreffer, sdurrheimer, siavashs, simonpasquier, soara, superq, tomwilkie, winfredwz, wrouesnel


mysqld_exporter's Issues

Import data from INNODB_SYS_TABLESPACES

Hi,

This is available in MySQL 5.7

mysql> select * from INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES where name='sbinnodb/sbtest1' \G
*************************** 1. row ***************************
SPACE: 42
NAME: sbinnodb/sbtest1
FLAG: 33
FILE_FORMAT: Barracuda
ROW_FORMAT: Dynamic
PAGE_SIZE: 16384
ZIP_PAGE_SIZE: 0
SPACE_TYPE: Single
FS_BLOCK_SIZE: 4096
FILE_SIZE: 245937209344
ALLOCATED_SIZE: 245937266688
1 row in set (0.00 sec)

FILE_SIZE in this table is especially valuable because it is updated in real time, unlike INFORMATION_SCHEMA.TABLES, which is updated periodically and is sometimes several GB off.

Pushing selected metrics from SHOW VARIABLES

Hello,

It would be great to push the numeric metrics from SHOW VARIABLES in order to use them in query expressions.

Some examples:

  • thread_cache_size - useful to have a max size on the graph as a baseline along with the existing threads_cached and threads_created from SHOW GLOBAL STATUS
  • max_connections - to calculate usage % for alerting, e.g. connections / max_connections * 100%
  • read_only ON/OFF translated into 1/0 - you may want to see when a MySQL node was in read-only mode
  • table_open_cache
  • query_cache_size
  • thread_pool_size
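The max_connections usage calculation above could drive a Prometheus alerting rule along these lines (a sketch assuming the exporter's mysql_global_status_* / mysql_global_variables_* metric naming; the 80% threshold is illustrative):

```yaml
groups:
  - name: mysql
    rules:
      - alert: MySQLConnectionsNearLimit
        expr: >
          mysql_global_status_threads_connected
          / mysql_global_variables_max_connections * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL connection usage above 80%"
```
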

Thanks,
Roman

Warning on start: «Unknown column 'SPACE_TYPE' in 'field list'»

mysqld_exporter version: 0.8.0
mysql version: percona-server 5.6 on ubuntu
user: debian-sys-maint (all grants)

On start mysqld_exporter shows:

ERRO[0000] Error scraping for collect.info_schema.innodb_sys_tablespaces: Error 1054: Unknown column 'SPACE_TYPE' in 'field list'  source=mysqld_exporter.go:288

It works after all, but maybe something is broken in my configuration?

Broken user/client statistics for Percona Server

This commit broke user statistics for Percona Server: https://github.com/prometheus/mysqld_exporter/pull/138/files#diff-1cfdb3ffdb1892d4e0c3e4ea1ee8af1bL13

MariaDB has one set of columns (https://mariadb.com/kb/en/mariadb/information-schema-user_statistics-table/); Percona Server has a different one (https://www.percona.com/doc/percona-server/5.5/diagnostics/user_stats.html#USER_STATISTICS).

Now it only works with MariaDB.

The same thing with client statistics.

Collect `SHOW ENGINE INNODB MUTEX`

Collect the output of SHOW ENGINE INNODB MUTEX.

Example Output

Example output:

Type    Name    Status
InnoDB  dict0mem.c:92   os_waits=5
InnoDB  dict0mem.c:92   os_waits=15
InnoDB  dict0mem.c:92   os_waits=3
InnoDB  dict0mem.c:92   os_waits=6
InnoDB  dict0mem.c:92   os_waits=14
InnoDB  dict0mem.c:92   os_waits=1
InnoDB  dict0mem.c:92   os_waits=1
InnoDB  dict0mem.c:92   os_waits=1
InnoDB  trx0purge.c:250 os_waits=8
InnoDB  trx0rseg.c:210  os_waits=89
InnoDB  trx0rseg.c:210  os_waits=94
InnoDB  trx0rseg.c:210  os_waits=92
InnoDB  trx0rseg.c:210  os_waits=97
InnoDB  ibuf0ibuf.c:537 os_waits=9
InnoDB  ibuf0ibuf.c:534 os_waits=3467068
InnoDB  ibuf0ibuf.c:531 os_waits=353548
InnoDB  dict0dict.c:716 os_waits=22363
InnoDB  trx0sys.c:196   os_waits=564
InnoDB  trx0sys.c:1327  os_waits=40
InnoDB  log0log.c:775   os_waits=40086
InnoDB  log0log.c:771   os_waits=14899464
InnoDB  buf0buf.c:1221  os_waits=462
InnoDB  buf0buf.c:1186  os_waits=113179298
InnoDB  fil0fil.c:1645  os_waits=912159
InnoDB  srv0start.c:1301    os_waits=189856
InnoDB  srv0srv.c:1027  os_waits=3079
InnoDB  srv0srv.c:1024  os_waits=323266475
InnoDB  combined buf0buf.c:935  os_waits=803521
InnoDB  dict0dict.c:1750    os_waits=2
InnoDB  dict0dict.c:1750    os_waits=1167
InnoDB  dict0dict.c:1750    os_waits=266
InnoDB  dict0dict.c:1750    os_waits=10
InnoDB  dict0dict.c:1750    os_waits=93
InnoDB  dict0dict.c:1750    os_waits=28
InnoDB  dict0dict.c:1750    os_waits=845
InnoDB  dict0dict.c:1750    os_waits=118
InnoDB  dict0dict.c:1750    os_waits=76
InnoDB  dict0dict.c:1750    os_waits=35788
InnoDB  fil0fil.c:1303  os_waits=9
InnoDB  dict0dict.c:1750    os_waits=6
InnoDB  dict0dict.c:1750    os_waits=40
InnoDB  dict0dict.c:1750    os_waits=494
InnoDB  dict0dict.c:1750    os_waits=1208
InnoDB  dict0dict.c:1750    os_waits=74346
InnoDB  dict0dict.c:1750    os_waits=3333
InnoDB  dict0dict.c:1750    os_waits=114169
InnoDB  fil0fil.c:1303  os_waits=2
InnoDB  fil0fil.c:1303  os_waits=77
InnoDB  fil0fil.c:1303  os_waits=1
InnoDB  fil0fil.c:1303  os_waits=6
InnoDB  trx0purge.c:246 os_waits=2146
InnoDB  dict0dict.c:1750    os_waits=10202084
InnoDB  dict0dict.c:739 os_waits=441
InnoDB  dict0dict.c:739 os_waits=23
InnoDB  dict0dict.c:739 os_waits=25
InnoDB  dict0dict.c:739 os_waits=171
InnoDB  dict0dict.c:729 os_waits=78
InnoDB  fil0fil.c:1303  os_waits=14533
InnoDB  log0log.c:832   os_waits=345967
InnoDB  btr0sea.c:178   os_waits=32118152
InnoDB  trx0i_s.c:1379  os_waits=51
InnoDB  combined buf0buf.c:936  os_waits=279594950

Notes

To Implement This

  • Collect name and the amount of os_waits
  • Many duplicate names occur; we need to sum all the os_waits values for these.
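The summing step above can be sketched like this, assuming the rows have already been parsed out of the SHOW ENGINE INNODB MUTEX output (the row type and function names are hypothetical):

```go
package main

import "fmt"

// mutexRow is one parsed line of SHOW ENGINE INNODB MUTEX output.
type mutexRow struct {
	Name    string
	OsWaits uint64
}

// sumOsWaits aggregates os_waits per mutex name, since the same name
// appears many times in the raw output.
func sumOsWaits(rows []mutexRow) map[string]uint64 {
	totals := make(map[string]uint64)
	for _, r := range rows {
		totals[r.Name] += r.OsWaits
	}
	return totals
}

func main() {
	rows := []mutexRow{
		{"dict0mem.c:92", 5},
		{"dict0mem.c:92", 15},
		{"trx0purge.c:250", 8},
	}
	fmt.Println(sumOsWaits(rows)["dict0mem.c:92"]) // → 20
}
```
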

Warnings

  • On larger buffer pools, it can take more than a second to finish, so this should not be run at an interval as short as 1 minute.
  • Based on feedback from the Percona Server lead, it does not impact production on 5.6 and newer (<5.6 was not checked): no locks, but it does use CPU.
  • Maybe disable this feature by default.

Processed Example

Example (ignore the second column with %; it is only there to make the text easier to read):

$ cat data/mutexes.rst | gawk -f pcs-innodb-mutexes-sum.awk
trx0purge.c:250                 0.0 %   8
ibuf0ibuf.c:537                 0.0 %   9
trx0sys.c:1327                  0.0 %   40
dict0mem.c:92                   0.0 %   46
trx0i_s.c:1379                  0.0 %   51
dict0dict.c:729                 0.0 %   78
trx0rseg.c:210                  0.0 %   372
buf0buf.c:1221                  0.0 %   462
trx0sys.c:196                   0.0 %   564
dict0dict.c:739                 0.0 %   660
trx0purge.c:246                 0.0 %   2146
srv0srv.c:1027                  0.0 %   3079
fil0fil.c:1303                  0.0 %   14628
dict0dict.c:716                 0.0 %   22363
log0log.c:775                   0.0 %   40086
srv0start.c:1301                0.0 %   189856
log0log.c:832                   0.0 %   345967
ibuf0ibuf.c:531                 0.0 %   353548
combined buf0buf.c:935          0.1 %   803521
fil0fil.c:1645                  0.1 %   912159
ibuf0ibuf.c:534                 0.4 %   3467068
dict0dict.c:1750                1.3 %   10434073
log0log.c:771                   1.9 %   14899464
btr0sea.c:178                   4.1 %   32118152
buf0buf.c:1186                  14.5%   113179298
combined buf0buf.c:936          35.9%   279594950
srv0srv.c:1024                  41.5%   323266475

my.cnf only accepted with key value pairs

hi everyone,

the mysqld_exporter chokes on a my.cnf that contains not only key/value pairs but boolean switches as well:

[mysqld]
...
skip-host-cache
skip-name-resolve

produces a fatal error when starting the mysqld_exporter:

[root@db1 ~]# ./mysqld_exporter -config.my-cnf=/etc/my.cnf 
INFO[0000] Starting mysqld_exporter (version=, branch=, revision=)  source=mysqld_exporter.go:412
INFO[0000] Build context (go=go1.5.4, user=, date=)      source=mysqld_exporter.go:413
FATA[0000] failed reading ini file: key-value delimiter not found: skip-host-cache
  source=mysqld_exporter.go:419

Add galera cluster information metric

Adding a metric with the cluster info, including wsrep_cluster_state_uuid, is useful to detect nodes that have split-brained from the rest of the cluster.

INNODB_METRICS buffer_page_read_XXX processing

Hi,

InnoDB metrics track pages being read and written, which is very helpful for performance analysis, i.e. understanding what you are bound by (reading undo space, etc.).

mysql> select name,count from innodb_metrics where name like "buffer_page_%";
+-----------------------------------------+---------+
| name | count |
+-----------------------------------------+---------+
| buffer_pages_created | 98345 |
| buffer_pages_written | 3912493 |
| buffer_pages_read | 3864202 |
| buffer_page_read_index_leaf | 1947236 |
| buffer_page_read_index_non_leaf | 3307 |
| buffer_page_read_index_ibuf_leaf | 1094490 |
| buffer_page_read_index_ibuf_non_leaf | 3178 |
| buffer_page_read_undo_log | 605859 |
| buffer_page_read_index_inode | 8628 |
| buffer_page_read_ibuf_free_list | 0 |
| buffer_page_read_ibuf_bitmap | 18652 |
| buffer_page_read_system_page | 168338 |
| buffer_page_read_trx_system | 2065 |
| buffer_page_read_fsp_hdr | 4119 |
| buffer_page_read_xdes | 5652 |
| buffer_page_read_blob | 0 |
| buffer_page_read_zblob | 0 |
| buffer_page_read_zblob2 | 0 |
| buffer_page_read_other | 0 |
| buffer_page_written_index_leaf | 1922780 |
| buffer_page_written_index_non_leaf | 2869 |
| buffer_page_written_index_ibuf_leaf | 1202883 |
| buffer_page_written_index_ibuf_non_leaf | 7719 |
| buffer_page_written_undo_log | 517163 |
| buffer_page_written_index_inode | 11030 |
| buffer_page_written_ibuf_free_list | 0 |
| buffer_page_written_ibuf_bitmap | 3947 |
| buffer_page_written_system_page | 227179 |
| buffer_page_written_trx_system | 3079 |
| buffer_page_written_fsp_hdr | 6400 |
| buffer_page_written_xdes | 7153 |
| buffer_page_written_blob | 0 |
| buffer_page_written_zblob | 0 |
| buffer_page_written_zblob2 | 0 |
| buffer_page_written_other | 0 |
+-----------------------------------------+---------+
35 rows in set (0.01 sec)

It would be a good idea to put the buffer_page_read_XXXX metrics into a separate "dimension" so it is easy to plot all that are available.

Add performance_schema metrics

There are a lot of interesting metrics in the mysqld performance_schema

  • events_statements_summary_by_digest
  • table_io_waits_summary_by_index_usage
  • table_io_waits_summary_by_table
  • table_lock_waits_summary_by_table
  • hosts

Support MySQL 5.7 Performance Schema

Hi,

Getting this error running mysqld_exporter on 5.7

time="2016-01-25T12:57:29-05:00" level=info msg="Error scraping performance schema: Error 1054: Unknown column 'COUNT_WRITE_DELAYED' in 'field list'" file="mysqld_exporter.go" line=796
time="2016-01-25T12:57:29-05:00" level=info msg="Starting Server: :9304" file="mysqld_exporter.go" line=1747

Note: it would also be helpful if the error message contained which "collector" caused it, or at the very least the actual failing query, so it would be easier to troubleshoot.

Disabled binary log stops data collection already

I noticed on the server with binary log disabled I get error message:

INFO[0014] Error scraping binlog size: Error 1381: You are not using binary logging file=mysqld_exporter.go line=549
INFO[0265] Error scraping binlog size: Error 1381: You are not using binary logging file=mysqld_exporter.go line=549

This causes all further metric collection to stop. Better behavior would be to collect all metrics which are available on the given instance and ignore those which can't be collected.

The current behavior means test servers, which often do not have the binary log enabled, need to run a different exporter configuration than other ones.

Implement Detailed file statistics capture

Hi,

Currently there is an "event name"-based file statistic implemented, which shows the kind of IO but not which file it goes to. For example, innodb_data_file will be used both for the main tablespace and for individual tables which have IO.

file_summary_by_instance

provides more detailed information, showing the actual file name.

*************************** 4. row ***************************
FILE_NAME: /var/lib/mysql/ibdata1
EVENT_NAME: wait/io/file/innodb/innodb_data_file
OBJECT_INSTANCE_BEGIN: 140501553632064
COUNT_STAR: 2522
SUM_TIMER_WAIT: 59152857149693
MIN_TIMER_WAIT: 434922
AVG_TIMER_WAIT: 23454740938
MAX_TIMER_WAIT: 4821765466863
COUNT_READ: 246
SUM_TIMER_READ: 1295329342336
MIN_TIMER_READ: 26980248
AVG_TIMER_READ: 5265566296
MAX_TIMER_READ: 397393276998
SUM_NUMBER_OF_BYTES_READ: 6144000
COUNT_WRITE: 1718
SUM_TIMER_WRITE: 267658895693
MIN_TIMER_WRITE: 2211063
AVG_TIMER_WRITE: 155796770
MAX_TIMER_WRITE: 115813693743
SUM_NUMBER_OF_BYTES_WRITE: 28147712
COUNT_MISC: 558
SUM_TIMER_MISC: 57589868911664
MIN_TIMER_MISC: 434922
AVG_TIMER_MISC: 103207650106
MAX_TIMER_MISC: 4821765466863
*************************** 5. row ***************************

do not skip status_counter metrics from innodb_metrics

Hi,

Currently such metrics are skipped, probably based on my wrong suggestion that they are duplicated in SHOW GLOBAL STATUS. Most are not in MySQL 5.7!

infoSchemaInnodbMetricsQuery = `
    SELECT
      name, subsystem, type, comment,
      count
      FROM information_schema.innodb_metrics
      WHERE status = 'enabled'
        AND type != 'status_counter'
    `

We need to remove type != 'status_counter' from here and treat such metrics as counter metrics.

Sorry for confusion.

Custom Queries

Is it possible to add one's own query for a specific metric from a table?

Support for INNODB_METRICS capture

There is a wealth of additional information available in the INFORMATION_SCHEMA.INNODB_METRICS table. Many subsystems, such as purging or the workings of index condition pushdown, can only be analyzed using these metrics.

Relevant Documentation:
https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-metrics-table.html

Only some of the metrics are enabled by default. Users can enable more as needed with:

SET GLOBAL innodb_monitor_enable = [counter-name|module_name|pattern|all];

The exporter should only select rows with status = 'enabled'.

I would recommend grouping metrics by subsystem so it is easy to create graphs plotting all metrics for a given subsystem if desired.

Only the COUNT value is really of any use.

Example:

*************************** 217. row ***************************
NAME: innodb_rwlock_x_spin_rounds
SUBSYSTEM: server
COUNT: 1903254040
MAX_COUNT: 1903254040
MIN_COUNT: NULL
AVG_COUNT: 1343.9398661888538
COUNT_RESET: 1903254040
MAX_COUNT_RESET: 1903254040
MIN_COUNT_RESET: NULL
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2016-01-02 10:48:18
TIME_DISABLED: NULL
TIME_ELAPSED: 1416175
TIME_RESET: NULL
STATUS: enabled
TYPE: status_counter
COMMENT: Number of rwlock spin loop rounds due to exclusive latch request

Split Buffer Pool Stats in the separate dimension

InnoDB's SHOW GLOBAL STATUS naming for buffer pool stats is not very consistent:

| Innodb_buffer_pool_pages_data | 1538216 |
| Innodb_buffer_pool_bytes_data | 25202130944 |
| Innodb_buffer_pool_pages_dirty | 576509 |
| Innodb_buffer_pool_bytes_dirty | 9445523456 |
| Innodb_buffer_pool_pages_flushed | 11392163927 |
| Innodb_buffer_pool_pages_free | 5684 |
| Innodb_buffer_pool_pages_LRU_flushed | 0 |
| Innodb_buffer_pool_pages_made_not_young | 58313675407 |
| Innodb_buffer_pool_pages_made_young | 1327142363 |
| Innodb_buffer_pool_pages_misc | 28772 |
| Innodb_buffer_pool_pages_old | 567754 |
| Innodb_buffer_pool_pages_total | 1572672 |
| Innodb_buffer_pool_read_ahead_rnd | 0 |
| Innodb_buffer_pool_read_ahead | 11540243 |
| Innodb_buffer_pool_read_ahead_evicted | 0 |
| Innodb_buffer_pool_read_requests | 338888865574 |
| Innodb_buffer_pool_reads | 17291274739 |
| Innodb_buffer_pool_wait_free | 17007371 |

Some of these correspond to actions and others to the state of the buffer pool. I would suggest splitting them.

Innodb_buffer_pool_pages_total consists of Innodb_buffer_pool_pages_data, Innodb_buffer_pool_pages_dirty, Innodb_buffer_pool_pages_free, and Innodb_buffer_pool_pages_misc; these should be separately available so it is easy to build a graph of all buffer pool content.

Make Handler statistics reported through extra label

The Com_XXX handling in Prometheus is great, as it allows me to get a graph and be forward compatible with future versions and variants which can expose different commands.

Handler_XXX is the same, and it would be great if it were reported the same way.
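The per-command labeling could be extended to handlers by splitting the counter name into a metric family and a label value, sketched below (the family names are illustrative, mirroring the Com_* style, not necessarily the exporter's exact output):

```go
package main

import (
	"fmt"
	"strings"
)

// splitPrefixed turns a SHOW GLOBAL STATUS counter such as "Handler_read_next"
// into a metric family plus a label value, the same way Com_* counters can be
// exported as a single family with a command label.
func splitPrefixed(name string) (family, label string, ok bool) {
	switch {
	case strings.HasPrefix(name, "Com_"):
		return "commands_total", strings.ToLower(strings.TrimPrefix(name, "Com_")), true
	case strings.HasPrefix(name, "Handler_"):
		return "handlers_total", strings.ToLower(strings.TrimPrefix(name, "Handler_")), true
	}
	return "", "", false
}

func main() {
	family, label, _ := splitPrefixed("Handler_read_next")
	fmt.Printf("mysql_global_status_%s{handler=%q}\n", family, label)
	// → mysql_global_status_handlers_total{handler="read_next"}
}
```
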

prometheus can't collect metrics, target is always down

Hey guys! I'm sorry to bother you, but I'm facing a problem that I don't know how to solve.
My Prometheus can't collect metrics from mysqld_exporter; it shows that the target is always down (context deadline exceeded). I tried playing with the scrape time in the Prometheus config, changing it from 4 to 60 seconds, and it still didn't work. It looks like mysqld_exporter is working, because I can see the metrics from inside other containers in my overlay network (http://db-exporter:9104/metrics). I have no idea what is wrong with it. I attach my docker-compose file and Prometheus config. Hopefully you'll be able to help me out, because I'm stuck here :(

version: '2'

volumes:
    prometheus_data: {}
    grafana_data: {}

services:
  db:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: 123
    ports:
      - 3306:3306
    networks:
      - back-tier
  prometheus:
    image: artemkin/blah
    container_name: prometheus
    environment:
     - constraint:node==artem-master
    volumes:
      - prometheus_data:/prometheus
    command:
      - '-config.file=/etc/prometheus/prometheus.yml'
      - '-storage.local.path=/prometheus'
    ports:
      - 9090:9090
    depends_on:
      - db-exporter
      - cadvisor1
    networks:
      - back-tier
  db-exporter:
    image: prom/mysqld-exporter
    environment:
      DATA_SOURCE_NAME: root:123@(db:3306)/
    depends_on:
      - db
    expose:
      - 9104
    networks:
      - back-tier
  node-exporter1:
    image: prom/node-exporter
    environment:
     - constraint:node==artem-slave1
    expose:
      - 9100
    networks:
      - back-tier
  cadvisor1:
    image: google/cadvisor
    environment:
      - constraint:node==artem-slave1
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    expose:
      - 8080
    networks:
      - back-tier
networks:
  back-tier:
    driver: overlay

artemkin/blah is just a prom/prometheus container where I put this Prometheus config:

# my global config
global:
  scrape_interval:     15s 
  evaluation_interval: 15s
  external_labels:
      monitor: 'my-project'
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 6s
    static_configs:
         - targets: ['db-exporter:9104','cadvisor1:8080','node-exporter1:9100']

other exporters like node-exporter or cadvisor work perfectly. I have no idea what is wrong with this one.

Add metrics for config file deviations

Allow a list of MySQL config files (default: /etc/mysql/my.cnf) to be parsed and compared with global variables, to provide metrics for deviations between the config and the running state.
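The comparison step could be sketched as follows, assuming the config file and SHOW GLOBAL VARIABLES have both been parsed into name/value maps (function and variable names here are hypothetical, not existing exporter code):

```go
package main

import "fmt"

// deviations compares variables parsed from a config file with the values
// reported by SHOW GLOBAL VARIABLES and returns the names whose values
// differ. A gauge metric could then be emitted per deviating variable.
func deviations(configured, running map[string]string) []string {
	var out []string
	for name, want := range configured {
		if got, ok := running[name]; ok && got != want {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	configured := map[string]string{"max_connections": "500", "read_only": "OFF"}
	running := map[string]string{"max_connections": "151", "read_only": "OFF"}
	fmt.Println(deviations(configured, running)) // → [max_connections]
}
```
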

Capture SHOW ENGINE INNODB STATUS

A typical problem in MySQL is contention. It can be measured in Performance Schema, but with high overhead, and that also does not cover every single mutex in InnoDB. A lower-overhead solution is to use SHOW ENGINE INNODB MUTEX. An example is below.

It is a little bit messy so I would suggest

  1. We should report spins, waits, and calls as separate metrics if they are available.

  2. Many mutexes have multiple instances, which is not very valuable. E.g. we can go ahead and sum all REDO_RSEG values together into the same instance. We could of course also automatically enumerate instances, but I do not think that is very valuable.

mysql> SHOW ENGINE INNODB MUTEX;
+--------+-----------------------------+---------------------------------------------------+
| Type | Name | Status |
+--------+-----------------------------+---------------------------------------------------+
| InnoDB | SRV_SYS | spins=3500411239,waits=267487891,calls=1150977077 |
| InnoDB | LOCK_SYS | spins=2770124116,waits=52386148,calls=369741367 |
| InnoDB | FIL_SYSTEM | spins=1927887219,waits=32227483,calls=990239836 |
| InnoDB | TRX_SYS | spins=1190134826,waits=28696583,calls=193053443 |
| InnoDB | LOG_SYS | spins=873904266,waits=19456959,calls=154920419 |
| InnoDB | BUF_POOL_LRU_LIST | spins=305708888,waits=8253271,calls=37353349 |
| InnoDB | BUF_POOL_LRU_LIST | spins=303686238,waits=8143288,calls=37886754 |
| InnoDB | BUF_POOL_LRU_LIST | spins=298254804,waits=7984666,calls=36866000 |
| InnoDB | BUF_POOL_LRU_LIST | spins=259590882,waits=6765874,calls=35781678 |
| InnoDB | BUF_POOL_LRU_LIST | spins=259624966,waits=6685413,calls=36449305 |
| InnoDB | BUF_POOL_LRU_LIST | spins=251385823,waits=6523969,calls=34917332 |
| InnoDB | BUF_POOL_LRU_LIST | spins=250547147,waits=6512303,calls=35102085 |
| InnoDB | BUF_POOL_LRU_LIST | spins=244496236,waits=6372300,calls=32958490 |
| InnoDB | TRX | spins=165920974,waits=2861196,calls=143755919 |
| InnoDB | DICT_SYS | spins=112221188,waits=2454475,calls=31314656 |
| InnoDB | PURGE_SYS_PQ | spins=72430126,waits=1145907,calls=17977583 |
| InnoDB | IBUF | spins=48347207,waits=871004,calls=3355836 |
| InnoDB | SRV_SYS_TASKS | spins=85910398,waits=474202,calls=116252876 |
| InnoDB | REDO_RSEG | spins=15626173,waits=474098,calls=1182167 |
| InnoDB | REDO_RSEG | spins=15620277,waits=473794,calls=1147402 |
| InnoDB | REDO_RSEG | spins=15420310,waits=467224,calls=1106054 |
| InnoDB | REDO_RSEG | spins=15373199,waits=467202,calls=1078613 |
| InnoDB | REDO_RSEG | spins=15359871,waits=466547,calls=1104583 |
| InnoDB | REDO_RSEG | spins=15350008,waits=465970,calls=1067286 |
| InnoDB | REDO_RSEG | spins=15382183,waits=465887,calls=1136128 |
| InnoDB | REDO_RSEG | spins=15318755,waits=465148,calls=1089808 |
| InnoDB | REDO_RSEG | spins=15340558,waits=465029,calls=1085688 |
| InnoDB | REDO_RSEG | spins=15310407,waits=465002,calls=1171755 |
| InnoDB | REDO_RSEG | spins=15311401,waits=464833,calls=1087047 |
| InnoDB | REDO_RSEG | spins=15318492,waits=464527,calls=1100488 |
| InnoDB | REDO_RSEG | spins=15278457,waits=464286,calls=1055289 |
| InnoDB | REDO_RSEG | spins=15303151,waits=464190,calls=1066879 |
| InnoDB | REDO_RSEG | spins=15284608,waits=464064,calls=1071319 |
| InnoDB | REDO_RSEG | spins=15330035,waits=464005,calls=1161665 |
| InnoDB | REDO_RSEG | spins=15288952,waits=463422,calls=1097419 |
| InnoDB | REDO_RSEG | spins=15277615,waits=463369,calls=1122432 |
| InnoDB | REDO_RSEG | spins=15257972,waits=463027,calls=1084185 |
| InnoDB | REDO_RSEG | spins=15251323,waits=462994,calls=1131013 |
| InnoDB | REDO_RSEG | spins=15278678,waits=462866,calls=1107131 |
| InnoDB | REDO_RSEG | spins=15263930,waits=462553,calls=1093379 |
| InnoDB | REDO_RSEG | spins=15246888,waits=462507,calls=1072777 |
| InnoDB | REDO_RSEG | spins=15234577,waits=462476,calls=1060580 |
| InnoDB | REDO_RSEG | spins=15258912,waits=462334,calls=1089671 |
| InnoDB | REDO_RSEG | spins=15233874,waits=462266,calls=1121470 |
| InnoDB | REDO_RSEG | spins=15213876,waits=462053,calls=1066006 |
| InnoDB | REDO_RSEG | spins=15216512,waits=461914,calls=1065139 |
| InnoDB | REDO_RSEG | spins=15215638,waits=461326,calls=1075909 |
| InnoDB | REDO_RSEG | spins=15156706,waits=459742,calls=1092541 |
| InnoDB | REDO_RSEG | spins=15158825,waits=459520,calls=1146328 |
| InnoDB | REDO_RSEG | spins=15112832,waits=457651,calls=1149953 |
| InnoDB | BUF_POOL_FREE_LIST | spins=19147008,waits=311651,calls=8283743 |
| InnoDB | BUF_POOL_FREE_LIST | spins=19305540,waits=309198,calls=8543700 |
| InnoDB | BUF_POOL_FREE_LIST | spins=18896017,waits=305508,calls=8319354 |
| InnoDB | BUF_POOL_FREE_LIST | spins=18441546,waits=282556,calls=8539621 |
| InnoDB | BUF_POOL_FREE_LIST | spins=17802231,waits=281080,calls=8015209 |
| InnoDB | BUF_POOL_FREE_LIST | spins=17603942,waits=275037,calls=7992564 |
| InnoDB | BUF_POOL_FREE_LIST | spins=17332541,waits=273486,calls=7895483 |
| InnoDB | IBUF_PESSIMISTIC_INSERT | spins=8183904,waits=267410,calls=84911 |
| InnoDB | BUF_POOL_FREE_LIST | spins=15021782,waits=235659,calls=6893558 |
| InnoDB | FLUSH_LIST | spins=8930079,waits=216830,calls=2832488 |
| InnoDB | LOCK_SYS_WAIT | spins=5935897,waits=194925,calls=219719 |
| InnoDB | REDO_RSEG | spins=5342080,waits=156818,calls=492335 |
| InnoDB | FLUSH_LIST | spins=6813677,waits=150487,calls=2377616 |
| InnoDB | REDO_RSEG | spins=5096171,waits=150266,calls=475555 |
| InnoDB | REDO_RSEG | spins=5085579,waits=149903,calls=490663 |
| InnoDB | REDO_RSEG | spins=5072863,waits=149264,calls=460653 |
| InnoDB | REDO_RSEG | spins=5053561,waits=148982,calls=448409 |
| InnoDB | REDO_RSEG | spins=5063171,waits=148912,calls=447761 |
| InnoDB | REDO_RSEG | spins=5054505,waits=148830,calls=451434 |
| InnoDB | REDO_RSEG | spins=5055290,waits=148805,calls=511804 |
| InnoDB | REDO_RSEG | spins=5050772,waits=148712,calls=472890 |
| InnoDB | REDO_RSEG | spins=5045732,waits=148663,calls=460930 |
| InnoDB | REDO_RSEG | spins=5043975,waits=148544,calls=448090 |
| InnoDB | REDO_RSEG | spins=5038922,waits=148534,calls=445477 |
| InnoDB | REDO_RSEG | spins=5041281,waits=148490,calls=449476 |
| InnoDB | REDO_RSEG | spins=5026618,waits=148464,calls=459996 |
| InnoDB | REDO_RSEG | spins=5036396,waits=148430,calls=457941 |
| InnoDB | REDO_RSEG | spins=5030671,waits=148336,calls=451112 |
| InnoDB | REDO_RSEG | spins=5033599,waits=148333,calls=447780 |
| InnoDB | REDO_RSEG | spins=5040033,waits=148320,calls=461696 |
| InnoDB | REDO_RSEG | spins=5030396,waits=148263,calls=450691 |
| InnoDB | REDO_RSEG | spins=5027055,waits=148226,calls=509834 |
| InnoDB | REDO_RSEG | spins=5031222,waits=148150,calls=452321 |
| InnoDB | REDO_RSEG | spins=5026201,waits=148069,calls=478661 |
| InnoDB | REDO_RSEG | spins=5023842,waits=148011,calls=469608 |
| InnoDB | REDO_RSEG | spins=5025718,waits=147970,calls=450526 |
| InnoDB | REDO_RSEG | spins=5024063,waits=147941,calls=462225 |
| InnoDB | REDO_RSEG | spins=5027594,waits=147882,calls=448647 |
| InnoDB | REDO_RSEG | spins=5018911,waits=147865,calls=451132 |
| InnoDB | REDO_RSEG | spins=5014486,waits=147773,calls=466817 |
| InnoDB | REDO_RSEG | spins=5020818,waits=147760,calls=491528 |
| InnoDB | REDO_RSEG | spins=5012253,waits=147749,calls=470739 |
| InnoDB | REDO_RSEG | spins=5016553,waits=147746,calls=446532 |
| InnoDB | REDO_RSEG | spins=5022528,waits=147675,calls=451637 |
| InnoDB | REDO_RSEG | spins=5017647,waits=147668,calls=510209 |
| InnoDB | REDO_RSEG | spins=5015310,waits=147647,calls=505735 |
| InnoDB | REDO_RSEG | spins=5026094,waits=147631,calls=447058 |
| InnoDB | REDO_RSEG | spins=5014468,waits=147620,calls=461763 |
| InnoDB | REDO_RSEG | spins=5015414,waits=147608,calls=487644 |
| InnoDB | REDO_RSEG | spins=5011666,waits=147574,calls=444176 |
| InnoDB | REDO_RSEG | spins=5003262,waits=147433,calls=447693 |
| InnoDB | REDO_RSEG | spins=5009768,waits=147385,calls=495284 |
| InnoDB | REDO_RSEG | spins=5008209,waits=147356,calls=491539 |
| InnoDB | REDO_RSEG | spins=5007224,waits=147353,calls=510011 |
| InnoDB | REDO_RSEG | spins=5008434,waits=147331,calls=466573 |
| InnoDB | REDO_RSEG | spins=5010627,waits=147324,calls=473984 |
| InnoDB | REDO_RSEG | spins=5005185,waits=147308,calls=461726 |
| InnoDB | REDO_RSEG | spins=5004322,waits=147280,calls=445766 |
| InnoDB | REDO_RSEG | spins=5004583,waits=147277,calls=451037 |
| InnoDB | REDO_RSEG | spins=4996624,waits=147238,calls=456276 |
| InnoDB | REDO_RSEG | spins=4996747,waits=147228,calls=448720 |
| InnoDB | REDO_RSEG | spins=5003972,waits=147206,calls=446709 |
| InnoDB | REDO_RSEG | spins=4993807,waits=147195,calls=447935 |
| InnoDB | REDO_RSEG | spins=4992877,waits=147057,calls=446730 |
| InnoDB | REDO_RSEG | spins=4996315,waits=146984,calls=496389 |
| InnoDB | REDO_RSEG | spins=4991689,waits=146941,calls=488683 |
| InnoDB | REDO_RSEG | spins=4992033,waits=146918,calls=450508 |
| InnoDB | REDO_RSEG | spins=4989775,waits=146891,calls=468775 |
| InnoDB | REDO_RSEG | spins=4994315,waits=146846,calls=491008 |
| InnoDB | REDO_RSEG | spins=4979573,waits=146734,calls=444547 |
| InnoDB | REDO_RSEG | spins=4987601,waits=146627,calls=449461 |
| InnoDB | REDO_RSEG | spins=4985557,waits=146455,calls=503852 |
| InnoDB | REDO_RSEG | spins=4979875,waits=146435,calls=450950 |
| InnoDB | REDO_RSEG | spins=4967878,waits=146114,calls=447416 |
| InnoDB | REDO_RSEG | spins=4961927,waits=146085,calls=461316 |
| InnoDB | REDO_RSEG | spins=4916113,waits=144768,calls=465814 |
| InnoDB | FLUSH_LIST | spins=5604567,waits=129192,calls=1841249 |
| InnoDB | FLUSH_LIST | spins=5354980,waits=122332,calls=1747357 |
| InnoDB | LOG_FLUSH_ORDER | spins=4115154,waits=66946,calls=1686229 |
| InnoDB | FLUSH_LIST | spins=2160697,waits=43455,calls=1056246 |
| InnoDB | PAGE_CLEANER | spins=1087059,waits=34926,calls=41245 |
| InnoDB | BUF_BLOCK_MUTEX | spins=976252,waits=31396,calls=130710 |
| InnoDB | FLUSH_LIST | spins=1352045,waits=24891,calls=626464 |
| InnoDB | FILE_FORMAT_MAX | spins=759599,waits=23302,calls=728164 |
| InnoDB | FLUSH_LIST | spins=1049383,waits=20958,calls=316570 |
| InnoDB | FLUSH_LIST | spins=984943,waits=18797,calls=379497 |
| InnoDB | TRX | spins=22709,waits=439,calls=1809 |
| InnoDB | TRX | spins=20272,waits=413,calls=1637 |
| InnoDB | TRX | spins=19877,waits=388,calls=1624 |
| InnoDB | TRX | spins=18775,waits=380,calls=1543 |
| InnoDB | TRX | spins=19494,waits=375,calls=1586 |
| InnoDB | TRX | spins=18513,waits=355,calls=1565 |
| InnoDB | TRX | spins=17570,waits=334,calls=1505 |
| InnoDB | TRX | spins=17340,waits=329,calls=1531 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=16509,waits=171,calls=9629 |
| InnoDB | TRX | spins=7008,waits=112,calls=1075 |
| InnoDB | TRX | spins=6541,waits=98,calls=1033 |
| InnoDB | TRX | spins=6035,waits=92,calls=987 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=5590,waits=89,calls=2318 |
| InnoDB | TRX | spins=6043,waits=85,calls=1057 |
| InnoDB | TRX | spins=5939,waits=85,calls=1050 |
| InnoDB | TRX | spins=5973,waits=85,calls=1037 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=5039,waits=84,calls=1990 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=6566,waits=84,calls=3373 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=5647,waits=84,calls=2573 |
| InnoDB | TRX_POOL | spins=2630,waits=82,calls=176 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=2797,waits=43,calls=1728 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=2680,waits=42,calls=1567 |
| InnoDB | BUF_POOL_FLUSH_STATE | spins=2489,waits=38,calls=1544 |
| InnoDB | RW_LOCK_LIST | spins=1044,waits=34,calls=79 |
| InnoDB | SRV_INNODB_MONITOR | spins=122,waits=4,calls=5 |
| InnoDB | TRX_POOL_MANAGER | spins=45,waits=0,calls=51 |
| InnoDB | rwlock: dict0dict.cc:294 | waits=9 |
| InnoDB | rwlock: dict0dict.cc:294 | waits=5 |
| InnoDB | rwlock: dict0dict.cc:2673 | waits=113033816 |
| InnoDB | rwlock: dict0dict.cc:294 | waits=1 |
| InnoDB | rwlock: dict0dict.cc:2673 | waits=20 |
| InnoDB | rwlock: fil0fil.cc:1356 | waits=2 |
| InnoDB | rwlock: fil0fil.cc:1356 | waits=3154 |
| InnoDB | rwlock: trx0purge.cc:238 | waits=96 |
| InnoDB | rwlock: ibuf0ibuf.cc:575 | waits=10928996 |
| InnoDB | rwlock: dict0dict.cc:1184 | waits=62678 |
| InnoDB | rwlock: fil0fil.cc:1356 | waits=287744 |
| InnoDB | rwlock: log0log.cc:889 | waits=109193 |
| InnoDB | rwlock: btr0sea.cc:195 | waits=478245855 |
| InnoDB | rwlock: btr0sea.cc:195 | waits=9817649 |
| InnoDB | rwlock: btr0sea.cc:195 | waits=18 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38811 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38344 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40090 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40130 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38972 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39939 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39964 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39700 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39186 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39917 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=39518 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40125 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40374 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40195 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40011 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=41201 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=334979 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=46465 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=5533810 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=385885 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=208747 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=48204 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=192571 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=237714 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=49308 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=48972 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=52493 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=49051 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=49319 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=205644 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=51569 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=62837 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38805 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38659 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38936 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40551 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40310 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=41528 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=41712 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40201 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40412 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40384 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40372 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40414 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40696 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40269 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=40190 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=41850 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34960 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34035 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34760 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=35177 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34946 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34924 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=36190 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34959 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38746 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34936 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=36951 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=36331 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38140 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=36952 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=38048 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=37505 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29173 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=28959 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29324 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29730 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30663 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31838 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=28755 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30440 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30345 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30682 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31079 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30549 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31203 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30523 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29054 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31998 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31074 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31086 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31792 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32587 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31218 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32143 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32690 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32169 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32350 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32739 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32875 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33421 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32898 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32547 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33110 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33908 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29965 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=29969 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31032 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30499 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30374 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31068 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31245 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=30974 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31095 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31463 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31243 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31380 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31432 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31398 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=31970 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32429 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=35113 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=32723 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33908 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34156 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33562 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33822 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=33803 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34242 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34348 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34268 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34617 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34841 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34414 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34327 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=34520 |
| InnoDB | rwlock: hash0hash.cc:353 | waits=46112 |
| InnoDB | sum rwlock: buf0buf.cc:1413 | waits=32692948 |
+--------+-----------------------------+---------------------------------------------------+
306 rows in set (0.04 sec)
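The per-instance collapsing suggested above amounts to parsing each Status cell and summing the counters by mutex name. A sketch of that aggregation, assuming the rows have already been fetched; the type and function names are illustrative, not exporter code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// MutexStats holds the counters parsed from one Status cell,
// e.g. "spins=3500411239,waits=267487891,calls=1150977077".
type MutexStats struct {
	Spins, Waits, Calls uint64
}

// parseStatus parses a Status cell; counters that are absent stay
// zero (rwlock rows only report waits=N).
func parseStatus(status string) MutexStats {
	var m MutexStats
	for _, kv := range strings.Split(status, ",") {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			continue
		}
		v, err := strconv.ParseUint(parts[1], 10, 64)
		if err != nil {
			continue
		}
		switch parts[0] {
		case "spins":
			m.Spins = v
		case "waits":
			m.Waits = v
		case "calls":
			m.Calls = v
		}
	}
	return m
}

// aggregate sums counters across instances with the same Name,
// collapsing e.g. the many REDO_RSEG rows into one series.
func aggregate(rows [][2]string) map[string]MutexStats {
	out := make(map[string]MutexStats)
	for _, r := range rows {
		name, st := r[0], parseStatus(r[1])
		agg := out[name]
		agg.Spins += st.Spins
		agg.Waits += st.Waits
		agg.Calls += st.Calls
		out[name] = agg
	}
	return out
}

func main() {
	rows := [][2]string{
		{"REDO_RSEG", "spins=10,waits=2,calls=5"},
		{"REDO_RSEG", "spins=30,waits=4,calls=5"},
		{"SRV_SYS", "spins=7,waits=1,calls=3"},
	}
	fmt.Println(aggregate(rows)["REDO_RSEG"].Spins) // 40
}
```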

INNODB_METRICS does not handle some of the data

Hi,

After enabling it on 5.7 getting:

time="2016-02-14T14:42:24-05:00" level=info msg="Error scraping information_schema.innodb_metrics: sql: Scan error on column index 4: converting string "-1" to a uint64: strconv.ParseUint: parsing "-1": invalid syntax" file="mysqld_exporter.go" line=830
time="2016-02-14T14:42:25-05:00" level=info msg="Error scraping information_schema.innodb_metrics: sql: Scan error on column index 4: converting string "-1" to a uint64: strconv.ParseUint: parsing "-1": invalid syntax" file="mysqld_exporter.go" line=830
time="2016-02-14T14:42:26-05:00" level=info msg="Error scraping information_schema.innodb_metrics: sql: Scan error on column index 4: converting string "-1" to a uint64: strconv.ParseUint: parsing "-1": invalid syntax" file="mysqld_exporter.go" line=830
time="2016-02-14T14:42:27-05:00" level=info msg="Error scraping information_schema.innodb_metrics: sql: Scan error on column index 4: converting string "-1" to a uint64: strconv.ParseUint: parsing "-1": invalid syntax" file="mysqld_exporter.go" line=830

It might be caused by this:

mysql> select * from innodb_metrics where count=-1 \G
*************************** 1. row ***************************
NAME: metadata_table_reference_count
SUBSYSTEM: metadata
COUNT: -1
MAX_COUNT: 18
MIN_COUNT: -1
AVG_COUNT: -0.001092896174863388
COUNT_RESET: -1
MAX_COUNT_RESET: 18
MIN_COUNT_RESET: -1
AVG_COUNT_RESET: NULL
TIME_ENABLED: 2016-02-14 14:29:20
TIME_DISABLED: NULL
TIME_ELAPSED: 915
TIME_RESET: NULL
STATUS: enabled
TYPE: counter
COMMENT: Table reference counter
1 row in set (0.00 sec)

This might be a server bug (having count=-1), but it is not a reason to abort.
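One way to tolerate such rows is to stop insisting on uint64 and fall back to a signed/float parse, skipping only the cells that cannot be parsed at all. A sketch, assuming values arrive as strings; the helper name is hypothetical, not the exporter's actual code:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseCounterValue converts an INNODB_METRICS value cell without
// aborting on negative numbers: try uint64 first, then fall back to a
// float parse so rows like metadata_table_reference_count (COUNT: -1)
// are still exported.
func parseCounterValue(s string) (float64, bool) {
	if v, err := strconv.ParseUint(s, 10, 64); err == nil {
		return float64(v), true
	}
	if v, err := strconv.ParseFloat(s, 64); err == nil {
		return v, true
	}
	return 0, false // skip unparseable cells instead of aborting the scrape
}

func main() {
	v, ok := parseCounterValue("-1")
	fmt.Println(v, ok) // -1 true
}
```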

Make all data sources configurable

mysqld_exporter can report on multiple data sources. Most of them can be enabled through collect.info_schema.userstats and similar settings; however, things like global status and global variables are always scraped and reported:

if err = scrapeGlobalStatus(db, ch); err != nil {
    log.Println("Error scraping global state:", err)
    return
}

It would be good to have options for them too, so they can be disabled if needed (keeping them on by default).
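Such options could follow the style of the existing collect.* flags. A sketch of gating the always-on scrapes; the flag names here are suggestions, not flags the exporter currently has:

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical flags in the style of the existing collect.* options,
// both defaulting to true to keep today's behaviour.
var (
	collectGlobalStatus    = flag.Bool("collect.global_status", true, "Scrape SHOW GLOBAL STATUS")
	collectGlobalVariables = flag.Bool("collect.global_variables", true, "Scrape SHOW GLOBAL VARIABLES")
)

// enabledScrapers reports which of the two sources would be scraped,
// given the flag values.
func enabledScrapers(status, variables bool) []string {
	var out []string
	if status {
		out = append(out, "global_status") // would call scrapeGlobalStatus(db, ch)
	}
	if variables {
		out = append(out, "global_variables")
	}
	return out
}

func main() {
	flag.Parse()
	fmt.Println(enabledScrapers(*collectGlobalStatus, *collectGlobalVariables))
}
```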

Can't read from information_schema.table_statistics

Hello,

I created a dedicated user prom with all required grants.
However, there is no data related to collect.info_schema.tables in Prometheus. I looked into the code in this repo and found that the following query is used:

SELECT
  TABLE_SCHEMA,
  TABLE_NAME,
  ROWS_READ,
  ROWS_CHANGED,
  ROWS_CHANGED_X_INDEXES
  FROM information_schema.table_statistics

When I run this query as root, I do receive results.

mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)

mysql> SELECT count(*) FROM information_schema.table_statistics;
+----------+
| count(*) |
+----------+
|       30 |
+----------+
1 row in set (0.00 sec)

However, when I run the same query as prom, I receive an empty set.

mysql> select user();
+----------------+
| user()         |
+----------------+
| prom@localhost |
+----------------+
1 row in set (0.00 sec)

mysql> SELECT count(*) FROM information_schema.table_statistics;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

I think this is why prom cannot get the desired information and push it on to Prometheus. Could you please help me figure out which additional grants the user prom needs in order to fetch results from information_schema.table_statistics?

Current privileges are the following:

mysql> show grants for 'prom'@'localhost';
+----------------------------------------------------------------------------------------------------------------------------+
| Grants for prom@localhost                                                                                                  |
+----------------------------------------------------------------------------------------------------------------------------+
| GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'prom'@'localhost' IDENTIFIED BY PASSWORD <secret> WITH MAX_USER_CONNECTIONS 3 |
| GRANT SELECT ON `performance_schema`.* TO 'prom'@'%'                                                                       |
+----------------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> select version();
+-----------------+
| version()       |
+-----------------+
| 5.6.29-76.2-log |
+-----------------+
1 row in set (0.00 sec)

Include individual index IO stats

It turns out it's incorrect to ignore stats from indexes that are not the "NULL" stat in collect.perf_schema.indexiowaits. There are per-index I/O stats that are probably useful for seeing where the database is spending its time updating individual indexes.

We need to re-evaluate which index stats are included.

Protect MySQL Server from Overload - Limit number of Scrapes which can happen at the same time

We have some cases where users have tens of thousands of tables, which means the information_schema-related scrapes may take a long time, causing 100+ information schema queries triggered by Prometheus to run at the same time, possibly driving the server out of connections.

Sometimes this happens only when the server is under duress rather than all the time, which makes it a bad idea to simply disable this data collection.

It would be great if I could configure the exporter to handle no more than N concurrent requests in parallel, so that, for example, if more than 3 scrape requests are underway, an error is returned and data is unavailable rather than the server being overloaded.

Implement Memory information capture from performance_schema on MySQL 5.7

Note: this table only exists on MySQL 5.7, so graceful handling of its absence is recommended.

mysql> select * from memory_summary_global_by_event_name where count_alloc>0 order by current_number_of_bytes_used desc limit 2 \G
*************************** 1. row ***************************
EVENT_NAME: memory/innodb/buf_buf_pool
COUNT_ALLOC: 192
COUNT_FREE: 0
SUM_NUMBER_OF_BYTES_ALLOC: 26398949376
SUM_NUMBER_OF_BYTES_FREE: 0
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 192
HIGH_COUNT_USED: 192
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 26398949376
HIGH_NUMBER_OF_BYTES_USED: 26398949376
*************************** 2. row ***************************
EVENT_NAME: memory/innodb/hash0hash
COUNT_ALLOC: 82
COUNT_FREE: 6
SUM_NUMBER_OF_BYTES_ALLOC: 1976975584
SUM_NUMBER_OF_BYTES_FREE: 1223839920
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 76
HIGH_COUNT_USED: 76
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 753135664
HIGH_NUMBER_OF_BYTES_USED: 1161079744
2 rows in set (0.01 sec)

There are a lot of columns here. I think the most interesting are the ones related to memory usage rather than the number of allocation calls:

CURRENT_NUMBER_OF_BYTES_USED: 753135664
HIGH_NUMBER_OF_BYTES_USED: 1161079744

These correspond to current usage and the highest usage ever. The former is helpful for catching high memory usage by stored procedures or other areas where allocation is short-term.

mysqld_exporter_test not included in release tarball

hi,
it looks like mysqld_exporter_test.go isn't included in the 0.7.1 source tarball

$ tar tzvf 0.7.1.tar.gz 
drwxrwxr-x root/root         0 2016-02-16 13:35 mysqld_exporter-0.7.1/
-rw-rw-r-- root/root        59 2016-02-16 13:35 mysqld_exporter-0.7.1/.gitignore
-rw-rw-r-- root/root       355 2016-02-16 13:35 mysqld_exporter-0.7.1/AUTHORS.md
-rw-rw-r-- root/root      3618 2016-02-16 13:35 mysqld_exporter-0.7.1/CHANGELOG.md
-rw-rw-r-- root/root       859 2016-02-16 13:35 mysqld_exporter-0.7.1/CONTRIBUTING.md
-rw-rw-r-- root/root       132 2016-02-16 13:35 mysqld_exporter-0.7.1/Dockerfile
-rw-rw-r-- root/root     11325 2016-02-16 13:35 mysqld_exporter-0.7.1/LICENSE
-rw-rw-r-- root/root       652 2016-02-16 13:35 mysqld_exporter-0.7.1/Makefile
-rw-rw-r-- root/root      4330 2016-02-16 13:35 mysqld_exporter-0.7.1/Makefile.COMMON
-rw-rw-r-- root/root        65 2016-02-16 13:35 mysqld_exporter-0.7.1/NOTICE
-rw-rw-r-- root/root      4952 2016-02-16 13:35 mysqld_exporter-0.7.1/README.md
-rw-rw-r-- root/root     71719 2016-02-16 13:35 mysqld_exporter-0.7.1/mysqld_exporter.go

Report more information from PROCESSLIST

Hi,

Right now PROCESSLIST is parsed and a number of gauges are created based on state:

mysql_info_schema_threads{state="after create"} 0
mysql_info_schema_threads{state="altering table"} 0

It would be nice if we also reported total query execution time (as a sum) and the longest query execution time, excluding replication.

This is very helpful for alerting on runaway queries, as a query will not appear in the slow query log etc. until it is complete, and a single runaway query running for 3 hours can do a lot of damage.

Support TRUNCATE TABLE for performance_schema tables

For performance_schema.events_statements_summary_by_digest, we limit the number of metrics we extract by using ORDER BY SUM_TIMER_WAIT DESC LIMIT ?. This can cause some queries that were active at one time in the past to stay at the top of the list.

We should provide a feature to TRUNCATE TABLE on a periodic basis to keep the list up to date with current use patterns.

Allow parameterization of DATA_SOURCE_NAME

Having this hardcoded means you can't run two mysqld_exporter processes for separate MySQL instances in the same configuration namespace. While rare, I have encountered this situation.

getsockopt: connection refused

I am trying to use this with MariaDB, running the exporter on the same host, and getting the error below whether starting via DSN or config file. I have checked everything I could think of and have hit a wall. Please help.

Error pinging mysqld: dial tcp [::1]:3306: getsockopt: connection refused

Add metrics for Galera based clusters

Hi,

It would be very helpful if the exporter exposed Galera-based variables, those that start with wsrep_,
or at least allowed configuring the exporter to expose custom variables.

Thanks

PR #116 Breaks MySQL 5.6 INNODB_SYS_TABLESPACES

As reported in #116, the column names are different between 5.6 and 5.7.

We need to decide if we plan to support this feature on 5.6 or not.

For now, we should disable this collector by default since it's also going to break with MySQL 5.1.

Expose Errors as metrics; Skip errors

Hi,

Right now, if there is any error during scraping, it looks like the exporter reports the error in the log and aborts. This makes it rather fragile, e.g. if there is some format change between MySQL versions.

I would suggest:

  1. Do not abort the capture process on error; just skip the given data source.

  2. Additionally, add metrics which show when capture of a given data source has failed (similar to the metrics available about Prometheus server data captures).

This would allow monitoring and reacting to problems in Prometheus itself, rather than noticing data not coming in (which might also just be a disabled capture).

The code in question:

if *collectPerfFileEvents {
    if err = scrapePerfFileEvents(db, ch); err != nil {
        log.Println("Error scraping performance schema:", err)
        return
    }
}
Expose charset encoding & autoincrement value

Hi

I suggest enriching the mysql_info_schema_table_version metric by adding both the character set and auto-increment value as new labels:

mysql_info_schema_table_version{create_options="",engine="InnoDB",row_format="Compact",schema="drupal",table="shortcut_set",type="BASE TABLE"} 10

would become something like:

mysql_info_schema_table_version{create_options="",engine="InnoDB",row_format="Compact",schema="drupal",table="shortcut_set",type="BASE TABLE",charset="utf8_general_ci",autoincrement="11598789"} 10

Many thanks

mysql_global_status_connections keeps going up

With a MySQL server with no activity from any source except mysqld_exporter, why does the number of connections keep going up?
I have set MAX_USER_CONNECTIONS to 3 as recommended.

However, this number keeps going up. Is it the case that when the queries are executed, the connection is left open, i.e. you are not logging back out?

Scrape additional perf_schema tables

These tables would be good to collect:

  • performance_schema.events_waits_summary_global_by_event_name
  • performance_schema.file_summary_by_event_name

Workaround MySQL bug 79533

It turns out that MySQL sometimes duplicates rows in events_statements_summary_by_digest, possibly in response to TRUNCATE TABLE:

> select DIGEST, SCHEMA_NAME, COUNT(*), DIGEST_TEXT FROM events_statements_summary_by_digest group by DIGEST, SCHEMA_NAME HAVING COUNT(*) > 1;
+----------------------------------+-----------------------+----------+--------------------------------------------------------------------+
| DIGEST                           | SCHEMA_NAME           | COUNT(*) | DIGEST_TEXT                                                        |
+----------------------------------+-----------------------+----------+--------------------------------------------------------------------+
| a9d847b1b646cdc5781d41422ffb4aa2 | soundcloud_production |        2 | BEGIN                                                              |

See: https://bugs.mysql.com/bug.php?id=79533

mysql-exporter uses an invalid IP

I have two docker-compose files, one for the Prometheus stuff and one for my project. I share the network, but the mysqld-exporter uses another IP address.

Error message: Error pinging mysqld: dial tcp 10.67.76.55:3306: getsockopt: no route to host

When I log into the container with docker exec -it prometheus_mysqld-exporter_1 sh and call ping db, I get 172.21.0.2 instead of 10.67.76.55:

ping db

PING db (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.21.0.2: seq=1 ttl=64 time=0.055 ms

mysql-exporter docker-compose.yml

version: '2'

services:
  mysqld-exporter:
    image: "prom/mysqld-exporter"
    expose:
      - "9104"
    external_links:
      - db-master:db
    environment:
      - DATA_SOURCE_NAME=root:123456@(db:3306)/
    networks:
      - default
      - project

networks:
  project:
    external:
      name: project_default

project specific docker-compose.yml

version: '2'

services:
  db-master:
    restart: always
    image: mariadb:10.1.12
    expose:
      - "3306"
    environment:
      - MYSQL_ROOT_PASSWORD=123456
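
One possible fix (an assumption, not confirmed in the issue): with the exporter attached to both `default` and `project`, the name `db` can resolve to an address on a network the database is not reachable on. Attaching the exporter only to the shared project network makes `db` resolve where MariaDB actually listens:

```yaml
# Sketch of an adjusted mysql-exporter docker-compose.yml:
# drop the default network so `db` resolves on project_default.
version: '2'

services:
  mysqld-exporter:
    image: "prom/mysqld-exporter"
    expose:
      - "9104"
    external_links:
      - db-master:db
    environment:
      - DATA_SOURCE_NAME=root:123456@(db:3306)/
    networks:
      - project

networks:
  project:
    external:
      name: project_default
```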

Disable collect.info_schema.tables by default

I suggest disabling collect.info_schema.tables by default. Even though mysql_info_schema_table_* is only 3 series per table, having this enabled can grow Prometheus's disk usage substantially, depending on the number of schemas and tables.

I think it would be better if the default behaviour were not to run huge selects against I_S and store so much data when people do not intend to use it.

For example, in my case, with 2 prod hosts, 5s resolution, and 5 days' worth of data, the basic set of metrics from the node and mysqld exporters plus mysql_info_schema_table_* brought the Prometheus data to 2.2G. After purging those 3 series via the API, the space usage dropped to 600M. I didn't plan to use those series, as I have a lot of tables. Of course, I can disable this option on the exporter, but bear with me: it's always better when defaults don't cause any impact.
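
For reference, in current releases the collector can already be switched off per instance (kingpin-style boolean flags accept a `--no-` prefix; verify the exact flag name against your exporter version's --help output):

```
./mysqld_exporter --no-collect.info_schema.tables
```

The proposal here is only about flipping the default, so users who do want the per-table series would opt in with --collect.info_schema.tables instead.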
