
mysql-statsd's Issues

Cannot start mysql_statsd, get

Hi,

I'm trying to get mysql_statsd to run on an ansible-deployed server.

In order to do so I've had to mess around with users and install the package through pip, since it complains when I try to install it as root.
So what I've done now is run pip as a dedicated user 'statsd', and set up a service file that calls the pip-managed mysql_statsd script in that user's home dir.

The service is called like so:

[Unit]
Name = statsd
Description = mysql statsd

[Service]
ExecStart=/home/statsd/.local/bin/mysql_statsd --config /etc/mysql_statsd.conf
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=mysql_statsd
WorkingDirectory=/home/statsd
User=statsd
Group=statsd
RestartSec=10
# Environment=NODE_ENV=production PORT=5000

[Install]
WantedBy=multi-user.target

with the configuration file (in /etc/, which does seem to be picked up):

[daemon]
logfile = /home/statsd/mysql_statsd/log/daemon.log
pidfile = /home/statsd/mysql_statsd.pid

[statsd]
host = {{ mariadb_statsd_host }}
port = {{ mariadb_statsd_port }}
prefix = mysql
include_hostname = true

[mysql]
; host = localhost
socket = /tmp/mysql.sock
username = {{ mariadb_statsd_user }}
password = {{ mariadb_statsd_pass }}
stats_types = status,variables,innodb,slave
query_variables = SHOW GLOBAL VARIABLES
interval_variables = 1000
query_status = SHOW GLOBAL STATUS
interval_status = 1000
query_innodb = SHOW ENGINE INNODB STATUS
interval_innodb = 1000
query_slave = SHOW SLAVE STATUS
interval_slave = 1000
query_commit = COMMIT
interval_commit = 1000
sleep_interval = 0


[metrics]
; g = gauge, c = counter (increment), t = timer, r = raw value, d = delta
variables.max_connections = g
status.max_used_connections = g
status.connections = d
status.aborted_connects = d
status.open_tables = g
status.open_files = g
status.open_streams = g
status.opened_tables = d
status.queries = d
status.empty_queries = d
status.slow_queries = d
status.questions = d
status.com_select = d
status.com_insert = d
status.com_update = d
status.com_delete = d
status.com_insert_select = d
status.qcache_queries_in_cache = g
status.qcache_inserts = d
status.qcache_hits = d
status.qcache_prunes = d
status.qcache_not_cached = d
status.qcache_lowmem_prunes = d
status.qcache_free_memory = g
status.qcache_free_blocks = g
status.qcache_total_blocks = g
status.flush_commands = g
status.created_tmp_disk_tables = d
status.created_tmp_tables = d
status.threads_running = g
status.threads_created = d
status.threads_connected = g
status.threads_cached = g
status.wsrep_flow_control_sent = g
status.wsrep_flow_control_recv = g
status.wsrep_local_sent_queue = g
status.wsrep_local_recv_queue = g
status.wsrep_cert_deps_distance = g
status.wsrep_local_cert_failures = d
status.wsrep_local_bf_aborts = d
status.wsrep_last_committed = d
status.wsrep_flow_control_paused = g
innodb.spin_waits = d
innodb.spin_rounds = d
innodb.os_waits = d
innodb.pending_normal_aio_reads = g
innodb.pending_normal_aio_writes = g
innodb.pending_ibuf_aio_reads = g
innodb.pending_aio_log_ios = g
innodb.pending_aio_sync_ios = g
innodb.pending_log_flushes = g
innodb.pending_buf_pool_flushes = g
innodb.pending_log_writes = g
innodb.pending_chkp_writes = g
innodb.file_reads = d
innodb.file_writes = d
innodb.file_fsyncs = d
innodb.ibuf_inserts = d
innodb.ibuf_merged = d
innodb.ibuf_merges = d
innodb.log_bytes_written = d
innodb.unflushed_log = g
innodb.log_bytes_flushed = d
innodb.log_writes = d
innodb.pool_size = g
innodb.free_pages = g
innodb.database_pages = g
innodb.modified_pages = g
innodb.pages_read = d
innodb.pages_created = d
innodb.pages_written = d
innodb.queries_inside = d
innodb.queries_queued = d
innodb.rows_inserted = d
innodb.rows_updated = d
innodb.rows_deleted = d
innodb.rows_read = d
innodb.innodb_transactions = d
innodb.unpurged_txns = d
innodb.history_list = g
innodb.current_transactions = g
innodb.active_transactions = g
innodb.locked_transactions = g
innodb.innodb_locked_tables = g
innodb.innodb_tables_in_use = g
innodb.read_views = g
innodb.hash_index_cells_total = g
innodb.hash_index_cells_used = g
innodb.total_mem_alloc = d
innodb.additional_pool_alloc = d
innodb.last_checkpoint = d
innodb.uncheckpointed_bytes = g
innodb.ibuf_used_cells = g
innodb.ibuf_free_cells = g
innodb.ibuf_cell_count = g
innodb.adaptive_hash_memory = g

slave.seconds_behind_master = g

; innodb.bufferpool_*.<metric> will whitelist these metrics for all bufferpool instances
; If you don't have multiple bufferpools it won't do anything
innodb.bufferpool_*.pool_size = g
innodb.bufferpool_*.pool_size_bytes = g
innodb.bufferpool_*.free_pages = g
innodb.bufferpool_*.database_pages = g
innodb.bufferpool_*.old_database_pages = g
innodb.bufferpool_*.modified_pages = g
innodb.bufferpool_*.pending_reads = g
innodb.bufferpool_*.pending_writes_lru = g
innodb.bufferpool_*.pending_writes_flush_list = g
innodb.bufferpool_*.pending_writes_single_page = g
innodb.bufferpool_*.pages_made_young = d
innodb.bufferpool_*.pages_not_young = d
innodb.bufferpool_*.pages_made_young_ps = g
innodb.bufferpool_*.pages_not_young_ps = g
innodb.bufferpool_*.pages_read = d
innodb.bufferpool_*.pages_created = d
innodb.bufferpool_*.pages_written = d
innodb.bufferpool_*.pages_read_ps = g
innodb.bufferpool_*.pages_created_ps = g
innodb.bufferpool_*.pages_written_ps = g
innodb.bufferpool_*.buffer_pool_hit_total = g
innodb.bufferpool_*.buffer_pool_hits = g
innodb.bufferpool_*.buffer_pool_young = g
innodb.bufferpool_*.buffer_pool_not_young = g
innodb.bufferpool_*.pages_read_ahead = g
innodb.bufferpool_*.pages_read_evicted = g
innodb.bufferpool_*.pages_read_random = g
innodb.bufferpool_*.lru_len = g
innodb.bufferpool_*.lru_unzip = g
innodb.bufferpool_*.io_sum = d
innodb.bufferpool_*.io_sum_cur = g
innodb.bufferpool_*.io_unzip = d
innodb.bufferpool_*.io_unzip_cur = g

Now, the daemon does start and the mysql_statsd script seems to pick up the configuration just fine, but it continually generates errors for all the threads it spawns. Then they all die, systemd restarts the entire service 10 seconds later and the same thing happens all over again.

I have no idea how to debug this, as it's all Python code and I'm not good at Python.

These are the errors that end up in the daemon.log file (specified in the config; that's how I know it's being read):

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f7ab181acf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 2 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7fc25eba7cf8>
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
TypeError: 'bool' object is not callable

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 2 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f809f130cf8>
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
TypeError: 'bool' object is not callable

Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f43503f0cf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7fae44e33cf8>
Caught CTRL+C / SIGKILL
Stopping threads
Waiting for 3 threads
All threads stopped
<mysql_statsd.mysql_statsd.MysqlStatsd instance at 0x7f9296f52cf8>
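For what it's worth, the "'bool' object is not callable" traceback above is the generic failure you get when a Thread's target (or its run attribute) ends up being a boolean instead of a callable, which points at the thread-setup code rather than your config. A minimal, standalone reproduction (Python 3 shown here for convenience; the report itself is Python 2):

```python
import threading

# Capture exceptions raised inside threads (Python 3.8+).
captured = []
threading.excepthook = lambda args: captured.append(args.exc_type)

# Passing a non-callable (here a bool) as the thread target reproduces
# the "'bool' object is not callable" error from the daemon.log above.
t = threading.Thread(target=True)
t.start()
t.join()
print(captured)  # [<class 'TypeError'>]
```

Given the later report that the PyPI release is out of sync with the repo, a broken attribute on one of the worker thread classes in the published package seems plausible, but that is speculation.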

and finally, the (I believe) relevant sections of the Ansible playbook:

- name: Add statsd group
  group:
    name: statsd
    system: yes

- name: Add statsd user
  user:
    name: statsd
    shell: /bin/bash
    group: statsd
    home: /home/statsd
    system: yes

- name: install pymysql dependencies
  apt:
    name: default-libmysqlclient-dev
    state: present

- name: copy statsd configuration file
  template:
    src: mysql-statsd.conf.j2
    dest: /etc/mysql_statsd.conf
    owner: root
    group: root
    mode: 0744

- name: Install statsd
  become: yes
  become_user: statsd
  pip:
    name: ["mysql_statsd"]
    state: present

- name: install statsd service file
  copy:
    src: statsd.service
    dest: /etc/systemd/system/statsd.service
    mode: 0644
    owner: root
    group: root

- name: create log directory
  file:
    path: /home/statsd/mysql_statsd/log/
    state: directory
    owner: statsd
    group: statsd
    mode: 0755

Is there any way to figure out where things are going wrong?

If you need any more information I'd be happy to provide it.
Thanks!

huge dips in data being sent to graphite

First thank you for this great way to send stats.

I'm having some issues with all my graphs in Graphite having huge dips in the data. Here's a sample of two showing data from a 15min period.

[screenshot: Graphite render of two graphs showing the dips]

I checked the raw data via the rawData=true URL parameter and it confirms the data is not consistent.

9299850.3
9300506.3
9301359.7
8371844.9
9302794.1
9303719.2
9304351.2

Any idea what is causing this and how I can fix it?

Thanks!

Please give this repo some love.

I don't get why your repo doesn't get PRs, as there is no alternative for monitoring mysql/mariadb through statsd/graphite. Your initiative is a good one (though as a PHP developer, I can only thank you, not contribute).

Maybe talk a little bit more about the project on twitter? :)

You should consider at least writing documentation; maybe it will help users contribute.

❤️

Sending rates

Hi guys,

how did you solve sending rates to Graphite instead of ever-incrementing counters? Take for example the value of "Innodb_rows_read". Statsd itself does not implement sending rates, and using the derive function in Graphite is not a good solution (it causes problems with missing data points).

Thank you,

Vitek
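The config in the first issue above hints at the answer: mysql-statsd's `d` (delta) metric type keeps the previous sample and ships only the difference, so Graphite receives a per-interval increment rather than a raw counter. A rough illustration of the idea (hypothetical class name, not the project's actual code):

```python
class DeltaTracker:
    """Turn ever-increasing counters into per-interval deltas client-side."""

    def __init__(self):
        self.last = {}

    def delta(self, key, value):
        prev = self.last.get(key)
        self.last[key] = value
        if prev is None or value < prev:
            # First sample, or the counter was reset (e.g. server restart):
            # there is nothing meaningful to send yet.
            return None
        return value - prev


t = DeltaTracker()
first = t.delta("status.innodb_rows_read", 1000)
second = t.delta("status.innodb_rows_read", 1600)
print(first, second)  # None 600
```

Computing the delta on the client side also sidesteps the missing-data-point problem with Graphite's derive function, since a skipped interval simply produces a larger delta instead of a gap.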

Can't install via easy_install

I'm executing:

easy_install mysql-statsd

and getting:

No local packages or download links found for mysql-statsd
error: Could not find suitable distribution for Requirement.parse('mysql-statsd')

The pypi version is outdated

Run pip install mysql-statsd and compare the installed file (mysql_statsd/thread_mysql.py) with master.

The PyPI package appears to be out of sync with the repo.

ThreadStatsd crashes when mysql replication thread is stopped

ThreadStatsd is crashing when the SQL slave thread is down, because slave.seconds_behind_master is NULL:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/home/ex/mysql-statsd/mysql_statsd/thread_statsd.py", line 71, in run
    self.send_stat(item)
  File "/home/ex/mysql-statsd/mysql_statsd/thread_statsd.py", line 55, in send_stat
    sender(k, float(v))
TypeError: float() argument must be a string or a number

Here is a workaround:

try:
    sender(k, float(v))
except Exception, e:
    print "k", k, "v", v
    print e

$ python mysql_statsd/mysql_statsd.py -d -f -c ~ex/mysql-statsd.conf
k slave.seconds_behind_master v None
float() argument must be a string or a number
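A slightly cleaner variant of the same workaround would be to skip the metric entirely when MySQL returns NULL (Python's None), instead of catching the TypeError after the fact. This is a sketch only, loosely matching the send_stat shape from the traceback:

```python
def send_stat(sender, key, value):
    """Send one metric, skipping NULLs such as Seconds_Behind_Master
    when the SQL slave thread is stopped."""
    if value is None:
        return False  # nothing sensible to report
    sender(key, float(value))
    return True


sent = []
send_stat(lambda k, v: sent.append((k, v)), "slave.seconds_behind_master", None)
send_stat(lambda k, v: sent.append((k, v)), "slave.seconds_behind_master", "3")
print(sent)  # [('slave.seconds_behind_master', 3.0)]
```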

TypeError: unsupported operand type(s) for -: 'str' and 'str'

Hello

I'm using the following version of mysql-statsd on CentOS 6.

# pip list | grep mysql-statsd
mysql-statsd (0.1.4)

When I run mysql_statsd I get the following error after a few seconds:

# mysql_statsd --config /root/statsd-mysql/mysql-statsd.conf -d --foreground
('status.aborted_connects', '1', 'd')
('status.com_delete', '707701', 'd')
('status.com_insert', '707700', 'd')
('status.com_insert_select', '0', 'd')
...
...
...
('status.threads_created', '90', 'd')
('status.threads_running', '17', 'g')
('variables.max_connections', '151', 'g')
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 149, in run
    self._run()
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 92, in _run
    rows = self._preprocess(check_type, column_names, cursor.fetchall())
  File "/usr/lib/python2.6/site-packages/mysql_statsd/thread_mysql.py", line 131, in _preprocess
    return executing_class.process(rows, *extra_args)
  File "/usr/lib/python2.6/site-packages/mysql_statsd/preprocessors/innodb_preprocessor.py", line 66, in process
    self.process_line(line)
  File "/usr/lib/python2.6/site-packages/mysql_statsd/preprocessors/innodb_preprocessor.py", line 207, in process_line
    self.tmp_stats['unpurged_txns'] = self.tmp_stats['innodb_transactions'] - self.make_bigint(innorow[6], innorow[7])
TypeError: unsupported operand type(s) for -: 'str' and 'str'
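The traceback says both operands of the subtraction are still strings, i.e. the values parsed out of the SHOW ENGINE INNODB STATUS text were never converted to numbers for this server's output format. A defensive conversion before doing arithmetic would avoid the crash (illustrative only, not the project's actual fix):

```python
def to_int(value, default=0):
    """Convert a field parsed from SHOW ENGINE INNODB STATUS to int,
    falling back to a default for unexpected or missing values."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default


# Both fields arrive as strings from the text parser:
innodb_transactions = to_int("707701")
purged = to_int("707400")
unpurged = innodb_transactions - purged
print(unpurged)  # 301
```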

Incompatibility with Percona output for Innodb Status

Percona adds extra sections to the InnoDB status output:

--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
1 transactions active inside InnoDB
1 out of 1000 descriptors used
---OLDEST VIEW---
Normal read view
Read view low limit trx n:o EA05374F2
Read view up limit trx id EA05374F1
Read view low limit trx id EA05374F2
Read view individually stored trx ids:
Read view trx id EA05374F1
-----------------
Main thread process no. 28092, id 140035932575488, state: flushing log
Number of rows inserted 47675820507, updated 5833120651, deleted 10936909882, read 501029518850
0.00 inserts/s, 43442.28 updates/s, 0.00 deletes/s, 76878.06 reads/s
------------
TRANSACTIONS
------------

In this case the line '-----------------' after OLDEST VIEW is treated as the beginning of a chunk name, and TRANSACTIONS as the chunk text.

Solution:

# diff -u innodb_preprocessor.py.orig innodb_preprocessor.py
--- innodb_preprocessor.py.orig 2015-10-30 05:39:18.073914726 -0700
+++ innodb_preprocessor.py  2015-10-30 07:37:29.439914287 -0700
@@ -39,19 +39,23 @@
         chunks = {'junk': []}
         current_chunk = 'junk'
         next_chunk = False
+        oldest_view = False

         self.clear_variables()
         for row in rows:
             innoblob = row[2].replace(',', '').replace(';', '').replace('/s', '').split('\n')
             for line in innoblob:
                 # All chunks start with more than three dashes. Only the individual innodb bufferpools have three dashes
+                if line.startswith('---OLDEST VIEW---'):
+                    oldest_view = True
                 if line.startswith('----'):
                     # First time we see more than four dashes have to record the new chunk
-                    if next_chunk == False:
+                    if next_chunk == False and oldest_view == False:
                         next_chunk = True
                     else:
                     # Second time we see them we just have recorded the chunk
                         next_chunk = False
+                        oldest_view = False
                 elif next_chunk == True:
                     # Record the chunkname and initialize the array
                     current_chunk = line
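The patched loop can be exercised in isolation. The version below is a self-contained rewrite of that logic with the oldest_view flag applied (simplified naming, fed a trimmed version of the Percona sample above):

```python
def split_chunks(lines):
    """Split SHOW ENGINE INNODB STATUS output into named chunks,
    skipping the stray separator after ---OLDEST VIEW---."""
    chunks = {'junk': []}
    current_chunk = 'junk'
    next_chunk = False
    oldest_view = False
    for line in lines:
        if line.startswith('---OLDEST VIEW---'):
            oldest_view = True
        if line.startswith('----'):
            if not next_chunk and not oldest_view:
                next_chunk = True      # separator above a chunk name
            else:
                next_chunk = False     # separator below it, or the stray one
                oldest_view = False
        elif next_chunk:
            current_chunk = line       # the chunk name itself
            chunks[current_chunk] = []
        else:
            chunks[current_chunk].append(line)
    return chunks


sample = [
    '--------------', 'ROW OPERATIONS', '--------------',
    '0 queries inside InnoDB 0 queries in queue',
    '---OLDEST VIEW---', 'Normal read view', '-----------------',
    'Main thread process no. 28092',
    '------------', 'TRANSACTIONS', '------------',
]
chunks = split_chunks(sample)
print(sorted(chunks))  # ['ROW OPERATIONS', 'TRANSACTIONS', 'junk']
```

Without the oldest_view flag, the stray '-----------------' would flip next_chunk and turn 'TRANSACTIONS' content into a chunk name; with it, the Main thread line stays inside ROW OPERATIONS.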

Multiple buffer pool instances

If one has multiple buffer pool instances (innodb_buffer_pool_instances), the buffer pool metrics sent to statsd will mostly reflect the value of the last instance only.
I believe the best fix is to check whether the variable is already initialized in the dictionary.
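The check the reporter suggests could look roughly like this (hypothetical helper, not the project's code): only assign on first sight, otherwise accumulate, so the total is no longer just the last instance's value:

```python
def record(stats, key, value):
    """Accumulate a metric across buffer pool instances instead of
    letting the last instance overwrite the previous ones."""
    if key in stats:
        stats[key] += value
    else:
        stats[key] = value


stats = {}
# Two buffer pool instances of 128 pages each:
record(stats, 'innodb.pool_size', 128)
record(stats, 'innodb.pool_size', 128)
print(stats)  # {'innodb.pool_size': 256}
```

Summing is appropriate for size and count metrics; ratios or percentages would need averaging instead, so the right aggregation depends on the metric.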

mysql_statsd listens on UDP:20981 - why?

I installed mysql_statsd via pip and noticed that when it runs it listens on *:20981. Why does it do that? I cannot find anything in the code or in the docs to indicate that it listens for anything.
Also, is there a way to bind it to a specific IP?
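Most likely this is not a listener at all but the statsd client's UDP socket: a UDP socket is implicitly bound to an ephemeral local port on its first send, and netstat reports that as an open UDP port. The same behaviour is visible with a plain socket (assumption: mysql_statsd sends to statsd over UDP, which the statsd protocol normally uses):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# UDP needs no listener on the other side; this just fires one datagram.
s.sendto(b"mysql.status.connections:1|c", ("127.0.0.1", 8125))
# The OS has now bound the socket to an ephemeral local port,
# which is what shows up in netstat as an "open" UDP port.
local_port = s.getsockname()[1]
print(local_port)
s.close()
```

Binding such a socket to a specific local IP is a matter of calling s.bind(("10.0.0.5", 0)) before the first send; whether mysql_statsd exposes an option for that I can't say.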
