
ceres's People

Contributors

cdavis, codacy-badger, dancech, deniszh, esc, iain-buclaw-sociomantic, iksaif, mleinart, obfuscurity, sejeff, vladimir-smirnov-sociomantic


ceres's Issues

configurable datatypes

Just throwing a thought out there...
What if, instead of always storing double-precision floats, we had a rule-matching system like storage-schemas.conf, where matching rules tell certain archives to store not only double-precision floats but, optionally, single-precision floats, (un)signed ints, bigints, etc.?
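
As a rough sketch of the idea (hypothetical rule table and struct-style format codes, not an existing ceres feature), the matching could work the same way storage-schemas.conf matching does:

import re

# Hypothetical: choose a datapoint format per metric via ordered pattern rules.
# Format codes are struct-style: 'd' double, 'f' float, 'q' int64, 'Q' uint64.
DATATYPE_RULES = [
    (re.compile(r'^collectd\.'), 'f'),
    (re.compile(r'^counters\.'), 'q'),
    (re.compile(r'.*'), 'd'),          # default: double precision
]

def datapoint_format(metric_name):
    for pattern, fmt in DATATYPE_RULES:
        if pattern.match(metric_name):
            return fmt
    return 'd'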

If a series has no data for the requested interval, the result should have the proper time step

vladimir-smirnov-sociomantic@555f974

If you request a metric without any usable slices (e.g. no data for a long time and everything has been rolled up off of disk, but the metadata is still there), you get a result with timeStep = 1, which isn't great (it requires recalculating this series at later stages). With this patch, ceres returns a result with a more appropriate time step (looked up in the config file, or falling back to the default, instead of 1).

In its current form, though, it depends on dkulikovskiy's branch, because this commit also improves his detection of unneeded slices.
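
A minimal sketch of the fallback idea (illustrative names, not the actual patch): when a read finds no usable slices, derive the step from the node's metadata or from a configured default instead of returning 1.

DEFAULT_TIME_STEP = 60  # assumed site-wide default

def empty_result_step(metadata):
    # metadata is the node's metadata dict; 'timeStep' and 'retentions'
    # are the keys ceres metadata files typically carry.
    retentions = metadata.get('retentions') or []
    if retentions:
        return retentions[0][0]   # secondsPerPoint of the finest archive
    return metadata.get('timeStep', DEFAULT_TIME_STEP)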

can't install using pip: No distributions at all found for ceres

search:

pip search ceres
python-saucerest          - Python wrapper around the saucelabs REST API
ResourceReservation       - Resource Reservation plugin for Trac
ceres                     - Distributed time-series database

install:

pip install ceres
Downloading/unpacking ceres
  Could not find any downloads that satisfy the requirement ceres
Cleaning up...
No distributions at all found for ceres

Could you please fix it?
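
In the meantime, one possible workaround (assuming you want the latest code from the GitHub repository) is to install directly from git:

pip install git+https://github.com/graphite-project/ceres.git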

How usable is ceres?

There is no mention anywhere of the release date and/or the current state of the project. It is nearly impossible to tell unless one is willing to risk production metrics or dig into the git issues/code.

Could we add that to the README and keep it updated after major commits? Or maybe track it in an issue thread?

:)

Add owners to readthedocs ceres project

Someone did us the favor of creating a Ceres readthedocs project. A few of the project maintainers should be added so that we can adjust it as needed.

@chuxpy can you make this happen for us :)
At least these people should be added:

  • chrismd
  • mleinart
  • SEJeff

ceres maintenance rollup issue

Hi,
I'm trying to understand ceres-maintenance, as it is completely undocumented, and I've run into multiple issues with the rollup plugin.
I have about 20 slice files (*@10.slice) per node accumulated over two weeks, and I decided to defrag them (remove slice gaps) and archive them (to the next, lower resolution: @60.slice).

This is a test server with just under 500 metrics collected; only 300 of them are being sent, from a single collectd daemon at a 10-second interval.

I started a rollup and was surprised that it took 5 hours to complete.

# ceres-maintenance  --configdir=$CONF_DIR/writer-1 rollup --log=rollup.log
real  310m32.189s
user  309m2.727s
sys 0m33.202s

In the log file I found many errors like this:

[Thu Jun  6 11:12:10 2013]  --- Error in node_found event-handler ---
[Thu Jun  6 11:12:10 2013]  Traceback (most recent call last):
  File "/opt/graphite/bin/ceres-maintenance", line 95, in dispatch
    handler(*args, **kwargs)
  File "/opt/graphite/plugins/maintenance/rollup.py", line 61, in node_found
    do_rollup(node, archive, archives[i+1])
  File "/opt/graphite/plugins/maintenance/rollup.py", line 102, in do_rollup
    coarseValue = aggregate(node, fineDatapoints)
  File "/opt/graphite/plugins/maintenance/rollup.py", line 22, in aggregate
    return float(sum(values)) / len(values) # values is guaranteed to be nonempty
TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'

I found the reason and fixed the aggregate() function locally (see the sketch below), but it did not help much.
Also, I noticed that some nodes are processed quickly (less than 2 seconds), while others need minutes or even tens of minutes. I made the logs more verbose.
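
A minimal sketch of that kind of None-filtering fix (assuming datapoints are (timestamp, value) pairs; the actual local change may differ):

def aggregate(node, datapoints):
    # Skip gaps: the original code summed every value and assumed none were None.
    values = [value for (timestamp, value) in datapoints if value is not None]
    if not values:
        return None
    return float(sum(values)) / len(values)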

Here is a part of the log; you can see two nodes found and being processed:

node_found node:collectd.redfish.interface-vboxnet0.if_errors.tx
node_found node:collectd.redfish.interface-vboxnet0.if_errors.rx

and 43200 iterations were performed in the do_rollup loop for both
(because my next retention is [60, 43200]).

But,

  • node_found node:collectd.redfish.interface-vboxnet0.if_errors.tx - all 43200 iterations were completed within 2 seconds,
  • node_found node:collectd.redfish.interface-vboxnet0.if_errors.rx - about 14 seconds to perform just 1000 iterations.
[Thu Jun  6 11:06:38 2013]  node_found: collectd.redfish.interface-vboxnet0.if_errors.tx
[Thu Jun  6 11:06:38 2013]  archive 0: retention:8640, startTime:1370444790, endTime:1370531190, slices:1
[Thu Jun  6 11:06:38 2013]  do_rollup:: collectd.redfish.interface-vboxnet0.if_errors.tx, 1370444790
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1367852820, windowEnd: 1367852880, fineDatapoints: 0, 1/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1367912820, windowEnd: 1367912880, fineDatapoints: 0, 1001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1367972820, windowEnd: 1367972880, fineDatapoints: 0, 2001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368032820, windowEnd: 1368032880, fineDatapoints: 0, 3001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368092820, windowEnd: 1368092880, fineDatapoints: 0, 4001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368152820, windowEnd: 1368152880, fineDatapoints: 0, 5001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368212820, windowEnd: 1368212880, fineDatapoints: 0, 6001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368272820, windowEnd: 1368272880, fineDatapoints: 0, 7001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368332820, windowEnd: 1368332880, fineDatapoints: 0, 8001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368392820, windowEnd: 1368392880, fineDatapoints: 0, 9001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368452820, windowEnd: 1368452880, fineDatapoints: 0, 10001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368512820, windowEnd: 1368512880, fineDatapoints: 0, 11001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368572820, windowEnd: 1368572880, fineDatapoints: 0, 12001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368632820, windowEnd: 1368632880, fineDatapoints: 0, 13001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368692820, windowEnd: 1368692880, fineDatapoints: 0, 14001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368752820, windowEnd: 1368752880, fineDatapoints: 0, 15001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368812820, windowEnd: 1368812880, fineDatapoints: 0, 16001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368872820, windowEnd: 1368872880, fineDatapoints: 0, 17001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368932820, windowEnd: 1368932880, fineDatapoints: 0, 18001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1368992820, windowEnd: 1368992880, fineDatapoints: 0, 19001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369052820, windowEnd: 1369052880, fineDatapoints: 0, 20001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369112820, windowEnd: 1369112880, fineDatapoints: 0, 21001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369172820, windowEnd: 1369172880, fineDatapoints: 0, 22001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369232820, windowEnd: 1369232880, fineDatapoints: 0, 23001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369292820, windowEnd: 1369292880, fineDatapoints: 0, 24001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369352820, windowEnd: 1369352880, fineDatapoints: 0, 25001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369412820, windowEnd: 1369412880, fineDatapoints: 0, 26001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369472820, windowEnd: 1369472880, fineDatapoints: 0, 27001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369532820, windowEnd: 1369532880, fineDatapoints: 0, 28001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369592820, windowEnd: 1369592880, fineDatapoints: 0, 29001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369652820, windowEnd: 1369652880, fineDatapoints: 0, 30001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369712820, windowEnd: 1369712880, fineDatapoints: 0, 31001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369772820, windowEnd: 1369772880, fineDatapoints: 0, 32001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369832820, windowEnd: 1369832880, fineDatapoints: 0, 33001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369892820, windowEnd: 1369892880, fineDatapoints: 0, 34001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1369952820, windowEnd: 1369952880, fineDatapoints: 0, 35001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370012820, windowEnd: 1370012880, fineDatapoints: 0, 36001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370072820, windowEnd: 1370072880, fineDatapoints: 0, 37001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370132820, windowEnd: 1370132880, fineDatapoints: 0, 38001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370192820, windowEnd: 1370192880, fineDatapoints: 0, 39001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370252820, windowEnd: 1370252880, fineDatapoints: 0, 40001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370312820, windowEnd: 1370312880, fineDatapoints: 0, 41001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370372820, windowEnd: 1370372880, fineDatapoints: 0, 42001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.tx, windowStart: 1370432820, windowEnd: 1370432880, fineDatapoints: 0, 43001/43200
[Thu Jun  6 11:06:38 2013]  do_rollup:: slice.write 1370363940@60
[Thu Jun  6 11:06:38 2013]  archive 1: retention:43200, startTime:1367852760, endTime:1370444760, slices:1
[Thu Jun  6 11:06:38 2013]  do_rollup:: collectd.redfish.interface-vboxnet0.if_errors.tx, 1367852760
[Thu Jun  6 11:06:38 2013]  archive 2: retention:105120, startTime:1336316700, endTime:1367852700, slices:0
[Thu Jun  6 11:06:38 2013]  do_rollup:: collectd.redfish.interface-vboxnet0.if_errors.tx, 1336316700

[Thu Jun  6 11:06:38 2013]  node_found: collectd.redfish.interface-vboxnet0.if_errors.rx
[Thu Jun  6 11:06:38 2013]  archive 0: retention:8640, startTime:1370444790, endTime:1370531190, slices:2
[Thu Jun  6 11:06:38 2013]  do_rollup:: collectd.redfish.interface-vboxnet0.if_errors.rx, 1370444790
[Thu Jun  6 11:06:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1367852820, windowEnd: 1367852880, fineDatapoints: 0, 1/43200
[Thu Jun  6 11:06:52 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1367912820, windowEnd: 1367912880, fineDatapoints: 0, 1001/43200
[Thu Jun  6 11:07:06 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1367972820, windowEnd: 1367972880, fineDatapoints: 0, 2001/43200
[Thu Jun  6 11:07:20 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368032820, windowEnd: 1368032880, fineDatapoints: 0, 3001/43200
[Thu Jun  6 11:07:33 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368092820, windowEnd: 1368092880, fineDatapoints: 0, 4001/43200
[Thu Jun  6 11:07:47 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368152820, windowEnd: 1368152880, fineDatapoints: 0, 5001/43200
[Thu Jun  6 11:08:01 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368212820, windowEnd: 1368212880, fineDatapoints: 0, 6001/43200
[Thu Jun  6 11:08:14 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368272820, windowEnd: 1368272880, fineDatapoints: 0, 7001/43200
[Thu Jun  6 11:08:28 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368332820, windowEnd: 1368332880, fineDatapoints: 0, 8001/43200
[Thu Jun  6 11:08:42 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368392820, windowEnd: 1368392880, fineDatapoints: 0, 9001/43200
[Thu Jun  6 11:08:55 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368452820, windowEnd: 1368452880, fineDatapoints: 0, 10001/43200
[Thu Jun  6 11:09:09 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368512820, windowEnd: 1368512880, fineDatapoints: 0, 11001/43200
[Thu Jun  6 11:09:22 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368572820, windowEnd: 1368572880, fineDatapoints: 0, 12001/43200
[Thu Jun  6 11:09:36 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368632820, windowEnd: 1368632880, fineDatapoints: 0, 13001/43200
[Thu Jun  6 11:09:50 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368692820, windowEnd: 1368692880, fineDatapoints: 0, 14001/43200
[Thu Jun  6 11:10:03 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368752820, windowEnd: 1368752880, fineDatapoints: 0, 15001/43200
[Thu Jun  6 11:10:17 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368812820, windowEnd: 1368812880, fineDatapoints: 0, 16001/43200
[Thu Jun  6 11:10:30 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368872820, windowEnd: 1368872880, fineDatapoints: 0, 17001/43200
[Thu Jun  6 11:10:44 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368932820, windowEnd: 1368932880, fineDatapoints: 0, 18001/43200
[Thu Jun  6 11:10:57 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1368992820, windowEnd: 1368992880, fineDatapoints: 0, 19001/43200
[Thu Jun  6 11:11:11 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1369052820, windowEnd: 1369052880, fineDatapoints: 0, 20001/43200
[Thu Jun  6 11:11:24 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1369112820, windowEnd: 1369112880, fineDatapoints: 0, 21001/43200
[Thu Jun  6 11:11:38 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1369172820, windowEnd: 1369172880, fineDatapoints: 0, 22001/43200
[Thu Jun  6 11:11:52 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1369232820, windowEnd: 1369232880, fineDatapoints: 0, 23001/43200
[Thu Jun  6 11:12:05 2013]  do_rollup:: node:collectd.redfish.interface-vboxnet0.if_errors.rx, windowStart: 1369292820, windowEnd: 1369292880, fineDatapoints: 0, 24001/43200
[Thu Jun  6 11:12:10 2013]  do_rollup:: slice.write 1369312440@60
...

Here are the file sizes:

ll /opt/graphite/storage/ceres/collectd/redfish/interface-vboxnet0/if_errors/*/*slice
-rw-r--r-- 1 root root 160760 Jun  7 09:26 /opt/graphite/storage/ceres/collectd/redfish/interface-vboxnet0/if_errors/rx/[email protected]
-rw-r--r-- 1 root root  20560 Jun  7 07:49 /opt/graphite/storage/ceres/collectd/redfish/interface-vboxnet0/if_errors/tx/[email protected]

rx is 8 times bigger than tx, but I can hardly believe that alone explains the situation.

Any ideas why this is, or what to do to track down the cause?

Create a README.md for this project

It would be nice to have information about this project - its intent and usage, for instance - easily available on the GitHub project page.

Thanks!

Ceres future / Maintainer needed ?

Hi all,

We need to discuss Ceres's future. As far as I understand, it still requires the megacarbon branch, and it seems it is no longer intended to replace Whisper as the default storage format for Graphite.
So, I see a couple of options here:

  1. Find some active Ceres user who can help merge Ceres support into master and fix (or backport from their own branch) the outstanding issues. (Maybe we could then include Ceres as a pluggable backend?) 🚢
  2. Admit the current state and officially abandon it, stating that clearly in the README and documentation. 🔥
  3. Do nothing and continue to confuse users. 🙄

Let's discuss here.
/cc @cbowman0 @iain-buclaw-sociomantic
/cc @obfuscurity @DanCech @iksaif
/cc @mleinart @SEJeff

If anyone knows some Ceres users, please point them to this thread.

movingAverage shows no data under ceres file format if time window is "empty" at the start

We have switched over to using Ceres on our main stats engine.
All is well (170 GB down to 1.4 GB), so happy days.

When we went back over some of our saved graphs that use movingAverage, we noticed that with Ceres (v0.10.0) the data does not show ("No Data") if the selected time window starts before the data was created. For example:

Our data started coming in at 12:12 AM, and we set movingAverage(..., 2).

With the time window starting at 12:12: No Data.
Change it to 12:13, and the graph appears.

I am logging this here under ceres because it seems to be related.
Any thoughts as to the cause? I am happy to triage this further (e.g. has movingAverage changed, or not had a 0.9.12+ patch applied?).

Errors when running rollup plugin

I see a fair few of these when running rollup (but not all the time):

[Thu Jan  3 15:50:29 2013]  --- Error in node_found event-handler ---
[Thu Jan  3 15:50:29 2013]  Traceback (most recent call last):
  File "/opt/graphite/bin/ceres-maintenance", line 95, in dispatch
    handler(*args, **kwargs)
  File "/opt/graphite/plugins/maintenance/rollup.py", line 51, in node_found
    do_rollup(node, archive, archives[i+1])
  File "/opt/graphite/plugins/maintenance/rollup.py", line 86, in do_rollup
    coarseValue = aggregate(node, fineDatapoints)
  File "/opt/graphite/plugins/maintenance/rollup.py", line 14, in aggregate
    return float(sum(values)) / len(values) # values is guaranteed to be nonempty
TypeError: unsupported operand type(s) for +: 'float' and 'NoneType'

It seems like some of my values are None - could that be because I did the defrag wrongly, or just from not having enough data points for the metric schema?

Will look in to it.

Redefining built-in 'slice'

Pylint output:

W:245,65: Redefining built-in 'slice' (redefined-builtin)
W:263,12: Redefining built-in 'slice' (redefined-builtin)
W:341, 8: Redefining built-in 'slice' (redefined-builtin)
W:409,10: Redefining built-in 'slice' (redefined-builtin)
W:527, 4: Redefining built-in 'slice' (redefined-builtin)
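
The warnings come from local variables named slice shadowing the Python built-in; renaming them is enough (illustrative example, not the actual ceres code):

class Node(object):
    def __init__(self, slices):
        self.slices = slices

    def write(self, datapoints):
        # was: for slice in self.slices: ...
        for ceres_slice in self.slices:
            ceres_slice.write(datapoints)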

Clearing sliceCache should also throw away timeStep

A common maintenance task is to start writing out a new stat, only to find later that we want coarser or finer precision for it.

Carbon can reload its storage schemas without problems, and we have plugins that can rewrite metadata files to sync with the Carbon config, merging any "orphaned" slices in the process. However, CeresNode keeps the timeStep in memory and will continue to create slices at a retention that no longer matches what Carbon or the CeresMetadata on disk says about the node.
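
A minimal sketch of the suggestion (hypothetical method names, not the current CeresNode API): clearing the slice cache should also drop the cached timeStep so it gets re-read from the on-disk metadata.

class NodeCache(object):
    def __init__(self, read_metadata):
        self.read_metadata = read_metadata   # callable returning the metadata dict
        self.sliceCache = None
        self.timeStep = None

    def clearCache(self):
        self.sliceCache = None
        self.timeStep = None   # force a metadata re-read on the next write

    def effectiveTimeStep(self):
        if self.timeStep is None:
            self.timeStep = self.read_metadata()['timeStep']
        return self.timeStep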

Has this project been discontinued?

In the README it said that:

Ceres is ... intended to replace Whisper as the default storage format for Graphite.

However, the latest commit was one year ago. Is the above statement still true, or has Ceres been discontinued?

Megacarbon: Rollup plugin (for ceres-maintenance) does not finish nightly

I'm running the rollup plugin nightly, but it's taking a very long time to run and is not finishing before the next night's run kicks off.

I'm running with < 3 GB of total metric data (which doesn't seem like much), and there's only 3 months' worth of data (so not too much rolling up to be done).

I created this ticket to see if anyone else is having the same issue, and to get @mleinart's input. I will investigate - the plugin seems to spend almost all of its time in https://github.com/graphite-project/carbon/blob/megacarbon/plugins/maintenance/rollup.py#L83
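
One generic way to confirm where the time is going (standard Python tooling, not something ceres-maintenance provides itself) is to run the script under cProfile and inspect the hot spots:

python -m cProfile -o rollup.prof /opt/graphite/bin/ceres-maintenance --configdir=/path/to/conf rollup
python -c "import pstats; pstats.Stats('rollup.prof').sort_stats('cumulative').print_stats(20)"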

Data from several slices should be merged, not appended.

If for some reason you end up with overlapping slices, reading data from an interval that matches one of them will return more points than should be there.

For example, you have some metric that, due to a broken merge/rollup, got into this shape for the interval 1404000000 - 1412640000:

[email protected] with 100 points in it (up to 1412640000)

[email protected] with 100 points in it (up to 1414454400)

If you try to read the data from 1404000000 to 1412640000, you will get 104 points in the result (100 from [email protected] and 4 approximated points from [email protected] appended to the end).

I understand that this shouldn't happen in the first place, but right now there is no way to fix the situation (except removing the wrong data by hand or with a script); I think ceres should handle it.
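
A minimal sketch of the merge behaviour being asked for (illustrative, not the current ceres implementation): combine datapoints from overlapping slices by timestamp, keeping the first value seen for each timestamp rather than appending duplicates.

def merge_slice_data(*slice_results):
    # Each item in slice_results is a list of (timestamp, value) pairs,
    # ordered from the finest-resolution slice to the coarsest.
    merged = {}
    for datapoints in slice_results:
        for timestamp, value in datapoints:
            merged.setdefault(timestamp, value)   # first slice wins
    return sorted(merged.items())

# Reading 1404000000-1412640000 from the two overlapping slices above would
# then yield 100 points for that interval instead of 104.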
