timegaps

Timegaps is a cross-platform command line program. It sorts a set of items into rejected and accepted ones, based on the age of each item and user-given time categorization rules.

Timegaps allows for thinning out a collection of items, such that the time gaps between accepted items become larger with increasing item age. This is useful for implementing backup retention policies with the goal of keeping backups "logarithmically" distributed in time, e.g. one for each of the last 24 hours, one for each of the last 30 days, one for each of the last 8 weeks, and so on.
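
For illustration, the "logarithmic" policy sketched above (ignoring the trailing "and so on") corresponds to a rule string along the lines of hours24,days30,weeks8; the rule syntax is explained further below. Applied to a set of snapshot files, such an invocation could look like this (the file name pattern is just an example):

$ timegaps hours24,days30,weeks8 *.tar.gz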

Timegaps is built with a focus on reliability. It is backed by a considerable set of unit tests, including direct command line interface tests. Currently, each commit is automatically tested against CPython 2.7/3.3/3.4 on Linux via Travis CI. Releases are tested on Linux as well as on Windows. Simplicity and compliance with the Unix philosophy are the major design goals of timegaps. Version tags follow the concept of semantic versioning.

Requirements

Timegaps requires Python. Releases are tested on CPython 2.7/3.3/3.4, on Linux as well as on Windows. These are the environments in which you can expect it to run properly.

Installation

Installation via pip is recommended:

$ pip install timegaps

This downloads the latest timegaps release from PyPI and installs it. A previously installed version can be upgraded via:

$ pip install --upgrade timegaps

This is how to install the latest development version:

$ pip install git+https://github.com/jgehrcke/timegaps

Documentation and changelog

  • Docs and resources: the official home of this program is http://gehrcke.de/timegaps. The documentation consists of this README, timegaps --help, and timegaps --extended-help.
  • Changelog.

Hands-on introduction

Consider the following situation: all *.tar.gz files in the current working directory happen to be daily snapshots of something. The task is to accept one snapshot for each of the last 20 days, one for each of the last 8 weeks, and one for each of the last 12 months, and to reject all others. Use timegaps for performing this categorization into rejected and accepted items and print the rejected ones:

$ timegaps days20,weeks8,months12 *.tar.gz | sort
daily-2013-09-17-133413.tar.gz
[...]
daily-2014-02-27-070001.tar.gz

This was a read-only, non-invasive operation. By default, timegaps prints the rejected items to stdout, separated by newline characters (for compatibility with other Unix command line tools). Repeat the operation and count the rejected items:

$ timegaps days20,weeks8,months12 *.tar.gz | wc -l
125

Given this specific set of rules and set of items, timegaps identified 125 items to be rejected. Move them to the directory notneededanymore (and suppress stdout):

$ mkdir notneededanymore
$ timegaps --move notneededanymore days20,weeks8,months12 *.tar.gz > /dev/null

Count files in the newly created directory for validation purposes (must also be 125):

$ /bin/ls -1 notneededanymore/* | wc -l
125

Okay, so far the item (file) modification time was determined from the corresponding inode via the stat() system call. In a different mode of operation (--time-from-basename), timegaps can read the "modification time" from the basename of the file or directory. The file names of the tarred snapshots in this hands-on session carry meaningful time information in the format daily-%Y-%m-%d-%H%M%S.tar.gz. Providing this format string, we can instruct timegaps to parse the item modification time from the file names:

$ mv notneededanymore/* .
$ timegaps --time-from-basename daily-%Y-%m-%d-%H%M%S.tar.gz \
    days20,weeks8,months12 *.tar.gz | wc -l
125

The above can be useful in cases where the actual file modification time is screwed, and the real timing information is only contained in the file name. In another mode of operation (--stdin), timegaps can read newline-separated items from stdin, instead of reading items from the command line:

$ /bin/ls -1 *tar.gz | timegaps --stdin days20,weeks8,months12 | wc -l
125

Given -0/--nullsep, timegaps expects NUL character-separated items on stdin. In this mode of operation, timegaps also NUL-separates the items on stdout:

$ find . -name "*tar.gz" -print0 | \
    timegaps -0 --stdin days20,weeks8,months12 | \
    tr '\0' '\n' | wc -l
125

By default, the reference time for determining the age of items is the time of program invocation. Use -t/--reference-time for changing the reference time from now to an arbitrary point in time (January 1st, 2020, 00:00:00 local time in this case):

$ timegaps --reference-time 20200101-000000 years10 *.tar.gz | wc -l
153

With a different reference time and different rules, the number of rejected items obviously changed (from 125 to 153). Instead of printing the rejected items, timegaps can invert the output and print the accepted ones:

$ timegaps -a -t 20200101-000000 years10 *.tar.gz
daily-2014-02-27-070001.tar.gz
daily-2014-01-01-070001.tar.gz

There are more features, such as deleting files, or a mode in which items are treated as simple strings instead of paths. See the help message:

$ timegaps --help
usage: timegaps [-h] [--extended-help] [--version] [-s] [-0] [-a] [-t TIME]
                [--time-from-basename FMT | --time-from-string FMT]
                [-d | -m DIR] [-r] [-v]
                RULES [ITEM [ITEM ...]]

Accept or reject items based on age categorization.

positional arguments:
  RULES                 A string defining the categorization rules. Must be of
                        the form <category><maxcount>[,<category><maxcount>[,
                        ... ]]. Example: 'recent5,days12,months5'. Valid
                        <category> values: years, months, weeks, days, hours,
                        recent. Valid <maxcount> values: positive integers.
                        Default maxcount for unspecified categories: 0.
  ITEM                  Treated as path to file system entry (default) or as
                        string (--time-from-string mode). Must be omitted in
                        --stdin mode. Warning: duplicate items are treated
                        independently.

optional arguments:
  -h, --help            Show help message and exit.
  --extended-help       Show extended help message and exit.
  --version             Show version information and exit.
  -s, --stdin           Read items from stdin. The default separator is one
                        newline character.
  -0, --nullsep         Input and output item separator is NUL character
                        instead of newline character.
  -a, --accepted        Output accepted items and perform actions on accepted
                        items. Overrides default, which is to output rejected
                        items (and act on them).
  -t TIME, --reference-time TIME
                        Parse reference time from local time string TIME.
                        Required format is YYYYmmDD-HHMMSS. Overrides default
                        reference time, which is the time of program
                        invocation.
  --time-from-basename FMT
                        Parse item modification time from the item path
                        basename, according to format string FMT (cf. Python's
                        strptime() docs at bit.ly/strptime). This overrides
                        the default behavior, which is to extract the
                        modification time from the inode.
  --time-from-string FMT
                        Treat items as strings (do not validate paths). Parse
                        time from item string using format string FMT (cf.
                        bit.ly/strptime).
  -d, --delete          Attempt to delete rejected paths.
  -m DIR, --move DIR    Attempt to move rejected paths to directory DIR.
  -r, --recursive-delete
                        Enable deletion of non-empty directories.
  -v, --verbose         Control verbosity. Can be specified multiple times for
                        increasing verbosity level. Levels: error (default),
                        info, debug.

Version 0.1.0

For a detailed specification of program behavior and the time categorization method, please consult timegaps --extended-help.

General description

Timegaps' input item set is either provided with command line arguments or read from stdin. The output is the set of rejected or accepted items, written to stdout.

Timegaps by default treats items as paths. It retrieves the modification time (st_mtime) of the corresponding file system entries via the stat system call. By default, timegaps works in a non-invasive read-only mode and simply lists the rejected items. If explicitly requested, timegaps can also directly delete or move the corresponding file system entries, using well-established functions from the shutil module in Python's standard library.
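
For example, instead of merely listing the rejected items, timegaps can be told to delete them right away; the following sketch reuses the rules from the hands-on section above (since --delete is invasive, it is wise to inspect the rejected items in the default read-only mode first):

$ timegaps --delete days20,weeks8,months12 *.tar.gz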

In a special mode of operation, timegaps can treat items as simple strings without path validation and extract the "modification time" from each string, according to a given time string format. This feature can be used for filtering any kind of time-dependent data, such as ZFS snapshots (if properly named).
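
For instance, a set of ZFS snapshots could be thinned out in --stdin mode by parsing the time from each snapshot name; in the following sketch, the dataset name tank/data, the snapshot naming scheme, and the rules are made up for illustration:

$ zfs list -r -H -t snapshot -o name tank/data | \
    timegaps --stdin --time-from-string "tank/data@%Y-%m-%d__%H-%M-%S" \
    recent2,hours24,days30,weeks8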

Main motivation

The well-established backup solution rsnapshot has the useful concept of hourly / daily / weekly / ... snapshots already built in and creates such a structure on the fly. Unfortunately, other backup approaches often lack such a fine-grained backup retention logic, and people tend to hack simple filters themselves. Furthermore, even rsnapshot is not able to post-process and thin out an existing set of snapshots. This is where timegaps comes in: you can use the backup solution of your choice for periodically (e.g. hourly) creating a snapshot. You can then — independently and at any time — process this set of snapshots with timegaps and identify those snapshots that need to be eliminated (removed or displaced) in order to maintain a certain “logarithmic” distribution of snapshots in time. This is the main motivation behind timegaps, but of course you can use it for filtering any kind of time-dependent data.

How can the unit tests be run?

If you run into trouble with timegaps, or if you want to verify whether it runs properly on your platform, it is a good idea to run the unit test suite under your conditions. Timegaps' unit tests are written for pytest. With timegaps/test being the current working directory, run the tests like this:

$ py.test -v

Author & license

Timegaps is written and maintained by Jan-Philip Gehrcke. It is licensed under an MIT license (see LICENSE file).

Issues

Confusing if "recent" is not used.

crontab:

@hourly pg_dump -Fc > backup/db-$(date -Isecond).dump; timegaps --delete hours24,days7,weeks4,months6 backup/*.dump

This immediately deletes the files created a millisecond before. The backup directory will stay empty forever.

That's confusing.

I found the solution: You need to add recent.

Example:

@hourly pg_dump -Fc > backup/db-$(date -Isecond).dump; timegaps --delete recent1,hours24,days7,weeks4,months6 backup/*.dump

The "first time user experience" could get improved.

Why not switch to the following behavior: if "recent" is missing, all recent files are kept?

Feature request: parameter --ignore-invalid-items or --time-from-string-regex

When using the --time-from-string parameter and not all input lines match the expected format, timegaps exits with an error:

ERROR: Error while parsing time from item string. Error: time data u'pool/test@2018-03-29__14-11-35' does not match format 'storagepool/test@backup_%Y-%m-%d__%H-%M-%S'

It happens that I have regular ZFS snapshots with names like 2018-03-29__14-30-00, but also differently named snapshots: an offsite backup script transfers snapshots to another system and, before transferring, creates a snapshot with a name like backup_2018-03-29__18-00-00.

For these cases it would probably be useful to have an --ignore-invalid-items parameter, with which the backup_... snapshots would simply be kept/accepted.

My command is as follows:
sudo zfs list -r -H -t snapshot -o name pool/test | timegaps --stdin --time-from-string "pool/test@%Y-%m-%d__%H-%M-%S" recent2,hours10,days30,weeks12,months14,years3

A --time-from-string-regex parameter would probably also be a solution, so the user could combine wildcards and date patterns for item selection.

Interested in what you think about my use case.
Greetings
Wolfgang

Make rules more flexible

I like the idea of a stand-alone tool to decide about file deletion. But the rules could be more flexible.

Currently the rule "days20" would keep one item per day for 20 days. If I want to keep 4 per day, I can't.

Coming from btrfs-sxbackup, I'm used to rules like this:
4d:8/d, 1w:4/d, 2w:daily, 1m:weekly, 3m:2/m, 12m:none
which translates to:
If younger than 4 days, keep everything.
For older than 4 days, keep 8 per day.
For older than 1 week, keep 4 per day.
For older than 2 weeks, keep 1 per day.
For older than 1 month, keep 1 per week.
For older than 3 months, keep 2 per month.
For older than 12 months, keep none.

As btrfs-sxbackup is written in Python too, maybe those rules could easily be ported over?

Problem installing from PyPI

$ pip install timegaps
Downloading/unpacking timegaps
  Downloading timegaps-0.1.0.zip (40kB): 40kB downloaded
  Running setup.py (path:/private/var/folders/b6/7mmmgl3x71bd8d9t_ts2g2ldcrykkj/T/pip_build_murphyke/timegaps/setup.py) egg_info for package timegaps
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/private/var/folders/b6/7mmmgl3x71bd8d9t_ts2g2ldcrykkj/T/pip_build_murphyke/timegaps/setup.py", line 28, in <module>
        long_description=open("README.rst", "rb").read().decode('utf-8'),
    IOError: [Errno 2] No such file or directory: 'README.rst'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 17, in <module>
      File "/private/var/folders/b6/7mmmgl3x71bd8d9t_ts2g2ldcrykkj/T/pip_build_murphyke/timegaps/setup.py", line 28, in <module>
        long_description=open("README.rst", "rb").read().decode('utf-8'),
    IOError: [Errno 2] No such file or directory: 'README.rst'

Unexpected loss of items with "accept oldest item in category" approach

Okay, so a number of years ago we switched from "keep youngest item per category" (first release of timegaps in 2014) to "keep oldest item per category" (in the master branch since 2014, not released), as a result of the discussion in #3.

We were pretty confident that changing the approach made sense, also supported by @lapseofreason who wrote in 2017:

“It might be worthwhile to look at how the retention policy of btrbk works. It implements a similar retention policy for btrfs snapshots. They seem to have come across the same problem and have changed the retention policy (digint/btrbk@bd34d9f) to keeping the first backup in each bucket in version 0.23.”

In 2014 I wrote

“Update: In my thought experiments I came across another constellation that might lead to unexpected results also with the accept-oldest-approach. I need to come up with a more systematic approach.”

Sadly I didn't note down details, but I believed past Jan-Philip and was not confident making another release without much more exhaustive and systematic testing. Around Christmas 2017 I did more systematic testing based on certain invariants, and indeed I found a conceptual problem with the "accept oldest item in category" approach, also leading to data loss. I never took the time to publicly document this because I wanted to further simplify the test that failed. That didn't happen in more than a year, so now I simply note the details of the exact experiment that failed.

With the 'simple' keep-oldest-item approach the following experiment revealed a problem:

Initial condition:

Have 25931 items (offsets 0 through 25930, i.e. a span of 7779000 seconds at 300-second steps) spread across a timeline of about 3 months, with a time delta of precisely 5 minutes between adjacent items.

Set reference time arbitrarily, generate items:

        # Reference time (Unix timestamp), chosen arbitrarily.
        reftime = 1514000000.0
        # One modification time every 300 seconds (5 minutes), going backwards in time.
        modtimes = (reftime - n * 300 for n in range(0, 25930+1))
        items = [FilterItem(modtime=t) for t in modtimes]

First timegaps run

With these rules (arbitrarily chosen):

    recent12,hours24,days7,weeks5,months2

Expected item count: 50
After the first run exactly 50 items remained.

Simulate constant influx of items and periodic invocation of timegaps

Simulate the following:

  • Every 5 minutes from reftime into the future add a new item.
  • At the same time, after adding the item, run timegaps.

Check for invariant: after every timegaps run see if the number of accepted items is still 50.

Unexpected failure:

The check for the invariant failed for run 288, where only 49 items remained.

Confusing error message about codec

I ran timegaps in a crontab, and the stripped-down environment caused timegaps to emit this error message:

ERROR: Please explicitly specify the codec that should be used for decoding data read from stdin, and for encoding data that is to be written to stdout: set environment variable PYTHONIOENCODING. Example: export PYTHONIOENCODING=UTF-8.

I am up and running by defining PYTHONIOENCODING for the crontab timegaps command.
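
For reference, a line like the following near the top of the crontab does the trick (the timegaps invocation shown here is shortened and purely illustrative):

PYTHONIOENCODING=UTF-8
@hourly timegaps --delete recent1,hours24,days7 backup/*.dump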

However, I found the error confusing because it referred to stdout and stdin and made me wonder if timegaps was not seeing the file arguments and silently attempting to default to stdin mode. Apologies if you find such a terrible speculation insulting ;-)

Maybe the message could be modified to indicate that the codec is also used for command line processing, if that is what is happening. Unless the user has specified --stdin, there's no need to mention it in the error message, but I would understand if you wanted to have a one-size-fits-all error message.

Keep last n items

Use case: a production ZFS server makes snapshots via cron job at 0 and 30 minutes after every hour, and only once a day these snapshots are transferred offsite to another server with the zfs send command.

After transferring snapshots to the offsite system I would like to clear some snapshots on the production system. But I must make sure to keep (in any case) the last snapshot for the next incremental zfs send, so a keeplast1 would be needed.

Another use case would be that on the production server I would like to make sure to keep the latest 10 snapshots in any case (independent of creation time).
Everything beyond the 10 most recent snapshots can be thinned out with hours, days, weeks etc.

So a category keeplast would be helpful.

Probably a helpful source would be the program Restic Backup.
I use Restic for some other backups, and that's where I'm used to having a keep-last parameter.

Again ;-) interested in your thoughts.
Greetings Wolfgang
