
automated-ebs-snapshots's People

Contributors

brandongalbraith, danieljensen, daviey, flippingtables, hotsun, mans0954, robaman, sebdah, yumminhuang


automated-ebs-snapshots's Issues

Major issue when restoring from an automated snapshot

When restoring from an automated snapshot there is an issue, a big one.

This is what I did: I created a volume snapshot with your script, created an AMI from the snapshot, and then launched an instance from it. When it came up, I was surprised by the result: the web page of my instance (it was a SugarCRM instance) wasn't displaying correctly and nothing was working.

Just for the sake of testing, I then created a manual snapshot of the volume from the AWS console, launched an AMI from it, and got a SugarCRM instance exactly like the original.

Then, to be sure, I created a new snapshot with your script, created an AMI, and launched it, but it still failed to show the intended results; nothing was working on my "restored" instance.

Please let me know if you need any logs. I have attached comparative screenshots from the instance launched from the AMI created from the manual snapshot as well as the one from your script.
Attached screenshots: ami_from_autosnapshot__script_1, ami_from_autosnapshot__script_2, ami_from_manual_snap, ami_from_manual_snap_2
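
For reference, a minimal boto sketch of how one of these snapshots can be registered as an AMI by hand, which may help narrow down whether the problem lies in the snapshot itself or in the AMI creation step. The snapshot ID, region, device name and architecture below are placeholders, not values from this report.

    # Hypothetical values: replace the snapshot ID, region, device name and
    # architecture with the ones from your environment.
    from boto.ec2 import connect_to_region
    from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType

    conn = connect_to_region('eu-west-1')

    # Boot device backed by the snapshot that the script created
    mapping = BlockDeviceMapping()
    mapping['/dev/sda1'] = BlockDeviceType(snapshot_id='snap-xxxxxxxx',
                                           delete_on_termination=True)

    ami_id = conn.register_image(name='restored-from-automated-snapshot',
                                 architecture='x86_64',
                                 root_device_name='/dev/sda1',
                                 block_device_map=mapping)
    print(ami_id)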

Getting "socket.error: [Errno 111] Connection refused", what am I doing wrong?

Traceback (most recent call last):
  File "./automated-ebs-snapshots", line 29, in <module>
    automated_ebs_snapshots.main()
  File "/data/automated-ebs-snapshots/automated_ebs_snapshots/__init__.py", line 159, in main
    volume_manager.watch_from_file(connection, args.watch_file)
  File "/data/automated-ebs-snapshots/automated_ebs_snapshots/volume_manager.py", line 206, in watch_from_file
    get_volume_id(connection, volume),
  File "/data/automated-ebs-snapshots/automated_ebs_snapshots/volume_manager.py", line 173, in get_volume_id
    connection.get_all_volumes(volume_ids=[volume])
  File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 2157, in get_all_volumes
    [('item', Volume)], verb='POST')
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1170, in get_list
    response = self.make_request(action, params, path, verb)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1116, in make_request
    return self._mexe(http_request)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1030, in _mexe
    raise ex
socket.error: [Errno 111] Connection refused

[general]
access-key-id: xxx
secret-access-key: xxx
region: eu-central-1
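
As a quick way to separate the tool from the environment, the same credentials and region can be tried with plain boto; a connection refused at this level usually points at local network or proxy settings rather than at automated-ebs-snapshots. The key values below are placeholders.

    # Minimal connectivity check using boto alone (placeholder credentials).
    from boto.ec2 import connect_to_region

    conn = connect_to_region('eu-central-1',
                             aws_access_key_id='xxx',
                             aws_secret_access_key='xxx')
    print(conn.get_all_volumes())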

Errors running on CentOS 6.5 32-bit

[root@host:]# automated-ebs-snapshots --config ~/automated-ebs-snapshots.conf --run

Traceback (most recent call last):
  File "/usr/bin/automated-ebs-snapshots", line 24, in <module>
    import automated_ebs_snapshots
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots/__init__.py", line 7, in <module>
    from automated_ebs_snapshots.command_line_options import args
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots/command_line_options.py", line 11, in <module>
    os.path.dirname(os.path.realpath(__file__))))
ValueError: zero length field name in format

[ec2-user@ip automated-ebs-snapshots]$ /usr/bin/automated-ebs-snapshots --config /home/ec2-user/automated-ebs-snapshots/config --list
Traceback (most recent call last):
  File "/usr/bin/automated-ebs-snapshots", line 4, in <module>
    __import__('pkg_resources').run_script('automated-ebs-snapshots==0.3.2', 'automated-ebs-snapshots')
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 534, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 1438, in run_script
    execfile(script_filename, namespace, namespace)
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/EGG-INFO/scripts/automated-ebs-snapshots", line 24, in <module>
    import automated_ebs_snapshots
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/automated_ebs_snapshots/__init__.py", line 7, in <module>
    from automated_ebs_snapshots.command_line_options import args
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/automated_ebs_snapshots/command_line_options.py", line 11, in <module>
    os.path.dirname(os.path.realpath(__file__))))
ValueError: zero length field name in format


$ /usr/bin/automated-ebs-snapshots --config /home/ec2-user/automated-ebs-snapshots/config --watch vol-XXXXX --interval weekly
Traceback (most recent call last):
  File "/usr/bin/automated-ebs-snapshots", line 4, in <module>
    __import__('pkg_resources').run_script('automated-ebs-snapshots==0.3.2', 'automated-ebs-snapshots')
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 534, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 1438, in run_script
    execfile(script_filename, namespace, namespace)
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/EGG-INFO/scripts/automated-ebs-snapshots", line 24, in <module>
    import automated_ebs_snapshots
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/automated_ebs_snapshots/__init__.py", line 7, in <module>
    from automated_ebs_snapshots.command_line_options import args
  File "/usr/lib/python2.6/site-packages/automated_ebs_snapshots-0.3.2-py2.6.egg/automated_ebs_snapshots/command_line_options.py", line 11, in <module>
    os.path.dirname(os.path.realpath(__file__))))
ValueError: zero length field name in format

Running from the master repo.
The instance is an AWS AMI.

$ yum info aws-cli
Loaded plugins: priorities, update-motd, upgrade-helper
Installed Packages
Name        : aws-cli
Arch        : noarch
Version     : 1.5.5
Release     : 1.0.amzn1
Size        : 1.7 M
Repo        : installed

$ yum info python-boto
Loaded plugins: priorities, update-motd, upgrade-helper
Installed Packages
Name        : python-boto
Arch        : noarch
Version     : 2.34.0
Release     : 1.0.amzn1
Size        : 8.7 M
Repo        : installed
From repo   : amzn-updates
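
For reference, "zero length field name in format" is a Python 2.6 limitation: str.format() on 2.6 requires explicit positional indices, and the bare {} placeholder presumably used on that line of command_line_options.py only works on Python 2.7 and later. A minimal illustration (the path is just an example value):

    path = '/usr/lib/python2.6/site-packages/automated_ebs_snapshots'

    print('{0}'.format(path))  # works on Python 2.6 and later
    print('{}'.format(path))   # ValueError on Python 2.6, fine on 2.7+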

High resolution for recent backups, lower for distant ones

Hi,

It would be nice to have a way of setting up a backup scheme with high resolution for recent backups and a lower resolution over a longer period of time. What I mean is this:

volume 1: daily backups with a retention of 14 days, plus keeping every Sunday's backup with a retention of 6 months.

This way we can quickly revert after a recent failure, and we also have the option of looking back over a longer period to see what may have changed 6 months ago.
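
A sketch (not a feature of automated-ebs-snapshots today) of the tiered retention decision this request describes, with the 14-day and 6-month windows hard-coded for illustration:

    from datetime import datetime, timedelta

    def should_keep(snapshot_time, now=None):
        """Keep everything for 14 days, plus Sunday snapshots for ~6 months."""
        now = now or datetime.utcnow()
        age = now - snapshot_time
        if age <= timedelta(days=14):
            return True
        if snapshot_time.weekday() == 6 and age <= timedelta(days=183):
            return True
        return False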

us-east-2

Are there any known issues connecting to the new us-east-2 (Ohio) region? I'm using the same IAM credentials previously used successfully with us-east-1.

2016-10-22 16:49:07,905 - auto-ebs - INFO - Connecting to AWS EC2 in us-east-2
2016-10-22 16:49:07,906 - auto-ebs - ERROR - An error occurred when connecting to EC2
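
One possible cause worth ruling out: older boto releases do not know the us-east-2 endpoint, in which case connect_to_region() returns None and the connection step fails. A quick check against the installed boto:

    import boto.ec2

    # Lists the EC2 regions the installed boto release knows about.
    known = [region.name for region in boto.ec2.regions()]
    print('us-east-2 known to boto: %s' % ('us-east-2' in known))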

Missing [general] section in config

2016-08-08 16:11:20,709 - auto-ebs - ERROR - Missing [general] section in the configuration file

I get this error message when running just about any command (listing volumes, listing snapshots, restarting or starting the daemon). I've verified that the [general] section exists in our config file and that the commands point at the correct location for it. Any thoughts? Thanks!
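
If the file really does contain [general], one way to rule out a path or parsing problem is to read the exact same file with Python's ConfigParser, which is how an INI-style config like this is normally parsed (the path below is a placeholder):

    import ConfigParser  # Python 2

    config = ConfigParser.SafeConfigParser()
    files_read = config.read('/path/to/automated-ebs-snapshots.conf')
    print('files read: %s' % files_read)  # an empty list means the path is wrong
    print('has [general]: %s' % config.has_section('general'))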

Not able to --list or anything after upgrade; this used to work before

I think this stopped working since I upgraded; I am getting the error below. The credentials are correct; I even created new ones and saved them in the file, and I am supplying the correct file on the command line.

root@ip-10-10-11-122:~# automated-ebs-snapshots --config ~/auto-ebs-snap.conf --list
2015-04-16 16:11:27,154 - auto-ebs - INFO - Connecting to AWS EC2 in ap-southeast-1
2015-04-16 16:11:27,367 - auto-ebs - ERROR - 401 Unauthorized
2015-04-16 16:11:27,367 - auto-ebs - ERROR - <?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>ea47fc83-c2a0-44fa-a88d-5ce1ea35d81e</RequestID></Response>
Traceback (most recent call last):
  File "/usr/local/bin/automated-ebs-snapshots", line 29, in <module>
    automated_ebs_snapshots.main()
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/__init__.py", line 168, in main
    volume_manager.list(connection)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/volume_manager.py", line 31, in list
    volumes = get_watched_volumes(connection)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/volume_manager.py", line 21, in get_watched_volumes
    filters={'tag-key': 'AutomatedEBSSnapshots'})
  File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 2148, in get_all_volumes
    [('item', Volume)], verb='POST')
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1186, in get_list
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>AuthFailure</Code><Message>AWS was not able to validate the provided access credentials</Message></Error></Errors><RequestID>ea47fc83-c2a0-44fa-a88d-5ce1ea35d81e</RequestID></Response>

Rechecking for Correct Volume ID

I know you can use volume names instead of volume IDs in the watch file, but is there a way to have it always look up the volume by name? It seems to just map the name to the ID once. The problem I'm having is that when new EC2 instances are spun up periodically (i.e., with spot instances), the volume name stays the same but the volume ID changes, and then I have to unwatch/watch again.

Thanks,
Mike
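
For reference, a sketch of what re-resolving a volume from its Name tag on every run could look like with boto; the function name and tag value are illustrative, not part of the tool.

    from boto.ec2 import connect_to_region

    def volume_id_for_name(connection, name):
        """Return the current ID of the volume tagged Name=<name>."""
        volumes = connection.get_all_volumes(filters={'tag:Name': name})
        if not volumes:
            raise LookupError('no volume tagged Name=%s' % name)
        return volumes[0].id

    connection = connect_to_region('us-east-1')
    print(volume_id_for_name(connection, 'my-data-volume'))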

Passing list of attributes to the created snapshot

Is this doable? Right now I want to be able to share the created snapshot with another account. It would make sense to add this to the script during or after the creation of the snapshot.

The other alternative is to run a command in the terminal to do it for the whole watched list.
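
Until something like this is built into the script, sharing an already created snapshot with another account can be done directly with boto; the snapshot ID and account ID below are placeholders.

    from boto.ec2 import connect_to_region

    conn = connect_to_region('us-east-1')

    # Grant the other account permission to create volumes from the snapshot.
    conn.modify_snapshot_attribute('snap-xxxxxxxx',
                                   attribute='createVolumePermission',
                                   operation='add',
                                   user_ids=['123456789012'])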

Script breaks if snapshot is attached to an AMI

I get the following error. The reason is that the snapshot is the base of an AMI.

root@ip-10-120-128-46:/etc# automated-ebs-snapshots --config /etc/automated-ebs-snapshots.conf --run
2014-04-28 06:35:53,095 - auto-ebs - INFO - Connecting to AWS EC2 in eu-west-1
2014-04-28 06:35:53,966 - auto-ebs - INFO - The newest snapshot for vol-461ee444 is 88273 seconds old
2014-04-28 06:35:53,966 - auto-ebs - INFO - Creating new snapshot for vol-461ee444
2014-04-28 06:35:54,471 - auto-ebs - INFO - Created snapshot snap-a83e8d4f for volume vol-461ee444
2014-04-28 06:35:54,956 - auto-ebs - INFO - Deleting snapshot snap-913bd876
2014-04-28 06:35:55,030 - auto-ebs - ERROR - 400 Bad Request
2014-04-28 06:35:55,030 - auto-ebs - ERROR - <?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidSnapshot.InUse</Code><Message>The snapshot snap-913bd876 is currently in use by ami-836f95f4</Message></Error></Errors><RequestID>6192d985-fecc-456e-b2fb-cd72fc19be25</RequestID></Response>
Traceback (most recent call last):
  File "/usr/local/bin/automated-ebs-snapshots", line 29, in <module>
    automated_ebs_snapshots.main()
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/__init__.py", line 155, in main
    snapshot_manager.run(connection)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/snapshot_manager.py", line 22, in run
    _remove_old_snapshots(connection, volume)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/snapshot_manager.py", line 127, in _remove_old_snapshots
    snapshot.delete()
  File "/usr/local/lib/python2.7/dist-packages/boto/ec2/snapshot.py", line 94, in delete
    return self.connection.delete_snapshot(self.id, dry_run=dry_run)
  File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 2454, in delete_snapshot
    return self.get_status('DeleteSnapshot', params, verb='POST')
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1196, in get_status
    raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidSnapshot.InUse</Code><Message>The snapshot snap-913bd876 is currently in use by ami-836f95f4</Message></Error></Errors><RequestID>6192d985-fecc-456e-b2fb-cd72fc19be25</RequestID></Response>
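
One possible way to make the rotation tolerate AMI-backed snapshots is to skip the InvalidSnapshot.InUse error instead of letting it propagate; a sketch of that idea, not the project's actual fix:

    from boto.exception import EC2ResponseError

    def delete_snapshot_safely(snapshot):
        """Delete a snapshot, but skip it if an AMI still references it."""
        try:
            snapshot.delete()
        except EC2ResponseError as error:
            if error.error_code == 'InvalidSnapshot.InUse':
                print('Skipping %s: still in use by an AMI' % snapshot.id)
            else:
                raise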

Daily value changes when using Name=Tag and "daily,7"

First, this is an awesome Python project; I have used it many times with great success.
Second, I am having an issue :(
I have a watch file where the volumes are defined via their Name tag, and it loads with no problem.
Then a few hours later it changes to different settings.
The volume.conf has this:
[Tagname_Volume1],daily,7
[Tagname_Volume2],daily,7

When I log in and check it with
automated-ebs-snapshots --list
I see:

Volume ID           Volume Name         Interval   Retention
[Volume1_Vol_ID]    [Tagname_Volume1]   daily      1
[Volume1_Vol_ID]    [Tagname_Volume2]   daily      1

I have several other volumes managed similarly in this account, but on this one "daily,7" resets to "daily,1"?

I have a CloudWatch alert that notices when it happens.

time data '2019-11-14T10:46:47.508Z' does not match format '%Y-%m-%dT%H:%M:%S.000Z'

Seems like AWS changed the time format for volumes, and automated-ebs-snapshots now fails with the following error:

root@8f5c692e99fe:/monitoring-sync# /usr/local/bin/automated-ebs-snapshots --access-key-id $AWS_ACCESS_KEY_ID --secret-access-key $AWS_SECRET_ACCESS_KEY --region $AWS_DEFAULT_REGION --watch-file /monitoring-sync/configs/ebs-volumes.conf --run
2019-11-25 11:57:12,738 - auto-ebs - INFO - Connecting to AWS EC2 in us-east-1

2019-11-25 11:57:13,302 - auto-ebs - INFO - Updated the rotation interval to daily for vol-0ffce4cbd56715dfe
2019-11-25 11:57:13,864 - auto-ebs - INFO - Updated the rotation interval to daily for vol-05af287a940bd96b1
Traceback (most recent call last):
  File "/usr/local/bin/automated-ebs-snapshots", line 29, in <module>
    automated_ebs_snapshots.main()
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/__init__.py", line 171, in main
    snapshot_manager.run(connection)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/snapshot_manager.py", line 23, in run
    _ensure_snapshot(connection, volume)
  File "/usr/local/lib/python2.7/dist-packages/automated_ebs_snapshots/snapshot_manager.py", line 76, in _ensure_snapshot
    '%Y-%m-%dT%H:%M:%S.000Z')
  File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data '2019-11-14T10:46:47.508Z' does not match format '%Y-%m-%dT%H:%M:%S.000Z'

I temporarily fixed it by replacing %Y-%m-%dT%H:%M:%S.000Z with %Y-%m-%dT%H:%M:%S.%fZ in snapshot_manager.py.
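
For reference, the difference between the two patterns on the timestamp from the error above:

    from datetime import datetime

    stamp = '2019-11-14T10:46:47.508Z'

    print(datetime.strptime(stamp, '%Y-%m-%dT%H:%M:%S.%fZ'))    # parses
    # datetime.strptime(stamp, '%Y-%m-%dT%H:%M:%S.000Z')        # raises ValueError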

--run --force

On a daily rotation driven by cron, scheduled at the same time every day, automated-ebs-snapshots uses the current date and time to decide whether a snapshot is needed. On a daily rotation, if the current time is less than 24 hours since the last snapshot was made, no snapshot is taken. In my experience the daily timing comparison can be very close, down to the second, and when it comes out just under 24 hours the result is a snapshot that appears to happen only every other day.

It would be nice to be able to --force a snapshot, overriding the time comparison, to ensure a snapshot happens.

You can currently route around the issue by setting up seven crons, one for each day of the week, incrementing the scheduled time each day. On the seventh day, however, when cron comes back around to an earlier time, a snapshot does not occur.
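
A sketch of the kind of comparison this request is about: treating "daily" as "at least 23 hours old" instead of a strict 24 hours would avoid the every-other-day effect when cron fires at the same minute each day. The one-hour margin is illustrative, and this is not how the tool currently behaves.

    from datetime import datetime, timedelta

    def snapshot_needed(newest_snapshot_time,
                        interval=timedelta(hours=24),
                        margin=timedelta(hours=1)):
        """True if the newest snapshot is older than the interval minus a margin."""
        age = datetime.utcnow() - newest_snapshot_time
        return age >= interval - margin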

Bug in this file snapshot_manager.py

Found a minor bug in snapshot_manager.py: it deletes manually created snapshots if their description contains a volume ID.
File: snapshot_manager.py
I updated this line to

snapshots = connection.get_all_snapshots(filters={'volume-id': volume.id, 'description': 'Automatic snapshot by Automated EBS Snapshots'})

so that it stops deleting my manually created snapshots.

Snapshot Name and Logging parameters

Is it possible to add an option to provide a name for the snapshot? Currently it's blank. Is there also a way to get an incremental number when snapshots are taken in daemon mode?

Example: Test_20151014_01, Test_20151014_02, Test_20151014_03, Test_20151015_01, Test_20151015_02 (Something like --snapshot-name Test_$date_$increment_value)

Also: is there a way to specify the log file inside a .conf file as opposed to a command line parameter? I want to log what happens automatically through the daemon and have a fixed location where the logs are saved.

AWESOME TOOL! Love it! and THANK YOU!!!!!
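
Until there is something like a --snapshot-name option, a name can be attached to a snapshot by tagging it after creation; a sketch of the naming pattern from the example above, with a placeholder volume ID and a hard-coded increment.

    from datetime import datetime
    from boto.ec2 import connect_to_region

    conn = connect_to_region('us-east-1')

    snapshot = conn.create_snapshot('vol-xxxxxxxx',
                                    description='Automatic snapshot by Automated EBS Snapshots')
    # EC2 shows the Name tag as the snapshot's name in the console.
    snapshot.add_tag('Name', 'Test_%s_01' % datetime.utcnow().strftime('%Y%m%d'))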

Error with forcerun on 0.6.0

Hello, we've been using automated-ebs-snapshots successfully for some time. We tried the new 0.6.0 version this morning with the usual arguments and got the following error:

automated-ebs-snapshots --access-key-id=**** --secret-access-key=**** --region=us-east-1 --run
2020-03-30 13:08:52,608 - auto-ebs - INFO - Connecting to AWS EC2 in us-east-1
2020-03-30 13:08:53,005 - auto-ebs - INFO - The newest snapshot for vol-**** is 4330 seconds old
2020-03-30 13:08:53,005 - auto-ebs - INFO - No need for a new snapshot of vol-****
2020-03-30 13:08:53,052 - auto-ebs - INFO - No old snapshots to remove
Traceback (most recent call last):
  File "/usr/local/bin/automated-ebs-snapshots", line 29, in <module>
    automated_ebs_snapshots.main()
  File "/usr/local/lib/python2.7/site-packages/automated_ebs_snapshots/__init__.py", line 173, in main
    if args.forcerun:
AttributeError: 'Namespace' object has no attribute 'forcerun'

This seems to be an issue with argparse, but I don't know how to work around it?

Thanks,

Christopher
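
For reference, that AttributeError is what argparse raises when code reads an option the parser never defined, so it looks like 0.6.0 checks args.forcerun without registering a corresponding argument. A minimal reproduction (not the tool's actual parser):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--run', action='store_true')
    args = parser.parse_args(['--run'])

    print(args.run)       # fine
    print(args.forcerun)  # AttributeError: 'Namespace' object has no attribute 'forcerun'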

List of needed AWS permissions

Is there a list of the AWS permissions needed by the tool?

CreateSnapshot
CopySnapshot
DeleteSnapshot
DescribeSnapshots
DescribeSnapshotAttribute
ModifySnapshotAttribute
ResetSnapshotAttribute
