aws's Introduction

aws: AWS command line

When AWS was new (mid 2000s), Amazon didn't have a decent command line tool, so I wrote aws. It was exceedingly popular and was ranked by Amazon as the #1 community-contributed software for all of AWS.

(For a long time, aws had 14 5-star reviews. Eventually, somebody gave aws a 4-star review, saying, "I save 5 stars for software that is so good, it doesn't exist yet." At that point, Amazon's ranking system dropped the rating below another project that had a total of one 5-star review. I suggested to Amazon that this didn't seem fair, and they reworked their ranking system so that aws was again #1. It stayed there for the first few years of AWS.)

Amazon has since added lots of its own tools, while aws has been maintained by me alone. It still works, and I use it, but I no longer spend much time supporting it.

"aws" is a command line program that accesses Amazon Web Services: EC2, S3, SQS, SDB, ELB, IAM, EBN, RDS

To run "aws", all you need is a single file!

  1. download https://raw.github.com/timkay/aws/master/aws
  2. make it executable
  3. create ~/.awssecret or set EC2_ACCESS_KEY and EC2_SECRET_KEY

and you are done.
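A minimal sketch of those three steps, assuming the two-line ~/.awssecret format (Access Key ID on the first line, Secret Access Key on the second):

curl -sO https://raw.github.com/timkay/aws/master/aws
chmod +x aws
# either store the credentials in ~/.awssecret ...
printf '%s\n%s\n' AKIAEXAMPLEKEYID EXAMPLESECRETKEY > ~/.awssecret
chmod 600 ~/.awssecret    # keep the credentials private
# ... or export them as environment variables instead
export EC2_ACCESS_KEY=AKIAEXAMPLEKEYID
export EC2_SECRET_KEY=EXAMPLESECRETKEY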

For more documentation, see http://timkay.com/aws/ and also the GitHub Wiki.

aws's People

Contributors

apankrat, arrestee, asergeyev, attili, ecweaver, gchinis, hmlnarik, icy, kjoconnor, lachie, lansalpc, laxdog, lucayepa, mv, mwild1, nneul, readparse, timkay, tzz

aws's Issues

encrypt ~/.awssecrets and better security for credentials

Currently ~/.awssecrets is unencrypted, which is a pain. I propose:

  • support ~/.awssecrets.gpg, decrypted through GPG (a sketch follows below)
  • support ~/.authinfo and ~/.netrc (format TBD; these should also support a .gpg extension)
  • the Git credential system could be helpful here if aws could speak that protocol; see http://www.kernel.org/pub/software/scm/git/docs/technical/api-credentials.html (the idea being that it provides secure credential storage outside the aws codebase, simplifying things)
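A minimal sketch of the GPG idea, reusing the existing --secrets-file= meta-parameter (seen in the reports below) and bash process substitution so the decrypted credentials never touch disk:

# one-time: encrypt the existing secrets file and remove the plaintext
gpg --encrypt --recipient you@example.com ~/.awssecret
shred -u ~/.awssecret

# at invocation time: decrypt on the fly and hand aws a file descriptor
aws --secrets-file=<(gpg --quiet --decrypt ~/.awssecret.gpg) ls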

505 version error?

We've been using aws to back up our files for quite a while now. Suddenly today, it started returning the error:
505 HTTP Version Not Supported
I tried checking out the newest version of the code (we were on a REALLY old revision), but got the same error.

From a bit of googling around, it seems this error code can mean a bunch of different things. Not sure if there's anything I can do to fix the issue or even to provide more debug output. Is anybody else suddenly getting this error as well?

For what it's worth, it seems to only be broken on ls, not on put.

s3get return code

When the requested file is not found, no error code is returned, which is bad for scripting.

$ ./aws --secrets-file=/path/.awssecret --progress get bucket/path/file.zip /mnt/workspace/project/file.zip
404 Not Found
$ echo $?
0
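An interim workaround for scripts, assuming nothing better than scraping the "404 Not Found" line that aws prints:

out=$(./aws --secrets-file=/path/.awssecret get bucket/path/file.zip /tmp/file.zip 2>&1)
echo "$out"
case $out in
    *"404 Not Found"*) echo "download failed" >&2; exit 1 ;;
esac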

sqs send/recv operations do nothing and report success

$ aws receive-message 55555555555/test
$ echo $?
0

With the above, there is a message already in the queue, but it doesn't get read. Same behavior on send, even when I use IAM credentials that don't have permission to put.

aws tells me it is at version v1.77

EDIT: mea culpa. As you can see above, I missed a leading / before the queue name, which led to a mangled URL, causing curl to exit with status 6; however, that exit code is unhandled, so we never see it. Not a good situation, but not the bug I thought.

Feel free to close.

SQS Issues (long polling & delete message on successful exec)

Hi Tim,

Thanks for the great bit of code. It's making my life significantly easier for a project I am working on.

Despite saying that, I am having a couple of issues with SQS. The first is with long polling (which I notice is a recent commit to master). Maybe I'm missing something, but after adding --wait=20 to the command line, long polling doesn't seem to be working. Have I got the right parameter and format, or am I doing something wrong?

My second issue is a bit more significant. I'm currently using --exec='system("processMessage", "$body"); $?;' and despite the system call returning 0 (there is no message from your script saying otherwise), the message does not get deleted from SQS. Am I misreading the docs, are the docs wrong, or is there something else going on?

Thanks in advance for your help.

s3 copy breaks key names

the "copy" function is lossy - the key (everything following bucket) is truncated.

Example:

aws copy telemetry-easodeasoddldefault /telemetry-easod-easod_dldefault/cd/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz

results in this PUT:

--request PUT --dump-header - --location 'https://telemetry-easodeasoddldefault.s3.amazonaws.com/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz? (etc)'

The /cd/ part of the key is gone.

So we moved from:

easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easod-easod_dldefault/cd/msocial13-2012083016-13003.telem-2012083016170
cd/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$

to

easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easodeasoddldefault/msocial13-2012083016-13003.telem-2012083016170
msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$

instead of

easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easodeasoddldefault/cd/msocial13-2012083016-13003.telem-2012083016170
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$

aws get hangs sometimes

I have encountered two situations where aws get hangs when downloading a 13 MB file. My script looks like this:

HCDIR="desktop/hc"
BUCKET="homersoft-dist"
AWS_GET="aws get --progress"
AWS_LS="aws ls -1"
TEMPDIR="/tmp/"

OBJECT="$HCDIR/$VERSION.zip"
if [ -n "$AWS_LS $BUCKET/$OBJECT" ]; then
echo "# $OBJECT found"
$AWS_GET $BUCKET/$OBJECT $TEMPDIR
else
echo "# $OBJECT not found"
fi

produce a friendly message when parameters are missing

If we invoke the aws put operation with only a single argument, it fails with the message:

root:~# aws put some-long-filename-with-version.1.0.2-and-releases-like.this.jar
sanity-check: Your system clock is 366 seconds ahead.
/usr/local/bin/aws: will not to read from terminal (use "-" for filename to force)

The message is a bit confusing in this case; it would be better if it reported the missing argument.

Probably some other commands behave in the same way.

extremely long multipart S3 operations expire

I can't quite figure out what's going on (the code looks to do the right thing), but I'm leaving a public note because the error is non-obvious: for S3 results that come in many parts and take a long time (several minutes), aws will eventually die with exit code 22 and print "403 Forbidden".

Adding a large --expire-time value fixes it.
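For example (the flag name is as given above; treating the value as seconds is an assumption):

aws --expire-time=86400 ls mybucket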

Generalized "give me only X keys" option

It would be nice for some S3 batch operations to have a --max-keys option that returns only the number of keys asked for in, e.g., 'ls'. If I have a bucket with 100,000 keys, I don't want to fetch them all at once and then iterate; I'd rather fetch 1,000 at a time, then iterate, so it doesn't spend forever just fetching keys. Obviously this only helps where the file list is actually changing, or else you just get the same 1,000 the second time.

I added a rough version to my fork (midendian@73826c2), but it is very lame.

It looks like aws can take --marker from the command line, which could do what I want somehow; do you know off-hand how?
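A hypothetical pagination loop under those assumptions (--marker passed straight through as reported, and the proposed --max-keys honored per batch):

marker=""
while :; do
    batch=$(aws ls -1 --max-keys=1000 --marker="$marker" mybucket) || break
    [ -z "$batch" ] && break
    printf '%s\n' "$batch"                           # process this batch of keys
    marker=$(printf '%s\n' "$batch" | tail -n 1)     # resume after the last key seen
done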

JSON support

I wrote a 10-line patch to provide JSON support in the aws tool. Basically, when --json is passed, it opportunistically "requires" XML::Simple and JSON, and transforms $xml into one line of JSON output. The modules are not loaded otherwise. Are you interested in this patch? Do you want to stick with just XML and YAML, or do you like the idea but without the module dependency?
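A rough sketch of the transformation itself, runnable standalone against a saved XML response (the file name is a placeholder):

perl -MXML::Simple -MJSON -0777 -ne 'print encode_json(XMLin($_))' response.xml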

Thanks!

SignatureDoesNotMatch error caused by tag "Name" with value of length 29

ec2-create-tags $INSTANCE --tag Name=12345678901234567890123456789
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Code | Message |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| SignatureDoesNotMatch | The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details. |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Lengths other than 29 work, and lengths of 29 work with the official Java CLI tools.

--wait or --simple break output for aws run

I've been trying to use the --wait option to poll for when my EC2 instance comes up. However, it seems that when I pass either the --wait or the --simple option, all the output breaks, and it does not wait/poll for my instance to come out of pending status. The AWS API call DOES work correctly, and my box still comes up, but I have to poll for status manually.

I'm using aws script version 1.75 on Ubuntu 11.04. Shell output below.

Great script, thanks for making it available!

Not passing --wait or --simple works:

john@jump0$ aws run -n 1 -v --group cluster --key control --type m1.small --availability-zone us-east-1d ami-XXXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 9 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXXXXX&Action=RunInstances&Expires=2011-10-03T22%3A16%3A58Z&ImageId=ami-XXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+
| instanceId | imageId | instanceState | keyName | instanceType | launchTime | placement | kernelId | monitoring | stateReason | rootDeviceType | hypervisor |
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+
| i-XXXXX | ami-XXXX | code=0 name=pending | control | m1.small | 2011-10-03T22:16:28.000Z | availabilityZone=us-east-1d | aki-XXXXXX | state=disabled | code=pending message=pending | instance-store | xen |
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+

Passing --simple:

john@jump0$ aws run -n 1 -v --simple --group cluster --key control --type m1.small --availability-zone us-east-1d --simple ami-XXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 9 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXXXX&Action=RunInstances&Expires=2011-10-03T22%3A18%3A05Z&ImageId=ami-XXXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15

Passing --wait:

john@jump0$ aws run -n 1 -v --wait=10 --group cluster --key control --type m1.small --availability-zone us-east-1d --simple ami-XXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 10 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXX&Action=RunInstances&Expires=2011-10-03T22%3A26%3A57Z&ImageId=ami-XXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15

IAM Role Authentication is broken since commit "Support for V4 signatures. Only S3 supports V4 signatures for now" 002baa1

With every version of aws from commit 002baa1 onwards, commands that rely on IAM role-based authentication no longer work.

Here is the error state with the latest version:

$ aws describe-tags --json --region=eu-west-1 --filter resource-id=i-obfuscated --sha1

{"Errors":{"Error":{"Message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.","Code":"SignatureDoesNotMatch"}},"RequestID":"6ec2d44c-3596-477e-b742-obfuscated"}

And here is the correct response when I revert to commit 7b8e99d:

$ aws describe-tags --json --region=eu-west-1 --filter resource-id=i-obfuscated

{"xmlns":"http://ec2.amazonaws.com/doc/2013-10-15/","tagSet":{"item":{"aws:autoscaling:groupName":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Version":{"resourceType":"instance","value":"obfuscated","resourceId":"obfuscated"},"DataCenterID":{"value":"A","resourceType":"instance","resourceId":"obfuscated"},"aws:cloudformation:stack-id":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"aws:cloudformation:logical-id":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Roles":{"resourceId":"obfuscated","value":"obfuscated","resourceType":"instance"},"aws:cloudformation:stack-name":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"Name":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Environment":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"ConfigDecryptionKey":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Branch":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"}}},"requestId":"5d495b51-3445-49cd-b529-obfuscated"}

Improve visual design of command output

aws currently uses a bordered table format that requires an enormous amount of terminal window space to view, and (for example) returns a separate ASCII table for each EC2 instance in list-instances. The ASCII output could be substantially shorter and easier to read, designed for viewing within a monitor-sized terminal (120 columns? 160?), with a --verbose option to get everything.

How to attach metadata when putting to S3?

According to the S3 put-object documentation, we can attach custom metadata to an object with

--metadata key_name=string,key_name2=string

Is this something aws supports? I've tried --metadata, --meta-data, --meta, and --data; all failed with the "mispelled meta parameter" error.
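One possible workaround, assuming custom metadata can ride along as a raw header the way the ACL and encryption headers do in other reports here (S3 user metadata travels as x-amz-meta-* headers):

aws put "x-amz-meta-key_name: string" mybucket/object.txt object.txt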

s3put incorrect exit code when file is not found

s3put returns exit code 0 when the local file is not found, which is kinda bad for scripting.

For example:

s3put BUCKET_NAME/ foobar.txt ; echo $?

Gives the following output:

curl: Can't open 'foobar.txt'!
curl: try 'curl --help' or 'curl --manual' for more information

0
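An interim guard for scripts: since the failure is curl being unable to open the local file, it can be checked before calling s3put:

[ -f foobar.txt ] || { echo "foobar.txt: no such file" >&2; exit 1; }
s3put BUCKET_NAME/ foobar.txt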

Missing bracket?

Odd error: I pulled down the latest version about an hour ago and have been getting this error:

[sgreen@db02 ~]$ perl aws --install
Missing right curly or square bracket at aws line 2766, at end of line
syntax error at aws line 2766, at EOF
Execution of aws aborted due to compilation errors.

I couldn't find an obvious missing curly brace or square bracket. Thoughts?

Real streaming support for PUTs

My findings show that PUTs from STDIN are not really streamed. It looks like "aws" first reads all data from STDIN, and only after all data is received does it trigger curl to upload.

Is it possible to support real streaming? This would be great for stream modification (for example, doing inline encryption with GnuPG).
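The desired usage pattern would look something like this (hypothetical; "-" as the filename for STDIN follows the tool's own error message quoted above, but today the data appears to be buffered in full first):

tar cz /data | gpg --encrypt --recipient you@example.com | aws put mybucket/backup.tar.gz.gpg -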

aws no longer exits with non-zero code on error

"aws put" routinely fails partway through uploading a file (a separate bug that I'm too lazy to diagnose), so we wrap it in a while loop. Good enough for government work.

That worked great with version 1.49, which would exit non-zero in that case. 1.75 seems to exit with rc=0 regardless of whether it succeeds or fails:

$ for a in *zip; do echo $a ; until aws-1.49 --progress --public put bucket $a ; do echo "Trying again." ; done ; done
foo.zip
###################################                                       49.7%
Trying again.
######################################################################## 100.0%
$

vs.

$ for a in *zip; do echo $a ; until aws-1.75 --progress --public put bucket $a ; do echo "Trying again." ; done ; done
foo.zip
####################################################################      94.6%
$

`aws --fail head` hangs

I'm using aws --fail head to check whether a file exists in a bucket. It hangs with the --fail option if the file already exists. I would expect an exit code of 0 if the file exists, and some other code if it doesn't.

#!/bin/bash

# set some settings for the aws utility
export EC2_ACCESS_KEY=MYKEY
export EC2_SECRET_KEY=MYSECRET
export S3_DIR=MYBUCKET

FILE=a.test

# do not overwrite an existing file on aws of the same name
aws --fail head $FILE # this will hang if the file exists already
#aws head $FILE # this will not hang, but it doesn't give me a proper return code
if [[ $? == 0 ]]; then
  echo "file already exists on aws" >&2
  exit 1
fi

echo "file doesn't exist yet"

-H or --headers option unavailable

When I try to describe an image with "aws dim --headers", I get "-H: mispelled meta parameter?"

I only need to get the "state" of the AMI.

SQS recv and dm exit code is 0 on failures

I always get exit code 0 from these two, even when I try to get a message from a non-existent queue or delete a message with a missing or invalid handle. When using aws from scripts, knowing whether a delete succeeded would be nice, to avoid looping over the same messages repeatedly after the visibility timeout passes.

Is there any good way to detect failures in these commands?

sqssend message body always encoded

I am trying to send a message to an SQS queue where the body looks like "folder/sub-folder/filename.txt".

The resulting message posted to SQS is encoded, looking like "folder%2Fsub-folder%2Ffilename.txt".

I don't want to have to change other downstream processing to handle the encoding and would really like it if the aws tool could send the message unencoded.

I attempted to use --curl-options, but to no avail.

Is it a bug, or am I just being a numpty?

amazon updates

Thanks for your work, this is a great program. I've been using it for years.

Every time I do a yum update and Amazon updates boto, it overwrites the s3 commands (symlinks), and I have to reinstall aws.

Just wondering if there's a way around this?

S3 put can fail, but aws still returns zero

I have an aws put as part of an automated buildbot script, and typically when something goes wrong, aws will return a nonzero exit status so that I can stop the rest of my script from running. In this case, however, curl looks like it failed, but aws still returned zero. The output from curl says:

curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

The command used to invoke aws was:

~/bin/aws put 'x-amz-acl: public-read' julianightlies/bin/osx/x64/0.4/julia-0.4.0-5039cb1011-osx.dmg /tmp/julia_package/Julia-0.4.0-dev-5039cb1011.dmg

put is too clever for some files

Due to this magic:
if (/^(?:x-amz-|Cache-|Content-|Expires:|If-|Range:)/i)

you can't have files named, for example, if-i-only-had-a-brain.txt. (Or, in my case, dealing with an ill-advised base64 alphabet.)

Would adding a special argument '--' to indicate "treat the remaining argv as files" make it less surprising?
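For example (hypothetical syntax; '--' is the proposal, not a current feature):

aws put mybucket -- if-i-only-had-a-brain.txt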

S3COPY, S3CP - copy from one bucket to another bucket fails

Copying object BUCKET2/OBJECT2 to BUCKET1/OBJECT1 fails with the error:

The specified key does not exist.

e.g.

s3copy breinput/sample/tables/table1 breinput/sample/tables/t1.

I used the documentation at timkay.com/aws/ for the syntax.

Any help is appreciated.

new automatic multipart upload feature

As implemented right now, multipart upload starts automatically only if the file is >25 GB (see line 1288). I am guessing the intention was 5 GB, and it is just a typo in the GB5 calculation.

I am a great fan of aws. Keep up the excellent work!

Support AWS Signature Version 4

Over at Transloadit we're happy users of this project. One of our customers tried to upload something to the new Frankfurt datacenters, and got the following error:

+----------------+----------------------------------------------------------------------------------------------+------------------+
| Code | Message | RequestId |
+----------------+----------------------------------------------------------------------------------------------+------------------+
| InvalidRequest | The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. | 8779D556E0813FCE |
+----------------+----------------------------------------------------------------------------------------------+------------------+

As it turns out, this region only supports AWS Signature Version 4, and is not backwards compatible.

Any chance of supporting AWS Signature Version 4?

Docs for NextToken

Hi, I'm trying to use aws to query SimpleDB. When I get my results, they look something like this:

+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|  Name  |                                                                                                                                                                                                                                                         Attribute                                                                                                                                                                                                                                                         |
+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Domain | Name=Count Value=467263rO0ABXNyACdjb20uYW1hem9uLnNkcy5RdWVyeVByb2Nlc3Nvci5Nb3JlVG9rZW7racXLnINNqwMA?C0kAFGluaXRpYWxDb25qdW5jdEluZGV4WgAOaXNQYWdlQm91bmRhcnlKAAxsYXN0RW50aXR5SURa?AApscnFFbmFibGVkSQAPcXVlcnlDb21wbGV4aXR5SgATcXVlcnlTdHJpbmdDaGVja3N1bUkACnVu?aW9uSW5kZXhaAA11c2VRdWVyeUluZGV4TAANY29uc2lzdGVudExTTnQAEkxqYXZhL2xhbmcvU3Ry?aW5nO0wAEmxhc3RBdHRyaWJ1dGVWYWx1ZXEAfgABTAAJc29ydE9yZGVydAAvTGNvbS9hbWF6b24v?c2RzL1F1ZXJ5UHJvY2Vzc29yL1F1ZXJ5JFNvcnRPcmRlcjt4cAAAAAAAAAAAAAAO3s4AAAAAAQAA?AAAAAAAAAAAAAABwcHB4 |
+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

So I think the string starting with rO0 has something to do with the NextToken, but if I copy/paste that entire string, I'm told:

The specified next token is not valid.

So how does one deal with tokens?

Route53 updates broken in commit "Support for V4 signatures. Only S3 supports V4 signatures for now. ..."

It removes this function:

-sub R53_xml_data {
-    return { 'header' => '<?xml version="1.0" encoding="UTF-8"?>',
-        'POST|' => '<CreateHostedZoneRequest xmlns="https://route53.amazonaws.com/doc/2012-02-29/"><Name></Name><CallerReference></CallerReference><HostedZoneConfig><Comment></Comment></HostedZoneConfig></CreateHostedZoneRequest>',
-        'POST|rrset' => '<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2012-02-29/"> <ChangeBatch> <Comment></Comment> <Changes> <!--REPEAT--> <Change> <Action></Action> <ResourceRecordSet> <Name></Name> <Type></Type> <TTL></TTL> <ResourceRecords> <ResourceRecord> <Value></Value> </ResourceRecord> </ResourceRecords> </ResourceRecordSet> </Change> <!--/REPEAT--> </Changes> </ChangeBatch> </ChangeResourceRecordSetsRequest>',
-
-    };
-}

The sanity check fails if curl is compiled with the NSS library (not OpenSSL or GnuTLS).

The latest versions of CentOS, Red Hat Linux, and Fedora have curl compiled with NSS, not OpenSSL. The aws command fails with:

sanity-check:  Your curl doesn't seem to support SSL.  Try using --http

The sanity check is done using:

curl -q --cipher RC4-SHA -s  --include https://connection.s3.amazonaws.com/test

fails (with exit code 59, meaning "Couldn't use specified SSL cipher") because the cipher string for NSS is different from the string for OpenSSL or GnuTLS (which use "RC4-SHA"). For versions compiled with NSS, the cipher string should be "rsa_rc4_128_sha". A working check would be:

curl -q --cipher rsa_rc4_128_sha -s --include https://connection.s3.amazonaws.com/test

I think the solution would be to test with RC4-SHA and retry with rsa_rc4_128_sha if that fails, or to detect whether curl uses NSS (a sketch follows below). The version command shows the library used:

curl --version
curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3

for a version with OpenSSL and:

curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.15.4 zlib/1.2.7 libidn/1.28 libssh2/1.4.3

for a version with NSS.
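A sketch of the retry idea in shell (the URL and messages mirror the existing sanity check; the fallback logic is the proposal):

url=https://connection.s3.amazonaws.com/test
if curl -q --cipher RC4-SHA -s --include "$url" >/dev/null 2>&1; then
    cipher=RC4-SHA                  # OpenSSL/GnuTLS naming
elif curl -q --cipher rsa_rc4_128_sha -s --include "$url" >/dev/null 2>&1; then
    cipher=rsa_rc4_128_sha          # NSS naming
else
    echo "sanity-check: Your curl doesn't seem to support SSL. Try using --http" >&2
fi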

Regards,

MValdez.

Documentation of meta-params?

I hope I'm missing something, but I can't find anything that documents meta-parameters. The only reason I even know about them is the "mispelled meta parameter" error and from looking at the code, which isn't terribly helpful.

SQS send returns error code 0 on error

Hi again Tim,

Thanks for the speedy help last time. I've found another small issue, and once again my lack of Perl skills is letting me down.

I have defined a bash function as follows:

function notifyQueue {
    ./aws --silent --simple --region="${REGION}" send-message "${1}" -message "${2}" || { echo $?; echo "{\"error\":\"Could not notify queue\",\"id\":\"${POST_ID}\"}"; exit 1; }
    echo $?
}

This works fine the majority of the time, but every so often I get the following response from your script:

+--------+-----------------------+----------------------------------------------------------------------------------+
|  Type  |         Code          |                             Message                                              |
+--------+-----------------------+----------------------------------------------------------------------------------+
| Sender | SignatureDoesNotMatch | The request signature we calculated does not match the signature you provided... |
+--------+-----------------------+----------------------------------------------------------------------------------+

When this happens, I still get an exit code of 0.

I've noticed that your documentation mentions that you don't always return a non-0 value on error, but can you give me some pointers as to where I would need to detect this in order to return a non-0 value?

My use case is that after processing a message I need to send a message to a second queue; when that send fails, I don't want to delete the current working message.

Thanks again.

Automatic multi-part uploads for large files with server-side encryption

Using the latest version, with the following command line syntax as an example:

/usr/bin/aws put "x-amz-server-side-encryption: AES256" ${BUCKET_NAME}/${PARTITION_DATE}/${FILE_NAME} ${UPLOAD_FILEPATH}

For files less than 5GB in size, which upload as a single part, the end state of the put to S3 is a file with server-side encryption enabled.

For files greater than 5GB in size, which the client automatically uploads via multi-part, the end state of the put to S3 is a file without server-side encryption enabled even though the "x-amz-server-side-encryption: AES256" header is specified on the put.

I can successfully upload a file greater than 5GB with SSE enabled using the following multi-part logic:

dd if=/dev/zero of=file1.img bs=1 count=0 seek=3G
dd if=/dev/zero of=file2.img bs=1 count=0 seek=3G
dd if=/dev/zero of=file3.img bs=1 count=0 seek=3G
./aws post "x-amz-server-side-encryption: AES256" ${BUCKET_NAME}/MyMpu?uploads
./aws put ${BUCKET_NAME}/MyMpu?part file1.img
./aws put ${BUCKET_NAME}/MyMpu?part file2.img
./aws put ${BUCKET_NAME}/MyMpu?part file3.img
./aws post ${BUCKET_NAME}/MyMpu?upload

In my specific use case, some of the files being uploaded are small and some are very large, so I cannot easily divide all files up into multi-part chunks using this logic. Should the client support this automatically when it follows the multi-part code path for any individual file >= 5 GB?

"405 Method Not Allowed" with Multipart upload

So I was uploading a 15 GB file (and alternatively, a 6.5 GB file) with the following sample command:

~/aws put backup_bucket/diskimg.raw /mnt/img/diskimg.raw

It was able to upload the file parts, and they show up in the bucket, but the command always shows a "405 Method Not Allowed" at the end.

For the 15 GB file, it would upload 2-3 parts of 5 GB each, then show a "405 Method Not Allowed" (the file parts, along with the upload IDs, remain in the bucket).

I initially thought it was a permissions issue, but after checking, I don't think it is.

The bucket grants full rights to the bucket owner and any authenticated user. The bucket policy is in the default setting and empty. The CORS policy is in the default setting and shouldn't be applicable to me, since I'm running the command from an EC2 instance within the same account. Also, I'm not using any IAM roles (they are all empty).

In addition, and most importantly, I was able to upload any files less than 5 GB in size, as always, to the same bucket.

The aws command was already the latest version, v1.80.

missing option to query instance user-data

Tim Kay,

First of all, thanks for this great tool.

aws describe-instances [InstanceId...] does not show the user-data field.
I use this field to assign a friendly name to the instance, which I then use to assign friendly names to snapshots created of volumes attached to this instance.

Am I overlooking a command-line option to retrieve this information?

Until now I have been using
ec2-describe-instance-attribute [InstanceId...] --user-data
to retrieve this information, but I would be much happier using your lean & mean 'aws' tool.

If it is not currently possible with your tool, would you consider adding it?
Sounds like a 'describe-instance-attribute' option to me.
Do you have a temporary 'curl commandline' alternative that could be used to query this information?
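(A partial answer, assuming the query runs on the instance whose user-data you want: the EC2 instance metadata service serves an instance's own user-data with no API signing required.)

curl -s http://169.254.169.254/latest/user-data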

Thank you in advance.

Justin

Feature Request: Add RDS support

Hi Tim,

Love the tool. So easy to use.

I would like to add RDS support.
Is this something you would like me to add?
Any issues I should be aware of when adding RDS support?

And I'd like to do it now!

warm regards

Rob

S3 region Frankfurt error: The authorization mechanism you have provided is not supported

I get the following error when I try to access an S3 bucket in the new Frankfurt region:

307 Temporary Redirect
+----------------+----------------------------------------------------------------------------------------------+------------------+
| Code | Message | RequestId |
+----------------+----------------------------------------------------------------------------------------------+------------------+
| InvalidRequest | The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. | 5899A2581461XXXX |
+----------------+----------------------------------------------------------------------------------------------+------------------+
