aws - simple access to Amazon EC2, S3, SQS, SDB, and ELB

aws is a command-line tool that gives you easy access to Amazon EC2 and Amazon S3. aws is designed to be simple to install and simple to use.

Thanks to your feedback, aws is the top-rated "community code" for all of Amazon EC2 and S3! See the ratings and reviews at EC2 and S3. They make me blush! Thank you!

This document describes the basic features. For additional examples, see the HowTo document.

  • SDB support - zikes! it was easy to add

    "aws" now supports Amazon SDB. See the documentation.

  • ELB support

    "aws" now supports Amazon ELB. See the documentation.

  • SQS support

    "aws" now supports Amazon SQS. See the documentation.

  • Some new options (as of v1.29):

    --simple                simplified output for some commands
    --wait=SEC              wait for EC2 instances to start
    --cut                   truncate output to screen width
    --md5                   verify MD5 checksums
    --max-time=N            time out after N seconds (will retry up to 3 times)
    --fail                  suppress error message display (but sets $?)
    --curl-options="blah"   specify additional curl options
    --set-acl=ACL           set or change ACL

    Sets $? correctly in more cases. Retries all requests up to 3 times.

    s3put will read from stdin (creating a temp file, which is then uploaded).

  • Regions support (as of v1.23), as well as support for Signature Version 2, a --progress indicator, better error messages, better formatting, and some small bug fixes. See the change log.

    Many, many changes (as of v1.21). ~/.awsrc file support, --exec option to iterate over results, --secrets-file, --expire-time, --limit-rate, Cache-Control:, Range: and If-*: support. And a few bug fixes/tweaks.

See the change log.


This section describes how to install aws on Linux and other *ix systems. To install on Windows, see Windows Download and Configuration. aws has only one dependency (curl), in addition to perl, which is often pre-installed on Linux.

To use aws, follow these steps:

  1. Install curl if necessary (either apt-get install curl or yum install curl)
  2. Download aws to your computer
    curl -o aws
  3. Put your AWS credentials in ~/.awssecret: the Access Key ID on the first line and the Secret Access Key on the second line (see the ~/.awssecret File Format section below).
  4. Alternately, set the EC2_ACCESS_KEY and EC2_SECRET_KEY environment variables.
  5. OPTIONAL. Perform this step if you want to use commands like s3ls and ec2run. If you prefer commands like aws ls or aws run, then you may skip this step. Perform the optional install with:
    perl aws --install
    (This step sets aws to be executable, copies it to /usr/bin if you are root, and then symlinks the aliases, the same as aws --link does.)
  6. Alternately, set "aws" to be executable:
    chmod +x aws

If you are logged in as root, aws will be installed in /usr/bin. Otherwise, it will be installed in the current directory.

You are ready to use Amazon EC2 and Amazon S3. Try the following example. You'll need to provide your own BUCKET_NAME, unique from all other bucket names.

s3mkdir BUCKET_NAME
s3put BUCKET_NAME/aws-backup /usr/bin/aws
s3get BUCKET_NAME/aws-backup
s3delete BUCKET_NAME/aws-backup
s3delete BUCKET_NAME


As a security measure, make sure that ~/.awssecret is not readable by group or other (chmod go-rwx ~/.awssecret).

Windows Download and Configuration

As of v1.10, aws runs on Windows. Currently, certificate verification isn't supported, until I figure out where to put the cert. (Hence the --insecure-aws setting.)

Kenzo writes:

Curl checks the directory that curl.exe runs from, followed by all the directories in the PATH, searching for the file "curl-ca-bundle.crt".

So, folks need to rename ca-bundle.crt to curl-ca-bundle.crt, and then place it in the directory where they installed curl.

There is probably a way to configure curl to look for the certificate file elsewhere; I found that curl attempted to open %appdata%\_curlrc, so a configuration file could likely be placed here that specifies another location. However, I didn't experiment with this, as I was satisfied with keeping it with the executable (although this is mildly less secure).

To download and configure, follow these steps:

  1. Install curl. Search for "curl download"; the first result should be the curl download page. In the "Win32 Generic" section, choose the "Win32" binary with SSL. The download is a .zip file. Extract curl.exe and place it in the C:\Windows directory, or some other place that is in the path. Test by opening a cmd window, then do "cd \" and "curl". It should find curl.
  2. Download Strawberry Perl from
  3. Download the latest aws to C:\Windows or your current directory, and rename to
  4. Put your AWS credentials in C:\Documents and Settings\<YOUR NAME>\.awssecret (see the example file in the above Download section) or set the EC2_ACCESS_KEY and EC2_SECRET_KEY environment variables.
  5. Try "aws --insecure-aws ls" -- you should get a list of your S3 buckets.
To run on Windows, you need to use the --insecure-aws flag, unless you figure out where to install the certificate that comes with Curl for Windows.

The following command will list your S3 buckets:

aws --insecure-aws ls

If you prefer to use environment variables to set the secrets rather than the .awssecret file, you can create this batch file, replacing XXX with your keys, of course:
set AWS_ACCESS_KEY_ID=XXX
set AWS_SECRET_ACCESS_KEY=XXX
aws --insecure-aws %1 %2 %3 %4 %5 %6 %7 %8 %9


Do you want to use Amazon EC2 and Amazon S3 but are intimidated by how difficult they are to use? Do you find it cumbersome (and ironic) that a web service requires you to install Java? If so, aws is for you.

aws is a very simple, lightweight interface to Amazon EC2 and Amazon S3 that implements basic commands like ec2run and s3ls.

The command-line interface works like this:

aws VERB [PARAMETERS...]

For example, you can start an EC2 server instance:

aws run

Or, you can store a file named file7.txt to S3 in bucket mybucket, object sample.txt:

aws put mybucket/sample.txt file7.txt

If you like, you can have aws create command-line aliases, such as ec2run and s3put, so that you don't have to prepend "aws" to each command. aws can create its own aliases as follows. This step was done for you if you installed with --install.

aws --link

Then you can perform the operations without the "aws" prefix:

s3put mybucket/sample.txt file7.txt

~/.awssecret File Format

Note: support for s3cmd's ~/.s3cfg format has been added. If that file exists (and ~/.awssecret does not), the keys will be read from there.

aws needs access to a pair of strings called the Access Key ID and the Secret Access Key. By default, aws looks for these secrets in the file ~/.awssecret, but a different file can be specified using --secrets-file=FILE. The secrets are stored in this file separated by white space, as in the following example:
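For illustration, here is the two-line layout using the placeholder credentials from Amazon's documentation (substitute your own keys):

```
AKIAIOSFODNN7EXAMPLE
wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```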

Keep in mind that your Secret Access Key must be kept secret; anybody with access to that key gets full access to your EC2 and S3 instances and files. If your file permissions are too generous, a warning is generated.

You may use environment variables to specify the secrets rather than the .awssecret file. For example,
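For example, using placeholder values from Amazon's documentation (substitute your own keys):

```shell
# Placeholder credentials from Amazon's documentation -- substitute your own keys.
export EC2_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE
export EC2_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```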

Use of environment variables is not recommended because command lines and environment variables are visible to other users of the same host.

It is possible to configure aws to use a remote signing service (see below), enabling your infrastructure to make use of the Secret Access Key without having access to it.

~/.awsrc File Format

The ~/.awsrc file may contain command-line parameters in the same format that they would appear on the command line, on one line or several. For example, the following ~/.awsrc file sets the --insecure-aws option, so that host authentication is turned off. (Useful if your host certificates don't work.) It also sets the --simple option, so that "aws ls" commands generate output that is simple to parse.
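A ~/.awsrc file like the one just described would contain simply the options, one per line or together on one line:

```
--insecure-aws
--simple
```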

The existence of the ~/.awsrc file (even if empty) turns off default sanity checking. To enable sanity checking when ~/.awsrc exists, add --sanity-check to the ~/.awsrc file.

Sanity Checking

When it starts, "aws" checks for certain common conditions that would prevent it from working properly:

~/.awssecret is missing, is not readable, or has wrong permissions
    You must create a ~/.awssecret file that contains the access keys for your AWS account.
local host certificates aren't working
    The curl command is used to access Amazon Web Services. The connection is secured via TLS (also called SSL or https). If your host certificates aren't set up properly, the TLS connection fails. Fix the problem or add --insecure-aws to ~/.awsrc.
AWS is not reachable
    For some reason, curl could not access AWS.
local host time is wrong
    If your host's clock is set wrong, requests might be signed with an invalid timestamp. The sanity check code automatically corrects for a wrong host clock.

For optimal performance, sanity checks should be disabled by creating a ~/.awsrc file. If you have reason to create .awsrc and still want sanity checks enabled, add --sanity-check to ~/.awsrc. If you want the sanity checks enabled (perhaps to correct for a wrong host clock), but you want to disable warnings, add --sanity-check and --silent to ~/.awsrc.

Sanity check warnings will not be displayed if stdin is set to something other than a tty. Thus, you might not see sanity check warnings when invoking "aws" from within a script.

S3 Reference Guide

aws get
aws ls
    List all buckets. See options below.

aws get BUCKET
aws ls [-X] BUCKET[/PREFIX]
aws ls [-X] BUCKET [PREFIX]
s3get BUCKET
    List files in BUCKET, optionally restricted to those files that start with PREFIX. Either a space or a slash will work after BUCKET. NOTE: S3 returns at most 1,000 files per request; "aws" will issue multiple requests until the list is complete. X is one or more of the following:
    -1           (digit "1") displays names in a single column
    -l           (letter "L") displays data in an "ls -l" format
    -t           sorts the list by modification time
    -r           reverses the sort order
    --simple     displays size, date, and key in tab-separated columns, for easy parsing
    --exec=PERL  executes PERL for each line of the result; $size (size of file in bytes), $mod (modification date), $key (bucket or object name), and $bucket are defined
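For scripting, the --simple tab-separated format is easy to parse with standard tools. A sketch (the listing content below is made up for illustration; a real script would capture the output of "aws ls --simple BUCKET"):

```shell
# Hypothetical output of "aws ls --simple BUCKET": size, date, key, tab-separated.
listing=$'1024\t2009-01-01T12:00:00.000Z\tbackup/aws\n2048\t2009-01-02T12:00:00.000Z\tbackup/notes.txt'

# Sum the first column (file sizes) to get the total bytes stored.
total=$(printf '%s\n' "$listing" | awk -F'\t' '{sum += $1} END {print sum}')
echo "total bytes: $total"
```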
aws put BUCKET
aws mkdir BUCKET
s3put BUCKET
s3mkdir BUCKET
    Create BUCKET.

aws put BUCKET/[OBJECT] [FILE]
s3put BUCKET/[OBJECT] [FILE]
    Store FILE to OBJECT in BUCKET. If OBJECT is omitted, the FILE name is used.

aws get BUCKET/OBJECT [FILE]
s3get BUCKET/OBJECT [FILE]
    Retrieve OBJECT from BUCKET, saving to FILE (default stdout).

aws delete BUCKET/OBJECT
s3delete BUCKET/OBJECT
    Delete OBJECT from BUCKET.

aws copy BUCKET1/OBJECT1 [/BUCKET2]/OBJECT2
    Copies object BUCKET2/OBJECT2 to BUCKET1/OBJECT1 (right-to-left, to stay consistent with PUT syntax). If the source bucket is missing, the target bucket is used. If the target object name is missing, the source object name is used.

Note that the leading slash on the source is required when specifying a source bucket. Without the leading slash, the source refers to an object in the same bucket as the target, possibly with slashes in the source object name.

Can be used to copy a file to a bucket with a different location constraint.

Use --progress to display progress for large S3 transfers.

New! aws put using STDIN

When using aws put with no source file, or if the source file is -, then a temporary file is created, stdin is copied into the temporary file, and that file is uploaded. For example,

echo hello, world |aws put test681/hello.txt
creates an object named test681/hello.txt, which contains hello, world\n.

The MIME type is set as appropriate for the target file name.

Using "get" with a Directory Target

If the target of a "get" operation ends with "/", or if the target is a directory, then the file name will be taken from the object name. Examples:

aws get timkay/aws-1.12 .
will retrieve the object and store it as "aws-1.12" in the current directory.

aws get timkay/aws-1.12 x/y/z/
will retrieve the object and store it as "x/y/z/aws-1.12", relative to the current directory. As necessary, it will create directories. Compare to

aws get timkay/aws-1.12 x/y/z
which will retrieve the object and store it as "x/y/z", assuming "x/y/z" isn't already a directory. (The file is named "z" in the subdirectory "x/y/".)

Note that it is possible to use slashes in object names, such as "x/y/foo". It would be useful to say

aws get timkay/x/y/foo .
to retrieve the object and store as x/y/foo. The current implementation will store it as file "foo" in the current directory, discarding the object's "path". It's not clear which action is the right one. If you have an opinion, please send feedback.

Specifying x-amz-*: and Content-Type: Headers

It is possible to send x-amz-*: and Content-Type: headers by including them on the aws command line after the verb. The following example sets the object access policies to be world-readable and also sets the content type as appropriate for a .jpg file:

aws put "x-amz-acl: public-read" "Content-Type: image/jpeg" timkay681 tim.jpg

As of v1.12, if it isn't set otherwise, the Content-Type: header is automatically set using the mime.types file, which is read from both the current directory and /etc/, if they exist.

Changing the ACL of an Existing Object New!

To change the ACL of an object to a canned access policy, store an empty ACL file and include a header to indicate the desired canned access policy:

aws put "x-amz-acl: public-read" test681/tim2.jpg?acl /dev/null
The --set-acl option does the same thing with a simpler syntax:
aws put test681/tim2.jpg?acl --set-acl=public-read
or even
aws put test681/tim2.jpg?acl --public

The choices for --set-acl are private, public-read, public-read-write, and authenticated-read. As of aws-1.25, --public is equivalent to --set-acl=public-read, and --private is equivalent to --set-acl=private.

For more information about canned access policies, see the Amazon S3 API. To control the ACL more precisely, see the following section, Setting Access Policies (ACLs).

Setting Access Policies (ACLs)

You can set canned access policies as described in the previous section. Otherwise, setting access policies requires that you create an XML file containing the access policies. You might start by retrieving the existing policies, modify those policies, then upload the changes, as shown in the example below. You can see that the results are publicly readable.

bash-3.00$ aws --xml get public681/tim.jpg?acl >acl.xml
bash-3.00$ xmlpp acl.xml
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="">
  <Owner>
    <ID>c1438ce900acb0db547b3708dc29ca60370d8174ee55305050d2990dcf27109c</ID>
    <DisplayName>timkay681</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="" xsi:type="CanonicalUser">
        <ID>c1438ce900acb0db547b3708dc29ca60370d8174ee55305050d2990dcf27109c</ID>
        <DisplayName>timkay681</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
bash-3.00$ vi acl.xml
bash-3.00$ aws put public681/tim.jpg?acl acl.xml
bash-3.00$ aws --xml get public681/tim.jpg?acl |xmlpp
<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="">
  <Owner>
    <ID>c1438ce900acb0db547b3708dc29ca60370d8174ee55305050d2990dcf27109c</ID>
    <DisplayName>timkay681</DisplayName>
  </Owner>
  <AccessControlList>
    <Grant>
      <Grantee xmlns:xsi="" xsi:type="CanonicalUser">
        <ID>c1438ce900acb0db547b3708dc29ca60370d8174ee55305050d2990dcf27109c</ID>
        <DisplayName>timkay681</DisplayName>
      </Grantee>
      <Permission>FULL_CONTROL</Permission>
    </Grant>
    <Grant>
      <Grantee xmlns:xsi="" xsi:type="Group">
        <URI></URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>

For more information about setting access policies, see the Amazon S3 documentation: "Setting Access Policy with REST".

Note that some browsers cache the content types of URLs. If you browse a URL, change its content type, then hit refresh, the browser might not notice the new content type. To fix this problem, flush the content cache. (With Firefox, use Tools/Clear Private Data with the "Cache" option checked.)

Sending <CreateBucketConfiguration> Data with "mkdir"

"put" and "mkdir" used to be the same code (you could create buckets with either). To support configuration options, they are now different. If you don't care about the Location of a bucket, then you can continue to use either. However, if you want to specify Location, then you have to use "mkdir". ("aws mkdir foo eu.xml" and "aws put foo eu.xml" do two different things: The first sets the configuration of the new bucket foo as per the file eu.xml, while the latter stores the file eu.xml in bucket foo.)

  1. Create a file with the <CreateBucketConfiguration> XML tag in it. See eu.xml as an example.
  2. Create the bucket with "aws mkdir BUCKET CONFIGFILE".
    For example, you could say
    aws mkdir my-new-bucket eu.xml
  3. To see the Location constraint of a bucket, use the ?location attribute:
    aws get BUCKETNAME?location
    For example, you could say
    aws --xml get my-new-bucket?location |xmlpp
    <?xml version="1.0" encoding="UTF-8"?> <LocationConstraint xmlns="">EU</LocationConstraint>
    Notes: I am using the --xml output format, until I get xml2tab working in this case. xmlpp is at
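For reference, a minimal configuration file along the lines of the eu.xml mentioned above would contain (element names per the Amazon S3 API):

```
<CreateBucketConfiguration>
  <LocationConstraint>EU</LocationConstraint>
</CreateBucketConfiguration>
```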
Remember that bucket naming is stricter when specifying Location constraints. See the Amazon S3 documentation: "Bucket Restrictions and Limitations".

EC2 Reference Guide

The EC2 interface supports the same command-line options as Amazon's Java-based tools. See the Amazon EC2 documentation: "List of Operations by Function" for details.

Use "aws COMMAND -h" to see the options available for each command. For example,

$ ./aws run -h
Usage: aws run-instance run
        [ImageId (ami-23b6534a)]
        [-i InstanceType (m1.small)]
        [-a AddressingType (public)]
        [-n MinCount (1)]
        [-n MaxCount (1)]
        [-g SecurityGroup... (default)]
        [-k KeyName (default)]
        [-d UserData]
        [-f UserData]
Here we see that the run-instance command takes an image id, an instance type, an addressing type, a count of servers, security groups, a key name, and user data. The defaults are specified in parentheses.

Note: Additional command groups are now supported, but the following table has not been updated. The syntax is the same as Amazon's command-line tools.

Note: Elastic Block Storage is also supported (as of v1.14), but the following table has not yet been updated. It works as described in the Amazon documentation: Amazon Elastic Compute Cloud.

Type "aws" at the command-line to see a list of supported commands.

aws add-group GROUP
aws addgrp GROUP
ec2-add-group GROUP
ec2addgrp GROUP
Creates a security group named GROUP.
aws add-keypair KEYNAME
aws addkey KEYNAME
ec2-add-keypair KEYNAME
ec2addkey KEYNAME
Creates a new key pair named KEYNAME and displays the private key.
aws allocate-address
aws allad
Allocates a new public IP address.
aws associate-address ...
aws aad ...
ec2aad address -i instance [-d device]
Associates an IP address with an instance.
aws authorize ...
aws auth ...
ec2-authorize ...
ec2auth [-P PROTOCOL] [-p PORT_RANGE] [-u USER]
Creates new firewall entries.
aws delete-keypair KEYNAME
aws delkey KEYNAME
ec2-delete-keypair KEYNAME
ec2delkey KEYNAME
Deletes the key pair named KEYNAME.
aws delete-group GROUP
aws delgrp GROUP
ec2-delete-group GROUP
ec2delgrp GROUP
Deletes the security group named GROUP.
aws describe-groups [GROUP...]
aws dgrp [GROUP...]
ec2-describe-group [GROUP...]
ec2dgrp [GROUP...]
Describe security groups.
aws describe-keypairs [KEY...]
aws dkey [KEY...]
ec2-describe-keypairs [KEY...]
ec2key [KEY...]
Describe key pairs.
aws describe-images ...
aws dim [IMAGE...] [-o OWNER] [-u USER]
ec2-describe-images ...
ec2dim [IMAGE...] [-o OWNER] [-u USER]
Describe EC2 images.
aws describe-instances
aws din [INSTANCE...]
ec2din [INSTANCE...]
Describe all EC2 server instances.
aws describe-regions
aws dreg
Describe available EC2 regions. When issuing EC2 commands specify --region=REGION to indicate a region.
aws reboot-instances INSTANCE [INSTANCE...]
aws reboot INSTANCE [INSTANCE...]
ec2-reboot-instances INSTANCE [INSTANCE...]
ec2reboot INSTANCE [INSTANCE...]
Reboot EC2 server instances.
aws run-instances ...
aws run
ec2-run-instances ...
ec2run [--simple] [--wait=SEC]
Start N EC2 server instances.
--wait=SEC to poll for instances to leave "pending"
--simple to display the instanceId's in tab-separated format
aws terminate-instances INSTANCE [INSTANCE...]
ec2-terminate-instances INSTANCE [INSTANCE...]
Terminate EC2 server instances.

New! --wait=SEC

As of aws-1.24, the --wait=SEC switch tells ec2run to poll for the status of the new instances (using ec2-describe-instances), waiting for them to leave "pending" status (presumably entering "running" status). Waits SEC seconds between polling attempts.

$ ./aws-1.24 run -k k2 --wait=10
i-97f276fe	pending
i-97f276fe	pending
i-97f276fe	pending
i-97f276fe	pending
i-97f276fe	pending
i-97f276fe	running


Use --region=eu-west-1 to select the European EC2 region. eu and us are synonyms for eu-west-1 and us-east-1.

Add --region=eu to ~/.awsrc to switch the default region.

Robustness and Scripting

To interact with EC2 and S3 services, "aws" delegates the core functionality to the command-line program "curl", which has many features and options. Some of those options are made explicitly available as "aws" command-line options. Other curl options may be set using the --curl-options="..." setting.

To improve the likelihood that a transaction succeeds, "aws" sets the appropriate curl option so that all failed transactions are retried up to 3 times.

The --max-time=N option indicates the maximum number of seconds to allow a given try. If the timeout is exceeded, the try is terminated and retried, up to 3 times. (By default, there is no timeout, so a transaction can potentially hang forever.)

As of v1.28, the --md5 option computes the MD5 checksum for all objects and generates an error when a mismatch occurs.

Also as of v1.28, $? is set correctly in most cases, so that scripts can tell if an "aws" transaction completes successfully.
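That means ordinary shell error handling works. A sketch, where failing_put is a hypothetical stand-in for an s3put call that fails:

```shell
# Stand-in for an aws/s3put invocation that fails and sets $? nonzero.
failing_put() { return 1; }

if failing_put; then
    status=uploaded
else
    # The "else" branch runs because failing_put exited nonzero.
    status=failed
fi
echo "upload: $status"
```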

Remote Signature Mode

A user gets only one Secret Access Key. It should be kept secret, but it is needed to sign requests. Thus, any EC2 instances that need to pull data from S3 need to have access to the secret.

It's not always a good idea to store your key on any server that needs access to Amazon Web Services.

So how does an EC2 instance access S3 data without using the Secret Access Key? By using aws in remote signature mode. Remote signature mode is a web service that signs EC2 and S3 requests remotely. The Secret Access Key is stored only on the server(s) that provide the signing service.

aws can then run on an EC2 instance, performing all operations as it normally would, except that it signs its requests using a remote call to the web service.

The web service can be protected with a password, by restricting access to known IP addresses, and/or by limiting what requests are allowed.

To use remote signature mode, two things have to happen. First, the web service needs to be configured by following these steps on the server that is to host it.

  1. Make sure that aws runs properly at the command line when logged in as the user that apache runs as. (Log in as www-data, and do aws ls, for example, to verify that aws and ~/.awssecret are properly configured.)
  2. Create a CGI that contains just this one line:
    #!/usr/bin/env aws
    and configure the web server, so that the script is password (or otherwise) protected and executable. It is also a good idea to use SSL (https) to access this service, so that the password is encrypted. To use a self-signed certificate, see the --insecure-signing setting.

    The following PHP code implements a remote signing service:

    <?
    list($head, $body) = explode("\n\n", shell_exec("aws --sign=" . file_get_contents("php://input")));
    header("Content-type: text/plain");
    echo $body;
    ?>
    Make sure that aws is in the web server's path, or else add the explicit path (probably /usr/bin/aws) to the script.
  3. Test this remote signing service by pointing your browser to the URL. It should display something like
    (This particular string is the empty string, signed by my Secret Access Key.)
You now have an aws remote signing service.

Next, you need to configure the client to use the remote signing service:

  1. Create ~/.awssecret like this:
    The first two lines contain underscore ("_") characters where your Access Key ID and Secret Access Key would normally go. The third line contains the URL of the signing service. If the service is password protected, the URL should contain the username and password.
  2. Test the remote signing by running "aws -v ls". You should see a message like this, followed by the output from the "ls" command.
    signing [GET\n\n\n1172520135\n/] via https://******:******
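Putting the two steps together, a remote-signing ~/.awssecret looks like this (the hostname, path, and credentials here are hypothetical placeholders):

```
_
_
https://user:password@signing.example.com/cgi-bin/sign.cgi
```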
Be careful securing your remote signing service: if you use HTTP Basic Authentication, your password is sent in clear text and can be sniffed. You would create a situation where your AWS credentials are secure but can still be used by an unauthorized third party.

To mitigate the risk, combine basic authentication with SSL by specifying an https URL (instead of http) in the ~/.awssecret file. The server hosting your signing service then needs to be configured to support SSL. If you use a self-signed certificate, remote signing will still fail, because the server cannot be properly authenticated (due to the self-signed nature of the certificate). The --insecure-signing option tells aws to ignore this particular error. Using https with --insecure-signing is better than simply using http, because the connection is encrypted, which prevents the basic authentication password from being sniffed.


See Changes.txt


aws is written in pure perl. It consists of a single file (roughly 1,500 lines). The core functions are ec2() and s3(), which are about 25 and 150 lines.

The rest of the program contains the command-line interface, the pretty-printer, and HMAC and SHA1 code, none of which is needed when calling ec2() or s3() from your program. In that case, use Digest::SHA1 (or Digest::SHA::PurePerl) and parse the XML however you like. (I have no idea if ec2() and s3() can still be run in isolation.)

export S3_DIR=bucket

If you work primarily with a single bucket, you can
export S3_DIR=bucket
aws will prepend bucket/ to any name you specify in any command.

Thus, you can do the following sorts of operations:

export S3_DIR=timkay
s3put foo		# put file foo to object timkay/foo
s3ls			# list objects in bucket timkay
s3cat foo		# display object timkay/foo
s3rm foo		# remove object timkay/foo
To deactivate S3_DIR, use
export -n S3_DIR


aws does not yet support the EC2 image and image-attribute commands, only because I have no need for them. Everything else is there.

Occasionally, system certificates are misconfigured, so that SSL to Amazon fails at the server authentication step. (Note that this is a different problem than when using self-signed certificates for remote signing mode.) The option --insecure-aws tells aws to continue anyway. If aws isn't working, you can test for this problem with

curl -vv
curl -vv --insecure-aws
If the first fails and the second succeeds, you should fix your system's certificates. Or you can use the --insecure-aws option.

Things to do:

HMAC and SHA1 functions have been included, so that no perl modules need be installed. The SHA1 function has not been thoroughly tested and might fail under certain circumstances. (It has been tested on x86 and x86/64, but there are many other possibilities.)

If SHA1 fails, use Digest::SHA1 or Digest::SHA::PurePerl instead. Use CPAN to install one, uncomment the relevant "use" statement, and remove the included sha1() function.

aws supports streaming, even if Amazon S3 does not. The following incredibly useful command would work if Amazon were to add streaming support:

tar czf - . |s3put timkay/backup.tgz -


I want to hear from you! I constantly improve this code, and I want to know what you need. I typically respond to emails within a few hours. My email address is in the copyright notice below.


Copyright 2007-2011 Timothy Kay (