timkay.com » tools » aws
aws is a command-line tool that gives you easy access to Amazon EC2 and Amazon S3. aws is designed to be simple to install and simple to use.
Thanks to your feedback (firstname.lastname@example.org), aws is the top-rated "community code" for all of Amazon EC2 and S3! See the ratings and reviews at EC2 and S3. They make me blush! Thank you!
This document describes the basic features. For additional examples, see the HowTo document.
See the change log.
To use aws, follow these steps:
curl https://raw.githubusercontent.com/timkay/aws/master/aws -o aws
chmod +x aws

The optional install creates command aliases such as ec2run. If you prefer commands like aws run, then you may skip this step. Perform the optional install with:

perl aws --install

(This step sets aws to be executable, copies it to /usr/bin if you are root, and then symlinks the aliases, the same as aws --link does.) If you are logged in as root, aws will be installed in /usr/bin. Otherwise, it will be installed in the current directory.
You are ready to use Amazon EC2 and Amazon S3. Try the following example. You'll need to provide your own BUCKET_NAME, unique from all other bucket names.
s3mkdir BUCKET_NAME
s3put BUCKET_NAME/aws-backup /usr/bin/aws
s3ls BUCKET_NAME
s3get BUCKET_NAME/aws-backup
s3delete BUCKET_NAME/aws-backup
s3delete BUCKET_NAME
ec2run
ec2din
ec2tin INSTANCEID
As a security measure, make sure that ~/.awssecret is not readable by group or other (chmod go-rwx ~/.awssecret).
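The permissions fix can be sketched as follows; a throwaway file created with mktemp stands in for your real ~/.awssecret:

```shell
# Create a secrets file and lock it down so only the owner can read it.
# A temporary path is used for illustration; substitute ~/.awssecret for real use.
secrets="$(mktemp)"
printf '%s\n%s\n' 'ACCESS_KEY_ID' 'SECRET_ACCESS_KEY' > "$secrets"
chmod go-rwx "$secrets"                    # same chmod as in the text
perms="$(ls -l "$secrets" | cut -c1-10)"
echo "$perms"                              # should show no group/other bits
```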
As of v1.10, aws runs on Windows. Currently, client authentication isn't supported, until I figure out where to put the cert. (Hence the --insecure-aws setting.)
Curl checks the directory that curl.exe runs from, followed by all the directories in the PATH, searching for the file "curl-ca-bundle.crt".
So, folks need to rename ca-bundle.crt to curl-ca-bundle.crt, and then place it in the directory where they installed curl.
There is probably a way to configure curl to look for the certificate file elsewhere; I found that curl attempted to open %appdata%\_curlrc, so a configuration file could likely be placed here that specifies another location. However, I didn't experiment with this, as I was satisfied with keeping it with the executable (although this is mildly less secure).
To download and configure, follow these steps:
The following command will list your S3 buckets:
aws.pl --insecure-aws ls

If you prefer to use environment variables to set the secrets rather than the .awssecret file, you can create this batch file, replacing XXX with your keys, of course:

set AWS_ACCESS_KEY_ID=XXX
set AWS_SECRET_ACCESS_KEY=XXX
aws.pl --insecure-aws %1 %2 %3 %4 %5 %6 %7 %8 %9
aws is a very simple, lightweight interface to Amazon EC2 and Amazon S3 that implements basic commands such as ec2run and s3ls.
The command-line interface works like this:
aws ACTION [PARAMETERS]
For example, you can start an EC2 server instance:

aws run
Or, you can store a file named file7.txt to S3 in bucket mybucket, object sample.txt:
aws put mybucket/sample.txt file7.txt
If you like, you can have aws create command-line aliases, such as ec2run and s3put, so that you don't have to prepend "aws" to each command. aws can create its own aliases as follows (this step was done for you if you installed with --install):

perl aws --link
Then you can perform the operations without the "aws" prefix:
s3put mybucket/sample.txt file7.txt
Note: support for s3cmd's ~/.s3cfg format has been added. If that file exists (and ~/.awssecret does not), the keys will be read from there.

aws needs access to a pair of strings called the Access Key ID and the Secret Access Key. By default, aws looks for these secrets in a file ~/.awssecret, but a different file can be specified using --secrets-file=FILE. The secrets are stored in this file separated by white space, as in the following example:
Keep in mind that your Secret Access Key must be kept secret; anybody with access to that key gets full access to your EC2 and S3 instances and files. If your file permissions are too generous, a warning is generated.
You may use environment variables to specify the secrets rather than the .awssecret file. For example,
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXX aws ls
Use of environment variables is not recommended because command lines and environment variables are visible to other users of the same host.
The EC2-style variable names are recognized as well:

EC2_ACCESS_KEY=XXX EC2_SECRET_KEY=XXX aws ls
It is possible to configure aws to use a remote signing service (see below), enabling your infrastructure to make use of the Secret Access Key without having access to it.
The existence of the ~/.awsrc file (even if empty) turns off default sanity checking. To enable sanity checking when ~/.awsrc exists, add --sanity-check to the ~/.awsrc file.
|~/.awssecret is missing or not readable|You must create a .awssecret file that contains the access keys for your AWS account.|
|TLS connection to AWS fails|The curl command is used to access Amazon Web Services. The connection is secured via TLS (also called SSL or https). If your host certificates aren't set properly, the TLS connection fails. Fix the problem or add --insecure-aws to ~/.awsrc.|
|AWS is not reachable|For some reason, curl could not access https://s3.amazonaws.com/|
|local host time is wrong|If your host's clock is set wrong, then requests might be signed with an invalid timestamp. The sanity check code will automatically correct for a wrong host clock.|
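The automatic clock correction amounts to comparing the local clock against the Date header AWS returns; a minimal sketch, assuming GNU date for parsing RFC 1123 timestamps and a hard-coded header standing in for a live response:

```shell
# Parse an HTTP Date header (as returned by s3.amazonaws.com) and compute
# the skew between the server clock and the local clock.
server_date="Thu, 01 Jan 2015 00:00:00 GMT"   # stand-in for a live Date: header
server_epoch=$(date -ud "$server_date" +%s)   # GNU date parses RFC 1123
local_epoch=$(date -u +%s)
skew=$((local_epoch - server_epoch))
echo "server=$server_epoch skew=${skew}s"
```

The tool would then offset its request timestamps by the measured skew.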
For optimal performance, sanity checks should be disabled by creating a ~/.awsrc file. If you have reason to create .awsrc and still want sanity checks enabled, add --sanity-check to ~/.awsrc. If you want the sanity checks enabled (perhaps to correct for a wrong host clock) but want the warnings suppressed, add the appropriate option to ~/.awsrc.
Sanity check warnings will not be displayed if stdin is set to something other than a tty. Thus, you might not see sanity check warnings when invoking "aws" from within a script.
|aws ls
s3ls
|List all buckets. See options below.|
|aws ls [-X] BUCKET[/PREFIX]
aws ls [-X] BUCKET [PREFIX]
|s3ls [-X] BUCKET[/PREFIX]
s3ls [-X] BUCKET [PREFIX]
|List files in BUCKET, optionally restricted to those files that start with PREFIX. X is one or more of the following:
|aws put BUCKET
aws mkdir BUCKET
|s3mkdir BUCKET
|Create BUCKET.|
|aws put BUCKET/[OBJECT] [FILE]||s3put BUCKET/[OBJECT] [FILE]||Store FILE to OBJECT in BUCKET. If OBJECT is omitted, then FILE is used.|
|aws get BUCKET/OBJECT [FILE]
aws cat BUCKET/OBJECT [FILE]
|s3get BUCKET/OBJECT [FILE]
s3cat BUCKET/OBJECT [FILE]
|Retrieve OBJECT from BUCKET, saving to FILE (default stdout).|
|aws delete BUCKET/OBJECT
aws rm BUCKET/OBJECT
|s3delete BUCKET/OBJECT
s3rm BUCKET/OBJECT
|Delete OBJECT from BUCKET.|
|aws copy BUCKET1/OBJECT1 /BUCKET2/OBJECT2
aws copy BUCKET1 /BUCKET2/OBJECT2
aws copy BUCKET1/OBJECT1 OBJECT2
|Copies object BUCKET2/OBJECT2 to BUCKET1/OBJECT1 (right-to-left to stay consistent with PUT syntax). If the source bucket is missing, uses the target bucket. If the target object name is missing, uses the source object name.
Note that the leading slash on the source is required when specifying a source bucket. Without the leading slash, the source refers to an object in the same bucket as the target, possibly with slashes in the source object name.
Can be used to copy a file to a bucket with a different location constraint.
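The defaulting rules above can be sketched as a tiny resolver; resolve_copy is a hypothetical helper written for illustration, not part of aws itself:

```shell
# Resolve the source and target of "aws copy TARGET SOURCE" per the rules
# described above (illustrative only; the real parsing lives inside aws).
resolve_copy() {
    target="$1" source="$2"
    tbucket="${target%%/*}"
    case "$source" in
        /*) src="${source#/}" ;;          # leading slash: explicit source bucket
        *)  src="$tbucket/$source" ;;     # otherwise: same bucket as target
    esac
    case "$target" in
        */*) tgt="$target" ;;             # target object given explicitly
        *)   tgt="$tbucket/${src#*/}" ;;  # default to the source object name
    esac
    echo "$src -> $tgt"
}
resolve_copy bucket1/obj1 /bucket2/obj2   # explicit source bucket and object
resolve_copy bucket1 /bucket2/obj2        # target object defaults to obj2
resolve_copy bucket1/obj1 obj2            # source bucket defaults to bucket1
```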
Use --progress to display progress for large S3 transfers.
Using "put" with STDIN

If aws put is given no source file, or if the source file is -, then a temporary file is created, stdin is copied into the temporary file, and that file is uploaded. For example,

echo hello, world |aws put test681/hello.txt

creates an object named test681/hello.txt, which contains "hello, world". The MIME type is set as appropriate for the target file name.
Using "get" with a Directory Target
If the target of a "get" operation ends with "/", or if the target is a directory, then the file name will be taken from the object name. Examples:
aws get timkay/aws-1.12 .

will retrieve the object and store it as "aws-1.12" in the current directory.

aws get timkay/aws-1.12 x/y/z/

will retrieve the object and store it as "x/y/z/aws-1.12", relative to the current directory. As necessary, it will create directories. Compare to

aws get timkay/aws-1.12 x/y/z

which will retrieve the object and store it as "x/y/z", assuming "x/y/z" isn't already a directory. (The file is named "z" in the subdirectory "x/y/".)
Note that it is possible to use slashes in object names, such as "x/y/foo". It would be useful to say
aws get timkay/x/y/foo .

to retrieve the object and store it as x/y/foo. The current implementation will store it as file "foo" in the current directory, discarding the object's "path". It's not clear which action is the right one. If you have an opinion, please send feedback.
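The target-resolution rule can be mimicked with a small sketch; resolve_get is a hypothetical helper showing the current behavior, including the path-discarding case just described:

```shell
# Mimic how "aws get BUCKET/OBJECT TARGET" chooses the local file name:
# if TARGET ends in "/" or is an existing directory, append the object's
# base name. The current implementation keeps only the part after the
# last slash of the object name (the object's "path" is discarded).
resolve_get() {
    object="$1" target="$2"
    base="${object##*/}"
    case "$target" in
        */) echo "$target$base" ;;
        *)  if [ -d "$target" ]; then
                echo "$target/$base"
            else
                echo "$target"
            fi ;;
    esac
}
resolve_get timkay/aws-1.12 .        # "." is a directory -> ./aws-1.12
resolve_get timkay/aws-1.12 x/y/z/   # trailing slash -> x/y/z/aws-1.12
resolve_get timkay/aws-1.12 x/y/z    # plain name -> x/y/z
```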
It is possible to send headers, such as Content-Type:, by including them on the aws command line after the verb. The following example sets the object access policy to be world-readable and also sets the content type as appropriate for a .jpg file:
aws put "x-amz-acl: public-read" "Content-Type: image/jpg" timkay681 tim.jpg
As of v1.12, if it isn't set otherwise, the Content-Type: header is automatically set using the mime.types file, which is read from both the current directory and /etc/, if they exist.
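The mime.types lookup boils down to matching the file extension against a type-to-extensions table; a sketch with a tiny embedded table standing in for /etc/mime.types:

```shell
# Look up a Content-Type from a file extension the way a mime.types file
# is consulted: each line maps one type to its extensions. A small
# embedded table stands in for reading /etc/mime.types.
mime_lookup() {
    ext="${1##*.}"
    awk -v ext="$ext" '
        { for (i = 2; i <= NF; i++) if ($i == ext) { print $1; exit } }
    ' <<'EOF'
image/jpeg	jpeg jpg jpe
text/plain	txt
text/html	html htm
EOF
}
mime_lookup tim.jpg      # image/jpeg
mime_lookup notes.txt    # text/plain
```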
Changing the ACL of an Existing Object
To change the ACL of an object to a canned access policy, store an empty ACL file and include a header to indicate the desired canned access policy:
aws put "x-amz-acl: public-read" test681/tim2.jpg?acl /dev/null
The --set-acl option does the same thing with a simpler syntax:
aws put test681/tim2.jpg?acl --set-acl=public-read
aws put test681/tim2.jpg?acl --public
The choices for --set-acl are Amazon's canned access policies. --public is equivalent to --set-acl=public-read, and --private is equivalent to --set-acl=private.
For more information about canned access policies, see the Amazon S3 API. To control the ACL more precisely, see the following section, Setting Access Policies (ACLs).
Setting Access Policies (ACLs)
You can set canned access policies as described in the previous section. Otherwise, setting access policies requires that you create an XML file containing the access policies. You might start by retrieving the existing policies, modify those policies, then upload the changes, as shown in the example below. You can see that the results are publicly readable.
bash-3.00$ aws --xml get public681/tim.jpg?acl >acl.xml
bash-3.00$ xmlpp acl.xml
bash-3.00$ vi acl.xml
bash-3.00$ aws put public681/tim.jpg?acl acl.xml
bash-3.00$ aws --xml get public681/tim.jpg?acl |xmlpp

(The XML output, flattened here, shows owner timkay681 with FULL_CONTROL, plus a READ grant for the group http://acs.amazonaws.com/groups/global/AllUsers.)
For more information about setting access policies, see the Amazon S3 documentation: "Setting Access Policy with REST".
Note that some browsers cache the content types of URLs. If you browse a URL, then change its content type, then hit refresh, the browser might not notice the new content type. To fix this problem, you should flush the content cache. (With Firefox, use Tools/Clear Private Data with the "Cache" option checked.)
Sending <CreateBucketConfiguration> Data with "mkdir"
"put" and "mkdir" used to be the same code (you could create buckets with either). To support configuration options, they are now different. If you don't care about the Location of a bucket, then you can continue to use either. However, if you want to specify Location, then you have to use "mkdir". ("aws mkdir foo eu.xml" and "aws put foo eu.xml" do two different things: The first sets the configuration of the new bucket foo as per the file eu.xml, while the latter stores the file eu.xml in bucket foo.)
aws mkdir BUCKETNAME CONFIGURATIONFILE

For example, you could say

aws mkdir my-new-bucket eu.xml

To query a bucket's location, use

aws get BUCKETNAME?location

For example, you could say

aws --xml get my-new-bucket?location |xmlpp

which returns the bucket's location constraint.
Notes: I am using the --xml output format, until I get xml2tab working in this case.
xmlpp is at http://timkay.com/xmlpp/
Use "aws COMMAND -h" to see the options available for each command. For example,

$ ./aws run -h
Usage: aws run-instance
    run [ImageId (ami-23b6534a)] [-i InstanceType (m1.small)] [-a AddressingType (public)] [-n MinCount (1)] [-n MaxCount (1)] [-g SecurityGroup... (default)] [-k KeyName (default)] [-d UserData] [-f UserData]

Here we see that the run-instance command takes an image id, an instance type, an addressing type, a count of servers, security groups, a key name, and user data. The defaults are specified in parentheses.
Note: The following command groups are now supported, but the table below has not been updated. The syntax is the same as Amazon's command-line tools.
- registering images
- availability zones
- kernel types
- persistent IP addresses
Note: Elastic Block Storage is now supported (as of v1.14), but the following table has not yet been updated. It works the same as the Amazon documentation: Amazon Elastic Compute Cloud.
Type "aws" at the command-line to see a list of supported commands.
|aws add-group GROUP
aws addgrp GROUP
|Creates a security group named GROUP.|
|aws add-keypair KEYNAME
aws addkey KEYNAME
|Creates a new key pair named KEYNAME and displays the private key.|
|aws allocate-address
|Allocates a new public IP address.|
|aws associate-address ...
aws aad ...
ec2aad address -i instance [-d device]
|Associates an IP address with an instance.|
|aws authorize ...
aws auth ...
ec2auth [-P PROTOCOL] [-p PORT_RANGE] [-u USER]
[-o SECURITY_GROUP] [-s SOURCE_SUBNET]
|Creates new firewall entries.|
|aws delete-keypair KEYNAME
aws delkey KEYNAME
|Deletes the key pair named KEYNAME.|
|aws delete-group GROUP
aws delgrp GROUP
|Deletes the security group named GROUP.|
|aws describe-groups [GROUP...]
aws dgrp [GROUP...]
|Describe security groups.|
|aws describe-keypairs [KEY...]
aws dkey [KEY...]
|Describe key pairs.|
|aws describe-images ...
aws dim [IMAGE...] [-o OWNER] [-u USER]
ec2dim [IMAGE...] [-o OWNER] [-u USER]
|Describe EC2 images.|
|aws describe-instances [INSTANCE...]
aws din [INSTANCE...]
|ec2din [INSTANCE...]
|Describe EC2 server instances.|
|aws describe-regions
|Describe available EC2 regions. When issuing EC2 commands, specify --region=REGION to indicate a region.|
|aws reboot-instances INSTANCE [INSTANCE...]
aws reboot INSTANCE [INSTANCE...]
|ec2-reboot-instances INSTANCE [INSTANCE...]
ec2reboot INSTANCE [INSTANCE...]
|Reboot EC2 server instances.|
|aws run-instances ...
ec2run [--simple] [--wait=SEC]
|Start N EC2 server instances.
--wait=SEC to poll for instances to leave "pending"
--simple to display the instanceId's in tab-separated format
|aws terminate-instances INSTANCE [INSTANCE...]
aws tin INSTANCE [INSTANCE...]
|ec2-terminate-instances INSTANCE [INSTANCE...]
ec2tin INSTANCE [INSTANCE...]
|Terminate EC2 server instances.|
The --wait=SEC switch tells ec2run to poll for the status of the new instances (using ec2-describe-instances), waiting for them to leave "pending" status (presumably entering "running" status). It waits SEC seconds between polling attempts.
$ ./aws-1.24 run -k k2 --wait=10
i-97f276fe  pending
i-97f276fe  pending
i-97f276fe  pending
i-97f276fe  pending
i-97f276fe  pending
i-97f276fe  running  ec2-75-101-242-20.compute-1.amazonaws.com
Specify --region=REGION to select, for example, the European EC2 region. Some shorter option spellings are synonyms for the corresponding --region settings. Add the option to ~/.awsrc to switch the default region.
To improve the likelihood that a transaction succeeds, "aws" sets the appropriate curl option so that all failed transactions are retried up to 3 times.
The --max-time=N option indicates the maximum number of seconds to allow a given try. If the timeout is exceeded, the try is terminated and retried, up to 3 times. (By default, there is no timeout, so a transaction can potentially hang forever.)
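The retry behavior corresponds to what curl's --retry and --max-time flags do; its effect can be sketched with a plain shell loop (try_thrice and flaky are hypothetical helpers written for illustration):

```shell
# Retry a command up to 3 times, the way aws asks curl to retry failed
# transfers. A "fail twice, then succeed" command is simulated with a
# counter kept in a temporary file.
try_thrice() {
    n=1
    while [ "$n" -le 3 ]; do
        "$@" && return 0
        n=$((n + 1))
    done
    return 1
}
counter="$(mktemp)"
echo 0 > "$counter"
flaky() {
    count=$(( $(cat "$counter") + 1 ))
    echo "$count" > "$counter"
    [ "$count" -ge 3 ]      # fails on tries 1 and 2, succeeds on try 3
}
try_thrice flaky && echo "succeeded after $(cat "$counter") tries"
```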
As of v1.28, the --md5 option computes the MD5 checksum for all objects and generates an error when a mismatch occurs.

Also as of v1.28, $? is set correctly in most cases, so that scripts can tell whether an "aws" transaction completed successfully.
It's not always a good idea to store your Secret Access Key on every server that needs access to Amazon Web Services.
So how does an EC2 instance access S3 data without using the Secret Access Key? By using aws in remote signature mode. Remote signature mode is a web service that signs EC2 and S3 requests remotely. The Secret Access Key is stored only on the server(s) that provide the signing service.
aws can then run on an EC2 instance, performing all operations as it normally would, except that it signs its requests using a remote call to the web service.
The web service can be protected with a password, by restricting access to known IP addresses, and/or by limiting what requests are allowed.
To use remote signature mode, two things have to happen. First, the web service needs to be configured by following these steps on the server that is to host it.
Install the signing script below, and configure the web server so that the script is password (or otherwise) protected and executable. It is also a good idea to use SSL (https) to access this service, so that the password is encrypted. To use a self-signed certificate, see the --insecure-signing setting.
The following PHP code implements a remote signing service:
Make sure that aws is in the web server's path, or else add the explicit path (probably /usr/bin/aws) to the script.
<?php
list($head, $body) = explode("\n\n", shell_exec("aws --sign=" . file_get_contents("php://input")));
header("Content-type: text/plain");
echo $body;
?>
If the service is working, signing an empty request returns a string such as VKL5xvaVRCTAsBTF63dCr5zc3I4=. (This particular string is the empty string, signed by my Secret Access Key.)
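What the service computes is an HMAC-SHA1 of the string-to-sign, base64-encoded; a sketch with openssl, using a dummy key (the actual signature depends on your real Secret Access Key):

```shell
# Sign an S3 string-to-sign the way the remote service does: HMAC-SHA1
# keyed by the Secret Access Key, then base64-encoded.
secret="DUMMY_SECRET_ACCESS_KEY"   # stand-in; never hard-code your real key
printf 'GET\n\n\n1172520135\n/' \
  | openssl dgst -sha1 -hmac "$secret" -binary \
  | base64
```

A SHA1 digest is 20 bytes, so the base64 signature is always 28 characters ending in a single "=".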
Next, you need to configure the client to use the remote signing service:
_
_
https://USERNAME:PASSWORD@timkay.com/aws/sign/

The first two lines contain underscore ("_") characters where your Access Key ID and Secret Access Key would normally go. The third line contains the URL of the signing service. If the service is password protected, the URL should contain the username and password, as indicated.
aws then reports each remote signing operation like this:

signing [GET\n\n\n1172520135\n/] via https://******:******@timkay.com/aws/sign/
The rest of the program contains the command-line interface, the
pretty-printer, and HMAC and SHA1 code, none of which is needed when
calling ec2() or s3() from your program. In that case, use
Digest::SHA1 (or Digest::SHA::PurePerl) and parse the XML however you like. (I have no idea if ec2() and s3() can be run in isolation any more.)
Thus, you can do the following sorts of operations:
Occasionally, system certificates are misconfigured, so that SSL to
Amazon fails at the server authentication step. (Note that this is a
different problem than when using self-signed certificates for remote
signing mode.) The option --insecure-aws tells aws to continue anyway.
If aws isn't working, you can test for this problem with
Things to do:
HMAC and SHA1 functions have been included, so that no perl modules
need be installed. The SHA1 function has not been thoroughly tested
and might fail under certain circumstances. (It has been tested on
x86 and x86/64, but there are many other possibilities.)
If SHA1 fails, use Digest::SHA1 or Digest::SHA::PurePerl instead. Use
CPAN to install one, uncomment the relevant "use" statement, and
remove the included sha1() function.
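One way to sanity-check the bundled sha1() (or any replacement module) is against a known test vector using the system's sha1sum; the standard FIPS-180 vector for "abc" is shown:

```shell
# Verify a SHA1 implementation against the standard "abc" test vector.
got="$(printf 'abc' | sha1sum | cut -d' ' -f1)"
want="a9993e364706816aba3e25717850c26c9cd0d89d"
[ "$got" = "$want" ] && echo "sha1 ok"
```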
aws supports streaming, even if Amazon S3 does not. The following incredibly useful command would work if Amazon were to add streaming support:

tar czf - . |s3put timkay/backup.tgz -
If you work primarily with a single bucket, you can set the S3_DIR environment variable to the bucket name:

export S3_DIR=timkay

aws will then prepend bucket/ to any name you specify in any command:

s3put foo	# put file foo to object timkay/foo
s3ls		# list objects in bucket timkay
s3cat foo	# display object timkay/foo
s3rm foo	# remove object timkay/foo

To deactivate S3_DIR, use

export -n S3_DIR
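The prefixing rule can be sketched as a tiny resolver (resolve_name is a hypothetical helper written for illustration, not the actual aws code):

```shell
# Mimic the S3_DIR behavior: when S3_DIR is set, names are resolved
# relative to that bucket; when unset, names are used as given.
resolve_name() {
    if [ -n "${S3_DIR:-}" ]; then
        echo "$S3_DIR/$1"
    else
        echo "$1"
    fi
}
S3_DIR=timkay
resolve_name foo          # timkay/foo
unset S3_DIR
resolve_name bucket/foo   # bucket/foo
```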
aws does not yet support ec2 Images and Image Attributes functionality, only because I have no need for them. Everything else is there.
I want to hear from you! I constantly improve this code, and I want to know what you need. I typically respond to emails within a few hours. My email address is in the copyright notice below.
Copyright 2007-2011 Timothy Kay (email@example.com)