The Amazon Web Services Command Line Interface (AWS CLI, for short) expands the options available to a user interacting with AWS. While a user can log in to the web console through a browser, not all options are exposed there; the AWS CLI makes the underlying API accessible from the terminal. Without it, extra code would be required to talk to the API directly in order to reach the full catalogue of AWS features.
Server-side encryption for S3 buckets is one example. A user can instruct AWS to encrypt objects with the AES-256 mechanism through the web console, but using keys from the Key Management Service (KMS) with S3 is not possible from the console; it requires the API. The aws-cli uses that API to expose features that would otherwise have to be accessed through raw REST calls.
Essentially, the user drives the API from the command line to configure options that are absent from the web console:
$ echo "super secret text" > encrypt_me.txt
$ aws s3 mb s3://calvinhp-blogdemo
$ aws s3 cp encrypt_me.txt s3://calvinhp-blogdemo/encrypt_me.txt
The file has been uploaded, but is it server-side encrypted?
$ aws s3api head-object --bucket calvinhp-blogdemo --key encrypt_me.txt
This will output the following JSON:
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Mon, 23 Jan 2017 14:50:25 GMT",
    "ContentLength": 18,
    "ETag": "\"93589c366fb8cbf03f9e81b302f9be70\"",
    "Metadata": {}
}
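If you script these checks, the thing to look for is the absence of a ServerSideEncryption field in the head-object response. A minimal sketch in Python, using the sample output above (the helper name is ours, not part of any AWS API):

```python
import json

# The head-object response shown above, embedded here for illustration.
head_response = json.loads(r'''
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Mon, 23 Jan 2017 14:50:25 GMT",
    "ContentLength": 18,
    "ETag": "\"93589c366fb8cbf03f9e81b302f9be70\"",
    "Metadata": {}
}
''')

def is_sse_encrypted(head: dict) -> bool:
    # S3 only includes a ServerSideEncryption field ("AES256" or
    # "aws:kms") when the object is actually encrypted at rest.
    return "ServerSideEncryption" in head

print(is_sse_encrypted(head_response))  # False: not encrypted yet
```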
It seems the bucket's default policy isn't to encrypt, but we can encrypt the file in place using a key from the AWS KMS service. Note that this will fail unless you have configured S3 with signature_version = s3v4 (we'll show that configuration below):
$ aws s3api copy-object --copy-source calvinhp-blogdemo/encrypt_me.txt --key encrypt_me.txt --bucket calvinhp-blogdemo --server-side-encryption aws:kms
This will output the following JSON:
{
    "CopyObjectResult": {
        "LastModified": "2017-01-23T14:56:26.000Z",
        "ETag": "\"771b72c25d2e5016465236bfad10e361\""
    },
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:l337h4ck3r:key/deadbeef-dead-beef-dead-beef00000075",
    "ServerSideEncryption": "aws:kms"
}
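The same in-place encrypting copy can also be driven from Boto3, which the CLI sits on top of. Here is a sketch; the helper function is ours, but copy_object is the real Boto3 S3 client call, taking the same parameters the CLI flags map to:

```python
# Hypothetical helper: build the copy_object arguments that rewrite an
# object onto itself with SSE-KMS enabled (mirrors the CLI command above).
def encrypt_in_place_kwargs(bucket: str, key: str) -> dict:
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},
        "ServerSideEncryption": "aws:kms",
    }

# With real credentials configured, you would then run:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.copy_object(**encrypt_in_place_kwargs("calvinhp-blogdemo",
#                                            "encrypt_me.txt"))
```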
Now running the same head-object command will show that the object is encrypted using KMS (which can't be done from the web console):
$ aws s3api head-object --bucket calvinhp-blogdemo --key encrypt_me.txt
This will output the following JSON:
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Mon, 23 Jan 2017 14:56:26 GMT",
    "ContentLength": 18,
    "ETag": "\"771b72c25d2e5016465236bfad10e361\"",
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:l337h4ck3r:key/deadbeef-dead-beef-dead-beef00000075",
    "Metadata": {}
}
The AWS CLI proves to be a handy utility for users who are comfortable on the command line and want to get more out of AWS. There is no need to go to the web interface to interact with the various resources, and the CLI can make your interactions with AWS faster because you are not clicking through multiple screens to find the correct settings.
To illustrate, consider the difference between Windows and Linux. Windows users typically click around to find the various configuration screens and options; Linux users can enter a few commands to avoid hunting through UIs. AWS used to its full potential through the CLI ends up feeling more like Linux than Windows: you accomplish the task directly, without the hassle of searching.
Obviously one of the biggest benefits of a CLI is speed. This holds true for the AWS APIs as well: the CLI allows changes to be made faster than logging into a web browser and clicking through screens. It also expedites automation by letting users script a set of commands or deployments (e.g. attaching EBS block devices, encrypting those devices, or setting up CloudFront in front of an ELB) so they can be repeated, saving the coder time.
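As a sketch of that kind of repeatable scripting, the snippet below builds one encrypting copy-object invocation per key, ready to hand to subprocess. The function name and the batch shape are ours; the CLI flags are the real ones used earlier in this post:

```python
# Illustrative only: generate a repeatable batch of AWS CLI commands
# that re-encrypt a list of S3 keys in place with SSE-KMS.
def build_encrypt_commands(bucket: str, keys: list[str]) -> list[list[str]]:
    return [
        ["aws", "s3api", "copy-object",
         "--copy-source", f"{bucket}/{key}",
         "--bucket", bucket,
         "--key", key,
         "--server-side-encryption", "aws:kms"]
        for key in keys
    ]

# Each command could then be executed with:
#   import subprocess
#   for cmd in build_encrypt_commands("calvinhp-blogdemo", ["encrypt_me.txt"]):
#       subprocess.run(cmd, check=True)
```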
The AWS CLI also saves the user time when it comes to installation: just pick the package management system for the platform of choice and let it do the work. Mac users can install the CLI through Homebrew, for example, and there are plenty of options for Windows and Linux users as well, since the tool is written in Python on the same foundation as the Boto3 API.
Conveniently, the CLI documentation is thorough, and its functions mirror the Boto3 API documentation. A user who knows Boto3 can easily transfer that knowledge to the CLI.
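The correspondence is mechanical: s3api subcommand names map to Boto3 method names with dashes turned into underscores. A trivial sketch (the helper is ours, just illustrating the naming convention):

```python
# CLI subcommands and Boto3 client methods differ only in
# dash-vs-underscore: `head-object` becomes `head_object`.
def cli_to_boto3(subcommand: str) -> str:
    return subcommand.replace("-", "_")

print(cli_to_boto3("head-object"))  # head_object
print(cli_to_boto3("copy-object"))  # copy_object
```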
Users can set environment variables to store their AWS account's access key ID and secret access key, but it is even simpler to create a CLI config file in ~/.aws/config and store all account profiles in one spot. Typically it is a good idea to set up profiles for whichever regions (e.g. U.S. West, U.S. East) you want to connect to, along with the credentials to use for each profile:
[default]
region=us-east-1
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
    signature_version = s3v4

[profile personal]
region=us-east-2
s3 =
    signature_version = s3v4
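Since the config file is plain INI, it's easy to inspect or generate programmatically. For illustration, Python's standard configparser can read the format shown above (the CLI itself uses its own loader, but the structure is the same):

```python
import configparser

# The same config as above, embedded for illustration; the CLI
# normally reads this from ~/.aws/config.
CONFIG_TEXT = """
[default]
region=us-east-1
aws_access_key_id=foo
aws_secret_access_key=bar
s3 =
    signature_version = s3v4

[profile personal]
region=us-east-2
s3 =
    signature_version = s3v4
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

print(parser["default"]["region"])           # us-east-1
print(parser["profile personal"]["region"])  # us-east-2
```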
Profiles make it possible to manage multiple AWS accounts simultaneously: a default one, a personal account, or several company accounts. When running CLI commands, the user can specify which profile to use and can switch back and forth without having to reset environment variables. Just set up the .aws folder, enter credentials and profiles, then run aws-cli commands directly:
$ aws --profile personal s3 ls
2014-03-13 16:53:46 calvinhp-foo
2017-01-23 09:50:22 calvinhp-blogdemo
2016-11-29 16:25:58 calvinhp-bar
The same goes for listing the keys in a bucket:
$ aws --profile personal s3 ls s3://calvinhp-blogdemo
2017-01-23 09:56:26 18 encrypt_me.txt
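That listing output is whitespace-delimited and easy to consume from a script. A small sketch based on the column layout shown above (the function is ours, for illustration only):

```python
# Parse one line of `aws s3 ls s3://bucket` output into
# (timestamp, size, key). Column layout assumed from the sample above;
# maxsplit=3 keeps keys containing spaces intact.
def parse_ls_line(line: str):
    date, time, size, key = line.split(maxsplit=3)
    return (f"{date} {time}", int(size), key)

print(parse_ls_line("2017-01-23 09:56:26 18 encrypt_me.txt"))
# ('2017-01-23 09:56:26', 18, 'encrypt_me.txt')
```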
In addition, cp can create a copy of an object within an S3 bucket:
$ aws --profile personal s3 cp s3://calvinhp-blogdemo/encrypt_me.txt s3://calvinhp-blogdemo/encrypt_me_2.txt
copy: s3://calvinhp-blogdemo/encrypt_me.txt to s3://calvinhp-blogdemo/encrypt_me_2.txt
$ aws --profile personal s3api head-object --bucket calvinhp-blogdemo --key encrypt_me_2.txt
It is important to note that the new copy of the object is not using SSE. You would need to use the s3api copy command to maintain that as an option:
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Mon, 23 Jan 2017 15:11:45 GMT",
    "ContentLength": 18,
    "ETag": "\"93589c366fb8cbf03f9e81b302f9be70\"",
    "Metadata": {}
}
The list doesn't end there. The options are numerous: the AWS CLI and its s3api subcommand can do anything S3 can do through the console, and then some.
One great way to explore is the built-in help. You can request help for any command like this:
$ aws s3 help
And for subcommands like this:
$ aws s3api head-object help
One word of caution: the CLI documentation can sometimes lag behind the Boto3 library. I've run into a couple of cases where commands didn't work as expected because of advances in the code that weren't yet reflected in the documentation. Knowledge of the Boto3 docs works in your favor here, since there may be new features available that aren't fully documented.