An Ultimate Dummies Guide to using S3 Storage with EC2

Rajesh Rajamani
7 min read · Mar 4, 2021
EC2 and S3 companionship

This is the third article in my series on Cloud for Dummies. You can find the other articles in this series here.

If you already understand a bit of S3 and EC2, feel free to proceed. If you don’t, it’s a good idea to read the articles above first to get the hang of both services.

Our Goal for this article:

We are going to configure an IAM (Identity and Access Management) role for an EC2 instance so that it can access S3 buckets, letting us quickly lift and shift data between EC2 and S3.

We will also discuss some use cases for using S3 as a storage point from within EC2.

Prerequisites:

  1. An AWS account, preferably with root access (for the IAM setup)
  2. An EC2 instance (follow this article from my series)
  3. An S3 bucket containing data (follow this article from my series until you create an S3 bucket)

Assuming that you have all the above, let’s proceed further.

Step 1:

a. Navigate to IAM to create a role that enables access to S3.

Locating IAM within AWS Console Search

b. Under IAM, click on Roles.

Roles in IAM AWS

Step 2:

Create a role that provides access to S3.

a. Click on Create Role

You can see that a role can grant access to many things within AWS. However, we want an EC2 instance to be able to access the S3 bucket, so we select “AWS service” as the trusted entity that will benefit from this role. You can already spot that EC2 is a common use case when it comes to creating role-based access control for S3.

b. Click on EC2 (after you click, it will be highlighted as follows).

c. Click on Permissions

d. Select the policy AmazonS3FullAccess. You can locate it by typing s3 in the Filter policies box above.

It is worth noting that this policy is an “AWS managed” policy. More about this here.

Click on the policy and scroll down to the bottom of the page to add the tags.

Now provide the tag details. It’s always a good idea to ensure that the tagging for each resource you create in your cloud is reasonably clear.
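For reference, the trust policy behind this kind of role (which the console generates for you when you pick EC2 as the trusted entity, so you never need to write it yourself) looks roughly like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

It simply says that the EC2 service is allowed to assume this role on behalf of your instance.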

Step 3:

Save the role with a suitable name, for example EC2_S3_FullAccess.


Step 4:

Verify that the role is available in your list of roles.

Roles in IAM AWS

Type ec2_ in the search box and you should see the role.

If you cannot see the role at this stage, something has gone wrong. Please repeat steps 1–4 before proceeding further.

Now it’s time to configure an EC2 instance to have access to your S3 buckets.

At this stage, recall the S3 bucket that you created at the beginning of this exercise. I’m going to use a bucket named ihmeforecastsdump (some statistical data that I collected during the COVID lockdown period last year using Lambda; check out the article below about how I did it if you are interested).

Step 5:

Associate the newly created role with an EC2 instance. By now you should already have an EC2 instance for this tutorial.

Click on Actions > Security > Modify IAM Role

Now select the role that you created earlier.
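As a side note, the same console action can also be done from the AWS CLI. This is just a sketch with a hypothetical instance ID; substitute your own:

aws ec2 associate-iam-instance-profile --instance-id <your-instance-id> --iam-instance-profile Name=EC2_S3_FullAccess

Behind the scenes, attaching a role to an instance actually attaches an instance profile with the same name, which the console creates for you automatically.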

Step 6:

Verifying S3 access from the EC2 instance.

For this, there are two requirements:

a. You should be able to access the EC2 instance via SSH. If you have not yet done that, refer to my article on EC2 from above.

b. You should install the AWS CLI on your EC2 instance and create a working directory. You can do this with:

sudo apt install awscli
mkdir dumps
cd dumps

Fire the following command to list the contents of your desired S3 bucket:

aws s3 ls <your s3 bucket name>

If there are any files available in your bucket then they should show up now in the console.

Below you can see the files I had in my bucket being listed.

Step 7:

Now let’s copy all the files from the S3 bucket to your EC2 instance. Observe that I have used a dot at the end of this command to denote that the files should be copied into the folder from which the command is issued, which is the folder named dumps in my case (from step 6).

aws s3 sync s3://<your s3 bucket> .

Check if the files are visible in your EC2 instance folder.

ls -l

Success! You now have the files from S3 on your EC2 instance.

Step 8:

Copy a local file from EC2 to S3.

Create a simple text file with the following command. This creates a text file with the name testfilefors3.txt in the same folder.

echo "This is a test file for EC2 to S3 Copy" > testfilefors3.txt

Now issue this command to copy the file from EC2 to the target S3 bucket.

aws s3 cp testfilefors3.txt s3://ihmeforecastsdump

The file is now copied to S3.

Let’s verify with the S3 console.

At this stage, if your file is not visible in S3, please verify all the steps to ensure that you have not missed anything.

Why S3 as Storage?

S3 is versatile, cheap, and scalable storage that can be used in a variety of ways to complement native storage such as EBS or EFS.

Note that I said S3 complements, but never replaces, EBS or EFS.

AWS CLI S3 Cheatsheet
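Here is a quick reference of common AWS CLI S3 commands used in this article and beyond. The bucket and file names are placeholders; substitute your own:

aws s3 mb s3://<bucket>              — create a bucket
aws s3 ls                            — list your buckets
aws s3 ls s3://<bucket>              — list objects in a bucket
aws s3 cp file.txt s3://<bucket>     — upload a file
aws s3 cp s3://<bucket>/file.txt .   — download a file
aws s3 sync s3://<bucket> .          — sync a bucket to the current folder
aws s3 rm s3://<bucket>/file.txt     — delete an object
aws s3 rb s3://<bucket> --force      — delete a bucket and its contents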

Some use cases to consider S3 as the storage:

  1. Snapshots of your disks. You can automate disk snapshots to be stored in S3 as part of a disaster-recovery arrangement.
  2. Log storage. You can ship all of your log files to S3 and apply log analytics to them at a later date.
  3. Object storage. Let’s assume you run a web page handling a lot of pictures that need to be processed by the EC2 instance. With this setup, you can simply store the files in S3 and work with them whenever the need arises.

If you want to stay up to date with my articles on Cloud, Solution Architecture, and other emerging technologies, you can consider the following options.

Alternatively you can also follow me on Medium.
