Post by Mathew Rowlands, AWS Senior Solutions Architect and Antoine Brochet, AWS Senior Business Development Manager

In our roles as AWS Startup Solutions Architects and Business Development Managers, we constantly work with startups to help them architect their cloud infrastructure from the ground up and to optimize their existing infrastructure so it uses AWS services as efficiently as possible. We have recently seen an increased number of questions from startups about how to effectively reduce their infrastructure footprint and spend during periods of reduced activity whilst maintaining their services. It is a best practice to periodically review your startup’s current setup and reassess the resources you provisioned in the past, both to see if their usage patterns have changed and to see if there is an updated feature, or even a more optimal service, that you can take advantage of.

In the first part of this blog post series, we will look at four best practices to help reduce your AWS spend with quick wins, each achievable in under two hours. Before jumping into this post, be sure to watch the webinar “6 Ways to Reduce your AWS Bill,” which also provides nifty advice.

Get a deep and accurate view of your AWS resources

Without visibility into your deployed estate, it is difficult to make sure you are focusing on the right areas for cost savings. To get a complete view, we recommend enabling AWS Cost Explorer and tagging all of your resources so that you can filter and drill into your spend in detail in the billing console. Typical tagging strategies work at the stack level (e.g. Dev/Test/Prod), the user level, the cost center level, or the application level.
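As a sketch of what such a strategy can look like in practice, the snippet below builds a consistent tag set in the shape the EC2 API expects. The `Stack`, `CostCenter`, and `Application` keys, and the boto3 call left commented out, are illustrative choices rather than required names:

```python
def cost_allocation_tags(stack, cost_center, application):
    """Build a consistent cost-allocation tag set in the format
    the EC2 API expects (a list of Key/Value dicts)."""
    return [
        {"Key": "Stack", "Value": stack},            # e.g. Dev/Test/Prod
        {"Key": "CostCenter", "Value": cost_center},
        {"Key": "Application", "Value": application},
    ]

tags = cost_allocation_tags("Prod", "platform", "checkout-api")

# With boto3, the same tag set could then be applied to resources:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_tags(Resources=["i-0123456789abcdef0"], Tags=tags)
```

Remember to activate your tags as cost allocation tags in the billing console so they show up as filters in Cost Explorer.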

A Cost Explorer view to analyze usage per instance type

To enable and access the AWS Cost Explorer, first access the AWS billing console by selecting “My Account” in the AWS Management Console. Then select “Launch Cost Explorer” and “Cost Explorer.”

From there, you will be able to create and save your reports. We recommend looking at daily costs in order to understand periodic usage patterns and baselines.
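As an illustration of how such a baseline could be derived programmatically, the helper below summarizes daily unblended costs from data shaped like the `ResultsByTime` field of a Cost Explorer `GetCostAndUsage` response; the sample figures are made up:

```python
def daily_cost_baseline(results_by_time):
    """Summarize daily unblended costs (in USD, rounded to cents)
    from the ResultsByTime list of a GetCostAndUsage response."""
    costs = [float(day["Total"]["UnblendedCost"]["Amount"])
             for day in results_by_time]
    return {"average": round(sum(costs) / len(costs), 2),
            "peak": round(max(costs), 2)}

# Two days of sample data in the API's response shape:
sample = [
    {"TimePeriod": {"Start": "2020-03-01", "End": "2020-03-02"},
     "Total": {"UnblendedCost": {"Amount": "12.40", "Unit": "USD"}}},
    {"TimePeriod": {"Start": "2020-03-02", "End": "2020-03-03"},
     "Total": {"UnblendedCost": {"Amount": "15.60", "Unit": "USD"}}},
]
print(daily_cost_baseline(sample))  # {'average': 14.0, 'peak': 15.6}
```

The same list can be fetched live with boto3’s `ce` client via `get_cost_and_usage` with `Granularity="DAILY"`.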

Right size your compute services

Once this visibility and monitoring is in place, you can start looking for ways to optimize your deployed estate that are highly effective for your use case.

A great first step in this direction is to right size your compute resources. This means analyzing the recent utilization (in terms of CPU or RAM) of your Amazon EC2 instances and re-evaluating their families and sizes. We recommend using the Resource Optimization Recommendations tool that we launched in November 2019. The tool provides insights and recommendations about underutilized or idle instances. For more information, take a look at the User Guide.

The Resource Optimization Recommendations tool helps identify resources to shut down or downsize

There will be two main action points from this report:

(1) Stop or terminate idle instances. You or your team members may have launched instances for tests or abandoned projects and forgotten to terminate them. The Resource Optimization Recommendations will detect idle instances, which should be first in line to be investigated and terminated if no longer needed.

(2) Resize underutilized instances. You may be running oversized instances for some of your workloads. The Resource Optimization Recommendations will analyze the usage patterns of your EC2 instances (in terms of CPU and RAM) and recommend candidates for downsizing (e.g. from an M5.xlarge to an M5.large) or for a family change (e.g. from the compute optimized C5 family to the memory optimized R5 family).
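To make the two action points concrete, here is a minimal sketch of the kind of triage involved. The CPU thresholds are our own illustrative assumptions, not the ones the Resource Optimization Recommendations tool uses, and in practice you would also look at RAM and network:

```python
def rightsizing_action(avg_cpu_percent, idle_below=2.0, low_below=40.0):
    """Rough triage of an EC2 instance from its average CPU utilization."""
    if avg_cpu_percent < idle_below:
        return "stop-or-terminate"   # likely idle
    if avg_cpu_percent < low_below:
        return "downsize"            # e.g. m5.xlarge -> m5.large
    return "keep"

# Hypothetical fleet mapping instance names to average CPU %:
fleet = {"i-web": 1.2, "i-api": 23.0, "i-batch": 71.5}
print({i: rightsizing_action(cpu) for i, cpu in fleet.items()})
# {'i-web': 'stop-or-terminate', 'i-api': 'downsize', 'i-batch': 'keep'}
```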

Once you have optimized your EC2 instance fleet, consider using Amazon EC2 Spot Instances (spare compute capacity available in the AWS cloud) for your fault-tolerant workloads in order to save up to 90% in compute cost versus the On-Demand price. If you are running containerized workloads on Amazon ECS with Fargate, we recently launched AWS Fargate Spot, which saves up to 70% versus the On-Demand price. Take a look at this article for a comprehensive walkthrough.
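The headline discounts boil down to simple arithmetic; the hourly prices below are made-up examples, not real quotes:

```python
def spot_savings_percent(on_demand_hourly, spot_hourly):
    """Percentage saved by running on Spot instead of On-Demand."""
    return 100 * (1 - spot_hourly / on_demand_hourly)

# A hypothetical instance at $0.20/hour On-Demand, picked up
# at $0.03/hour on the Spot market:
print(round(spot_savings_percent(0.20, 0.03)))  # 85
```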

Re-evaluate your Amazon Elastic Block Store (EBS) storage strategy

In order to cut costs with Amazon EBS, we will look at five potential optimization strategies:

(1) Reduce your EBS volume size by keeping the amount of data written to your volumes to the bare minimum. If possible, use EBS for your operating system and applications only, and use other data stores, such as Amazon Simple Storage Service (Amazon S3), for the rest. A quick win here is to send instance logs to Amazon CloudWatch, or to the ELK stack provided as part of Amazon Elasticsearch Service (Amazon ES), instead of storing them on EBS.

(2) Reconsider the size of your existing volumes, as some of them may turn out to be over-provisioned. In order to reduce the size of an EBS volume, follow these steps in the EC2 console:

a. Stop your running instance
b. Under the EBS volumes tab, take a snapshot of the EBS volume to resize
c. Copy the snapshot ID to your clipboard
d. Create a new, smaller volume and enter the snapshot ID when prompted
e. Detach the old volume from the stopped instance
f. Attach the new volume to the stopped instance
g. Restart the stopped instance
h. Delete the old, unattached volume when ready

More information about creating a new EBS volume from a snapshot can be found in this guide.

(3) Audit your detached EBS volumes in the AWS Management Console. If a volume and its data are no longer needed, delete the volume. If you will need the data at a later stage, create a snapshot of the volume and delete the volume to save cost (e.g. General Purpose SSD (gp2) volumes cost $0.10 per GB-month of provisioned storage, while EBS snapshots cost $0.05 per GB-month of data stored).
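Using the two list prices just quoted (gp2 at $0.10 and snapshots at $0.05 per GB-month), the saving from replacing a detached volume with a snapshot is easy to estimate; note that snapshots are billed on data stored, so the snapshot can be much smaller than the provisioned volume:

```python
GP2_PER_GB_MONTH = 0.10       # provisioned gp2 storage
SNAPSHOT_PER_GB_MONTH = 0.05  # EBS snapshot storage

def monthly_saving(volume_gb, snapshot_data_gb):
    """Monthly saving from deleting a detached gp2 volume
    and keeping only a snapshot of its data."""
    return (volume_gb * GP2_PER_GB_MONTH
            - snapshot_data_gb * SNAPSHOT_PER_GB_MONTH)

# A 100 GB detached volume actually holding 40 GB of data:
print(round(monthly_saving(100, 40), 2))  # 8.0
```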

(4) Audit your existing snapshots. You may have accumulated a number of snapshots over time through automated backups, and some of them may turn out to be unnecessary.

(5) Automate the management of your backups and snapshots, and set policies to automatically delete outdated snapshots so they do not pile up again.
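A deletion policy like this can be sketched as a simple age filter. The 30-day retention window is an arbitrary example, and the snapshot dicts below loosely mirror the shape of an EC2 `describe_snapshots` response; in practice, a managed service such as Amazon Data Lifecycle Manager can enforce this for you:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, retention_days=30, now=None):
    """Return the IDs of snapshots older than the retention window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# Two sample snapshots evaluated against a fixed "now":
now = datetime(2020, 4, 1)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": datetime(2020, 1, 15)},
    {"SnapshotId": "snap-new", "StartTime": datetime(2020, 3, 20)},
]
print(snapshots_to_delete(snaps, retention_days=30, now=now))  # ['snap-old']
```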

Use S3 and EFS storage classes effectively

Amazon S3 is an industry-leading object storage service that provides several storage classes, which support different data access levels at corresponding rates. If part of your data is infrequently accessed, or you only need reduced availability, you can save up to 50% on your storage cost. So, where to start?
If you only have a few minutes, start by switching from the Amazon S3 Standard storage class to the S3 Intelligent-Tiering storage class. For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier. If the data is accessed later, it is automatically moved back to the frequent access tier.

Switching from Amazon S3 Standard Tier to Intelligent-Tiering can be done in minutes

To automate these best practices and realize savings with any new data written within your S3 buckets, we recommend setting up automated storage lifecycle policies. You will find a walkthrough here.
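As an example, a lifecycle rule along the following lines transitions every new object to S3 Intelligent-Tiering one day after it is written. It is expressed in the shape boto3’s `put_bucket_lifecycle_configuration` accepts, and the bucket name, rule ID, and live call (left commented out) are illustrative:

```python
lifecycle_rule = {
    "ID": "move-to-intelligent-tiering",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},  # an empty prefix matches every object
    "Transitions": [
        {"Days": 1, "StorageClass": "INTELLIGENT_TIERING"},
    ],
}

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-startup-data",
#     LifecycleConfiguration={"Rules": [lifecycle_rule]},
# )
```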

Amazon Elastic File System (Amazon EFS) is an elastically scaling shared file service, often used by startups to host websites built with interpreted languages like PHP. If you deployed Amazon EFS a while ago, you might not be aware that we launched an Infrequent Access storage class, offering up to 92% savings compared to the Standard storage class. You may well find that your website runs mostly from memory once loaded by the server. It is also worth investigating a network cache, like Amazon CloudFront, to absorb requests back to the service via an aggressive caching policy for assets or for websites served from EFS. Here’s a walkthrough for enabling lifecycle management in EFS.
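To see what that can mean for a typical file system, here is a back-of-the-envelope blended cost. The per-GB prices are the us-east-1 list prices at the time of writing and may differ in your Region, and IA per-access charges are deliberately ignored:

```python
EFS_STANDARD_PER_GB_MONTH = 0.30  # EFS Standard storage
EFS_IA_PER_GB_MONTH = 0.025       # EFS Infrequent Access storage

def efs_monthly_storage_cost(total_gb, fraction_infrequent):
    """Blended monthly storage cost when lifecycle management has
    moved a fraction of the data to the Infrequent Access class."""
    ia_gb = total_gb * fraction_infrequent
    standard_gb = total_gb - ia_gb
    return (standard_gb * EFS_STANDARD_PER_GB_MONTH
            + ia_gb * EFS_IA_PER_GB_MONTH)

# A 500 GB file system with 80% of files untouched for 30+ days,
# versus $150/month if everything stayed in Standard:
print(round(efs_monthly_storage_cost(500, 0.8), 2))  # 40.0
```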

Conclusion and next steps

In this post, we showed you tips and tricks for quickly reducing your AWS cost, with a focus on compute and storage resources. Following these steps can help your startup save hundreds or even thousands of dollars. Startups move and change fast, and so does your AWS infrastructure. We recommend frequently auditing your AWS stack and applying these best practices to keep your costs low. In the next part of the “How to Scale Down your AWS Architecture” series, we will cover database, caching, and networking. Until then, feel free to share your best practices for keeping AWS costs low in the comments.