Organizations use multiple environments, each with different security and compliance controls, as part of their deployment pipeline. Following the principle of least privilege, production environments have the most restrictive security and compliance controls. They tightly limit who can access the environment and which actions each user (or principal) can perform. Development and test environments also have security and compliance controls, but they are typically less restrictive. For example, they might limit users to a subset of AWS services that have been approved for use by the security and compliance teams. Production, development, and test environments are also typically shared environments in which multiple users and teams are building, deploying, and managing resources. To support customers who are using environments like these, AWS publishes best practices for setting up a multi-account AWS environment.

Many organizations need another type of environment: one where users can build and innovate with AWS services that might not be permitted in production or development/test environments because controls have not yet been implemented. These more permissive environments are commonly referred to as sandbox environments. To protect the critical assets of your organization, you need a clearly defined usage policy for sandbox accounts. In this blog post, we outline best practices that will help you create secure and robust sandbox accounts in AWS.

Your sandbox usage policy

Your sandbox usage policy should be an agreement between your development teams and security and compliance teams. In exchange for more open AWS permissions, your developers agree to work within guardrails defined by the security and compliance teams.

Here are some areas to cover in your usage policy:

  • Data classification: Specify which classes of data are allowed in sandbox accounts. Many organizations prohibit the use of customer data (names, email addresses, phone numbers, payment information, and so on) in sandbox environments.
  • Network connectivity: Specify whether networks in the sandbox environment are allowed to connect to networks in other or shared services environments. If you decide to restrict network connectivity, set up guardrails to alert the appropriate groups when peering connections, AWS Transit Gateway attachments, or AWS PrivateLink connections are established. We’ll discuss how to implement guardrails for these types of activities later in the post.
  • Access control: Specify who has access to sandbox AWS accounts. Accounts can either be dedicated to a single developer or shared by a team. Using individual accounts simplifies cost reporting and makes it much easier to identify resource owners. Shared accounts are simpler to centrally monitor and manage. Your usage policy should also specify that a cross-account AWS Identity and Access Management (IAM) role must be implemented in each sandbox account to give security and compliance teams the access required to monitor resources and activity in the account.
  • Tagging policy: Even in a sandbox account, tagging your resources is critical. Tagging helps you identify who created resources, who is responsible for them, and how costs should be allocated. Your policy should specify which tag keys should be applied to all resources in sandbox environments.
  • Resource lifecycle policy: Specify how long resources can persist in a sandbox account. That’s a good way to prevent them from becoming shadow development or even production environments. Organizations often implement lifecycle policies to shut down resources after a specified number of days. We’ll discuss how to implement lifecycle policies later in the post.
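To illustrate the access-control point above, here is a sketch of the trust policy for a cross-account IAM role that security and compliance teams could assume into each sandbox account. The account ID and role name are placeholders; substitute the values used by your organization:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/SecurityComplianceAuditors"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attach a read-only permissions policy to the role, for example the AWS managed SecurityAudit policy, so that security teams can inspect resources and activity without being able to modify them.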

Implement guardrails

To prevent unwanted actions, apply appropriate guardrails to your sandbox accounts. When you design and implement guardrails, be sure to reference your sandbox usage policy. You can use AWS Control Tower to manage the implementation of preventative and detective guardrails across a multi-account sandbox environment.

AWS services for security guardrails

You can use a number of AWS security services to protect your sandbox environment from security threats. For example, Amazon Macie is useful for detecting personally identifiable information (PII) and other sensitive data types in your Amazon Simple Storage Service (Amazon S3) buckets. That’s why it’s important to classify which types of data are suitable for your sandbox environment in your sandbox usage policy. Using Macie to detect PII will help you maintain your data classification policy. You might also want to use Amazon GuardDuty for threat monitoring, Amazon Inspector for vulnerability assessment, and Amazon Detective for identifying the root cause of security issues.

You can also use the following services to create custom guardrails:

AWS Config

As a detective control, you can use AWS Config to implement managed and custom rules that monitor resource configurations across your AWS account. AWS Config evaluates the configuration of your AWS resources against the settings you define. For example, you can implement a managed rule to verify that all S3 buckets have server-side encryption enabled. If an S3 bucket in your sandbox environment does not have server-side encryption enabled, AWS Config detects and flags the noncompliant resource. You can optionally configure auto-remediation through AWS Systems Manager to resolve the noncompliant resource. For more information, see the Amazon S3 bucket compliance using the AWS Config auto remediation feature blog post. AWS Config conformance packs can also help you implement best practices and custom controls across your sandbox environments.
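As a sketch of the S3 encryption example, the snippet below deploys the AWS Config managed rule for server-side encryption with boto3. The rule name is an arbitrary placeholder; the deploy step requires AWS credentials with `config:PutConfigRule` permission:

```python
RULE_NAME = "sandbox-s3-sse-enabled"  # hypothetical rule name


def build_s3_sse_rule():
    """Build the ConfigRule request body for put_config_rule."""
    return {
        "ConfigRuleName": RULE_NAME,
        "Description": "Checks that S3 buckets have server-side encryption enabled.",
        "Source": {
            "Owner": "AWS",  # AWS-managed rule, not a custom Lambda rule
            "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }


def deploy_rule():
    # boto3 call is kept out of module scope; requires AWS credentials.
    import boto3

    boto3.client("config").put_config_rule(ConfigRule=build_s3_sse_rule())
```

You could deploy the same rule through an AWS Config conformance pack or AWS Control Tower instead of calling the API directly.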

AWS CloudTrail and Amazon EventBridge

You can use AWS CloudTrail as an auditing tool to continuously monitor API calls in your AWS environment. You can use Amazon EventBridge to create rules that trigger on the information captured by CloudTrail. For example, you can set up an EventBridge rule to trigger when CloudTrail records an API call to create a VPC peering connection. The EventBridge rule can use Amazon Simple Notification Service (Amazon SNS) as a target to notify your development team of the new peering, thus setting up detective controls for your sandbox environment.
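The VPC peering example above can be sketched as follows. The event pattern matches CloudTrail records of `CreateVpcPeeringConnection` calls; the rule name is a placeholder, and the SNS topic ARN is assumed to exist already:

```python
import json


def peering_event_pattern():
    """EventBridge pattern matching CloudTrail records of new VPC peering connections."""
    return {
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": ["CreateVpcPeeringConnection"],
        },
    }


def deploy_rule(sns_topic_arn):
    # Requires AWS credentials; the SNS topic ARN is supplied by the caller.
    import boto3

    events = boto3.client("events")
    events.put_rule(
        Name="sandbox-vpc-peering-alert",  # hypothetical rule name
        EventPattern=json.dumps(peering_event_pattern()),
        State="ENABLED",
    )
    events.put_targets(
        Rule="sandbox-vpc-peering-alert",
        Targets=[{"Id": "notify-team", "Arn": sns_topic_arn}],
    )
```

The same pattern works for other API calls named in your usage policy, such as Transit Gateway attachments, by changing `eventName`.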

AWS Organizations

Service control policies (SCPs) are a feature of AWS Organizations that allows you to centrally manage permissions across your organization. You should implement SCPs as a security guardrail to prevent unwanted actions.

Here are some example SCPs to consider implementing in your sandbox environment:

You can also allow access only to the services that you want your developers to use rather than denying access to specific services.
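As one illustration, here is a sketch of an SCP that supports the network-connectivity guardrail discussed earlier by denying VPC peering actions in sandbox accounts. The statement ID is arbitrary, and your security team’s actual policy will differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyVpcPeering",
      "Effect": "Deny",
      "Action": [
        "ec2:CreateVpcPeeringConnection",
        "ec2:AcceptVpcPeeringConnection"
      ],
      "Resource": "*"
    }
  ]
}
```

For the allow-list approach, you would instead replace the default FullAWSAccess policy with an SCP that allows only the approved services.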

Tracking and managing costs

Given the exploratory nature of sandbox accounts, it’s wise to implement budget controls on them. Budget controls allow you to view and manage spending across the AWS accounts in your organization.

Here are two best practices for limiting spending on your sandbox accounts:

  • Create a spending budget using AWS Budgets.
    You can plan how much you want to spend on a service (cost) or how much you want to use on one or more services (usage). You can also set up optional notifications to warn you if you exceed, or are about to exceed, your allotted amount for cost or usage budgets.
  • Use cost allocation tags.
    When you tag your AWS resources, it’s much easier to organize, categorize, and track your AWS costs. Cost allocation tags are useful for tracking expenditure on exploratory workloads in your sandbox account.
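A monthly cost budget with an email alert, as described in the first bullet above, can be sketched with boto3 as follows. The budget name, threshold, and email address are placeholder assumptions:

```python
def build_monthly_budget(limit_usd):
    """Request body for the AWS Budgets create_budget call."""
    return {
        "BudgetName": "sandbox-monthly-budget",  # hypothetical name
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }


def create_sandbox_budget(account_id, limit_usd):
    # Requires AWS credentials with budgets:CreateBudget permission.
    import boto3

    boto3.client("budgets").create_budget(
        AccountId=account_id,
        Budget=build_monthly_budget(limit_usd),
        NotificationsWithSubscribers=[
            {
                "Notification": {
                    "NotificationType": "ACTUAL",
                    "ComparisonOperator": "GREATER_THAN",
                    "Threshold": 80.0,  # warn at 80 percent of the budget
                    "ThresholdType": "PERCENTAGE",
                },
                "Subscribers": [
                    {"SubscriptionType": "EMAIL", "Address": "team@example.com"}
                ],
            }
        ],
    )
```

You can repeat the call per sandbox account, or create usage budgets (`"BudgetType": "USAGE"`) to cap consumption of specific services.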

AWS Cost Explorer

For inspection of your cost and usage, consult AWS Cost Explorer in the AWS Management Console. It displays your past usage and cost, forecasts expected spend, and provides recommendations for ways to optimize costs. AWS Cost Explorer is useful for tracking costs across your sandbox accounts.

Resource lifecycle management

If you do not implement lifecycle management policies, your sandbox environment can easily become an undocumented and unsupportable production environment. A defined lifecycle policy allows you to control costs, prevent unauthorized use of sandbox environments, and set expectations for sandbox account users.

For example, you might want EC2 instances to be shut down or S3 buckets to be deleted after a fixed number of days. You might also want the environment to reset to a baseline set of resources on a periodic basis. Your approach to automating these policies will vary, depending on how resources are created.

If you use AWS CloudFormation stacks to create resources in the sandbox environment, see the approach described in the scheduling automatic deletion of AWS CloudFormation stacks blog post.

If you are using the console or API operations to create resources, you should implement AWS Lambda functions to determine when resources were created and when they should be deleted. There are a number of ways to do this. One way is to set a tag key-value pair for when resources should be deleted and then implement a Lambda function to delete them. Another way is to use AWS Config (for the services it supports) to find when resources were created and determine whether they should be deleted.
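The tag-based approach can be sketched as a scheduled Lambda function like the one below, which stops EC2 instances whose expiry tag date has passed. The tag key is a hypothetical convention; a real implementation would also cover other resource types named in your lifecycle policy:

```python
from datetime import datetime, timezone

EXPIRY_TAG = "sandbox-delete-after"  # hypothetical tag key; value is an ISO date


def is_expired(tags, now=None):
    """Return True if the resource's expiry tag date has passed."""
    now = now or datetime.now(timezone.utc)
    for tag in tags:
        if tag["Key"] == EXPIRY_TAG:
            expiry = datetime.fromisoformat(tag["Value"]).replace(tzinfo=timezone.utc)
            return now >= expiry
    return False  # resources without the tag are left alone here


def lambda_handler(event, context):
    # boto3 is provided by the Lambda runtime; the function's role needs
    # ec2:DescribeInstances and ec2:StopInstances permissions.
    import boto3

    ec2 = boto3.client("ec2")
    to_stop = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "tag-key", "Values": [EXPIRY_TAG]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if is_expired(instance.get("Tags", [])):
                    to_stop.append(instance["InstanceId"])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return {"stopped": to_stop}
```

You can invoke the function on a daily schedule with an EventBridge scheduled rule.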

If you want to reset the environment to a baseline set of resources (for example, a baseline VPC, required IAM roles, configuration S3 buckets, and so on) on a periodic basis, you can use a tool like aws-nuke. When you run the tool, it deletes or shuts down more than 325 AWS resource types automatically. You can also apply filters to preserve your baseline resource list.

Conclusion

In this blog post, we explained the role sandbox accounts play in your AWS environment. For your development teams, sandbox accounts provide freedom to test and validate AWS services in a safe setting. For your security and compliance teams, a sandbox account usage policy and associated guardrails help to make sure that your sandbox environments have appropriate levels of isolation and data classification. By implementing resource lifecycle management in your sandbox accounts, you can minimize costs and prevent unauthorized usage.


About the authors

Sonakshi Pandey is a Solutions Architect at Amazon Web Services, where she designs large-scale distributed solutions with a focus on migrating applications, software, and services to the AWS platform. Prior to her cloud journey, she worked as a software engineer for the Amazon Forecasting Platform team.

Nisha Nadkarni is a Startup Solutions Architect based in Seattle, Washington. She provides customers with technical guidance to build well-architected solutions on the AWS cloud platform.

Jeff Stockamp is a Senior Solutions Architect based in Seattle, Washington. Jeff helps guide customers as they build well-architected applications and migrate workloads to AWS. Jeff holds both the AWS Certified Professional Solutions Architect and DevOps Engineer certifications, as well as the Networking and Security Specialty certifications.