By Mark Kriaf, Partner Solutions Architect – AWS
By Laureen Harris, Tech Content Editor – CircleCI
By Angel Rivera, Developer Advocate – CircleCI
Arm processors and architectures are becoming widely available as development teams adopt them as compute nodes in many application infrastructures.
Organizations turn to Arm-based servers when looking for a cost-effective way to improve performance for their common workloads like microservices, application servers, and databases.
CircleCI is a continuous integration and continuous delivery (CI/CD) platform that automates the build, test, and deploy processes for teams looking to do more at scale. Customers who need Arm-based compute can already use CircleCI self-hosted runners based on AWS Graviton2.
To give developers the option to run code on Arm-based instances in their CI/CD pipelines without maintaining infrastructure on their own, CircleCI added new Arm-based resource classes based on Graviton2 as an option for all users.
In this post, we’ll introduce the new Arm resource classes and demonstrate how to use them in your pipelines to build, test, and deploy applications for Arm.
Why Use Arm-Based Resources?
Arm resource classes offer development teams instant, on-demand access to a clean and secure runtime environment to build and test code.
They also deliver unmatched flexibility in the cloud by giving teams the ability to have varying central processing units (CPUs) and memory, prepare for the next generation of devices, and deliver significant performance improvements without sacrificing power or increasing cost.
You can run code on Arm-based instances in CI/CD pipelines without maintaining the infrastructure in-house, reducing costs and speeding delivery. This post provides steps you can use to start building, testing, and deploying applications for Arm-based devices.
Before you can get started with this tutorial, you need to complete a number of tasks:
- Fork the arm-executors example repository on GitHub
- In the arm-executors/terraform/aws/ecs/variables.tf file, replace the default value of the “key_pair” variable (“devrel-angel-rivera”) with the name of your own key pair
- Sign up for a CircleCI account
- Create an Amazon Web Services (AWS) account
- Create an IAM user with programmatic access
- Create a Docker Hub account
- Create a Docker Hub Access Token
- Install Terraform CLI locally
- Create a Terraform Cloud account
- Create a new Terraform Cloud organization
- Create a new Terraform Cloud workspace named arm-aws-ecs and choose the CLI-driven workflow
- Enable local execution mode in the workspace settings
- Create a new Terraform API token – learn more in the documentation
- Edit the main.tf file with the name of your Terraform workspace and organization
- After you create the project in CircleCI, create these environment variables:
- AWS_ACCESS_KEY_ID: the value of your AWS access key ID
- AWS_SECRET_ACCESS_KEY: the value of your AWS secret access key
- DOCKER_LOGIN: your Docker Hub username
- DOCKER_PWD: your Docker Hub access token
- TERRAFORM_TOKEN: your Terraform API token
Arm Compute Resource Classes
Arm compute resource classes are currently available as machine: executors and can be specified as such within pipeline configuration definitions.
The following sections of this tutorial will demonstrate configuring and running CI/CD pipelines on Arm-based executors, and show how to create, deploy, and destroy Amazon Elastic Container Service (Amazon ECS) clusters based on Graviton2 compute nodes using Terraform.
Implement Arm Compute Within the ‘config.yml’
The following pipeline config example shows how to define Arm resource classes.
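The full config from the example repository is not reproduced here; a minimal sketch, assuming a Makefile-based test step and the Docker image naming used elsewhere in this post (both illustrative), might look like this:

```yaml
version: 2.1

jobs:
  run-tests:
    machine:
      image: ubuntu-2004:202101-01   # Arm-compatible machine image
    resource_class: arm.medium       # Arm compute resource class
    steps:
      - checkout
      # Hypothetical test command; substitute your project's test runner
      - run: make test

  build_docker_image:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium       # build runs natively on Arm64
    steps:
      - checkout
      - run:
          name: Build and push an Arm64 Docker image
          command: |
            export TAG=0.1.<< pipeline.number >>
            docker build -t $DOCKER_LOGIN/arm-executors:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/arm-executors:$TAG

workflows:
  build:
    jobs:
      - run-tests
      - build_docker_image
```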
In this code example, the run-tests: job shows how to specify a machine executor and assign it an Arm compute node resource class. The image: key specifies the operating system image assigned to the executor, and the resource_class: key specifies which CircleCI resource class to use. In this case, we’re using the arm.medium resource class, which enables pipelines to run and build code on, and for, Arm architectures and resources. The build_docker_image: job is a great way to use the arm.medium resource class to build an Arm64-capable Docker image that can be deployed to Arm compute infrastructures, such as Graviton2.
Set Up the ‘arm-executor’ Project in CircleCI
To use the example code, you’ll need to create a project to which you’ll add the code. Go to the CircleCI Projects page and find the forked repository: arm-executors. Select Set Up Project.
At the bottom of the sample configs pop-up, select Skip this step. Follow the prompts to add a new config file, and then copy and paste the code from the previous example.
Deploy to Amazon ECS
The code example in the previous section shows how to leverage the Arm resource classes within a pipeline. This section describes how to extend that code to create AWS resources such as Amazon ECS clusters. You can create these resources with underlying Graviton2 compute nodes using Terraform and infrastructure as code (IaC).
Note: The example ECS cluster used here is built for demonstration purposes only and should not be used in production-grade environments.
Before using the following example, you might need to edit the Terraform main.tf file under terraform/aws/ecs/main.tf.
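Assuming the repo’s main.tf configures a Terraform Cloud remote backend (an assumption based on the workspace setup steps earlier in this post), the block you would edit looks something like this:

```hcl
terraform {
  backend "remote" {
    # Replace with your own Terraform Cloud organization name
    organization = "your-org-name"

    workspaces {
      # Must match the workspace created earlier
      name = "arm-aws-ecs"
    }
  }
}
```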
This code extends the original pipeline config example:
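The new job definitions are discussed in the sections that follow; the workflow wiring for them, sketched here with the job names assumed from this post, could look like:

```yaml
workflows:
  build:
    jobs:
      - run-tests
      - build_docker_image
      - deploy_aws_ecs:
          requires:
            - build_docker_image
      - approve_destroy:       # manual approval gate before teardown
          type: approval
          requires:
            - deploy_aws_ecs
      - destroy_aws_ecs:
          requires:
            - approve_destroy
```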
As you may have noticed, a few new jobs have been defined. The deploy_aws_ecs: and destroy_aws_ecs: jobs are the new elements in this extended config. First, though, you should know about the install_terraform: command.
Run the ‘install_terraform:’ Command
CircleCI encapsulates and reuses configuration code using pipeline parameters. The install_terraform: command is an example of defining reusable pipeline code. If your pipelines repeatedly run specific commands, we recommend defining reusable command: elements to provide extensible and centrally managed pipeline configurations. Both the deploy_aws_ecs: and destroy_aws_ecs: jobs run Terraform code, so the pipeline needs to download and install the Terraform CLI more than once; the install_terraform: command provides valuable reusability here.
The following code block defines the install_terraform: reusable command.
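A sketch of this reusable command follows; the HashiCorp release URL pattern is the real one, while the default parameter values are illustrative:

```yaml
commands:
  install_terraform:
    parameters:
      version:
        type: string
        default: "0.14.2"
      arch:
        type: string
        default: "arm64"
    steps:
      - run:
          name: Install Terraform CLI
          command: |
            cd /tmp
            wget https://releases.hashicorp.com/terraform/<< parameters.version >>/terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            unzip -o terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            sudo mv terraform /usr/local/bin/
```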
The parameters: key maintains a list of parameters; the version: and arch: parameters define the Terraform CLI version and CPU architecture, respectively, and are used to download and install the client in the executor. Because this block of code represents a command: element, a steps: key must be defined. In this example, the run: element applies the corresponding command: key, which downloads the specified Terraform client using the << parameters.version >> and << parameters.arch >> variables to set the client version number and CPU architecture.
Pipeline parameters are useful for optimizing and centrally managing functionality within your pipeline configuration. To learn more, see Managing reusable pipeline configuration with object parameters.
Run the ‘deploy_aws_ecs’ Job
The deploy_aws_ecs: job defined in the pipeline leverages IaC to create a new ECS cluster. It includes all of the required resources, such as virtual private clouds (VPCs), subnets, route tables, application load balancers, and Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups.
This job creates and provisions the infrastructure needed to deploy and run applications. Because the target architecture is Arm, the ECS cluster must be composed of Graviton2 ECS compute nodes. These nodes will run the Arm-based Docker application image built in previous pipeline jobs.
The following code block demonstrates how to use the install_terraform: command described previously.
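A sketch of the deploy job follows; the -var names (docker_img_name, docker_img_tag) are hypothetical and must match the variables actually declared in the repo’s Terraform code:

```yaml
jobs:
  deploy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - install_terraform:
          version: "0.14.2"
          arch: "arm64"
      - run:
          name: Create the Graviton2-based ECS cluster
          command: |
            export TAG=0.1.<< pipeline.number >>
            cd terraform/aws/ecs
            terraform init
            # Pass the image name and tag built earlier in this pipeline
            terraform apply -auto-approve \
              -var docker_img_name=$DOCKER_LOGIN/arm-executors \
              -var docker_img_tag=$TAG
```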
We have set the version: parameter to 0.14.2 and the arch: parameter to arm64. The final run: element initializes the Terraform code, then runs a `terraform apply` command with parameters that pass in the name and tag of the Docker image created in this pipeline run.
Upon completion, this job will create and deploy the application to a Graviton2 ECS-based cluster.
Run the ‘destroy_aws_ecs’ Job
In an earlier step, we created the Amazon ECS infrastructure in the deploy_aws_ecs: job. The destroy_aws_ecs: job performs the inverse, programmatically destroying all of the infrastructure and resources created. This is the most efficient method of terminating unneeded infrastructure.
In the following code block, most of the job definition is the same as in the previous code block, except for the final run: element. In this element, we issue `terraform init` and `terraform destroy` commands, which will, as expected, destroy all of the resources created in the previous step.
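A sketch of the destroy job, mirroring the deploy job above (again, the -var names are hypothetical and must match the repo’s Terraform variables):

```yaml
  destroy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - install_terraform:
          version: "0.14.2"
          arch: "arm64"
      - run:
          name: Destroy the ECS cluster and related resources
          command: |
            cd terraform/aws/ecs
            terraform init
            # Tear down everything created by the deploy job
            terraform destroy -auto-approve \
              -var docker_img_name=$DOCKER_LOGIN/arm-executors \
              -var docker_img_tag=0.1.<< pipeline.number >>
```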
Workflows: ‘approve_destroy’ Job
The last item we’ll discuss is the approve_destroy: job found in the workflows: element of the config example. This is a manual approval job: the workflow is intentionally halted and remains on hold until a manual interaction is completed. In this case, a button must be selected in the CircleCI dashboard for the destroy_aws_ecs: job to run. Without this approval job, the pipeline would automatically trigger the destroy job and terminate all the resources created in previous jobs.
Approval-type jobs are useful when manual intervention or approval is required within pipeline executions.
Conclusion
CircleCI has Arm-capable executors in the form of Arm compute nodes, giving developers access to Arm architectures for pipelines.
In this post, we have shown how to implement the CircleCI Arm compute nodes as pipeline executors, and how to deploy applications to Amazon ECS clusters powered by AWS Graviton2 nodes using Terraform and infrastructure as code.
All of the code examples in this tutorial can be found in the arm-executors repo on GitHub.