By Mark Kriaf, Partner Solutions Architect – AWS
By Laureen Harris, Tech Content Editor – CircleCI
By Angel Rivera, Developer Advocate – CircleCI


Arm processors and architectures are becoming widely available as development teams adopt them as compute nodes in many application infrastructures.

Organizations turn to Arm-based servers when looking for a cost-effective way to improve performance for their common workloads like microservices, application servers, and databases.

CircleCI is a continuous integration and continuous delivery (CI/CD) platform that automates the build, test, and deploy processes for teams looking to do more at scale. Customers who need Arm-based compute can already use CircleCI self-hosted runners based on AWS Graviton2.

To give developers the option to run code on Arm-based instances in their CI/CD pipelines without maintaining infrastructure on their own, CircleCI added new Arm-based resource classes based on Graviton2 as an option for all users.

In this post, we’ll introduce the new Arm resource classes and demonstrate how to use them in your pipelines to build, test, and deploy applications for Arm.

Why Use Arm-Based Resources?

Arm resource classes offer development teams instant, on-demand access to a clean and secure runtime environment to build and test code.

They also deliver unmatched flexibility in the cloud by giving teams the ability to have varying central processing units (CPUs) and memory, prepare for the next generation of devices, and deliver significant performance improvements without sacrificing power or increasing cost.

You can run code on Arm-based instances in CI/CD pipelines without maintaining the infrastructure in-house, reducing costs and speeding delivery. This post provides steps you can use to start building, testing, and deploying applications for Arm-based devices.

Prerequisites

Before you can get started with this tutorial, you need to complete a few tasks:

- Create a CircleCI account connected to your code repository.
- Fork the arm-executors example repository on GitHub.
- Create a Docker Hub account, and set your credentials as the DOCKER_LOGIN and DOCKER_PWD project environment variables in CircleCI.
- Create a Terraform Cloud user token, and set it as the TERRAFORM_TOKEN project environment variable.
- Have an AWS account with credentials that Terraform can use to create resources.

Arm Compute Resource Classes

Arm compute resource classes are currently available for the machine: executor and can be specified as such within pipeline configuration definitions.
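As a minimal sketch, selecting an Arm resource class looks like this (the check-arch job name and the uname -m step are illustrative additions; uname -m prints aarch64 on an Arm executor):

```yaml
version: 2.1
jobs:
  check-arch:
    machine:
      image: ubuntu-2004:202101-01  # Arm-compatible machine image used throughout this tutorial
    resource_class: arm.medium      # selects an Arm-based compute node
    steps:
      - run: uname -m               # reports the executor's CPU architecture
```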

The following sections of this tutorial will demonstrate configuring and running CI/CD pipelines on Arm-based executors, and show how to create, deploy, and destroy Amazon Elastic Container Service (Amazon ECS) clusters based on Graviton2 compute nodes using Terraform.

Implement Arm Compute Within the ‘config.yml’

The following pipeline config example shows how to define Arm resource classes.

In this code example, the run-tests: job shows how to specify a machine executor and assign it an Arm compute node resource class:

```yaml
version: 2.1

orbs:
  node: circleci/[email protected]

jobs:
  run-tests:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - node/install-packages:
          override-ci-command: npm install
          cache-path: ~/project/node_modules
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results

  build_docker_image:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: "Build Docker Image ARM V8"
          command: |
            export TAG='0.1.<< pipeline.number >>'
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push -a $DOCKER_LOGIN/$IMAGE_NAME

workflows:
  build:
    jobs:
      - run-tests
      - build_docker_image
```

The image: key specifies the operating system image assigned to the executor, and the resource_class: key specifies which CircleCI resource class to use.

In this case, we’re using the arm.medium resource class, which enables pipelines to run and build code on, and for, Arm architectures. The build_docker_image: job uses the arm.medium resource class to build an Arm64-capable Docker image that can be deployed to Arm compute infrastructure, such as Graviton2.
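If you want to confirm the built image actually targets Arm, an optional extra step could be appended to the build_docker_image: job. This step is an illustrative addition, not part of the original config; it uses Docker's standard inspect command, and re-derives the image name because variables exported in one step's shell don't carry over to the next step:

```yaml
      - run:
          name: Verify image architecture
          command: |
            # IMAGE_NAME was scoped to the build step's shell, so derive it again here
            docker inspect --format '{{.Architecture}}' $DOCKER_LOGIN/$CIRCLE_PROJECT_REPONAME
```

On an Arm build, the command reports arm64.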

Set Up the ‘arm-executors’ Project in CircleCI

To use the example code, you’ll need to create a project to which you’ll add the code. Go to the CircleCI Projects page and find the forked repository: arm-executors. Select Set Up Project.

At the bottom of the sample configs pop-up, select Skip this step. Follow the prompts to add a new config file, and then copy and paste the code from the previous example.

Deploy to Amazon ECS

The code example in the previous section shows how to leverage the Arm resource classes within a pipeline. This section describes how to extend that code to create AWS resources such as Amazon ECS clusters. You can create these resources with underlying Graviton2 compute nodes using Terraform and infrastructure as code (IaC).

Note: The example ECS cluster used here is built for demonstration purposes only and should not be used in production-grade environments.

Before using the following example, you might need to edit the Terraform main.tf file under terraform/aws/ecs/main.tf.
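Because the pipeline writes a Terraform Cloud credentials file for app.terraform.io, main.tf most likely configures a remote backend; as an assumption, the part you would edit for your own environment might look something like this (the organization and workspace names are hypothetical placeholders):

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-org-name"   # replace with your Terraform Cloud organization

    workspaces {
      name = "arm-ecs-demo"          # hypothetical workspace name; use your own
    }
  }
}
```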

This code extends the original pipeline config example:

```yaml
version: 2.1

orbs:
  node: circleci/[email protected]

commands:
  install_terraform:
    description: "specify terraform version & architecture to use [amd64 or arm64]"
    parameters:
      version:
        type: string
        default: "0.13.5"
      arch:
        type: string
        default: "arm64"
    steps:
      - run:
          name: Install Terraform client
          command: |
            cd /tmp
            wget https://releases.hashicorp.com/terraform/<< parameters.version >>/terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            unzip terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            sudo mv terraform /usr/local/bin

jobs:
  run-tests:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - node/install-packages:
          override-ci-command: npm install
          cache-path: ~/project/node_modules
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results

  build_docker_image:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: "Build Docker Image ARM V8"
          command: |
            export TAG='0.1.<< pipeline.number >>'
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push -a $DOCKER_LOGIN/$IMAGE_NAME

  deploy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
      - install_terraform:
          version: 0.14.2
          arch: arm64
      - run:
          name: Deploy Application to AWS ECS Cluster
          command: |
            export TAG=0.1.<< pipeline.number >>
            export DOCKER_IMAGE_NAME="${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}"
            cd terraform/aws/ecs
            terraform init
            terraform apply \
              -var docker_img_name=$DOCKER_IMAGE_NAME \
              -var docker_img_tag=$TAG \
              --auto-approve

  destroy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
      - install_terraform:
          version: 0.14.2
          arch: arm64
      - run:
          name: Destroy the AWS ECS Cluster
          command: |
            cd terraform/aws/ecs
            terraform init
            terraform destroy --auto-approve

workflows:
  build:
    jobs:
      - run-tests
      - build_docker_image
      - deploy_aws_ecs
      - approve_destroy:
          type: approval
          requires:
            - deploy_aws_ecs
      - destroy_aws_ecs:
          requires:
            - approve_destroy
```

As you may have noticed, a few new jobs have been defined: deploy_aws_ecs:, approve_destroy:, and destroy_aws_ecs: are the new elements in this extended config. Before looking at them, though, you should know about the commands: key and the install_terraform: command it defines.

Run the ‘install_terraform:’ Command

CircleCI lets you encapsulate and reuse configuration code through reusable commands and pipeline parameters. The install_terraform: command is an example of defining reusable pipeline code.

If your pipelines repeatedly run specific commands, we recommend you define reusable command: elements to provide extensible and centrally managed pipeline configurations.

Both the deploy_aws_ecs: and destroy_aws_ecs: jobs run Terraform code, so the pipeline will need to download and install the Terraform CLI more than once. The install_terraform: command provides valuable reusability.

The following code block defines the install_terraform: reusable command:

```yaml
commands:
  install_terraform:
    description: "specify terraform version & architecture to use [amd64 or arm64]"
    parameters:
      version:
        type: string
        default: "0.13.5"
      arch:
        type: string
        default: "arm64"
    steps:
      - run:
          name: Install Terraform client
          command: |
            cd /tmp
            wget https://releases.hashicorp.com/terraform/<< parameters.version >>/terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            unzip terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
            sudo mv terraform /usr/local/bin
```

The parameters: key maintains a list of parameters; the version: and arch: parameters define the Terraform CLI version and CPU architecture respectively, determining which client is downloaded and installed in the executor.

Because this block of code represents a command: element, a command steps: key must be defined. In the previous example, the run: element defines a command: key that downloads the specified Terraform client, using the << parameters.version >> and << parameters.arch >> variables to select the client version number and CPU architecture.

Pipeline parameters are useful for optimizing and centrally managing functionality within your pipeline configuration. To learn more, see Managing reusable pipeline configuration with object parameters.

Run the ‘deploy_aws_ecs’ Job

The deploy_aws_ecs: job defined in the pipeline leverages IaC to create a new ECS cluster. It includes all of the required resources, such as virtual private networks (VPCs), subnets, route tables, application load balancers, and Amazon Elastic Compute Cloud (Amazon EC2) auto scale groups.

This job creates and provisions the infrastructure needed to deploy and run applications. Because the target architecture is Arm, the ECS cluster must be composed of Graviton2 ECS compute nodes. These nodes will run the Arm-based Docker application image built in the previous pipeline jobs.

The following code block demonstrates how to use the install_terraform: command described previously:

```yaml
  deploy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
      - install_terraform:
          version: 0.14.2
          arch: arm64
      - run:
          name: Deploy Application to AWS ECS Cluster
          command: |
            export TAG=0.1.<< pipeline.number >>
            export DOCKER_IMAGE_NAME="${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}"
            cd terraform/aws/ecs
            terraform init
            terraform apply \
              -var docker_img_name=$DOCKER_IMAGE_NAME \
              -var docker_img_tag=$TAG \
              --auto-approve
```

We have set the version: parameter to 0.14.2 and the arch: parameter to arm64. The final run: element initializes the Terraform code, and then runs a `terraform apply` command with corresponding parameters that pass through the values of the Docker image name and tag created in this pipeline run.

Upon completion, this job will create and deploy the application to a Graviton2 ECS-based cluster.

Run the ‘destroy_aws_ecs’ Job

In an earlier step, we created the Amazon ECS infrastructure in the deploy_aws_ecs: job. The destroy_aws_ecs: job performs the inverse, programmatically destroying all of the infrastructure and resources that were created. This is the most efficient way to terminate infrastructure that is no longer needed.

In the following code block, most of the job definition is the same as in the previous code block, except for the final run: element:

```yaml
  destroy_aws_ecs:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
      - install_terraform:
          version: 0.14.2
          arch: arm64
      - run:
          name: Destroy the AWS ECS Cluster
          command: |
            cd terraform/aws/ecs
            terraform init
            terraform destroy --auto-approve
```

In this element, we run terraform init followed by a terraform destroy command, which destroys all of the resources created in the previous step.

Workflows: ‘approve_destroy’ Job

The last item we’ll discuss is the approve_destroy: job found in the workflows: element of the config example. This job is a manual approval type: the workflow is intentionally halted and remains on hold until a manual approval is completed.

In this case, a button must be clicked in the CircleCI dashboard for the destroy_aws_ecs: job to run. Without this approval job, the pipeline would automatically trigger the destroy job and terminate all of the resources created in previous jobs.

Approval type jobs are useful for situations where manual intervention or approvals are required within pipeline executions.
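Isolated from the rest of the config example, the approval gate in the workflows: element looks like this:

```yaml
workflows:
  build:
    jobs:
      - deploy_aws_ecs
      - approve_destroy:
          type: approval       # pauses the workflow until manually approved
          requires:
            - deploy_aws_ecs
      - destroy_aws_ecs:
          requires:
            - approve_destroy  # runs only after the approval is granted
```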

Conclusion

CircleCI has Arm-capable executors in the form of Arm compute nodes, giving developers access to Arm architectures for pipelines.

In this post, we have shown how to implement the CircleCI Arm compute nodes as pipeline executors, and how to deploy applications to Amazon ECS clusters powered by AWS Graviton2 nodes using Terraform and infrastructure as code.

All of the code examples in this tutorial can be found in the arm-executors repo on GitHub.
