Data has become the language of business. Organizations leverage data to better understand and deliver value to their customers. As a result, there is a growing need in many organizations for flexible patterns that can be leveraged to develop new applications and functionality to interact with their data.

APIs, or application programming interfaces, help organizations continuously deliver customer value. APIs have grown in popularity as organizations have increasingly designed their applications as microservices. The microservice model structures an application as a suite of small services, each running its own processes and independently deployable. APIs work in conjunction with microservices: they connect services to one another, give developers a programmable interface to access data, and provide connectivity to existing legacy systems.

In this article, we will demonstrate how to build and deploy an API running in a microservice architecture.

The project we will create addresses how to build and deploy an API to the AWS cloud using open source tools. Specifically, we will deploy a Python Flask REST API that will allow users to post their favorite artists and songs from the ’90s to an Amazon DynamoDB database. Flask is a micro web framework written in Python. We will containerize our Flask application using Docker, an open source tool used to create and execute applications in containers, and deploy it to Amazon Elastic Container Service (Amazon ECS). In addition to explaining how to configure an API, we will cover how to automate the deployment of AWS services using Terraform, an open source infrastructure as code software tool. Lastly, we will perform basic testing of the API we create using SoapUI, a functional testing tool for SOAP and REST-based APIs.

Our goal is to equip you with the knowledge to deploy an API to the AWS cloud using open source tools.

Solution overview

The following diagram illustrates the solution workflow, summarizing what is being performed and delivered:

Diagram illustrating the process and summary of a typical manual deployment using the various open source tools.

This diagram illustrates a typical manual deployment mechanism using the open source tools defined above. We are using GitHub to store code used in this blog in a repository. That information is retrieved using a git pull command by a DevOps team member. That team member then follows the processes defined in this article to create an Amazon Elastic Container Registry (Amazon ECR) repository for storing the Docker image. A docker build command is executed to create the container image, which is then stored in Amazon ECR. Terraform is then used to plan the architecture and apply the desired state configuration for the infrastructure and the API service running on ECS to a specified AWS account.

The following illustration shows the solution deployment in the AWS cloud environment:

Diagram illustrating the solution deployment in the AWS cloud environment.

Let’s review the above diagram. We are deploying an Amazon Virtual Private Cloud (Amazon VPC) with two private and two public subnets. We have attached an Internet Gateway to our VPC so that we can access the internet. Our resources in the private subnets access the internet through NAT gateways in the public subnets. The application will be hosted on an Amazon ECS cluster running on Amazon Elastic Compute Cloud (Amazon EC2) instances, which will be located in the private subnets.

The ECS cluster will be launched with an Auto Scaling group and an Application Load Balancer (ALB). Users will access our application via a public load balancer located in the public subnets. For our backend database solution, we are using Amazon DynamoDB. Amazon CloudWatch is utilized to provide data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Lastly, Amazon ECR is used to host our container images.

Prerequisites

Application configuration

REST (representational state transfer) APIs are built on a subset of HTTP and are constrained by the REST architecture. They streamline interoperability between two computer systems and allow users to interact with a service via HTTP requests and responses. One advantage of REST APIs is that they provide flexibility. Data is not tied to resources or methods, so REST can handle multiple types of calls. REST APIs are stateless. Calls can be made independently of one another, and each call contains all of the data required to complete itself successfully. You will deploy a REST API backed by a DynamoDB database. Python has a number of web frameworks that can be used to create web apps and APIs. We have chosen to utilize Flask as it is a framework that has a set project structure as well as many built-in tools. These predefined structures can save time and effort for developers.

You will utilize Python Flask to build an API. Python Flask is a micro framework for building web applications. Our API defines two routes. The first route maps to /, and the second maps to /v1/bestmusic/90s/artist. These are the only paths recognized by the application. If you enter any other URL when accessing the API, you will receive an error message. You can define specific error responses in the API routes.

For example, referencing the Python functions in the get_artist method, “Artist does not exist” is the response returned when a user requests an artist that is not present in the DynamoDB table (musicTable). The create_artist method posts an artist and song to your DynamoDB table. If malformed data (i.e., data that is not structured properly) or a payload missing required fields is sent to the API using this method, a “Please provide Artist and Song” response will be returned. Take some time to review the application configuration.

#!/usr/bin/env python3.8
# -*- coding: utf-8 -*-
import os

import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
client = boto3.client('dynamodb', region_name='us-east-1')
dynamoTableName = 'musicTable'


@app.route("/")
def hello():
    return "Hello World!"


@app.route("/v1/bestmusic/90s/<string:artist>")
def get_artist(artist):
    resp = client.get_item(
        TableName=dynamoTableName,
        Key={
            'artist': {'S': artist}
        }
    )
    item = resp.get('Item')
    if not item:
        return jsonify({'error': 'Artist does not exist'}), 404
    return jsonify({
        'artist': item.get('artist').get('S'),
        'song': item.get('song').get('S')
    })


@app.route("/v1/bestmusic/90s", methods=["POST"])
def create_artist():
    artist = request.json.get('artist')
    song = request.json.get('song')
    if not artist or not song:
        return jsonify({'error': 'Please provide Artist and Song'}), 400
    resp = client.put_item(
        TableName=dynamoTableName,
        Item={
            'artist': {'S': artist},
            'song': {'S': song}
        }
    )
    return jsonify({'artist': artist, 'song': song})


if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0', port=5000)
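Note the shape of the items in the boto3 calls above: DynamoDB's low-level API wraps every value in a type descriptor ('S' for string). The following sketch (hypothetical helper functions, not part of the sample app) shows the conversion between the plain JSON the API exchanges and the attribute-value format that put_item and get_item use:

```python
def to_dynamo(item: dict) -> dict:
    """Wrap each string value in DynamoDB's 'S' type descriptor."""
    return {key: {'S': value} for key, value in item.items()}


def from_dynamo(item: dict) -> dict:
    """Unwrap 'S' type descriptors back into plain strings."""
    return {key: value['S'] for key, value in item.items()}


payload = {'artist': 'Nirvana', 'song': 'Smells like teen spirit'}

# What create_artist passes to put_item:
assert to_dynamo(payload) == {
    'artist': {'S': 'Nirvana'},
    'song': {'S': 'Smells like teen spirit'},
}

# What get_artist does, in effect, with the returned Item:
assert from_dynamo(to_dynamo(payload)) == payload
```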

Open the Dockerfile located in the app directory. We are utilizing the python:3.8-slim base image to keep our image as small as possible. The Docker image is created using a multi-stage build. Using this approach, you can selectively copy artifacts from one stage to another, leaving out anything you do not want in the final image. In this case, we create one stage for building the app and another for running it. Separating build and run stages greatly reduces the final image size, resulting in faster deployments. Please take time to review this configuration.

FROM python:3.8-slim as builder
COPY . /src
RUN pip install --user fastapi uvicorn boto3 flask

FROM python:3.8-slim as app
COPY --from=builder /root/.local /root/.local
COPY --from=builder /src .
ENV PATH=/root/.local:$PATH
EXPOSE 5000
CMD ["python3", "app.py"]

Now clone the source code repository; you will use this to deploy the solution:

git clone https://github.com/aws-samples/deploy-python-flask-microservices-to-aws-using-open-source-tools.git

Application build

You will be deploying the REST API to Amazon ECS. Your application will be hosted on an ECS cluster located in private subnets. Users will access the application through an ALB. We will deploy this infrastructure later in the article.

Now, let’s walk through the steps of building our application. This will require you to build a Docker image locally, tag the image, and then push it to a registry. We will use ECR as our registry. Let’s build and push our container image to ECR.

1. First, you will need to create an ECR repository. Run the following AWS CLI command from your terminal:

aws ecr create-repository \
    --repository-name flask-docker-demo-app \
    --image-scanning-configuration scanOnPush=true \
    --region us-east-1

The output should look like the following:

{
    "repository": {
        "repositoryArn": "arn:aws:ecr:us-east-1:<AWS_ID>:repository/flask-docker-demo-app",
        "registryId": "<AWS_ID>",
        "repositoryName": "flask-docker-demo-app",
        "repositoryUri": "<AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app",
        "createdAt": 1615491551.0,
        "imageTagMutability": "MUTABLE",
        "imageScanningConfiguration": {
            "scanOnPush": true
        },
        "encryptionConfiguration": {
            "encryptionType": "AES256"
        }
    }
}
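If you are scripting this step, the repositoryUri can be pulled from the CLI's JSON output with a few lines of standard-library Python. A minimal sketch (the response below is abbreviated, and the account ID is a placeholder):

```python
import json

# Abbreviated create-repository response, as returned by the AWS CLI above.
cli_output = '''
{
  "repository": {
    "repositoryName": "flask-docker-demo-app",
    "repositoryUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app"
  }
}
'''

# Extract the URI that later docker tag/push commands will need.
repository_uri = json.loads(cli_output)["repository"]["repositoryUri"]
print(repository_uri)
# → 123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app
```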

2. In the AWS Console, open Services, Elastic Container Registry. Select the flask-docker-demo-app as seen in the following image:

Screenshot of the AWS console displaying the flask-docker-demo-app in the Elastic Container Registry.

For successive commands, replace <AWS_ID> with your AWS account ID.

3. Now, log into ECR from the command line. Run the following command:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app

The output should look like the following:

Login Succeeded

4. Navigate to the root of the repository and run:

cd app

5. Run the following command to build the Docker image:

docker build --tag flask-docker-demo-app .

The output should look like the following:

Sending build context to Docker daemon 4.096kB
Step 1/7 : FROM python:3.8-slim as builder
3.8-slim: Pulling from library/python
45b42c59be33: Pull complete
f875e16ab19c: Pull complete
3e2c62b3a6f9: Pull complete
c6acb963480f: Pull complete
6b5baef197ea: Pull complete
Digest: sha256:60d8ae7490dc6be75ad9ed8b504cd8835be12f9a828b2b9fe0e19e4d66b6f636
Status: Downloaded newer image for python:3.8-slim
 ---> 5bacf0a78697
Step 2/7 : RUN pip install fastapi uvicorn boto3 Flask
 ---> Running in 13e7537cd889
Collecting fastapi
  Downloading fastapi-0.63.0-py3-none-any.whl (50 kB)
Collecting uvicorn
  Downloading uvicorn-0.13.4-py3-none-any.whl (46 kB)
Collecting boto3
  Downloading boto3-1.17.25-py2.py3-none-any.whl (130 kB)
Collecting Flask
  Downloading Flask-1.1.2-py2.py3-none-any.whl (94 kB)
Collecting s3transfer<0.4.0,>=0.3.0
  Downloading s3transfer-0.3.4-py2.py3-none-any.whl (69 kB)
Collecting botocore<1.21.0,>=1.20.25
  Downloading botocore-1.20.25-py2.py3-none-any.whl (7.3 MB)
Collecting jmespath<1.0.0,>=0.7.1
  Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting python-dateutil<3.0.0,>=2.1
  Downloading python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting urllib3<1.27,>=1.25.4
  Downloading urllib3-1.26.3-py2.py3-none-any.whl (137 kB)
Collecting six>=1.5
  Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting pydantic<2.0.0,>=1.0.0
  Downloading pydantic-1.8.1-cp38-cp38-manylinux2014_x86_64.whl (13.7 MB)
Collecting starlette==0.13.6
  Downloading starlette-0.13.6-py3-none-any.whl (59 kB)
Collecting typing-extensions>=3.7.4.3
  Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Collecting click>=5.1
  Downloading click-7.1.2-py2.py3-none-any.whl (82 kB)
Collecting Jinja2>=2.10.1
  Downloading Jinja2-2.11.3-py2.py3-none-any.whl (125 kB)
Collecting itsdangerous>=0.24
  Downloading itsdangerous-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting Werkzeug>=0.15
  Downloading Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Collecting MarkupSafe>=0.23
  Downloading MarkupSafe-1.1.1-cp38-cp38-manylinux2010_x86_64.whl (32 kB)
Collecting h11>=0.8
  Downloading h11-0.12.0-py3-none-any.whl (54 kB)
Installing collected packages: six, urllib3, python-dateutil, jmespath, typing-extensions, MarkupSafe, botocore, Werkzeug, starlette, s3transfer, pydantic, Jinja2, itsdangerous, h11, click, uvicorn, Flask, fastapi, boto3
Successfully installed Flask-1.1.2 Jinja2-2.11.3 MarkupSafe-1.1.1 Werkzeug-1.0.1 boto3-1.17.25 botocore-1.20.25 click-7.1.2 fastapi-0.63.0 h11-0.12.0 itsdangerous-1.1.0 jmespath-0.10.0 pydantic-1.8.1 python-dateutil-2.8.1 s3transfer-0.3.4 six-1.15.0 starlette-0.13.6 typing-extensions-3.7.4.3 urllib3-1.26.3 uvicorn-0.13.4
Removing intermediate container 13e7537cd889
 ---> 96fc5bb677f1
Step 3/7 : COPY . /src
 ---> 3c5d59ad3b40
Step 4/7 : FROM python:3.8-slim as app
 ---> 5bacf0a78697
Step 5/7 : COPY --from=builder /src/app.py /src/app.py
 ---> 309de7f12027
Step 6/7 : EXPOSE 5000
 ---> Running in c84b5ea3e78b
Removing intermediate container c84b5ea3e78b
 ---> 67eb76536d35
Step 7/7 : CMD cd /src && python app.py
 ---> Running in c6365ec12914
Removing intermediate container c6365ec12914
 ---> fc4d236bec18
Successfully built fc4d236bec18
Successfully tagged flask-docker-demo-app:latest

6. Run the following command to tag the Docker image and make sure to update the command with your account ID:

docker tag flask-docker-demo-app:latest <AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app:latest
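The tag follows ECR's naming convention of <AWS_ID>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>. If you script the build, a small helper can compose the fully qualified name (the account ID below is a placeholder):

```python
def ecr_image_uri(account_id: str, region: str, repository: str,
                  tag: str = 'latest') -> str:
    """Compose a fully qualified ECR image name from its parts."""
    return f'{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}'


print(ecr_image_uri('123456789012', 'us-east-1', 'flask-docker-demo-app'))
# → 123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app:latest
```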

7. You will now push your newly created Docker image to ECR. Recall that in step 3 we authenticated to ECR, so all that remains is to push the image. Run the following command:

docker push <AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app:latest

The output should look like the following:

The push refers to repository [<AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app]
126f63c00332: Preparing
fbef2d89b129: Preparing
e1d1d5e18a71: Preparing
20647095e33f: Preparing
a246fbb5898d: Preparing
fbef2d89b129: Pushed
839cd333d76f: Layer already exists
abedcaf0315b: Layer already exists
27f242dffe9b: Layer already exists
307ac48d659d: Layer already exists
a0b7001030c9: Layer already exists
263952646769: Layer already exists
20d2214f2a03: Layer already exists
ab7cee3b27b8: Layer already exists
d9f7e0344b81: Layer already exists
a991318c6224: Layer already exists
3b9f5b66025b: Layer already exists
7bc11c1a177f: Layer already exists
22e30b4e8499: Layer already exists
508c3f3b7a64: Layer already exists
7e453511681f: Layer already exists
b544d7bb9107: Layer already exists
baf481fca4b7: Layer already exists
3d3e92e98337: Layer already exists
8967306e673e: Layer already exists
9794a3b3ed45: Layer already exists
5f77a51ade6a: Layer already exists
e40d297cf5f8: Layer already exists
latest: digest: sha256:02858c813b8d30155af51d072fa228fef54363360c8e982907735b49f53ae2d9 size: 6177

Note the repository URI (<AWS_ID>.dkr.ecr.us-east-1.amazonaws.com/flask-docker-demo-app), as we will reference this value later in the article.

You can also verify and retrieve the URI for the image repository in the AWS console. Open Services, Elastic Container Registry, Repositories. Select the flask-docker-demo-app repository.

List of private repositories in the AWS console showing the flask-docker-demo-app repository.

Terraform overview

Terraform is an open source infrastructure as code (IaC) software tool developed by HashiCorp. Terraform provides a means to define infrastructure as code on numerous platforms, including Amazon Web Services. With Terraform, you can draft declarative configuration files in HashiCorp Configuration Language (HCL) to describe the desired state configuration for resources in various cloud environments. With HCL, architects can quickly and efficiently define complex infrastructure in a simple and intuitive manner.

Terraform is used to deliver the infrastructure for running the container environment. In order to execute the commands to test the code and deliver the environment, you should first run a Terraform command to initialize the working directory for Terraform on your workstation. Run the following command in the terraform directory:

terraform init

This results in the following console output:

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform directory structure

Once the working directory has been initialized, you’ll want to examine the content of the Terraform directory structure. Here, we can see that several .tf files exist. These files represent the Terraform configuration files that are used by the Terraform client to deliver the desired state configuration for the target environment.

Terraform directory structure displaying several .tf files.

These files are described as follows:

  • config.tf: This file contains the Terraform provider and AWS module version constraints or requirements. Other provider information can be passed here. The AWS Region is specified here for the purpose of this article.
  • data.tf: This file acts as a data handler. It retrieves specific information for later use by other resources.
  • main.tf: This is the main execution file.
  • outputs.tf: This file contains outputs to be passed to the shell during execution that are also made available to other modules at runtime.
  • variables.tf: This file provides input variables for the Terraform configuration.
  • README.md: This file provides the user information regarding the usage of the code for simulating an environment.

Typically, several sub-modules will be used to separate functions within a stack deployment as they align to different aspects of the infrastructure. In this capacity, variables files also control environment-specific configurations. For the sake of simplicity, a single module is used for the demonstration in this article.

This Terraform template is designed to define a set of specific resources and will perform the following tasks:

  • Define the desired state configuration for security, resources, and configurations for delivering defined elements using infrastructure as code concepts.
  • Separate security configurations, cluster configurations, and bootstrapping processes into source control managed definitions making them reusable, defined, and flexible.
  • Provide a functional process whereby an ECS cluster with these defined dependencies can be effectively leveraged and quickly delivered.

The Terraform code will deliver resources that pertain to the following configuration aspects:

  • IAM: Identity access management policy configuration
  • VPC: Public and private subnets, routes, and a NAT Gateway
  • EC2: Autoscaling implementation
  • ECS: Cluster configuration
  • ALB: Load balancer configuration
  • DynamoDB: Table configuration
  • CloudWatch: Alert metrics configuration

Deploy application with Terraform

Now we will test and deploy our Terraform code. Within the same directory where we initialized the Terraform environment, we first run commands to perform basic checks. Once those checks are complete, we can deploy the environment for our API service.

Pre-deployment testing

To evaluate the Terraform configuration files for syntax issues and for consistency with the style guide, the following tasks are recommended before deploying from this template into an environment.

Run a Terraform FMT

terraform fmt is used to check the formatting of a Terraform file to ensure that it meets suggested formatting according to the Terraform style guide. By default, terraform fmt will rewrite Terraform configuration files to meet the style guide.

To run a terraform fmt check, run the following command from the terraform directory:

terraform fmt -recursive

If any files contain formatting errors, the offending files will be listed by name in the console output.

If you do not want Terraform to overwrite any files on execution (e.g., if running in a pipeline), run the command with the following switches:

terraform fmt -check -recursive

Again, any files containing formatting errors will be listed by name in the console output, but the files will not be rewritten.

Run a Terraform validate

terraform validate is used to validate that Terraform configuration files in a module are syntactically correct, referentially consistent, and consistently parameterized. The terraform validate command is helpful as a step in evaluating modules prior to execution as it will display errors within this scope.

To run a terraform validate check, run the following command from the root module directory:

terraform validate

If the Terraform configuration is valid, you will receive the following message:

Success! The configuration is valid, but there were some validation warnings as shown above.

The validation warnings can be disregarded as we are passing in interpolated expressions intentionally in these fields.

Run a Terraform plan

terraform plan is used to create an execution plan. Because Terraform is an orchestration tool used to automate resource delivery in various environments, the terraform plan action gives administrators the ability to review the expected changes to an environment. terraform plan shows which resources will be added, changed, or destroyed based on the variable inputs passed to the module during execution. This makes terraform plan an ideal instrument for change control processes, audit trails, and general operational awareness. Often, in a pipeline, terraform plan is executed with a tollgate for an approver to accept or validate the environment changes prior to executing a change.

To run a terraform plan, execute the following command from the root module directory:

terraform plan

When prompted to provide the ECR image path, copy the URI from the Application Build phase shown above:

Root module directory when running a Terraform plan.

Press Enter to run the terraform plan.

Deployment

Now that we prepared the environment and performed pre-deployment tests, we can proceed with deploying the application environment using Terraform. To do so, within the Terraform directory, run a terraform apply command referencing any variables files used for your account environment.

terraform apply is the command used to apply a desired target state to an environment. You will be prompted to confirm the changes prior to deployment. The response for this action may be automated using a switch at the time of execution.

To run a terraform apply, execute the following command from the root module directory:

terraform apply

When prompted to provide the ECR image path, copy the URI from the Application Build phase seen above.

During the build process, the console output shows the status of the resources as they are being provisioned to the target AWS account:

Console output showing the status of the resources as they are being provisioned during the build process.

Console output showing the status of the resources as they are being provisioned during the build process.

When the process completes, you will receive the following message:

Apply complete! Resources: 47 added, 0 changed, 0 destroyed.

Outputs:

alb_dns_name = "ecsalb-477948635.us-east-1.elb.amazonaws.com"

Retrieve the DNS name for the load balancer from the alb_dns_name output seen above. Navigate to this address in your web browser as follows:

Hello world! message showing that the service is responding.

The Hello World! message means that the service is responding. Now we can test our service!

API testing

1. Open the SoapUI Application and select New SOAP Project.

SoapUI application start page.

2. Under Project Name, enter TestSoap and select OK:

New SOAP project details where project name is TestSoap.

3. Scroll to the Navigator, right-click on the TestSoap project under Projects, and select New REST Service from URI:

Context menu of the TestSoap project displaying the option of "New REST Service from URI".

4. Enter the DNS name of the load balancer:

New REST Service from URI details.

5. Select HTML and then choose the green arrow to execute a GET request against the base URL:

Request window displaying a GET execution in HTML from the base URL.

6. You should get a response similar to this:

Hello World! response from the GET request from the base URL.

7. Now, let’s write data to our DynamoDB table. Update Method to POST. Under Media Type, select application/json and copy the following text into the text box.

{"artist":"Nirvana","song":"Smells like teen spirit"}

8. Add /v1/bestmusic/90s to the Resource field. Set the response view to Raw and press the green arrow. The output should look something like this:

Output when updating the Response to Raw.

9. Now let’s see whether we can read the data that we just wrote to DynamoDB through our API. Update the Method to GET and the Resource to /v1/bestmusic/90s/Nirvana. Select the green arrow; you should get an output similar to this:

Test to see if the data written in DynamoDB through the API can be displayed.
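The SoapUI steps above can also be scripted with Python's standard library. The following sketch replays the same POST and GET calls; to stay self-contained it runs against a small in-process stub that mimics the API's contract rather than the deployed service (point BASE_URL at your ALB DNS name to exercise the real endpoint):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

artists = {}  # in-memory stand-in for the musicTable DynamoDB table


class StubHandler(BaseHTTPRequestHandler):
    """Mimics the API's two music routes so the script is self-contained."""

    def _send(self, code, body):
        data = json.dumps(body).encode()
        self.send_response(code)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_POST(self):  # POST /v1/bestmusic/90s
        item = json.loads(self.rfile.read(int(self.headers['Content-Length'])))
        artists[item['artist']] = item['song']
        self._send(200, item)

    def do_GET(self):  # GET /v1/bestmusic/90s/<artist>
        artist = self.path.rsplit('/', 1)[-1]
        if artist in artists:
            self._send(200, {'artist': artist, 'song': artists[artist]})
        else:
            self._send(404, {'error': 'Artist does not exist'})

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(('127.0.0.1', 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
BASE_URL = f'http://127.0.0.1:{server.server_port}'  # replace with your ALB DNS name

# Step 7 above: POST an artist and song.
post_req = urllib.request.Request(
    f'{BASE_URL}/v1/bestmusic/90s',
    data=json.dumps({'artist': 'Nirvana',
                     'song': 'Smells like teen spirit'}).encode(),
    headers={'Content-Type': 'application/json'},
    method='POST',
)
post_resp = urllib.request.urlopen(post_req)
print(post_resp.status)  # → 200

# Step 9 above: GET the item back through the API.
with urllib.request.urlopen(f'{BASE_URL}/v1/bestmusic/90s/Nirvana') as resp:
    body = json.loads(resp.read())
print(body['song'])  # → Smells like teen spirit

server.shutdown()
```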

Clean up

To destroy an environment based on the information in the state file and delete all of the AWS infrastructure you previously deployed, run the following command from the root module directory:

terraform destroy

Conclusion

In this article, we explained how to use popular open source tools in conjunction with AWS services. We walked through building, deploying, and testing a Python Flask API on Amazon ECS. We also explained how to deploy infrastructure to AWS using Terraform. Thank you for reading!