This article is a guest post from Sebastien Goasguen, co-founder of TriggerMesh.
Deploying AWS Lambda functions with the serverless framework is arguably the easiest way to deploy functions and configure how they get triggered. If you want to automate your function deployment, you will most likely do so via your CI/CD workflow. A CI/CD pipeline can be implemented in many different ways using a variety of tools (e.g., Jenkins, AWS CodePipeline, Google Cloud Build, CircleCI, Bamboo, Rundeck). In March 2019, the Linux Foundation announced the Continuous Delivery Foundation (CDF), whose mission is to provide a neutral home to collaborate on the development of the next generation continuous delivery systems. Tekton, which provides Kubernetes-like API resources to describe CI/CD pipelines, is an open source Google project hosted by the CDF.
In this article, we explain how to use Tekton to automate the deployment of AWS Lambda functions using the serverless framework. We start with a quick review of the serverless framework, and then dive into Tekton core API objects.
First install the serverless framework and generate a skeleton for a Python function:
```
npm install -g serverless
serverless create -t aws-python3
```
The resulting skeleton written in the working directory is:
```
.
├── handler.py
└── serverless.yml
```
The function is stored in the `handler.py` file, and the manifest describing how to deploy the function and how it gets invoked is `serverless.yml`.
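For reference, the generated handler is a plain Python function returning an API Gateway-style response. The sketch below approximates the `aws-python3` template (exact contents may differ between framework versions), and shows where the "Go Serverless v1.0!" message seen later comes from:

```python
import json


def hello(event, context):
    """Lambda handler: echo the incoming event with a greeting.

    Approximation of the serverless `aws-python3` template handler;
    the real template may differ slightly between versions.
    """
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event,
    }
    # API Gateway (Lambda proxy integration) expects a statusCode
    # and a JSON-encoded string body.
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }


if __name__ == "__main__":
    # Local smoke test: invoke the handler directly with an empty event.
    response = hello({}, None)
    print(json.loads(response["body"])["message"])
```

Because the handler is just a function taking `(event, context)`, you can unit test it locally before ever deploying it.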
To be able to reach your function from the internet, you must edit the `serverless.yml` manifest and set the `functions` section like so:
```yaml
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /
          method: get
```
To deploy the function, run `serverless deploy`.
Once the deployment is finished, information describing the function on stdout is shown, similar to:
```
...
Service Information
service: foo
stage: dev
region: us-east-1
stack: foo-dev
resources: 9
api keys:
  None
endpoints:
  GET - https://i0vh8byjr9.execute-api.us-east-1.amazonaws.com/dev/
functions:
  hello: foo-dev-hello
layers:
  None
Serverless: Run the "serverless" command to setup monitoring, troubleshooting and testing.
```
Now you can call the publicly-accessible endpoint and parse the output to get a nice message from the serverless framework:
```
curl -s https://i0vh8byjr9.execute-api.us-east-1.amazonaws.com/dev/ | jq .message
"Go Serverless v1.0! Your function executed successfully!"
```
This is straightforward. However, as mentioned in the introduction, you will most likely deploy and update your functions through a CI/CD pipeline, which could be a traditional pipeline using Jenkins, or one of the SaaS solutions available. Or, if you want your CI/CD pipeline to execute within your Kubernetes cluster and be defined through a Kubernetes-like API, you should give Tekton a try.
An introduction to Tekton concepts
Tekton is a set of API objects that can be used to describe your CI/CD pipeline and run the pipeline in a Kubernetes cluster.
Tekton provides a Kubernetes-native API (thanks to custom resources) to express CI/CD pipelines. This in itself is a worthy development in a world that has become Kubernetes-centric.
The Tekton API is composed of five key objects:

- `Task`: Describes a set of steps that will get executed within containers. The `Task` specification is similar to a Kubernetes Pod specification.
- `TaskRun`: Object that triggers the execution of a `Task` and defines the set of inputs and outputs necessary for the `Task`.
- `Pipeline`: Set of `Tasks` that need to be executed.
- `PipelineRun`: Object that triggers the execution of the `Pipeline`.
- `PipelineResource`: Defines what can be used as input and output of a `Task`.

To learn more about the `Pipeline` object, check out the tutorial on GitHub. In the final section of this article, we will concentrate on the `Task` object and use it to deploy an AWS Lambda function.
Deploying Lambdas with Tekton
Now that we have seen how to deploy an AWS Lambda function with the serverless framework and have quickly covered the Tekton concepts, let’s connect the two.
What we need to do is:

- Define a GitHub repository that contains our function code as a `PipelineResource`.
- Create a `Task` object that will run `serverless deploy`.
- Execute the `Task` by creating a `TaskRun` object referencing the `Task`.
- Check the logs of the Pod that executes the `Task` and see the function deployment happening.
The YAML manifests that are shown are all available on GitHub. To start, you can clone the repo to get easy access to the sample files:
```
git clone https://github.com/sebgoa/klr-demo
cd klr-demo
```
Or you can get the manifests directly from the GitHub repository via `curl`.
The `PipelineResource` has the following shape, which will be familiar to all Kubernetes users: an API version, a kind, the name of the object in the `metadata` section, and, in the `spec` section, the URL of the git repository that contains our function code.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: klr-demo
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value: https://github.com/sebgoa/klr-demo
```
If we decompose the `Task` object necessary to deploy the function, we see the `spec` section, as in all Kubernetes objects. The `spec` starts with a reference to a resource of type `git` in the `inputs` section, so that we can point to the git repository that contains the function code:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: serverless-deploy
spec:
  inputs:
    resources:
      - name: repository
        type: git
…
```
The `Task` then defines the `steps`. In our `Task` we only have one step, which should run `serverless deploy`. To do this, we need a container image that contains the `serverless` node package. We built this image at TriggerMesh, and it is publicly available at `gcr.io/triggermesh/serverless`. The step looks like this:
```yaml
steps:
  - name: deploy
    workingDir: '/workspace/repository'
    image: gcr.io/triggermesh/serverless
    command: ["serverless"]
    args: ["deploy"]
…
```
For this to run properly, your AWS credentials need to be available to the Kubernetes Pod that runs the step. You can pass your credentials as environment variables sourced from a Kubernetes Secret, or use a volume mount. Here is the simplest form, using two environment variables:
```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: awscreds
        key: aws_access_key_id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: awscreds
        key: aws_secret_access_key
```
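The `secretKeyRef` entries above assume that a Secret named `awscreds` exists in the same namespace. As a sketch, such a Secret could be declared like this (the credential values are placeholders, not real keys):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: awscreds
type: Opaque
stringData:
  # Placeholder values; substitute your own IAM credentials.
  aws_access_key_id: AKIAXXXXXXXXXXXXXXXX
  aws_secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

You could equivalently create it on the command line with `kubectl create secret generic awscreds --from-literal=aws_access_key_id=… --from-literal=aws_secret_access_key=…`.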
And that’s it for the `Task`. Together with the `PipelineResource`, the declaration of your short pipeline is done. To launch the execution of this pipeline (a single `Task`), you now need to write a `TaskRun`.
You can get the `TaskRun` manifest via `curl` like so:
```
curl -s https://raw.githubusercontent.com/sebgoa/klr-demo/master/deploy.yaml
```
The object again has a shape familiar from all Kubernetes objects, with the usual `apiVersion`, `kind`, `metadata`, and `spec` sections:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: deploy
spec:
…
```
In the `spec`, we set the input of the task to point to our `PipelineResource`, which defined the git repo to use. Finally, in the `taskRef` section, we point to the `Task` that does our serverless deployment:
```yaml
spec:
  inputs:
    resources:
      - name: repository
        resourceRef:
          name: klr-demo
  taskRef:
    kind: Task
    name: serverless-deploy
```
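Putting the two fragments together, the complete `TaskRun` manifest (the `deploy.yaml` file in the sample repository) reads:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: deploy
spec:
  inputs:
    resources:
      # Points at the PipelineResource defining the git repo to clone.
      - name: repository
        resourceRef:
          name: klr-demo
  # References the Task that runs `serverless deploy`.
  taskRef:
    kind: Task
    name: serverless-deploy
```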
With the objects properly configured and a Secret containing your AWS credentials, you are ready to create your objects and launch the deployment of your functions via Tekton and the serverless framework.
If you have cloned the sample repository:
```
kubectl apply -f resources.yaml
kubectl apply -f deploy.yaml
```
The following diagram depicts the key Tekton API objects (`PipelineResource`, `Task`, and `TaskRun`) and shows the basic flow: creating the `TaskRun` object creates a Pod, which runs the `serverless deploy` command to deploy the function to AWS Lambda.
The `Task` executes in a Pod, and you can follow the serverless framework output in the Pod logs, for example:
```
kubectl logs serverless-deploy-fgn9x-pod-570cb9 -c build-step-deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
2020-01-28T14:18:24.054347892Z service: aws-python-simple-http-endpoint
2020-01-28T14:18:24.054360411Z stage: dev
2020-01-28T14:18:24.054367679Z region: us-east-1
2020-01-28T14:18:24.054375006Z stack: aws-python-simple-http-endpoint-dev
2020-01-28T14:18:24.05438248Z resources: 10
2020-01-28T14:18:24.054725635Z api keys:
2020-01-28T14:18:24.054763849Z   None
2020-01-28T14:18:24.055481345Z endpoints:
2020-01-28T14:18:24.055540592Z   GET - https://1st3ojbj1d.execute-api.us-east-1.amazonaws.com/dev/ping
2020-01-28T14:18:24.055893183Z functions:
2020-01-28T14:18:24.055951609Z   currentTime: aws-python-simple-http-endpoint-dev-currentTime
2020-01-28T14:18:24.056206865Z layers:
2020-01-28T14:18:24.056266366Z   None
```
AWS Lambda users who have embraced the serverless framework to deploy their functions must develop continuous deployment automation so that changes in function code can automatically get tested and deployed. Although there are already a significant number of CI/CD solutions (including Jenkins, CircleCI, and CodePipeline), Tekton, a relative newcomer in the field, offers a Kubernetes-like API that will interest users of Kubernetes services such as Amazon Elastic Kubernetes Service (Amazon EKS). This opens the door to using Amazon EKS clusters for running containerized workloads as well as CI/CD pipelines, including the ones that drive a serverless architecture.
If you want to give Tekton a more in-depth try, check out the Task catalog. The serverless Tasks described in this article will be contributed to the catalog in the coming weeks.
Additionally, if you want to couple Tekton with Knative, you may be interested in the TriggerMesh Lambda runtime previously described in the article “Deploying AWS Lambda-Compatible Functions in Amazon EKS using TriggerMesh KLR,” as well as the Knative event sources for AWS services on GitHub, which allow you to tie your AWS services events with on-premises applications.
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.
Feature image via Pixabay.