By Ryo Hang, Solution Architect at ASCENDING
By Sean Yuan, Cloud Developer at ASCENDING
By Le Yi, DevOps at ASCENDING
By Gloria Zhang, Director, Marketing & Business Development at ASCENDING


When building a complex web service such as a serverless application, sooner or later you must deal with permission control.

Amazon Cognito is a powerful authentication and authorization service managed by Amazon Web Services (AWS) and is often combined with Amazon API Gateway and AWS Lambda to build secure serverless web services.

In this post, we will describe how to implement object-based authorization in serverless applications on AWS.

In particular, we’ll walk through the code and strategy behind a robust, scalable object-based authorization solution built with Amazon API Gateway, Lambda, and an identity provider such as Amazon Cognito. This allows us to secure each item in Amazon DynamoDB for different identities.

ASCENDING is an AWS Select Consulting Partner, Public Sector Partner, and AWS Lambda Service Delivery Partner.

The Challenge

Let’s take a look at a real-life scenario. In a payroll system, we have Employee A, Employee B, and their manager, Employee C. They can all use their username and password to log in to the payroll application. This is authentication.

All of them can retrieve their paystubs through role-based authorization using:

REST API GET /api/employee/{employee_id}/paystubs

We can define that only users with the employee role can access that particular API. However, we need extra protection to specify that Employee A can only request his or her own paystubs.


We want to make sure Employee A can’t access others’ paystubs. We also need to grant Employee C (the manager) permission to access not only their own paystubs but also those of Employees A and B.


Figure 1 – Role and object authorization.

This scenario is object-based authorization. Popular frameworks such as Spring and Django, as well as Node.js libraries, all provide ways to implement it. Next, we’ll discuss how to implement object-based authorization in an AWS serverless architecture.

The Solution

We were able to build a solution relying completely on AWS managed services that fulfills the current requirements and meets future scaling needs.


Figure 2 – Architectural diagram.

These are the main processes in the solution:

  1. An AWS Lambda function is triggered before the API request is processed and decides whether to accept or deny it.
  2. The function evaluates a policy and ACL object to determine if the current user has access to the requested object.
  3. The policy object is cached to accelerate the authorization process in a large-scale application.

AWS Lambda Authorizer

The AWS team has integrated Lambda functions into most managed services since the service launched in 2015. For each request to Amazon API Gateway, developers can turn on an AWS Lambda authorizer, which returns a policy that allows or denies the request.

The authorizer serves as the central brain that determines whether the API requester has permission to access the relevant API resources.

Here’s how to implement it:

  • Create authorizers in Amazon API Gateway.


  • Assign the AWS Lambda function to any of the API resources you would like to secure.


  • Configure the AWS Lambda authorizer. You can follow the AWS documentation to configure it in the Amazon API Gateway console.

Next, we’ll go into detail about how permission models are built and updated. The authorizer leverages that information to determine if it should allow or deny the request.
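Before wiring up the ACL logic, it helps to see the bare contract Amazon API Gateway expects from a TOKEN-type Lambda authorizer. The sketch below is a minimal placeholder of our own: the 'allow-me' token check stands in for real validation against your IDP, and the principalId is a dummy value.

```python
def lambda_handler(event, context):
    # API Gateway passes the bearer token and the ARN of the invoked method
    token = event.get('authorizationToken', '')
    method_arn = event['methodArn']

    # Placeholder check: a real authorizer validates the token with the IDP
    effect = 'Allow' if token == 'allow-me' else 'Deny'

    # The authorizer must return a principalId plus an IAM policy document;
    # API Gateway evaluates the policy to accept or reject the request
    return {
        'principalId': 'user|placeholder',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': effect,
                'Resource': method_arn,
            }],
        },
    }
```

Everything that follows in this post is about replacing that placeholder check with a real, per-object permission lookup.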

Building the User Access Control List (ACL)

Building user permissions and keeping them up to date is a non-trivial task. The solution varies depending on the identity provider (IDP) you use. Let’s take Amazon Cognito as an example. You’ll need to:

  1. Initialize the ACL
  2. Maintain the ACL
  3. Generate a policy from the ACL
  4. Evaluate the policy

Step 1: Initialize the ACL

Because each user has a unique identifier, we can insert an item into the Amazon DynamoDB permission table describing the user’s permissions upon registration with Amazon Cognito. In this example, we use short integer IDs for Employees A, B, and C; in a real application, the UUID strings are quite long.

Employee A:
uuid: xxxx-xxx-xxx-xxxx-xxxxxxxx
employee: {"allow": ["/5", "/5/*"]}
paystubs: {"allow": [1, 2]}

Employee B:
uuid: xxxx-xxx-xxx-xxxx-xxxxxxxx
employee: {"allow": ["/6", "/6/*"]}
paystubs: {"allow": [3, 4]}

Manager C:
uuid: xxxx-xxx-xxx-xxxx-xxxxxxxx
employee: {"allow": ["*"], "deny": ["/7", "/7/*"]} // employee_id=7 is the manager of C
paystubs: {"allow": [1, 2, 3, 4]}

As the project grows, we can expand the number of columns to describe the access control to other types of objects in the permission table.

This permission table in DynamoDB essentially stores the ACL for each user; the implementation could be different based on your application use cases.

We implemented both Allow and Deny for better evaluation logic. For example, a manager could have access to most employees’ paystubs, but not to those of his or her own supervisors. Using an explicit Deny is much more effective in that case. The logic is very similar to AWS Identity and Access Management (IAM) policy evaluation logic.
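The deny-overrides-allow logic can be sketched as a small helper. This is our own illustration, not code from the solution; the wildcard matching uses Python’s fnmatch as a simplified assumption, whereas IAM’s real matching rules are richer.

```python
from fnmatch import fnmatch

def is_allowed(acl, path):
    """Evaluate an ACL entry such as {'allow': ['*'], 'deny': ['/7', '/7/*']}.

    An explicit deny always overrides an allow, mirroring IAM evaluation.
    """
    # An explicit deny wins first
    for pattern in acl.get('deny', []):
        if fnmatch(path, pattern):
            return False
    # Then look for a matching allow
    for pattern in acl.get('allow', []):
        if fnmatch(path, pattern):
            return True
    # Default: deny
    return False

# Manager C from the table above: access to everyone except employee 7
manager_c = {'allow': ['*'], 'deny': ['/7', '/7/*']}
```

With this entry, `is_allowed(manager_c, '/5')` is true while `is_allowed(manager_c, '/7/paystubs')` is false, even though the allow list contains a wildcard.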

Code Sample

Amazon Cognito is quite flexible to customize. You can trigger a Lambda function that builds an ACL for a particular user on the post-confirmation event. We use the following code sample in the Amazon Cognito post-confirmation trigger.

import os
import boto3

def lambda_handler(event, context):
    # The user's unique identifier (sub) from the Cognito event
    uuid = event["request"]["userAttributes"]["sub"]
    table = boto3.resource('dynamodb').Table(os.environ.get('AUTH_TABLE'))
    response = table.put_item(
        Item={
            'uuid': uuid,
            'employee': {
                'allow': {
                    '/employee': 'GET',
                    '/employee/' + uuid: '*',
                }
            },
            'paystubs': {
                'allow': {
                    '/paystubs/5': 'GET',
                }
            },
        }
    )
    return event

Step 2: Maintain the ACL

In a complex project, user permissions are dynamic and change all the time. Let’s consider two simple scenarios:

  • Employee A has been promoted to manager, so they will have access not only to their own paystubs but also to the paystubs of their reports.
  • The system automatically generates paystubs for each employee, so the ACL has to be updated with the new paystub objects for each employee.

We chose Amazon DynamoDB Streams to manage user permissions:


We can easily trigger a Lambda function whenever an item in the table is modified or inserted.
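For reference, here is a trimmed sketch of the event a DynamoDB stream delivers to the Lambda function, with a tiny helper that pulls out the fields the handler in this section relies on. The attribute names (userId, paystubId) follow our example tables; real events carry extra metadata such as eventID, eventSourceARN, and SequenceNumber.

```python
# A trimmed example of the event a DynamoDB stream delivers to Lambda.
sample_event = {
    'Records': [{
        'eventName': 'INSERT',  # INSERT | MODIFY | REMOVE
        'dynamodb': {
            'Keys': {'userId': {'S': 'xxxx-xxx-xxx-xxxx'}},
            'NewImage': {  # absent for REMOVE events
                'paystubId': {'N': '5'},
                'userId': {'S': 'xxxx-xxx-xxx-xxxx'},
            },
        },
    }],
}

def extract_changes(event):
    """Yield (principalId, eventName, new_image) for each stream record."""
    for record in event['Records']:
        principal = record['dynamodb']['Keys']['userId']['S']
        yield principal, record['eventName'], record['dynamodb'].get('NewImage')
```

Note that stream records wrap every attribute in a DynamoDB type descriptor ('S', 'N', 'M', and so on), which is why the handlers below index into those keys.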

Again, for our example, we can attach a Lambda function to the stream of the paystub table, which includes paystub information as well as employee and other information. The Lambda function runs for any new entry or update to an existing entry.

import os
import boto3

def lambda_handler(event, context):
    table = boto3.resource('dynamodb').Table(os.environ.get('AUTH_TABLE'))
    # Read the DynamoDB stream from the paystub table
    for record in event['Records']:
        principalId = record['dynamodb']['Keys']['userId']['S']
        # If a paystub record is deleted, remove it from the ACL table
        if record['eventName'] == 'REMOVE':
            response = table.update_item(
                Key={'uuid': principalId},
                UpdateExpression="set paystub.allow=:a",
                ExpressionAttributeValues={
                    # ':a': new paystub list without the deleted item
                },
            )
        # Otherwise, update the paystub entry in the ACL table
        else:
            new_image = record['dynamodb']['NewImage']
            response = table.update_item(
                Key={'uuid': principalId},
                UpdateExpression="set paystub.allow=:a",
                ExpressionAttributeValues={
                    # ':a': new paystub list built from new_image
                },
            )
    return event

Step 3: Generate a Policy from the ACL

Since we built an ACL for user permissions on each object, it’s time to generate an authorization policy (auth policy) document for the Lambda authorizer to evaluate whether it should allow or deny the user’s API request.

import os
import boto3

def lambda_handler(event, context):
    token = event['authorizationToken']
    client = boto3.client('cognito-idp')
    response = client.get_user(AccessToken=token)
    # Get the user's uuid (Cognito sub)
    principalId = response['UserAttributes'][0]['Value']
    # Configure your policy: restApiId, region, stage, etc.
    policy = AuthPolicy(principalId, awsAccountId)
    # Get rules from the ACL table
    client = boto3.client('dynamodb')
    response = client.get_item(
        TableName=os.environ.get('AUTH_TABLE'),
        Key={'uuid': {"S": principalId}})
    # Add the user's rules to the policy
    for k, v in response['Item'].items():
        if k != 'uuid':
            for path, method in v['M']['allow']['M'].items():
                policy.allowMethod(method['S'], path)
    # Build the policy (build() is provided by the AuthPolicy blueprint class)
    authResponse = policy.build()
    return authResponse

Step 4: Evaluate the Policy

As you can see, the Lambda function builds the following sample policy JSON from the ACL table, based on the user’s UUID (principalId).

{
  "principalId": "xxxxxxx", // the principal user identification associated with the token sent by the client
  "policyDocument": { // example policy shown below, but this value can be any valid policy
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["execute-api:Invoke"],
        "Resource": [
          "arn:aws:execute-api:us-east-1:xxxxxxxxxxxx:xxxxxxxx:/test/*/employee/5"
        ]
      },
      {
        "Effect": "Allow",
        "Action": ["execute-api:Invoke"],
        "Resource": [
          "arn:aws:execute-api:us-east-1:xxxxxxxxxxxx:xxxxxxxx:/test/*/employee/5/paystub"
        ]
      }
    ]
  }
}

These steps complete the policy generation and management.

Amazon API Gateway has an internal policy evaluation mechanism. As long as the AWS Lambda authorizer returns standard policy JSON, Amazon API Gateway evaluates it automatically. The official documentation describes the output format of the AWS Lambda authorizer.
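If you’d rather not pull in the full blueprint, the core of the policy builder fits in a few lines. The class below is our own stripped-down sketch that mimics the blueprint’s allowMethod/build interface; the default restApiId, region, and stage values are placeholders.

```python
class SimpleAuthPolicy:
    """A stripped-down sketch of the blueprint's AuthPolicy class."""

    def __init__(self, principal_id, aws_account_id,
                 rest_api_id='xxxxxxxx', region='us-east-1', stage='test'):
        self.principal_id = principal_id
        # ARN prefix shared by every execute-api resource in this API stage
        self.arn_prefix = ('arn:aws:execute-api:%s:%s:%s/%s'
                           % (region, aws_account_id, rest_api_id, stage))
        self.statements = []

    def allowMethod(self, verb, resource):
        # '*' as the verb matches any HTTP method in the resource ARN
        self.statements.append({
            'Effect': 'Allow',
            'Action': ['execute-api:Invoke'],
            'Resource': ['%s/%s%s' % (self.arn_prefix, verb, resource)],
        })

    def build(self):
        # Assemble the response shape API Gateway expects from an authorizer
        return {
            'principalId': self.principal_id,
            'policyDocument': {
                'Version': '2012-10-17',
                'Statement': self.statements,
            },
        }
```

The real blueprint adds deny statements, verb validation, and path pattern checks on top of this shape.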

However, there’s a small caveat with the AWS Lambda authorizer: for each API request, we have to repeat the heavy computation of generating the policy. If we inspect the AWS Lambda authorizer at runtime, we notice the auth policy JSON is mostly identical for the same user.

Caching for Scaling

In large-scale applications, the policy document can be rather large and take some time to generate. To improve performance and future scalability, we can cache it in a cache server.

We recommend Amazon ElastiCache for Redis, another managed service that is great for reducing maintenance costs. Follow these steps to set it up.

Step 1: Read the Policy

We can leverage the Redis key-value store to read and write policy documents as values keyed by a particular user’s UUID. The AWS Lambda authorizer turns into the following code logic: it no longer builds the policy in real time, but instead reads the policy from the Redis cache first.

import os
import json
import boto3
import redis

def lambda_handler(event, context):
    token = event['authorizationToken']
    client = boto3.client('cognito-idp')
    response = client.get_user(AccessToken=token)
    # Find the user's sub id
    principalId = response['UserAttributes'][0]['Value']
    r = redis.Redis(host=os.environ.get('REDIS_HOST'),
                    port=os.environ.get('REDIS_PORT'), db=0)
    # If the auth policy exists in ElastiCache, fetch it from the cache
    if r.exists(principalId):
        return json.loads(r.get(principalId))
    # Otherwise, build the policy from the ACL table and store it in ElastiCache
    awsAccountId = 'your_aws_accountID'
    policy = AuthPolicy(principalId, awsAccountId)
    authResponse = policy.build()
    # Cache the auth policy in ElastiCache for future usage
    r.set(principalId, json.dumps(authResponse))
    return authResponse

Step 2: Update the Policy

Of course, we also have to handle authorization policy updates. When the Amazon DynamoDB permission table receives a new entry or an update to an existing entry, it triggers a Lambda function that writes to the Redis cache. This is very similar to how we handled updates to the permission table earlier.

import os
import json
import boto3
import redis

def lambda_handler(event, context):
    r = redis.Redis(host=os.environ.get('REDIS_HOST'),
                    port=os.environ.get('REDIS_PORT'), db=0)
    # Read the DynamoDB stream
    for record in event['Records']:
        principalId = record['dynamodb']['Keys']['uuid']['S']
        # If the ACL record is deleted, delete the cached policy as well
        if record['eventName'] == 'REMOVE':
            r.delete(principalId)
        else:
            # Generate a policy from the updated ACL table
            awsAccountId = 'your_aws_accountID'
            policy = AuthPolicy(principalId, awsAccountId)
            client = boto3.client('dynamodb')
            response = record['dynamodb']['NewImage']
            # read policy from auth_table here
            # policy.allowMethod(xxx, xxx)
            authResponse = policy.build()
            # Update the auth policy in ElastiCache
            r.set(principalId, json.dumps(authResponse))


Serverless architectures on AWS have evolved significantly in the last few years. When we started exploring AWS Lambda functions in late 2015, we used them only for internal processes.

As more and more AWS managed services gained Lambda integrations, we started to build highly scalable, available, and cost-effective serverless applications for our clients.

We were able to write a few Lambda functions and integrate them with Amazon API Gateway, Amazon Cognito, and Amazon DynamoDB to build sophisticated object-based authorization. Then, we integrated Amazon ElastiCache for Redis for scaling.

You can find AWS Lambda authorizer blueprints in the most popular languages on GitHub. Our code repository was built on top of them.

These GitHub repositories also include code samples and references.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.


ASCENDING – AWS Partner Spotlight

ASCENDING is an AWS Select Technology Partner committed to delivering scalable, highly available serverless and containerized solutions to customers.

Contact ASCENDING | Partner Overview | AWS Marketplace

*Already worked with ASCENDING? Rate the Partner

*To review an AWS Partner, you must be a customer that has worked with them directly on a project.