In large enterprise organizations, it’s challenging to maintain standardization across environments. This is especially true when environments are provisioned in a self-service manner, and even more so when new users access those provisioning services. Once resources are deployed into an environment, it can be hard, or even impossible, to change them.
When a failure occurs, any manually created resources that survive require maintenance, and every change to the environment must be documented. Migration and testing range from difficult to practically impossible in some cases.
Until now, the main options for solving these problems have been, first, cloning the AWS Lambda functions manually, which takes less time and effort up front but isn’t sufficient when disasters or environment failures are taken into account. The second approach is to automate those AWS Lambda functions using an automation tool, which reduces the clone time and the deployment effort when unforeseen events happen.
In this post, I describe how AWS and Lexis Nexis found a balance between agility, governance, and standardization for AWS resources. I walk you through one of the solutions that we use to automate resource creation and provisioning by leveraging an Infrastructure as Code approach. The solution uses the AWS SDK for Python (Boto3) together with AWS Lambda and AWS CloudFormation.
Using this solution allows relatively quick and painless resource migration—specifically, that of Lambda functions—into any other AWS account or region by leveraging AWS CloudFormation templates.
Lexis Nexis was using AWS services in one account and one Region. While that is convenient to maintain, it is also hard to recreate and govern in case of a disaster. Their main goal was to automate AWS resource creation and provisioning by separating environments into Dev and Prod. In addition, they needed to remove version maintenance so that every change would appear immediately within the affected resources. The majority of the resources deployed in the account were Lambda functions, so they needed a solution that leveraged automatic deployment and saved the time and effort of manually creating templates for Lambda functions.
The solution provides a framework that delivers better user experience and easier operations for cloning existing resources in compliance with Lexis Nexis’ governance practices. Since the Lambda functions were already deployed into a development account, the solution uses the function’s existing settings without provisioning or adding any new components while replicating the resources. Once the framework runs over the target Lambda functions, it creates AWS CloudFormation templates to be deployed as part of the Lexis Nexis Jenkins CI pipeline.
Once it was clear which functions needed to be automated and deployed across multiple environments, the function names needed to be passed to the framework. The framework leverages the AWS SDK for Python, AWS CloudFormation, and the existing Lambda functions in the target account. It receives a list of functions to recreate from a target account as input, then uses API calls to retrieve all of the content of each Lambda function, including environment variables and the deployment package, so all of the dependencies and code remain untouched.
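The retrieval step above can be sketched as follows. The function and field names here are illustrative, not taken from the actual framework; in practice, the response dictionary would come from a call such as `boto3.client("lambda").get_function(FunctionName=name)`, and the parsing is shown as a pure function to make the shape of the data clear.

```python
def extract_function_settings(get_function_response):
    """Pull out the settings needed to recreate a Lambda function elsewhere.

    `get_function_response` is the dict returned by the Lambda GetFunction
    API (e.g. via boto3's lambda_client.get_function).
    """
    config = get_function_response["Configuration"]
    return {
        "Runtime": config["Runtime"],
        "Handler": config["Handler"],
        "MemorySize": config["MemorySize"],
        "Timeout": config["Timeout"],
        # Environment variables are optional on a Lambda function.
        "Environment": config.get("Environment", {}).get("Variables", {}),
        # Pre-signed S3 URL for downloading the deployment package (zip),
        # so the code and its dependencies remain untouched.
        "CodeLocation": get_function_response["Code"]["Location"],
    }
```

Capturing the pre-signed package URL alongside the configuration is what lets the framework replicate a function without rebuilding or repackaging its code.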
Once the critical information is received, the software uses a predefined AWS CloudFormation template locally for each of the target Lambda functions. On top of the generic template, all of the Lambda variables are injected so the result remains true to the target Lambda function. The last stage is saving the local template into an output directory that contains a ready-to-deploy template. The moment the final template is generated, it can be deployed to any AWS Region and can also be duplicated across different AWS accounts without restrictions, as shown in the following screenshot.
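A minimal sketch of this template-generation step might look like the following. The template structure and helper names are assumptions for illustration, and the deployment package is assumed to have been re-uploaded to an S3 bucket reachable from the destination account and Region.

```python
import copy
import json
import os

# Generic skeleton shared by every generated template; the per-function
# settings are injected into the empty Properties block below.
GENERIC_TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "TargetFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {},
        }
    },
}

def render_template(function_name, settings, output_dir="output"):
    """Inject one function's captured settings into the generic template
    and save a ready-to-deploy CloudFormation template to output_dir."""
    template = copy.deepcopy(GENERIC_TEMPLATE)
    props = template["Resources"]["TargetFunction"]["Properties"]
    props.update({
        "FunctionName": function_name,
        "Runtime": settings["Runtime"],
        "Handler": settings["Handler"],
        "MemorySize": settings["MemorySize"],
        "Timeout": settings["Timeout"],
        "Environment": {"Variables": settings["Environment"]},
        # Assumed: the downloaded package was re-uploaded to this bucket/key.
        "Code": {"S3Bucket": settings["S3Bucket"], "S3Key": settings["S3Key"]},
        "Role": settings["RoleArn"],
    })
    os.makedirs(output_dir, exist_ok=True)
    path = os.path.join(output_dir, f"{function_name}.template.json")
    with open(path, "w") as f:
        json.dump(template, f, indent=2)
    return path
```

Because the output is plain CloudFormation, the generated file can be handed to any deployment pipeline, such as the Jenkins CI pipeline mentioned above, and launched in any account or Region.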
This solution is just one of many ways that Lexis Nexis helps developers provision compliant resources and deploy their Lambda functions in a more scalable way to the AWS cloud. Lexis Nexis and AWS want to empower new cloud users to provision and deploy resources faster, with fewer clicks, but also in a reusable manner that follows audit and compliance requirements.
About the Author
Ido Michael is a Data Engineering Consultant in the Global Specialty Practice in AWS Professional Services. He focuses on building data and analytics solutions at scale with open-source technologies and AWS. In his spare time, Ido travels and climbs mountains.
from AWS Management & Governance Blog: https://aws.amazon.com/blogs/mt/duplicating-infrastructure-on-aws/