By Warwick Levey, VP Sales & Marketing – Silicon Overdrive
By Jabu Sibanyoni, Sr. Solutions Architect – AWS
By Yusuf Mayet, Partner Solutions Architect – AWS
Due to the COVID-19 pandemic and subsequent worldwide lockdown restrictions, students and professionals have found themselves working from home more than ever before.
Because of these restrictions, more people turned to online solutions to maintain productivity and some sense of normality. For most businesses, that meant adopting online meeting tools, but higher education required a more robust solution.
To accomplish this, universities either deployed new or expanded existing Learning Management Systems (LMS). One of the popular LMS options is Sakai, a full-featured Collaboration and Learning Environment (CLE).
Many universities already running Sakai had deployed their environments on-premises, on existing hardware. While that was a feasible solution when most students were studying on campus, the increased demand from students and faculty working from home meant those on-premises deployments struggled with performance and could not scale to meet demand.
To address this, universities turned to Amazon Web Services (AWS) to deploy their LMS solutions like Sakai in the cloud.
In this post, we’ll cover best practices for achieving operational excellence to ensure optimal scaling when implementing Sakai LMS on AWS.
The Silicon Overdrive team worked with the AWS teams to architect and implement the Sakai solution on AWS for a South African university. We’ll identify the various components of a typical Sakai implementation in order to improve performance, resiliency, and cost efficiency, culminating in a highly available, multi-server, scalable architecture.
Silicon Overdrive is an AWS Advanced Consulting Partner and member of the AWS Public Sector Partner and Well-Architected Partner programs. Silicon Overdrive also holds the AWS DevOps Competency.
What is Sakai?
Sakai is an open source Learning Management System (LMS) that’s primarily focused on higher education and has been adopted by hundreds of institutions across the world.
It provides users learning, portfolio, library, and project tools, among others, and allows students to upload assignments, complete tests, and interact with educators and classmates alike.
Sakai is flexible and can be configured for a variety of specialized audiences; it’s capable of scaling to the most demanding environments.
Sakai allows users to create and control their sites and gives them the choice of which tools to include in the sites they create. The tools include chat, forums, wiki, polls, search capabilities, and more that can enhance group coherence and allow for intuitive interaction.
Benefits of Running Sakai in AWS
When students and faculty began working from home in greater volumes, many existing on-premises deployments for LMS solutions could not cope with the increased workloads. That meant universities were left with two options.
- Make a large capital investment by purchasing and provisioning new hardware. Beyond the capital expense, this approach also takes longer to deploy, since the new hardware must be delivered and installed before any work can begin.
- Migrate to the cloud, which allows for a faster deployment and shifts the cost from CapEx to OpEx.
The pandemic arrived without warning and adversely impacted many aspects of life. The higher education sector didn't have the luxury of time to scale its LMS systems, so the only viable solution was to migrate those workloads to the cloud.
Besides deployment speed, benefits of migrating an LMS to the AWS Cloud include:
Elasticity and Scalability
By using AWS, universities don’t need to guess what their capacity requirements will be. Instead, they can provision new resources on demand.
Since AWS is elastic, they can scale down resources when demand drops, eliminating the need to future-proof hardware.
Security
Using AWS, you gain the control and confidence you need to securely run your organization in a flexible and secure cloud computing environment.
AWS takes an end-to-end approach to securing and hardening its infrastructure, including physical, operational, and software measures.
Reliability
One of the fundamental concerns with any on-premises deployment is reliability. During load shedding or a local power outage, a university must run a generator to keep its servers online.
Furthermore, if there is a catastrophic failure of the on-premises hardware, most institutions will not have a backup solution they can roll over to.
By implementing Sakai using Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS) and Amazon Elastic File System (Amazon EFS) across multiple AWS Availability Zones (AZs), universities can achieve a durable and highly available solution.
All of this ensures students and faculty members have uninterrupted access to the Sakai LMS environment.
Reference Architecture
The architecture below shows how to implement Sakai on AWS. The solution uses Amazon CloudFront to deliver static and dynamic content, while the Internet Gateway (IGW) allows communication between instances in a virtual private cloud (VPC) and the internet.
The Network Address Translation (NAT) Gateways in each public subnet enable AWS instances in private subnets to access the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
Placing Sakai instances inside Auto Scaling Groups helps ensure you have the correct number of EC2 instances available to handle the load for your application. Amazon RDS simplifies your database administration by automating time-consuming tasks such as hardware provisioning, database setup, patching, and backups.
Amazon EC2 instances seamlessly access shared Sakai data in Amazon EFS using EFS Mount Targets in each Availability Zone. Amazon EFS also provides Sakai instances a simple, scalable, fully managed elastic network file system to share data.
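As an illustration of the Auto Scaling piece, the request parameters for a target-tracking scaling policy on the Sakai Auto Scaling group could be sketched as below. The group name and the CPU target value are assumptions for illustration, not values from the reference deployment.

```python
# Hypothetical sketch: target-tracking scaling policy parameters for a
# Sakai Auto Scaling group. The group name and 60% CPU target are
# assumptions; tune them to your workload.
def sakai_scaling_policy(asg_name: str, target_cpu: float = 60.0) -> dict:
    """Build request parameters for Auto Scaling's put_scaling_policy."""
    return {
        "AutoScalingGroupName": asg_name,
        "PolicyName": "sakai-cpu-target-tracking",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": target_cpu,
        },
    }
```

With credentials configured, a dict like this could be passed to `boto3.client("autoscaling").put_scaling_policy(**params)`; the group then adds or removes Sakai instances to hold average CPU near the target.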
Figure 1 – Sakai hosting on AWS.
Architecting Sakai for Reliable Scalability
Achieving reliable scalability requires building solutions that cater to current scale requirements as well as the projected growth of the solution. This growth can be either organic over time or event-related.
Event-related growth is what we saw during the pandemic, when governments introduced lockdowns and student engagements moved online. This resulted in increased web traffic on learning management systems such as Sakai.
Following are primary design and configuration guidelines for Sakai on AWS to achieve reliability in the context of scale.
Load Balancing
Sakai web servers should be placed behind Elastic Load Balancing to distribute incoming traffic across multiple servers, as well as across multiple Availability Zones.
Distributing traffic across multiple Sakai web servers and AZs improves availability, enables automatic scaling, and makes the application fault tolerant: if one server or AZ fails, the load balancer routes traffic to the healthy ones.
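As a sketch, the target group behind the load balancer needs health check settings so that failed Sakai instances are taken out of rotation. The port and health check path below are assumptions (Tomcat's default port and Sakai's portal URL); adjust them to your deployment.

```python
# Hypothetical sketch: request parameters for ELBv2 create_target_group.
# Port 8080 and the /portal health check path are assumptions about a
# typical Tomcat-based Sakai setup; your deployment may differ.
def sakai_target_group_params(vpc_id: str) -> dict:
    """Build request parameters for a Sakai web target group."""
    return {
        "Name": "sakai-web",
        "Protocol": "HTTP",
        "Port": 8080,                    # assumed Tomcat port
        "VpcId": vpc_id,
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/portal",    # assumed Sakai endpoint
        "HealthyThresholdCount": 3,
        "UnhealthyThresholdCount": 2,
    }
```

A dict like this could be passed to `boto3.client("elbv2").create_target_group(**params)` before registering the Sakai instances as targets.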
Resizing the Database
The Sakai database implemented using Amazon RDS can be scaled up vertically to handle a higher load. This can be achieved by resizing the primary database instance with a few clicks in the console or a single API call.
Amazon RDS offers more than 18 instance sizes to choose from, allowing you to choose the best resource for your database server.
You can scale your RDS configuration up or out as the needs of your Sakai application grow. With Amazon RDS for MySQL, you also need to ensure that MySQL parameters, such as tmp_table_size and max_heap_table_size, are configured correctly and follow best practices.
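As an illustration of tuning those parameters, the entries for an RDS DB parameter group could be built as below. MySQL caps in-memory temporary tables at the smaller of tmp_table_size and max_heap_table_size, so the two should be set to the same value. The 256 MB default here is an assumption for illustration, not a Sakai recommendation.

```python
# Hypothetical sketch: parameter entries for RDS modify_db_parameter_group.
# MySQL uses the smaller of tmp_table_size and max_heap_table_size as the
# cap on in-memory temporary tables, so both are kept equal here.
def sakai_db_parameters(tmp_table_mb: int = 256) -> list:
    """Build matching tmp_table_size/max_heap_table_size parameters.
    The 256 MB default is an assumption; size it to your workload."""
    size = str(tmp_table_mb * 1024 * 1024)  # value in bytes, as a string
    return [
        {"ParameterName": "tmp_table_size",
         "ParameterValue": size, "ApplyMethod": "immediate"},
        {"ParameterName": "max_heap_table_size",
         "ParameterValue": size, "ApplyMethod": "immediate"},
    ]
```

The list could then be supplied as the `Parameters` argument to `boto3.client("rds").modify_db_parameter_group(...)` for the parameter group attached to the Sakai database.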
The best practice here is that changes to your Sakai workload or its environment must be anticipated and accommodated to achieve reliable scalability. Educational institutions know they'll experience a spike in demand on their Sakai workload whenever a large number of students are writing exams concurrently.
Monitoring Sakai Resources
You can monitor the components of the workload with Amazon CloudWatch or third-party tools. Logs and metrics are powerful tools to gain insight into the health of your Sakai workload. Monitoring is critical to ensure you’re meeting availability requirements.
To monitor your Amazon RDS database, you can use the Performance Insights feature, which expands on existing RDS monitoring features to illustrate your database's performance and help you analyze any issues affecting it.
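As an example of proactive monitoring, a CloudWatch alarm on the database's CPU could be defined with parameters like the following. The alarm name, threshold, and evaluation window are assumptions for illustration.

```python
# Hypothetical sketch: request parameters for CloudWatch put_metric_alarm
# on an RDS instance's CPU. The 80% threshold over three 5-minute
# periods is an assumption; tune it to your availability targets.
def rds_cpu_alarm(db_instance_id: str, threshold: float = 80.0) -> dict:
    """Build request parameters for a high-CPU alarm on an RDS instance."""
    return {
        "AlarmName": f"sakai-rds-cpu-high-{db_instance_id}",
        "Namespace": "AWS/RDS",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "DBInstanceIdentifier",
                        "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 300,              # 5-minute datapoints
        "EvaluationPeriods": 3,     # sustained for 15 minutes
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }
```

Passed to `boto3.client("cloudwatch").put_metric_alarm(**params)`, an alarm like this can notify operators (for example via Amazon SNS) before database load affects students.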
Load Testing Your Sakai Workload
You must adopt a load testing methodology to determine whether scaling activity will meet Sakai workload requirements. Cloud solution architects need to be aware of the importance of performing sustained load testing.
Load tests help you discover the breaking point of your Sakai workload. AWS makes it easy to set up temporary testing environments that model the scale of your production workload.
In the cloud, you can cost-effectively create a production-scale test environment on demand, complete your testing, and then decommission the resources.
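The core of such a test can be sketched as a small concurrency harness: fire a batch of requests with a fixed number of workers and record per-request latency. The request function is injected so the sketch stays self-contained; in practice it would be an HTTP GET against the test environment's load balancer URL.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Minimal load-test sketch: run `total` requests with `concurrency`
# workers and return per-request latencies in seconds. This is an
# illustration of the idea, not a substitute for a real load-testing
# tool run against a production-scale test environment.
def measure_latency(request_fn, concurrency: int = 20, total: int = 100):
    """Return a list of per-request latencies for `request_fn`."""
    def timed(_):
        start = time.perf_counter()
        request_fn()                       # one request, e.g. an HTTP GET
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed, range(total)))
```

Ramping `concurrency` up across runs while watching the latency distribution is one simple way to find the point where the Sakai workload stops scaling gracefully.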
Leveraging Amazon EFS
Leveraging Amazon EFS as part of the Sakai solution architecture provides a simple, scalable, fully managed elastic Network File System (NFS) for use with your Sakai web servers.
Amazon EFS is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. This eliminates the need to provision and manage capacity to accommodate growth.
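On each Sakai web server, the file system is attached over NFSv4.1. A helper that builds the mount command is sketched below; the mount point is an assumption, and the DNS name format assumes the default VPC DNS settings.

```python
# Hypothetical sketch: build an NFSv4.1 mount command for an EFS file
# system. The /var/sakai mount point is an assumption, and the DNS name
# assumes default VPC DNS resolution for EFS mount targets.
def efs_mount_command(fs_id: str, region: str,
                      mount_point: str = "/var/sakai") -> str:
    """Return a mount command string for the given EFS file system."""
    dns = f"{fs_id}.efs.{region}.amazonaws.com"
    # Large read/write sizes and hard mounts follow the general
    # NFS option recommendations for EFS.
    opts = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"mount -t nfs4 -o {opts} {dns}:/ {mount_point}"
```

Because every Sakai instance mounts the same file system, uploaded assignments and other shared content stay consistent as the Auto Scaling group adds or removes servers.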
Leveraging the Content Delivery Network
Leveraging Amazon CloudFront and its edge networking locations as part of the Sakai solution architecture enables your application to scale rapidly and reliably at a global level, without adding any complexity to the solution.
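To make this concrete, a cache behavior for Sakai's static assets might look like the sketch below. The path pattern and TTLs are assumptions for illustration; tune them to what your Sakai skin and content paths actually serve.

```python
# Hypothetical sketch: a CloudFront cache behavior for static Sakai
# assets. The path pattern and TTL values are assumptions; dynamic
# portal pages would use a separate behavior with caching disabled.
def static_cache_behavior(path_pattern: str = "/library/*") -> dict:
    """Build a cache behavior fragment for static content."""
    return {
        "PathPattern": path_pattern,
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "DefaultTTL": 86400,       # cache static assets for a day
        "MaxTTL": 31536000,        # allow up to a year if headers permit
        "Compress": True,          # gzip/brotli at the edge
    }
```

Serving static assets from edge locations this way offloads repeat requests from the Sakai web servers, leaving their capacity for dynamic portal traffic.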
Conclusion
In this post, we covered best practices for achieving operational excellence to ensure optimal scaling when implementing Sakai LMS on AWS. The design principles discussed here act as the foundational pillars to support reliable scalability.
We explored the benefits of running Sakai on AWS, shared a reference architecture for the implementation, and showed how to architect Sakai for reliable scalability.
Silicon Overdrive – AWS Partner Spotlight
Silicon Overdrive is an AWS Advanced Consulting Partner with a focus on achieving and maintaining data security safeguards and compliance.