The adoption of the ISO 20022 messaging standard by the financial industry will benefit all participants across the payments chain: banks, market infrastructures, corporates, and consumers. By moving the SWIFT messaging and communications infrastructure stack onto AWS, customers can speed their adoption of ISO 20022. At the same time, they can reduce costs and improve the security and resiliency of their critical payments channel. AWS Cloud adoption will also enable banks to be more agile and to innovate using the rich data model of ISO 20022, providing their own customers with an improved payments experience.
SWIFT Connectivity Architecture Types
Customers have several options to connect to SWIFT, each with varying architectures. There are three common architecture patterns:
|Use-Case|Messaging Interface|Communication Interface + Security|
|---|---|---|
|Full Stack|Customer fully owns and operates|Customer owns and operates|
|Partial Stack|Customer owns and operates|Customer outsources to a Service Bureau to host|
|Lite2|Customer owns and operates|Customer owns and operates|
While this blog post describes a migration approach from a Full Stack on-premises SWIFT infrastructure to AWS, the same architecture principles apply to the other two patterns.
The reference architecture shown preceding represents an example pattern for a Full Stack SWIFT implementation on AWS. SWIFT connectivity is considered critical infrastructure for financial institutions and must achieve high availability to facilitate cross-border and high-value payments. The top priorities are therefore resiliency and security. The primary SWIFT connectivity stack should be deployed across multiple Availability Zones (AZs) for high availability and in multiple AWS Regions for disaster recovery (DR). Availability Zones enable customers to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single on-premises data center. For DR, the same topology is deployed to a second Region. The primary and secondary Regions should be chosen carefully to minimize the latency of cross-site replication while also satisfying regulatory requirements.
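As a sketch of the multi-AZ layout, each tier of the stack would typically get one subnet per Availability Zone. The illustrative helper below builds `create_subnet` parameters; the VPC ID, AZ names, and CIDR ranges are placeholders, not values from the reference architecture.

```python
# Illustrative only: build per-AZ subnet parameters so the SWIFT stack is
# spread across Availability Zones (all identifiers here are placeholders).
def per_az_subnets(vpc_id, az_names, cidrs):
    assert len(az_names) == len(cidrs), "one CIDR block per AZ"
    return [
        {"VpcId": vpc_id, "AvailabilityZone": az, "CidrBlock": cidr}
        for az, cidr in zip(az_names, cidrs)
    ]

subnets = per_az_subnets(
    "vpc-12345678",
    ["us-east-1a", "us-east-1b", "us-east-1c"],
    ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"],
)
# for s in subnets:
#     boto3.client("ec2").create_subnet(**s)  # requires AWS credentials
```

The same parameter-building approach, pointed at a second Region, produces the DR copy of the topology.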
From a security perspective, customers are required to comply with SWIFT’s Customer Security Controls Framework (CSCF) and attest against SWIFT’s mandatory security controls. You can find the CSCF guidance for a deployment on AWS, along with an infrastructure-as-code deployment template, in our AWS Quick Start for SWIFT Client Connectivity.
Secure Communication: SWIFT HSM and Alliance Connect VPN
Besides software components, the Full Stack option requires two SWIFT-provided hardware devices to secure communication with the SWIFT network: a Hardware Security Module (HSM) and a VPN appliance.
The SWIFT HSM serves two main purposes: signing SWIFT messages for non-repudiation, and storing private keys and associated certificates. SWIFT currently offers the HSM as a hardware appliance. You can use an AWS Partner colocation facility to host this component, which minimizes the latency between SWIFTNet Link (SNL) and the HSM, as that connection is latency sensitive.
The SWIFT Alliance Connect VPN is responsible for establishing IPsec VPN tunnels to the SWIFT Multi-Vendor Secure IP Network (MV-SIPN), and is also currently offered as a hardware appliance. The VPN component can be deployed in the same location as the HSM, but the two must be segregated and isolated from each other at the network level.
For connectivity to SWIFT Points of Presence (POP), customers can consult with SWIFT Network Partners to establish either a private leased line or internet connectivity from the colocation facility to SWIFT. For cloud connectivity, colocation partners typically offer a Cloud Connect fabric, which can establish network connectivity between AWS and the SWIFT HSM/VPN through AWS Direct Connect.
Deploying the HSM and VPN in an HA configuration is out of scope for this blog; see SWIFT’s Resilience Guide for the SWIFT-provided guidance on this topic.
Communication Interface: SWIFT Alliance Gateway, SWIFTNet Link, and Alliance Web Platform
SWIFTNet Link (SNL) provides access to SWIFTNet messaging services over the SWIFT MV-SIPN network.
SWIFT Alliance Gateway (SAG) is a software package that is installed on top of SWIFTNet Link. SAG enables application-to-application communication and facilitates connectivity to the SWIFT MV-SIPN network. Alliance Web Platform (SWP) is a web interface for managing and configuring the SWIFT Alliance Gateway.
SWIFT offers SAG and SNL as a licensed software package for the Red Hat Enterprise Linux (RHEL) or Windows Server operating systems. The package can be installed on an Amazon EC2 instance with a static IP address assigned; the static IP is mandatory because the VPN component must be able to reach back to the instance. Customers should deploy multiple active-active SAG/SNL instances across AZs to ensure high availability of the system. In the event of a host failure, the system operator takes action to recover.
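To make the static IP requirement concrete, the sketch below builds EC2 `run_instances` parameters that pin a fixed private IP at launch. The AMI ID, subnet, instance size, and IP are assumptions for illustration, not SWIFT sizing guidance.

```python
# Sketch: launch parameters for a SAG/SNL host with a fixed private IP so the
# SWIFT VPN appliance can always reach it. All identifiers are placeholders.
def sagsnl_instance_params(ami_id, subnet_id, private_ip):
    return {
        "ImageId": ami_id,               # RHEL AMI with SAG/SNL installed (assumption)
        "InstanceType": "m5.xlarge",     # illustrative size only
        "SubnetId": subnet_id,
        "PrivateIpAddress": private_ip,  # static private IP the VPN reaches back to
        "MinCount": 1,
        "MaxCount": 1,
    }

params = sagsnl_instance_params("ami-0abc1234", "subnet-0abc1234", "10.0.0.10")
# boto3.client("ec2").run_instances(**params)  # requires AWS credentials
```

Repeating this launch in a second AZ subnet (with its own fixed IP) gives the active-active SAG/SNL pair described above.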
Alliance Web Platform (SWP), a component required for administration tasks, should be installed on a separate EC2 instance from SAG/SNL. Since it is an administrative portal, the recommendation is to isolate it from the main messaging transaction flow.
With SAG and SNL deployed on EC2, customers can use a variety of AWS services, including AWS Systems Manager, EC2 Image Builder, and AWS CodePipeline, to automate patching, software installation, and software deployment. AWS Systems Manager can prepare and execute runbooks to remediate and recover from system failures. EC2 Image Builder alleviates the need for SWIFT administrators to follow tedious installation and configuration steps when installing SWIFT software. Finally, AWS CodePipeline can be used for host deployment to achieve immutable infrastructure.
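As one example of the runbook idea, here is a minimal Systems Manager Automation document that returns a failed SAG/SNL host to the running state. The document name, step name, and recovery action are illustrative assumptions; a real runbook would encode the operator's actual recovery procedure.

```python
import json

# Minimal SSM Automation runbook (illustrative): bring a failed SAG/SNL
# instance back to the 'running' state. Names here are assumptions.
recovery_runbook = {
    "schemaVersion": "0.3",
    "description": "Recover a SAG/SNL host by returning it to 'running'",
    "parameters": {"InstanceId": {"type": "String"}},
    "mainSteps": [
        {
            "name": "restartInstance",
            "action": "aws:changeInstanceState",  # built-in Automation action
            "inputs": {
                "InstanceIds": ["{{ InstanceId }}"],
                "DesiredState": "running",
            },
        }
    ],
}

content = json.dumps(recovery_runbook)
# boto3.client("ssm").create_document(Content=content,
#     Name="SwiftSagSnlRecovery", DocumentType="Automation")  # hypothetical name
```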
Messaging Interface: Alliance Messaging Hub and SWIFT Alliance Access
Alliance Messaging Hub (AMH) is a flexible, customizable, and scalable multi-network, high-volume financial messaging solution designed to help ensure high availability. Alliance Access (SAA) is also a SWIFT messaging interface designed to connect customers’ business applications to SWIFT messaging services.
Customers typically decide which messaging interface to use based on transaction volumes, messaging functionality, and resiliency requirements. Both solutions depend on an Oracle database to store application and messaging state. Amazon RDS for Oracle is recommended for deploying AMH or SAA on AWS because Amazon Relational Database Service (RDS) greatly simplifies database maintenance tasks. An Amazon RDS Multi-AZ deployment synchronously replicates to a standby in a second AZ, protecting against transaction loss in the event of an Oracle instance failure.
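A minimal sketch of the RDS for Oracle Multi-AZ setup follows; the database identifier, Oracle edition, instance size, and storage values are assumptions for illustration, and licensing should be confirmed with Oracle and AWS.

```python
# Sketch: Amazon RDS for Oracle Multi-AZ parameters for the AMH/SAA data
# store. Identifiers, edition, and sizes are illustrative assumptions.
def oracle_multi_az_params(db_id, username):
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": "oracle-se2",             # Oracle edition is an assumption
        "DBInstanceClass": "db.m5.xlarge",  # illustrative size only
        "AllocatedStorage": 100,            # GiB, illustrative
        "MultiAZ": True,                    # synchronous standby in a second AZ
        "MasterUsername": username,
        "ManageMasterUserPassword": True,   # let RDS keep the password in Secrets Manager
        "StorageEncrypted": True,           # encryption at rest
    }

params = oracle_multi_az_params("amh-prod-db", "amhadmin")
# boto3.client("rds").create_db_instance(**params)  # requires AWS credentials
```

Setting `MultiAZ` is the single switch that provides the synchronous standby described above; failover to the standby is automatic.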
Most customers deploying AMH or SAA rely on a message broker, and both messaging solutions support a JMS connector. Amazon MQ is a managed message broker service for Apache ActiveMQ that streamlines setting up and operating message brokers on AWS. Amazon MQ for ActiveMQ provides all the standard JMS features, including point-to-point (message queues), publish-subscribe (topics), request/reply, persistent and non-persistent modes, JMS transactions, and distributed (XA) transactions.
The SWIFT AMH team has published a high availability reference architecture for running AMH on AWS.
The upcoming industry-wide standards change takes effect starting November 2022 and will impact the entire cross-border payments value chain. For customers currently evaluating migration options to meet SWIFT’s new Cross-Border Payments and Reporting Plus (CBPR+) message formats, we hope this blog provides an approach for migrating your SWIFT connectivity stack to AWS, so you can begin to realize the agility, reliability, resiliency, and security benefits inherent to the cloud.