Himasha Guruge
3 min read · Feb 25, 2020


WSO2 API Manager Deployment in AWS : A Reference Architecture

WSO2 products can be deployed on various cloud providers such as AWS, GCP, Azure and more. This post provides recommendations to consider when performing a fully distributed API Manager deployment on AWS. It assumes an understanding of general AWS concepts; if you need a refresher, please go through https://docs.aws.amazon.com/index.html

Reference Architecture

The reference architecture above describes a fully distributed HA deployment. When deployed in AWS it is:

  1. Highly available and resilient/fault-tolerant: each node of a given profile is placed in one of two different availability zones (us-west-1b and us-west-1c), so the deployment survives even if one data centre becomes unavailable.
  2. Scalable: homogeneous instances are placed within an autoscaling group.
  3. Production-grade storage: the database service (MySQL RDS in this case) is a multi-AZ deployment and supports disaster recovery. If performance optimization is required, read replicas can be added. These database instances are fronted by a separate security group that opens the DB ports for incoming (and outgoing) traffic.
  4. Secure by nature: WSO2 instances are placed in the private subnets of each availability zone, so they cannot be accessed over the internet by default. These EC2 instances are fronted by security groups that define the incoming and outgoing traffic rules for each instance.
  5. Load balancers are placed in a public subnet with the necessary external access (fronted by a network ACL that includes a route to the internet).
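As a rough illustration of point 2, an autoscaling group spanning the two private subnets could be provisioned with boto3 along these lines (the group name, subnet IDs, and launch-template ID below are placeholder assumptions, not values from the architecture):

```python
def asg_params(name, subnet_ids, min_size, max_size, launch_template_id):
    """Build create_auto_scaling_group parameters for a group that spans
    the given private subnets (one subnet per availability zone)."""
    return {
        "AutoScalingGroupName": name,
        "MinSize": min_size,
        "MaxSize": max_size,
        # Comma-separated subnet IDs; one subnet per AZ gives multi-AZ placement.
        "VPCZoneIdentifier": ",".join(subnet_ids),
        "LaunchTemplate": {
            "LaunchTemplateId": launch_template_id,
            "Version": "$Latest",
        },
    }


def create_gateway_asg():
    import boto3  # deferred import so asg_params stays usable without AWS deps

    autoscaling = boto3.client("autoscaling")
    # Placeholder IDs for the private subnets in us-west-1b and us-west-1c.
    autoscaling.create_auto_scaling_group(
        **asg_params(
            "wso2am-gateway",
            ["subnet-uswest1b-private", "subnet-uswest1c-private"],
            min_size=2,
            max_size=4,
            launch_template_id="lt-0abc12345",
        )
    )
```

Because `VPCZoneIdentifier` lists one private subnet in each availability zone, replacement instances launched by the group are spread across both zones automatically.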

Important Note about the Traffic Manager Component

The Traffic Manager nodes of WSO2 API Manager are not fronted by a load balancer, because the Gateway performs client-side load balancing: it has to communicate directly with the Traffic Manager nodes and therefore needs to know the available Traffic Manager IPs. If we used auto scaling, we would have no way to control the number of nodes configured on the Gateway side. Therefore the Traffic Manager cluster is neither auto-scalable nor fronted by a load balancer. Instead,

  1. Traffic Manager nodes are deployed in two separate autoscaling groups. Placing each node inside an autoscaling group ensures that at least one instance (1..1) is running at any given time: if an instance is terminated due to an error, the autoscaling group starts another to maintain the defined instance count.
  2. In AWS, we cannot assign a static IP to an instance launched by an autoscaling group, so if one node is terminated and a new node comes up, its IP will be different. The new Traffic Manager IP would then have to be configured on the Gateway side, which would require the Gateway nodes to restart. To avoid this, we can use DNS-mapped hostnames instead:
<ReceiverUrlGroup>{tcp://tm1.local:9612},{tcp://tm2.local:9613} </ReceiverUrlGroup>

A Lambda function can be written which makes an API call to map the new IP address (e.g. 172.10.0.37) to the DNS hostname (e.g. tm1.local). This Lambda function can be triggered through a CloudWatch event rule whenever a new Traffic Manager node is up and running.
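One way to sketch such a Lambda with boto3, assuming a Route 53 private hosted zone serves the tm1.local/tm2.local names (the hosted zone ID, record name, and event shape below are placeholder assumptions):

```python
def build_change_batch(hostname, ip, ttl=60):
    """Route 53 change batch that UPSERTs an A record mapping hostname -> ip."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": hostname,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }


def handler(event, context):
    """Entry point, triggered by a CloudWatch event rule when a Traffic
    Manager instance enters the 'running' state."""
    import boto3  # deferred import so build_change_batch stays testable locally

    ec2 = boto3.client("ec2")
    r53 = boto3.client("route53")

    # EC2 instance state-change events carry the instance ID in "detail".
    instance_id = event["detail"]["instance-id"]
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]
    # "HOSTED_ZONE_ID" and the tm1.local record name are placeholder values.
    r53.change_resource_record_sets(
        HostedZoneId="HOSTED_ZONE_ID",
        ChangeBatch=build_change_batch("tm1.local", instance["PrivateIpAddress"]),
    )
```

With the UPSERT in place, the Gateway's ReceiverUrlGroup configuration keeps pointing at the stable hostnames while the underlying A records follow the replacement instances.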

Administration and Maintenance Recommendations

Applying WSO2 Update Manager (WUM) updates: an EC2 instance (placed in a private subnet to restrict direct external access) can connect to the WSO2 WUM servers through a NAT gateway (via route table entries) placed in the public subnet, which has internet access. The EC2 instance can then forward the updated pack to the management subnet, where a preferred CI/CD option (e.g. Puppet) updates the deployment.

For administration purposes, where you need to SSH/RDP into WSO2 instances, a bastion host (jump box) can be placed in the public subnet, from which you can SSH/RDP into the private-subnet instances. It is important to harden this bastion host for security purposes.
