As your organization seeks to increase IT agility and reduce operating costs, building an orchestration platform like OpenStack to automate the deployment of resources makes a lot of sense. As you plan your OpenStack implementation, ensuring application availability and performance is a necessary design goal. There are a number of things to consider to this end: how do you minimize downtime, and how do you support both your legacy applications and your applications built for the cloud? You might need to host multiple tenants on your cloud platform and deliver performance SLAs to them. Larger application deployments might require extending cloud platform services to multiple locations.
To ensure a successful implementation of OpenStack, you need design recommendations around best practices for multi-zone and multi-region cloud architectures. There are two major areas to look at. One is resource segregation, or ‘pooling’: the use of cloud platform constructs such as availability zones and host aggregates to group infrastructure into fault domains and high-availability domains. The other is how to use an ADC to provide highly available, high-performance application delivery and load balancing services in your distributed, multi-tenant, fault-tolerant cloud architecture.
Best Practices for Multi-Zone and Multi-Region Cloud Integration
It’s easier to build resilient and scalable OpenStack data centers if three best-practice rules are applied in planning:
•Segregate physical resources to create fault domains, and plan to mark these as OpenStack Availability Zones.
•Distribute OpenStack and database controllers across three or more adjacent fault domains to create a resilient cluster.
•Design both data-plane and control-plane networking for scale-out and high availability.
Applications can follow the same pattern. Fully stateless applications, e.g., web apps, can be made resilient by deploying redundant instances in separate availability zones behind a load balancer. Applications requiring quorum-type HA can be deployed across three separate availability zones along with their synchronization scaffolding, just as the OpenStack, database, and Ceph storage controllers are.
From there, scale out by adding compute, storage, and network capacity within each fault domain until OpenStack controller capacity is reached; beyond that point, add new infrastructure aggregations in their own fault domains, each with its own OpenStack controller and storage controller components.
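The placement and quorum rules above can be sketched in a few lines. This is a purely illustrative model, not OpenStack code: real placement is handled by the Nova scheduler, and the function names here are hypothetical.

```python
# Illustrative sketch: spreading redundant instances across availability
# zones (fault domains) and checking quorum survival. Hypothetical names;
# real AZ placement is done by the Nova scheduler, not application code.
from itertools import cycle

def spread_across_zones(instances, zones):
    """Assign instances to zones round-robin, so redundant replicas
    land in separate fault domains."""
    placement = {}
    zone_cycle = cycle(zones)
    for inst in instances:
        placement[inst] = next(zone_cycle)
    return placement

def has_quorum(alive_zones, total_zones):
    """A quorum-style cluster (like the OpenStack or database controllers
    spread over three fault domains) survives as long as a strict
    majority of its zones remain healthy."""
    return len(alive_zones) > total_zones // 2

zones = ["az-1", "az-2", "az-3"]
placement = spread_across_zones(["web-1", "web-2", "web-3"], zones)
# Losing one zone still leaves a two-of-three majority:
print(has_quorum(["az-1", "az-2"], len(zones)))  # True
```

This is why three (or more) adjacent fault domains is the minimum for the controller cluster: two zones cannot form a majority after either one fails.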
Incorporating ADCs in a Scale-Out and High-Availability Model
In addition to connectivity and isolation, the availability and redundancy model for network services such as load balancers and firewalls needs to be carefully considered when planning high availability for your OpenStack cloud. Primarily, the following factors need to be taken into account:
•Scale out - A key aspect of designing for the cloud is the ability to add more capacity on demand to meet the needs of growing workloads. The recommended approach is to add more nodes as incremental units of capacity. With this scale out approach, your load balancing capacity can grow proportionally along with the application’s compute capacity.
•High availability model - Although an active-passive HA model is very popular in standard enterprise deployments, active-active is the preferred model for typical cloud architectures. The reason is that cloud design is based on a scale-out architecture where, ideally, every node is actively processing requests while providing N+1 redundancy for the other nodes.
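The arithmetic behind active-active N+1 sizing can be made concrete. The sketch below is back-of-the-envelope math with hypothetical numbers, not a sizing tool:

```python
# Back-of-the-envelope sizing for an active-active, N+1 pool.
# Illustrative only; function names and figures are hypothetical.

def per_node_load(total_load, nodes):
    """In an active-active pool, every node shares the work equally."""
    return total_load / nodes

def max_safe_utilization(nodes):
    """To absorb one node failure without overloading the survivors,
    steady-state utilization per node must stay at or below
    (nodes - 1) / nodes."""
    return (nodes - 1) / nodes

# A 4-node active-active pool can run each node at up to 75% and
# still tolerate one failure:
print(max_safe_utilization(4))  # 0.75
print(per_node_load(100, 4))   # 25.0
```

This also shows why scale-out improves efficiency: in a 2-node active-passive pair half the capacity sits idle, while a 4-node active-active pool wastes only a quarter of it on redundancy headroom.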
Considerations in Deploying Stateful Applications
New applications designed for the cloud are typically stateless and built to fail, with the philosophy of replacing failed nodes with new ones rather than repairing the ones that have failed. Legacy applications, however, may still rely on shared state between multiple nodes, and this has to be taken into account. From an ADC standpoint, vendor technologies need to be evaluated for their ability to synchronize and share application state across multiple nodes in scale-out designs.
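The difference shared state makes can be shown with a toy model. The classes below are hypothetical stand-ins: the store represents any replicated state mechanism (a database, a cache tier, or ADC-side session-state synchronization), not a specific product feature.

```python
# Illustrative sketch: two load-balanced nodes backed by one shared
# session store survive a node failure without losing user state.
# Class and method names are hypothetical.

class SessionStore:
    """Stand-in for replicated state (DB, cache, or ADC session sync)."""
    def __init__(self):
        self._data = {}
    def put(self, session_id, value):
        self._data[session_id] = value
    def get(self, session_id):
        return self._data.get(session_id)

class AppNode:
    def __init__(self, name, store):
        self.name, self.store = name, store
    def handle(self, session_id):
        # Any node can serve any session, because state lives in the store.
        return self.store.get(session_id)

store = SessionStore()
node_a, node_b = AppNode("a", store), AppNode("b", store)
store.put("user-42", {"cart": ["item-1"]})
# node_a fails; node_b still serves the session intact:
print(node_b.handle("user-42"))  # {'cart': ['item-1']}
```

If each node instead kept sessions in local memory, a failover would silently drop every session pinned to the failed node, which is exactly the scenario that stateful legacy applications need to guard against.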
Load Balancing as a Cloud Service in OpenStack
OpenStack Neutron LBaaS offers an “as a service” consumption model for load balancing through a set of APIs that are agnostic of specific vendor technologies and abstracted away from the infrastructure complexities involved in managing load balancing appliances. Neutron LBaaS gives the OpenStack operator flexibility of choice in the backend ADC technologies to use.
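For a sense of what that vendor-agnostic API looks like, the sketch below builds the kind of request body a client would POST to Neutron's LBaaS v2 `/v2.0/lbaas/loadbalancers` endpoint. The `provider` attribute is how an operator selects a backend ADC driver; the subnet ID and driver name used here are assumptions for illustration.

```python
# Sketch of an LBaaS v2 load balancer creation payload. The JSON shape
# follows the Neutron LBaaS v2 API; the specific values are hypothetical.
import json

def loadbalancer_request(name, vip_subnet_id, provider=None):
    """Build the request body for POST /v2.0/lbaas/loadbalancers."""
    body = {"loadbalancer": {"name": name, "vip_subnet_id": vip_subnet_id}}
    if provider:
        # Same API call, vendor-specific backend: the provider attribute
        # routes the request to the chosen ADC driver.
        body["loadbalancer"]["provider"] = provider
    return json.dumps(body)

# Hypothetical values; in practice the subnet UUID comes from Neutron.
payload = loadbalancer_request("web-lb", "subnet-uuid", provider="netscaler")
```

The application owner consumes one API regardless of which ADC the operator has plugged in underneath; only the `provider` attribute (or the cloud's default provider) changes.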
Production-Grade OpenStack LBaaS with Citrix NetScaler
NetScaler’s OpenStack LBaaS integration has been designed as a production grade solution for organizations that are running business critical applications at scale. It addresses the operational concerns around running infrastructure-as-a-service, while providing flexibility and control over performance, availability and scale.
NetScaler’s OpenStack LBaaS solution is based on a purpose-built orchestration product from Citrix called NetScaler Control Center (NCC), which simplifies the operational complexity of deploying LBaaS in OpenStack by seamlessly integrating all NetScaler appliances, both physical and virtual.
The Mirantis and Citrix Solution
Mirantis and Citrix have collaborated to cover this complex topic, each bringing to bear deep engineering expertise and considerable real-world customer experience. Mirantis is the leading pure-play OpenStack company and creator of the Mirantis OpenStack distribution. With NetScaler, Citrix pioneered Application Delivery Controller deployment in virtualized cloud architectures and has unique expertise in large-scale software-defined networking environments.
For more information on best practices for multi-zone and multi-region cloud integration, see the white paper from Mirantis and Citrix, Using Production-Grade ADC Services to Build Scalable, Redundant OpenStack Clouds.