Building a high-speed and scalable NetScaler solution to handle millions of requests per second


    Steven Wright

NetScalers are used by some of the world's largest service providers to handle inbound application traffic. In this article, I will show how you can deploy a NetScaler environment that will scale to millions of HTTP requests and SSL handshakes per second. In the next article, I will show you how to automate that environment with a basic introduction to Terraform.

     

    The suggested architecture will allow for multiple datacenters. In addition, it will provide load balancing, SSL offload, and the capability to secure and optimize your web application traffic. The architecture will also be scalable and fully automatable.

     

    We will begin by reviewing technologies to get traffic to your datacenter, how to distribute that traffic across NetScalers once it arrives, and how you can configure your NetScalers with code using in-house deployment scripts.

     

    Assuming you have sized the rest of your environment appropriately, this design should allow you to process around 100 million HTTP requests per second at each datacenter. However, the layout is scalable and can be used for smaller implementations without significant changes.

     

     

    Directing users to their most appropriate datacenter

    Historically, there have been two approaches to directing users to their closest datacenter: dynamic routing (anycast) and GSLB.

     

     

    Dynamic routing (anycast)

    With the dynamic routing / anycast approach, the routers at each datacenter announce the same IP ranges and rely on the Internet's routing protocol (BGP) to take the shortest path.

The approach operates on the assumption that a British user will have a shorter path to your British datacenter and an American user to your American datacenter. The method also assumes that the shortest path will be the best route.

     

    The concept is illustrated in simplistic terms in the diagram below. Users, sending traffic to their ISP, will have that traffic forwarded to the datacenter that is the smallest number of hops away.

     

[Diagram: users' traffic, sent via their ISPs, is forwarded to the datacenter that is the fewest hops away]
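To make the anycast approach concrete, a minimal and purely hypothetical edge-router configuration would announce the same public prefix from every datacenter over BGP. The AS numbers, neighbor address, and prefix below are placeholders rather than values from this design:

router(config)# router bgp 64500
router(config-router)# neighbor 198.51.100.1 remote-as 64501
router(config-router)# network 203.0.113.0 mask 255.255.255.0

Because each datacenter originates an identical prefix, the Internet's routing tables simply point each user at whichever origin is the fewest hops away.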

     

LinkedIn's engineering team wrote an excellent article on this topic some years ago, which is well worth reading. They found that global anycast performed poorly across continents but worked well when used within a continent.

     

    Their recommendation was to use DNS to return an IP address within the user's continent and to use anycast within each region. Users would perform a DNS lookup and be directed to the best continent to service their request. If you had multiple datacenters in that continent, these would share the same IP addresses using anycast, and users would then take the shortest path.

     

     

     

    GSLB (Global Server Load Balancing) / CADS Service (Citrix Application Delivery and Security Service)

GSLB, Global Server Load Balancing, is a DNS-based technology used to direct users to datacenters. Administrators commonly configure GSLB to ensure users receive the IP address of a datacenter in their region (or a redundant location if they are not using anycast).

     

NetScaler fully supports GSLB, and you can read about configuring it in the product documentation. However, GSLB configurations are often simplistic, as they rely on relatively few data points.
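As a brief sketch, an active-active GSLB configuration on NetScaler might look as follows; the site names, IP addresses, and domain are illustrative assumptions only:

# Define a GSLB site for each datacenter and a GSLB vServer for the application
add gslb site site_us 192.0.2.10
add gslb site site_eu 192.0.2.20
add gslb vserver gslb_vs_web HTTP

# One GSLB service per datacenter VIP, bound to the vServer and a domain
add gslb service gslb_svc_us 10.0.0.10 HTTP 80 -siteName site_us
add gslb service gslb_svc_eu 10.0.1.10 HTTP 80 -siteName site_eu
bind gslb vserver gslb_vs_web -serviceName gslb_svc_us
bind gslb vserver gslb_vs_web -serviceName gslb_svc_eu
bind gslb vserver gslb_vs_web -domainName www.example.com -TTL 5

# Return the geographically closest site (requires a location database to be loaded)
set gslb vserver gslb_vs_web -lbMethod STATICPROXIMITY

The decision here is driven only by a static location database and the health of your own sites, which is the limitation the CADS Service addresses next.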

     

CADS Service (Citrix Application Delivery and Security Service) includes a feature known as ITM (Intelligent Traffic Management), which is the next generation of GSLB and an "Internet-aware" SaaS offering.

While GSLB only takes metrics from your locations, CADS also allows you to integrate the Radar tag (a small JavaScript snippet) into your website and collect real-user metrics from end-users.

     

    To highlight the difference, if there were a problem at a regional ISP entirely separate from your environment, a GSLB configuration would examine metrics from your datacenters, conclude everything was healthy, and take no action. In contrast, ITM would see that users from the regional ISP are unable to reach your datacenter and immediately redirect them to a more performant location.

     

    If you intend to deploy without anycast, CADS is a must-have capability to detect and avoid connectivity issues between your users and datacenters. However, with anycast, CADS remains of significant benefit as it operates on objective user metrics to direct users to the location that provides the best possible service, avoiding both intercontinental link issues that LinkedIn wrote about and temporary connectivity problems.

     

    In this solution, we will assume that you have implemented anycast within each region and are using CADS’s ITM feature to ensure DNS sends users to the region where they receive the lowest latency and the fastest application response.

[Diagram: CADS ITM directing users to the lowest-latency region, with anycast used within each region]

     

     

     

     

    A standardized repeatable design at each datacenter

    With traffic now arriving at the most appropriate datacenter, we can focus on handling the resultant traffic.  

     

We will give each datacenter the same standardized design so that it is simple to update and maintain.

     

We will use physical hardware NetScalers, known as "MPXs", for their high performance. However, our design is equally suitable for virtual NetScalers ("VPXs") if you favor a complete infrastructure-as-code approach.

     

    In our design, traffic (directed by CADS) will enter the network at the router/firewall. The router will then distribute these inbound connections to our NetScalers.

     

    Next, the NetScalers will terminate the connections, process the requests, and deliver them to backend servers.

     

    Finally, the servers will service the requests, potentially using local data or a storage solution replicated between your datacenters.

     

    All of the configuration will be automated using Terraform (covered in the next article). However, the use of Terraform is a personal preference, and NetScaler also supports a range of other automation technologies, such as:

• Ansible
• Citrix ADM Service
• The NITRO REST API, which allows for custom scripts (a minimal example follows below)
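For instance, a single NITRO REST API call is enough to list the load balancing vServers on a NetScaler; the management address and credentials below are placeholders:

# List all LB vServers via the NITRO REST API (NSIP and credentials are placeholders)
curl -s -H "X-NITRO-USER: nsroot" -H "X-NITRO-PASS: <password>" \
     http://192.0.2.5/nitro/v1/config/lbvserver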

We assume you have appropriately sized your routers, firewalls, and storage, and that your storage is replicated between datacenters if your servers require it. This article shows a single router as a simplification.

     

[Diagram: a single router/firewall distributing inbound traffic across multiple NetScalers, which deliver requests to backend servers]

     

     

     

     

    Splitting traffic within the datacenter

Within the datacenter, the router/firewall will divide connections between our NetScalers using dynamic routing. We will be automating the configuration in the next article, but it would be helpful to first understand how this could be achieved manually.

     

[Diagram: the router/firewall splitting inbound connections across the NetScalers using dynamic routing]

     

     

In our dynamic routing configuration, each NetScaler "announces" the IP addresses of the virtual servers (vServers) it hosts to the router. As each NetScaler will host vServers with the same IP addresses, the router will distribute the inbound connections between the NetScalers.

     

    On a Cisco router, the distribution of traffic between OSPF neighbors will likely occur in hardware using a technology known as CEF, Cisco Express Forwarding, to reduce the impact on CPU. You must take care to ensure your router has sufficient resources to handle the distribution.

     

Additionally, to ensure all packets from a user are directed to the same NetScaler, you should consult your router's manual to confirm it is using per-destination load sharing, which is the default on a Cisco router.
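Per-destination load sharing is the CEF default, but it can also be set explicitly per interface if your network team prefers the intent to be visible in the configuration. A brief sketch, using the same interface naming as the example topology later in this article:

R1# config t
R1(config)# interface GigabitEthernet1.1
R1(config-if)# ip load-sharing per-destination
R1(config-if)# end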

     

To implement our dynamic routing configuration, we first need to configure our router to have an OSPF neighbor relationship with the NetScalers. On a Cisco router, a very simplistic OSPF configuration would look as follows:

     

router> enable
router# config t
router(config)# router ospf 100
router(config-router)# network 192.168.1.0 0.0.0.255 area 0
router(config-router)# maximum-paths 16
router(config-router)# end

    After we enter the "network 192.168.1.0 0.0.0.255 area 0" command, our router will enable OSPF on all of its interface addresses within that network range.

     

Next, on our NetScalers, we need to enable the OSPF routing feature with the command "enable ns feature ospf". After enabling OSPF, we can configure the NetScaler's included routing platform, known as "ZebOS", by entering the command "vtysh".

     

    The ZebOS configuration on each NetScaler would look as follows:

     

NetScaler# config terminal
NetScaler(config)# router ospf
NetScaler(config-router)# network 192.168.1.0/24 area 0
NetScaler(config-router)# redistribute kernel
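Before advertising any vServer addresses, it is worth confirming that the adjacency with the router has actually formed. From the same vtysh shell, a standard check would be (the output will vary with your topology):

NetScaler(config-router)# end
NetScaler# show ip ospf neighbor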

With the OSPF configuration created, we can tell each NetScaler to advertise the IP address of a vServer (such as a Load Balancing vServer) using the following command:

     

    set ns ip 10.0.0.10 255.255.255.255 -hostroute ENABLED -vserverRHILevel ALL_VSERVERS

    This command will tell the NetScaler to announce the IP address 10.0.0.10 to the router.
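Note that this assumes the VIP already exists on the NetScaler and is owned by a vServer. If you are building the environment from scratch, a minimal sketch with illustrative names would be:

# Add the VIP and a Load Balancing vServer that uses it (names and port are illustrative)
add ns ip 10.0.0.10 255.255.255.255 -type VIP
add lb vserver LB_vServer HTTP 10.0.0.10 80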

     

     

    With each of our NetScalers now announcing itself to our router as a path to the Load Balancing vServer, the router will distribute inbound traffic to each of the NetScalers.

     

    After enabling OSPF, you can validate communication between the NetScalers and router using the following commands on the router. Here, we can see three NetScalers on "192.168.1.10", "1.11", and "1.12" advertising the LB vServer on "10.0.0.10".

     

R1# sh ip ospf rib | s 10.0.0.10
> 10.0.0.10/32, Intra, cost 1, area 0
    via 192.168.1.10, GigabitEthernet1.1
    via 192.168.1.11, GigabitEthernet1.2
    via 192.168.1.12, GigabitEthernet1.3

R1# sh ip cef 10.0.0.10 detail
10.0.0.10/32, epoch 0, per-destination sharing
  nexthop 192.168.1.10 GigabitEthernet1.1
  nexthop 192.168.1.11 GigabitEthernet1.2
  nexthop 192.168.1.12 GigabitEthernet1.3

We have used a relatively simplistic configuration with a single router and each NetScaler acting as an OSPF path. As routers are generally limited to 16 equal-cost OSPF paths, this design will likely be limited to 16 NetScalers. More advanced implementations will be able to expand on this simple design.

     

Sixteen low-end appliances (such as the MPX9130) equate to around 33 million L7 HTTP requests per second. Using high-end appliances (such as the MPX26200-50S), this design should service over 100 million L7 requests/sec at each datacenter.

     

    Note: You will have multiple routers in a production environment to meet redundancy and throughput requirements and may need to work with your network team to adjust the proposed configuration.

     

     

     

    Testing

To test our configuration, we will configure each NetScaler to return a different colored web page. The colors will make it immediately apparent which NetScaler is servicing our traffic.

     

     

[Diagram: three NetScalers returning green, blue, and purple test pages]

     

     

    To return the colored web page, we need to add the green, blue, or purple responder policy to each of our NetScalers and bind it to the LB vServer.

     

# NetScaler 1
add responder action Green_Responder_Act respondwith q{"HTTP/1.1 200 OK\r\n\r\n<html><body><p style=\"background-color:#78c445;\">Green NetScaler</p></body></html>\n"}
add responder policy Green_Responder_Pol true Green_Responder_Act
bind lb vserver LB_vServer -policyName Green_Responder_Pol -priority 1 -gotoPriorityExpression END -type REQUEST

# NetScaler 2
add responder action Blue_Responder_Act respondwith q{"HTTP/1.1 200 OK\r\n\r\n<html><body><p style=\"background-color:#00c9cc;\">Blue NetScaler</p></body></html>\n"}
add responder policy Blue_Responder_Pol true Blue_Responder_Act
bind lb vserver LB_vServer -policyName Blue_Responder_Pol -priority 1 -gotoPriorityExpression END -type REQUEST

# NetScaler 3
add responder action Purple_Responder_Act respondwith q{"HTTP/1.1 200 OK\r\n\r\n<html><body><p style=\"background-color:#a87bd9;\">Purple NetScaler</p></body></html>\n"}
add responder policy Purple_Responder_Pol true Purple_Responder_Act
bind lb vserver LB_vServer -policyName Purple_Responder_Pol -priority 1 -gotoPriorityExpression END -type REQUEST

     

With the responder rules now added, we can test traffic from different locations before removing the responder policies to restore normal service.
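For example, a quick check from a test client and the subsequent clean-up on NetScaler 1 might look as follows (repeat the clean-up with the matching policy names on the other two NetScalers):

# From a test client, request the VIP and note which color is returned
curl -s http://10.0.0.10/

# On NetScaler 1, remove the test configuration to restore normal service
unbind lb vserver LB_vServer -policyName Green_Responder_Pol -type REQUEST
rm responder policy Green_Responder_Pol
rm responder action Green_Responder_Act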

     

     

     

    Next steps

    We have seen how to create a scalable NetScaler solution that allows for multiple datacenters, scales to 100M L7 requests/sec at each, and supports both physical and virtual NetScalers.

     

    In the next article, we will explore how to automate the design using Terraform.

     

     

     
     
