  Configuring Multi-cluster Kubernetes Ingress with GSLB


    Richard Mancuso
    • Validation Status: Work In Progress
      Summary: Citrix provides a multi-cluster Kubernetes ingress and load balancing solution which globally monitors applications, collects & shares metrics across multiple clusters, and provides intelligent load balancing decisions.
      Has Video?: No

    Citrix provides a multi-cluster Kubernetes ingress and load balancing solution that globally monitors applications, collects and shares metrics across multiple clusters, and makes intelligent load balancing decisions. It ensures better performance and reliability for Kubernetes services that are exposed using Ingress or type LoadBalancer. Citrix's solution supports Kubernetes clusters both on-premises (including Red Hat OpenShift) and in public clouds (e.g. AWS, Azure, GCP), as well as across different geographies. For additional information, see https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/multicluster/multi-cluster/, https://www.citrix.com/blogs/2020/10/09/delivering-resilient-secure-multi-cloud-kubernetes-apps-with-citrix/ and https://docs.citrix.com/en-us/citrix-adc/current-release/global-server-load-balancing.html.

     

    In this post, I will walk you through all the steps to:

    • Deploy two single-node Kubernetes 1.23.3 clusters on Ubuntu Server 20.04.3
    • Deploy a static Apache web site unique to each cluster using a ConfigMap
    • Deploy Citrix Ingress Controller to expose Apache web site using type LoadBalancer
    • Deploy Citrix GSLB Controller to load balance the Apache web site across clusters

    This configuration requires two NetScaler Advanced/Premium Edition appliances (one per cluster, simulating two geographic regions), each already deployed with an NSIP and SNIP defined and a free Virtual IP available. In my case I deployed two VPX 1000 Premium Edition appliances running build 13.0-84.11 (configured with 2 vCPU & 8GB RAM).
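
    As a quick sanity check before proceeding, you can confirm the addresses from the CLI of each appliance (addresses are environment-specific):

    show ns ip  (confirm the NSIP, SNIP and the free Virtual IP that will be used for the load-balanced service)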

     

    Install Ubuntu Server 20.04.3 LTS from the ISO image (first cluster)

    • 4 CPUs, 12GB RAM, 30GB disk
    • Set manual IP address
    • Set max size for / volume
    • Select “Install OpenSSH server”
    • Don't select any features (especially docker)
    • Hostname: ubuntu1

    Update /etc/sudoers to not require password

    sudo visudo  (change "%sudo   ALL=(ALL:ALL) ALL" to "%sudo   ALL=(ALL:ALL) NOPASSWD: ALL")

     

    Update /etc/fstab to disable swap

    sudo nano /etc/fstab  (comment out swap line)

    sudo swapoff -a

    top  (confirm swap size is 0)

     

    Allow iptables to see bridged traffic:

    sudo modprobe overlay

    sudo modprobe br_netfilter

    sudo tee /etc/sysctl.d/kubernetes.conf<<EOF

    net.bridge.bridge-nf-call-ip6tables = 1

    net.bridge.bridge-nf-call-iptables = 1

    net.ipv4.ip_forward = 1

    EOF

    sudo sysctl --system
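
    Optionally, register the two kernel modules so they are also loaded automatically after a reboot (the modprobe commands above only apply to the running system):

    sudo tee /etc/modules-load.d/kubernetes.conf<<EOF

    overlay

    br_netfilter

    EOF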

     

    Apply system updates:

    sudo apt update

    sudo apt upgrade

     

    Install prerequisite packages:

    sudo apt install -y curl git wget apt-transport-https gnupg2 software-properties-common ca-certificates

     

    Install Docker:

    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

    sudo apt update

    sudo apt install -y containerd.io docker-ce docker-ce-cli

     

    sudo mkdir -p /etc/systemd/system/docker.service.d

    sudo tee /etc/docker/daemon.json <<EOF

    {

      "exec-opts": ["native.cgroupdriver=systemd"],

      "log-driver": "json-file",

      "log-opts": {

        "max-size": "100m"

      },

      "storage-driver": "overlay2"

    }

    EOF

    sudo systemctl daemon-reload

    sudo systemctl restart docker

    sudo systemctl enable docker
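
    Optionally verify that Docker is running with the systemd cgroup driver that kubeadm expects:

    sudo docker info | grep -i cgroup  (should report "Cgroup Driver: systemd")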

     

    Install Kubernetes:

    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

    sudo apt update

    sudo apt install -y kubelet kubeadm kubectl

    sudo systemctl enable kubelet
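
    Optionally hold the Kubernetes packages so a later apt upgrade does not move the node to an unintended version:

    sudo apt-mark hold kubelet kubeadm kubectl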

     

    Edit the user's .profile to add a kc alias for kubectl, enable kubectl/kc autocompletion, and set the kube editor to nano:

    cd ~

    nano .profile  (add following lines)

    alias kc="kubectl"

    export KUBE_EDITOR="nano"

    source <(kubectl completion bash)

    complete -F __start_kubectl kc
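
    Reload the profile and confirm the alias works (the cluster does not exist yet, so only check the client version):

    source ~/.profile

    kc version --client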

     

    Clone the VM to create the second node, then update its hostname and IP address:

    sudo hostnamectl set-hostname ubuntu2

    sudo nano /etc/hosts  (change host to ubuntu2)

    sudo nano /etc/netplan/00-installer-config.yaml  (update IP address)
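
    Apply the new network configuration (or simply reboot the clone):

    sudo netplan apply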

     

    Initialize Kubernetes cluster (Flannel CNI, Kubernetes Metrics Server & Dashboard, Taint for single node) on both nodes:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

     

    mkdir -p $HOME/.kube

    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    sudo chown $(id -u):$(id -g) $HOME/.kube/config

     

    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

     

    kubectl taint nodes --all node-role.kubernetes.io/master-
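
    At this point the node should report Ready once the Flannel pods come up (this can take a minute or two):

    kubectl get nodes

    kubectl get pods -A  (confirm all system pods are Running)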

     

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    kubectl edit deploy metrics-server -n kube-system  (add arg - --kubelet-insecure-tls)
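
    Once the metrics-server pod restarts, confirm that metrics are being collected:

    kubectl top nodes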

     

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

    kubectl -n kubernetes-dashboard edit service kubernetes-dashboard  (change type from ClusterIP to NodePort)

    kubectl get services -A  (check NodePort for Dashboard)

     

    nano dashboard-user.yaml  (add following lines)

    ---

    apiVersion: v1

    kind: ServiceAccount

    metadata:

      name: dashboard-user

      namespace: kubernetes-dashboard

    ---

    apiVersion: rbac.authorization.k8s.io/v1

    kind: ClusterRoleBinding

    metadata:

      name: dashboard-user

    roleRef:

      apiGroup: rbac.authorization.k8s.io

      kind: ClusterRole

      name: cluster-admin

    subjects:

    - kind: ServiceAccount

      name: dashboard-user

      namespace: kubernetes-dashboard

    ---

     

    kubectl apply -f dashboard-user.yaml

     

    kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-user | awk '{print $1}')  (Note token for both nodes)

     

    Verify the Dashboard at https://<node IP address>:<NodePort> and log in with the token


     

    Deploy Citrix Ingress Controller on both clusters:

    kubectl create secret generic adclogin --from-literal=username='nsroot' --from-literal=password='******'

     

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/deployment/baremetal/citrix-k8s-ingress-controller.yaml

     

    nano citrix-k8s-ingress-controller.yaml  (update deployment)

    - Set NSIP for respective ADC

    - Change secretkeyref name to adclogin

    - Set feature-node-watch to true (allows for automatic ingress routing to pod network)

     

             - name: "NS_IP"

               value: "xxx.xxx.xxx.xxx"

             # Set username for Nitro

             - name: "NS_USER"

               valueFrom:

                secretKeyRef:

                 name: adclogin

                 key: username

             - name: "LOGLEVEL"

               value: "INFO"

             # Set user password for Nitro

             - name: "NS_PASSWORD"

               valueFrom:

                secretKeyRef:

                 name: adclogin

                 key: password

             # Accept the EULA

             - name: "EULA"

               value: "yes"

            args:

              - --ingress-classes

                citrix

              - --feature-node-watch

                true

     

    kubectl apply -f citrix-k8s-ingress-controller.yaml
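
    Confirm the ingress controller pod is running and, if needed, review its logs (the pod name depends on the manifest version):

    kubectl get pods  (confirm the Citrix ingress controller pod is Running)

    kubectl logs <cic pod name>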

     

    Check ADC route table for static entry for pod network (10.244.0.0)


     

    Note: if the NetScaler and the Kubernetes cluster are in different networks/subnets, feature-node-watch will not work. In that case you need the Citrix node controller to create network connectivity between the NetScaler and the Kubernetes cluster. Refer to https://github.com/citrix/citrix-k8s-node-controller for more details.

     

    Deploy Apache web server on both clusters with static page:

    nano apache-index-html.yaml  (add following lines)

    ---

    apiVersion: v1

    kind: ConfigMap

    metadata:

      name: index-html-configmap

      namespace: default

    data:

      index.html: |

        <html>

        <h1>Welcome</h1>

        <br/>

        <h1>Hi! This is Apache running on Cluster 1 </h1>  (use "Cluster 2" on the second cluster)

        </html>

    ---

     

    nano apache-deployment.yaml  (add following lines)

    ---

    apiVersion: apps/v1

    kind: Deployment

    metadata:

      name: apache

    spec:

      selector:

        matchLabels:

          app: apache

      replicas: 1

      template:

        metadata:

          labels:

            app: apache

        spec:

          containers:

          - name: apache

            image: httpd:latest

            ports:

            - containerPort: 80

              name: http

            volumeMounts:

                - name: apache-index-file

                  mountPath: /usr/local/apache2/htdocs/

            imagePullPolicy: IfNotPresent

          volumes:

          - name: apache-index-file

            configMap:

              name: index-html-configmap

    ---

    apiVersion: v1

    kind: Service

    metadata:

      name: apache

      annotations:

        service.citrix.com/service-type-0: "http"

    spec:

      selector:

        app: apache

      type: LoadBalancer

      loadBalancerIP: "xxx.xxx.xxx.xxx"  (Virtual IP for each respective ADC)

      ports:

        - name: http

          port: 80

          targetPort: 80

    ---

     

    kubectl apply -f apache-index-html.yaml

    kubectl apply -f apache-deployment.yaml
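
    Verify that the Service has picked up the Virtual IP and that the page is served:

    kubectl get service apache  (EXTERNAL-IP should show the Virtual IP once the ingress controller has configured the ADC)

    curl http://xxx.xxx.xxx.xxx  (substitute the Virtual IP; expect the cluster-specific welcome page)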

     

    Check NetScaler LB VServer, LB Service Group & CS Vserver on both appliances


     

     

    Check NetScaler Virtual IPs for both web sites


     

     

    Configure NetScaler GSLB prerequisites on both NetScalers:

    • Enable management access on SNIP for GSLB autosync
    • On the master: change the GSLB settings to enable autosync and set the sync mode to FullSync (Note: there is a known issue where IncrementalSync does not remove Content Switch actions on remote sites, which prevents deletion of GSLB vservers; CLI equivalents are shown below)
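
    For reference, the rough CLI equivalents for these prerequisites are (placeholder addresses; the sync mode itself can also be changed on the same GSLB parameters screen):

    set ns ip xxx.xxx.xxx.xxx -mgmtAccess ENABLED  (SNIP used for GSLB autosync)

    set gslb parameter -automaticConfigSync ENABLED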


     

    Either use the GSLB wizard or manually create the ADNS service and GSLB sites on the master NetScaler (rough CLI equivalents are shown after the list):

    • Create ADNS service (specify SNIP address)
    • Create local site "site1" (specify SNIP, specify RPC password/secure)
    • Create remote site "site2" (specify SNIP, specify RPC password/secure)


     

    Either use the GSLB wizard or manually create the ADNS service and GSLB sites on the remote NetScaler:

    • Create ADNS service (specify SNIP address)
    • Create local site "site2" (specify SNIP, specify RPC password/secure)
    • Create remote site "site1" (specify SNIP, specify RPC password/secure)

    Deploy GSLB Controller on both clusters:

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/multicluster/Manifest/gse-crd.yaml

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/multicluster/Manifest/gtp-crd.yaml

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/multicluster/Manifest/gslb-controller.yaml

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/multicluster/Manifest/gslb-rbac.yaml

     

    kubectl apply -f gslb-rbac.yaml

    kubectl apply -f gtp-crd.yaml

    kubectl apply -f gse-crd.yaml

     

    nano gslb-controller.yaml  (Update following parameters)

    • secret name=adclogin
    • ns_port=443
    • local_region=east (west on second cluster)
    • local_cluster=cluster1 (cluster2 on second cluster)
    • sitenames=site1,site2 (match names from ADC GSLB sites)
    • site1_ip=xxx.xxx.xxx.xxx (NSIP for master ADC)
    • site2_ip=xxx.xxx.xxx.xxx (NSIP for remote ADC)
    • site1_region=east
    • site2_region=west

            - name: "NS_PORT"

              value: "443"

            - name: "LOCAL_REGION"

              value: "east"

            - name: "LOCAL_CLUSTER"

              value: "cluster1"

            - name: "SITENAMES"

              value: "site1,site2"

            - name: "site1_ip"

              value: "xxx.xxx.xxx.xxx"

            - name: "site1_region"

              value: "east"

            - name: "site1_username"

              valueFrom:

                secretKeyRef:

                  name: adclogin

                  key: username

            - name: "site1_password"

              valueFrom:

                secretKeyRef:

                  name: adclogin

                  key: password

            - name: "site2_ip"

              value: "xxx.xxx.xxx.xxx"

            - name: "site2_region"

              value: "west"

            - name: "site2_username"

              valueFrom:

                secretKeyRef:

                  name: adclogin

                  key: username

            - name: "site2_password"

              valueFrom:

                secretKeyRef:

                  name: adclogin

                  key: password

     

    kubectl apply -f gslb-controller.yaml
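
    Confirm the GSLB controller pod is running in each cluster:

    kubectl get pods  (confirm the gslb-controller pod is Running)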

     

    nano apache-gtp.yaml  (add following lines)

    ---

    apiVersion: "citrix.com/v1beta1"

    kind: globaltrafficpolicy

    metadata:

      name: apache-gtp

    spec:

      serviceType: 'HTTP'

      hosts:

      - host: 'apache.gslb.home'   (GSLB hostname for the app)

        policy:

          trafficPolicy: 'ROUNDROBIN'

          targets:

          - destination: 'apache.default.east.cluster1'  (naming format: service name.namespace.region name.cluster name)

            weight: 1

          - destination: 'apache.default.west.cluster2'  (naming format: service name.namespace.region name.cluster name)

            weight: 1

          monitor:

          - monType: http

            uri: ''

            respCode: 200

    ---

     

    kubectl apply -f apache-gtp.yaml

     

    kubectl get gse  (Confirm GSE is auto created)

     


     

    Note: for GSE auto-creation to work, the following must appear in the output of "kubectl get service apache -o yaml":

          status:

            loadBalancer:

              ingress:

              - ip: xxx.xxx.xxx.xxx
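
    A quick way to check this is with a standard jsonpath query:

    kubectl get service apache -o jsonpath='{.status.loadBalancer.ingress[0].ip}'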

     

    Check GSLB entries on the master and remote NetScalers (Domains, Virtual Servers, Service Groups)
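
    The same entries can be listed from the CLI on either appliance, for example (availability of the commands may vary slightly by build):

    show gslb vserver

    show gslb site

    show gslb serviceGroup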


     

     

    Test GSLB from a Windows client using YogaDNS (to simulate domain delegation to the NetScalers):

    • Install YogaDNS (https://yogadns.com/download/)
      • Manual configuration
      • Disable AutoStart (under File)
      • Set Screen Log to Errors Only
      • Set File Log to Disabled
      • Add both ADC SNIPs as plain DNS servers (Configuration -> DNS Servers)
      • Create pool (type = Load Balancing) for both VPX DNS servers

     


     

      • Add an A record for iana.org (192.0.43.8) on both ADCs so the YogaDNS server check succeeds (CLI equivalent shown after this list)
      • Add rule for *.gslb.home pointing to DNS pool (Configuration -> Rules)
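
    The A record from the first bullet can also be added from the ADC CLI, roughly:

    add dns addRec iana.org -IPAddress 192.0.43.8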


     

    • Validate that GSLB is working using curl http://apache.gslb.home (with the round-robin policy, requests should alternate between clusters every 5+ seconds; a DNS-level check is also shown below)
    • Optional: create a batch file to loop the request:
    :loop

    curl http://apache.gslb.home

    @timeout /t 6 /nobreak

    goto loop
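
    You can also confirm from the client that the ADCs are answering the delegated name directly; repeated queries should alternate between the two Virtual IPs:

    nslookup apache.gslb.home xxx.xxx.xxx.xxx  (query each ADC SNIP in turn)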


     

