NetScaler ADC CPX in Kubernetes with Diamanti and Nirmata Validated Reference Design Part 2
September 12, 2022
Author: Luis Ugarte, Beth Pollack
Commands on VPX to expose Tier-2 CPX
The Tier-1 VPX performs SSL encryption/decryption and inserts the client-IP (X-Forwarded) header when sending traffic to the Tier-2 CPX. The Tier-1 configuration must be performed manually. The header is inserted by setting -cip ENABLED on the service group. Open config.txt.
Create a csvserver:
Upload the certkey in NetScaler ADC: wild.com-key.pem, wild.com-cert.pem
add cs vserver frontent_grafana HTTP <CS_VSERVER_IP> 80 -cltTimeout 180
Expose www.hotdrinks.com, www.colddrinks.com, www.guestbook.com on Tier-1 VPX:
add serviceGroup team_hotdrink_cpx SSL -cip ENABLED
add serviceGroup team_colddrink_cpx SSL -cip ENABLED
add serviceGroup team_redis_cpx HTTP
add ssl certKey cert -cert "wild-hotdrink.com-cert.pem" -key "wild-hotdrink.com-key.pem"
add lb vserver team_hotdrink_cpx HTTP 0.0.0.0 0
add lb vserver team_colddrink_cpx HTTP 0.0.0.0 0
add lb vserver team_redis_cpx HTTP 0.0.0.0 0
add cs vserver frontent SSL 10.106.73.218 443
add cs action team_hotdrink_cpx -targetLBVserver team_hotdrink_cpx
add cs action team_colddrink_cpx -targetLBVserver team_colddrink_cpx
add cs action team_redis_cpx -targetLBVserver team_redis_cpx
add cs policy team_hotdrink_cpx -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"www.hotdrinks.com\") && HTTP.REQ.URL.PATH.STARTSWITH(\"/\")" -action team_hotdrink_cpx
add cs policy team_colddrink_cpx -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"www.colddrinks.com\") && HTTP.REQ.URL.PATH.STARTSWITH(\"/\")" -action team_colddrink_cpx
add cs policy team_redis_cpx -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"www.guestbook.com\") && HTTP.REQ.URL.PATH.STARTSWITH(\"/\")" -action team_redis_cpx
bind lb vserver team_hotdrink_cpx team_hotdrink_cpx
bind lb vserver team_colddrink_cpx team_colddrink_cpx
bind lb vserver team_redis_cpx team_redis_cpx
bind cs vserver frontent -policyName team_hotdrink_cpx -priority 10
bind cs vserver frontent -policyName team_colddrink_cpx -priority 20
bind cs vserver frontent -policyName team_redis_cpx -priority 30
bind serviceGroup team_hotdrink_cpx 10.1.3.8 443
bind serviceGroup team_colddrink_cpx 10.1.2.52 443
bind serviceGroup team_redis_cpx 10.1.2.53 80
bind ssl vserver frontent -certkeyName cert
Update the IP address to the CPX pod IPs for servicegroup:
root@ubuntu-211:~/demo-nimata/final/final-v1# kubectl get pods -n cpx -o wide
NAME                                      READY  STATUS   RESTARTS  AGE  IP         NODE
cpx-ingress-colddrinks-5bd94bff8b-7prdl   1/1    Running  0         2h   10.1.3.8   ubuntu-221
cpx-ingress-hotdrinks-7c99b59f88-5kclv    1/1    Running  0         2h   10.1.2.52  ubuntu-213
cpx-ingress-redis-7bd6789d7f-szbv7        1/1    Running  0         2h   10.1.2.53  ubuntu-213
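If you only need the pod name and IP columns to update the service group bindings, the wide output can be trimmed with awk; a sketch using a saved copy of the sample output above (on a live cluster, pipe `kubectl get pods -n cpx -o wide` directly into the same awk command):

```shell
# Sample output captured from 'kubectl get pods -n cpx -o wide'
cat > /tmp/cpx-pods.txt <<'EOF'
NAME READY STATUS RESTARTS AGE IP NODE
cpx-ingress-colddrinks-5bd94bff8b-7prdl 1/1 Running 0 2h 10.1.3.8 ubuntu-221
cpx-ingress-hotdrinks-7c99b59f88-5kclv 1/1 Running 0 2h 10.1.2.52 ubuntu-213
cpx-ingress-redis-7bd6789d7f-szbv7 1/1 Running 0 2h 10.1.2.53 ubuntu-213
EOF

# Skip the header row and print only the NAME and IP columns
awk 'NR>1 {print $1, $6}' /tmp/cpx-pods.txt
```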
To access www.hotdrinks.com, www.colddrinks.com, and www.guestbook.com, append the following entries to the hosts file of the machine from which the pages are accessed:
<CS_VSERVER_IP> www.hotdrinks.com
<CS_VSERVER_IP> www.colddrinks.com
<CS_VSERVER_IP> www.guestbook.com
After this is done, the apps can be accessed by visiting www.hotdrinks.com, www.colddrinks.com, and www.guestbook.com.
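The hosts entries above can be generated with a short loop; a sketch (the CS vserver IP is a placeholder, as in this guide, and the generated file is reviewed before being appended to /etc/hosts):

```shell
# Replace the placeholder with your real CS vserver IP before appending
CS_VSERVER_IP="<CS_VSERVER_IP>"

for h in www.hotdrinks.com www.colddrinks.com www.guestbook.com; do
  printf '%s %s\n' "$CS_VSERVER_IP" "$h"
done > /tmp/hosts-entries.txt

# Review, then append on the client machine:
#   cat /tmp/hosts-entries.txt | sudo tee -a /etc/hosts
```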
Validating Tier-2 CPX configuration
To validate the CPX configuration, go to the CPX environment. Select the CPX running application.
Select the cpx-ingress-hotdrinks deployment, then click on the cpx-ingress-hotdrinks-xxxx-xxxx pod.
On the next page, go to the running container and launch the terminal for cpx-ingress-hotdrinks by typing the "bash" command.
When the terminal is connected, validate the configuration using the regular NetScaler commands via cli_script.sh:
- cli_script.sh "sh cs vs"
- cli_script.sh "sh lb vs"
- cli_script.sh "sh servicegroup"
The validation can be done for the other CPX deployments, team-colddrink and team-redis, in the same manner.
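The same checks can also be run non-interactively with kubectl exec instead of launching a terminal; a sketch that generates the three commands for review (the pod name is a placeholder; take the real one from `kubectl get pods -n cpx`):

```shell
# Placeholder pod name; substitute the real pod from 'kubectl get pods -n cpx'
POD=cpx-ingress-hotdrinks-xxxx-xxxx

# Emit one kubectl exec line per validation command into a script file
for cmd in "sh cs vs" "sh lb vs" "sh servicegroup"; do
  printf 'kubectl exec -it %s -n cpx -- cli_script.sh "%s"\n' "$POD" "$cmd"
done > /tmp/cpx-validate.sh

# Review the generated commands, then run them with: sh /tmp/cpx-validate.sh
cat /tmp/cpx-validate.sh
```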
Performing scale up/scale down
To perform scale up/scale down:
- Go to the team-hotdrink environment. Select the team-hotdrink running application.
- Click the frontend-hotdrinks deployment.
- On the next page, click Update replicas. Increase it to 10.
Refer to Validating Tier-2 CPX configuration to check the configuration in CPX (deployment: cpx-ingress-hotdrinks).
- Go to the CPX environment. Select a running CPX application.
- Click the cpx-ingress-hotdrinks deployment.
- Click the cpx-ingress-hotdrinks-xxxx-xxxx pod.
- On the next page, go to the running container and launch the terminal for cpx-ingress-hotdrinks by typing the "bash" command.
- cli_script.sh "sh servicegroup <servicegroup name>"
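The scale-up step above can also be done with kubectl instead of the Nirmata UI; a sketch, assuming the namespace matches the team-hotdrink environment name used in this guide (verify with `kubectl get ns`):

```shell
# Scale the frontend-hotdrinks deployment to 10 replicas
# (namespace 'team-hotdrink' is an assumption based on the environment name)
CMD="kubectl scale deployment frontend-hotdrinks -n team-hotdrink --replicas=10"
echo "$CMD"   # review first, then run with: eval "$CMD"
```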
Performing rolling update
To perform rolling update:
- Go to the team-hotdrink environment. Select the team-hotdrink running application.
- Click the frontend-hotdrinks deployment.
- On the next page, go to the Pod template.
- Update the image to quay.io/citrix/hotdrinks-v2:latest.
- Let the update complete.
- Access the application again. After the rolling update completes, the page should appear with the updated image.
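The same rolling update can be triggered from kubectl; a sketch in which the container name and namespace are assumptions (check your pod spec for the actual container name):

```shell
# Update the container image to trigger a rolling update
# (container name 'frontend-hotdrinks' and namespace 'team-hotdrink' are assumed)
CMD="kubectl set image deployment/frontend-hotdrinks frontend-hotdrinks=quay.io/citrix/hotdrinks-v2:latest -n team-hotdrink"
echo "$CMD"   # review first, then run with: eval "$CMD"
# Track progress with: kubectl rollout status deployment/frontend-hotdrinks -n team-hotdrink
```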
Deploying Prometheus
NetScaler Metrics Exporter, Prometheus, and Grafana are used to automatically detect and collect metrics from the ingress CPX.
Steps to deploy Prometheus:
Create the environments for running CPX and the apps:
- Go to the Environment tab.
- Click Add Environment.
- Create the environments for running Exporter, Prometheus, and Grafana.
- Create the environment: monitoring.
Upload the .yaml file using Nirmata:
- Go to the Catalog tab.
- Click Add Application.
- Click Add to add the applications.
- Add application: monitoring (monitoring.yaml).
Running Prometheus application:
- Go to the Environment tab and select the correct environment: monitoring.
- Click Run Application using the name monitoring.
- This deploys the Exporter, Prometheus, and Grafana pods, and begins to collect metrics.
- Now Prometheus and Grafana need to be exposed through the VPX.
Commands on the VPX to expose Prometheus and Grafana:
Create a csvserver:
add cs vserver frontent_grafana HTTP <CS_VSERVER_IP> 80 -cltTimeout 180
Expose Prometheus:
add serviceGroup prometheus HTTP
add lb vserver prometheus HTTP 0.0.0.0 0
add cs action prometheus -targetLBVserver prometheus
add cs policy prometheus -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"www.prometheus.com\") && HTTP.REQ.URL.PATH.STARTSWITH(\"/\")" -action prometheus
bind lb vserver prometheus prometheus
bind cs vserver frontent_grafana -policyName prometheus -priority 20
bind serviceGroup prometheus <PROMETHEUS_POD_IP> 9090
Note:
Get the prometheus-k8s-0 pod IP using "kubectl get pods -n monitoring -o wide".
Expose Grafana:
add serviceGroup grafana HTTP
add lb vserver grafana HTTP 0.0.0.0 0
add cs action grafana -targetLBVserver grafana
add cs policy grafana -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"www.grafana.com\") && HTTP.REQ.URL.PATH.STARTSWITH(\"/\")" -action grafana
bind lb vserver grafana grafana
bind cs vserver frontent_grafana -policyName grafana -priority 10
bind serviceGroup grafana <GRAFANA_POD_IP> 3000
Note:
Get the grafana-xxxx-xxx pod IP using kubectl get pods -n monitoring -o wide
- Now the Prometheus and Grafana pages have been exposed for access via the cs vserver of the VPX.
To access Prometheus and Grafana, append the following entries to the hosts file of the machine from which the pages are accessed:
<CS_VSERVER_IP> www.grafana.com
<CS_VSERVER_IP> www.prometheus.com
- When this is done, access Prometheus by visiting www.prometheus.com. Access Grafana by visiting www.grafana.com.
Visualize the metrics:
- To ensure that Prometheus has detected the Exporter, visit www.prometheus.com/targets. It should contain a list of all Exporters which are monitoring the CPX and VPX devices. Ensure all Exporters are in the UP state. See the following example:
- Now you can use Grafana to plot the values which are being collected. To do that:
- Go to www.grafana.com. Ensure that the appropriate entry is added in the hosts file.
- Login using the default username admin and password admin.
- After logging in, click Add data source in the home dashboard.
- Select the Prometheus option.
- Provide/change the following details:
- Name: prometheus (all lowercase).
- URL: http://prometheus:9090.
- Leave the remaining entries with default values.
- Click Save and Test. Wait for a few seconds until the Data source is working message appears at the bottom of the screen.
- Import a pre-designed Grafana template by clicking the + icon on the left hand panel. Choose Import.
- Click the Upload json button and select the sample_grafana_dashboard.json file (Leave Name, Folder, and Unique Identifier unchanged).
- Choose Prometheus from the prometheus dropdown menu, and click Import.
- This uploads a dashboard similar to the following image:
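As an alternative to clicking through Add data source, the same Prometheus data source can be supplied to Grafana through a provisioning file; a minimal sketch, assuming Grafana's standard provisioning directory is available in the pod or image:

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
# (standard Grafana provisioning path; adjust if your image differs)
apiVersion: 1
datasources:
  - name: prometheus          # same name the dashboard JSON import expects
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```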
Licensing and performance tests
Running CPXs for performance testing and licensing.
The number of CPX cores and the license server details are provided through the following environment variables.
Environment variable to select the number of cores
- name: "CPX_CORES"
- value: "3"
Environment variable to select the license server
- name: "LS_IP"
- value: "X.X.X.X"
Diamanti annotation: diamanti.com/endpoint0: '{"network":"lab-network","perfTier":"high"}'
Point to the correct license server by setting the correct IP in LS_IP above.
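Put together, the fragment of cpx-perf.yaml carrying these settings might look as follows; the container name and surrounding Deployment fields are illustrative, and only the annotation and the two environment variables come from this guide:

```yaml
metadata:
  annotations:
    diamanti.com/endpoint0: '{"network":"lab-network","perfTier":"high"}'
spec:
  containers:
    - name: cpx-ingress-perf    # illustrative container name
      env:
        - name: "CPX_CORES"
          value: "3"            # number of CPX cores
        - name: "LS_IP"
          value: "X.X.X.X"      # replace with your license server IP
```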
- Add the above-mentioned environment variables and the Diamanti-specific annotation in the cpx-perf.yaml file.
- Go to the Environment tab and create the cpx-perf environment.
- Upload the YAML application using Nirmata.
- Go to the Catalog tab.
- Click Add Application.
- Click Add to add an application: cpx-perf.yaml. Application name: cpx-perf.
- Running CPX:
- Go to the Environment tab and select the cpx-perf environment.
- In the cpx-perf environment, run the cpx-svcacct application.
- In the cpx-perf environment, run the cpx-perf application.
- After running the application, go to the cpx-perf application.
- Under the Deployments > StatefulSets & DaemonSets tab, click the cpx-ingress-perf deployment. On the next page, edit the Pod template and enter CPX in the Service Account field.
- Validate the license is working and that the license checkout is happening in Citrix ADM.
- To validate on the CPX, perform the following steps:
- kubectl get pods -n cpx
- kubectl exec -it <cpx-pod-name> -n cpx -- bash
- cli_script.sh 'sh licenseserver'
- cli_script.sh 'sh capacity'
- View a similar output:
root@cpx-ingress-colddrinks-66f4d75f76-kzf8w:/# cli_script.sh 'sh licenseserver'
exec: sh licenseserver
1) ServerName: 10.217.212.228
Port: 27000 Status: 1 Grace: 0 Gptimeleft: 0
Done
root@cpx-ingress-colddrinks-66f4d75f76-kzf8w:/# cli_script.sh 'sh capacity'
exec: sh capacity
Actualbandwidth: 10000
VcpuCount: 3
Edition: Platinum
Unit: Mbps
Maxbandwidth: 10000
Minbandwidth: 20
Instancecount: 0
Done
- To validate on the ADM, go to the license server and navigate to Networks > Licenses > Virtual CPU Licenses.
- Here you should see the licensed CPX along with the core count.
Annotations table
| Annotation | Possible value | Description | Default (if any) |
| --- | --- | --- | --- |
| kubernetes.io/ingress.class | Ingress class name | Associates a particular ingress resource with an ingress controller. For example, kubernetes.io/ingress.class: "Citrix" | Configures all ingresses |
| ingress.citrix.com/secure_backend | List of services for secure back end, in JSON format | Use "True" if you want NetScaler ADC to connect to your application over a secure HTTPS connection; use "False" for an insecure HTTP connection. For example, ingress.citrix.com/secure_backend: '{"app1":"True", "app2":"False", "app3":"True"}' | "False" |
| ingress.citrix.com/lbvserver | Settings for lbvserver, in JSON format | Provides smart annotation capability. An advanced user (who knows the NetScaler LB vserver and service group options) can apply them directly. Values must be in JSON format. For each back-end app in the ingress, provide a key-value pair whose key name matches the corresponding CLI name. For example, ingress.citrix.com/lbvserver: '{"app-1":{"lbmethod":"ROUNDROBIN"}}' | Default values |
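For illustration, a hypothetical ingress combining the annotations above might look like this; the service and host names are borrowed from this guide's hotdrinks app, and the exact resource layout is an assumption:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hotdrinks-ingress           # illustrative name
  annotations:
    kubernetes.io/ingress.class: "Citrix"
    ingress.citrix.com/secure_backend: '{"frontend-hotdrinks": "True"}'
    ingress.citrix.com/lbvserver: '{"frontend-hotdrinks": {"lbmethod": "ROUNDROBIN"}}'
spec:
  rules:
    - host: www.hotdrinks.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-hotdrinks   # assumed service name
                port:
                  number: 80
```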