
How to use a per Service SSL certificate in a CSE TKGm cluster

Using VMware Cloud Director and the Container Service Extension, you can use kubectl expose to create a service of type LoadBalancer.

Kubernetes clusters created by the Container Service Extension can leverage the NSX Advanced Load Balancer (formerly known as Avi Load Balancer). The integration, which is done via the Cloud Controller Manager (CCM), supports L4 load balancing. With the latest version of the CCM, you are now able to define a certificate per created service.


To follow the steps below, I assume that CSE 3.1.2 is deployed (CSE Installation) and that NSX Advanced Load Balancer is deployed and configured for use by CSE (Enable NSX Advanced Load Balancer in VCD).

First of all, you have to check whether CCM version 1.1.0 is deployed:

kubectl get deployment vmware-cloud-director-ccm -n kube-system -o yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2022-01-31T17:00:35Z"
  generation: 1
  labels:
    app: vmware-cloud-director-ccm
  name: vmware-cloud-director-ccm
  namespace: kube-system
  resourceVersion: "826"
  uid: 9c0ec466-03f1-41c4-81f2-ee14075c7286
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: vmware-cloud-director-ccm
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        app: vmware-cloud-director-ccm
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      containers:
      - command:
        - /opt/vcloud/bin/cloud-provider-for-cloud-director
        - --cloud-provider=vmware-cloud-director
        - --cloud-config=/etc/kubernetes/vcloud/vcloud-ccm-config.yaml
        - --allow-untagged-cloud=true
        image: projects.registry.vmware.com/vmware-cloud-director/cloud-provider-for-cloud-director:1.1.0.latest
        imagePullPolicy: IfNotPresent
        name: vmware-cloud-director-ccm
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/kubernetes/vcloud
          name: vcloud-ccm-config-volume
        - mountPath: /etc/kubernetes/vcloud/basic-auth
          name: vcloud-ccm-vcloud-basic-auth-volume
      dnsPolicy: Default
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: cloud-controller-manager
      serviceAccountName: cloud-controller-manager
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          defaultMode: 420
          name: vcloud-ccm-configmap
        name: vcloud-ccm-config-volume
      - name: vcloud-ccm-vcloud-basic-auth-volume
        secret:
          defaultMode: 420
          secretName: vcloud-basic-auth
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-01-31T17:02:50Z"
    lastUpdateTime: "2022-01-31T17:02:50Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-01-31T17:00:35Z"
    lastUpdateTime: "2022-01-31T17:02:50Z"
    message: ReplicaSet "vmware-cloud-director-ccm-b5d58cd57" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Search the output for the container image tag: version 1.1.0.latest is needed for the following steps.
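If you prefer not to scan the whole YAML, you can read just the container image. A small sketch, assuming the CCM pod has a single container and an image string like the example below (your registry path may differ):

```shell
# The image can be read directly from the deployment:
#   kubectl get deployment vmware-cloud-director-ccm -n kube-system \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
# Given such an image string, the version tag is everything after the last colon:
image="projects.registry.vmware.com/vmware-cloud-director/cloud-provider-for-cloud-director:1.1.0.latest"
tag="${image##*:}"
echo "CCM version tag: $tag"
```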

A little bit of background on SSL load balancers

When creating a load balancer for encrypted traffic, you have to decide where your encryption endpoint should be hosted.

We differentiate between two possible architectures:

  • SSL termination on the load balancer
  • SSL passthrough

Here you can find more details on the different SSL Load Balancer Architectures. 

In our use case of exposing SSL workloads running on a TKGm cluster created by CSE, SSL termination is the supported architecture.

We need to create an NSX Advanced Load Balancer virtual service with an SSL certificate for the endpoint. The traffic is then forwarded from the load balancer to the containers as plain HTTP.

How to configure a service using SSL termination and a custom SSL certificate

In the following, I will show how to expose an NGINX deployment via HTTPS.

First of all, you have to create a deployment:

kubectl create deployment nginx --image=nginx --replicas=2

To expose a service using SSL termination you need to add the following annotation to your service definition:

annotations:
  service.beta.kubernetes.io/vcloud-avi-ssl-ports: "443"
  service.beta.kubernetes.io/vcloud-avi-ssl-cert-alias: "my-service-cert"

You need to replace my-service-cert with the alias of your certificate.

The easiest way to create a matching service is to run kubectl with the --dry-run option:

kubectl expose deployment nginx --type=LoadBalancer --port=443 --target-port=80 --dry-run=client -o yaml > nginx-svc.yaml

After adding the annotations, your nginx-svc.yaml should look like the following:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vcloud-avi-ssl-ports: "443"
    service.beta.kubernetes.io/vcloud-avi-ssl-cert-alias: "my-service-cert"
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
status:
  loadBalancer: {}
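If you keep your manifests in version control or generate them in scripts, note that kubectl apply also accepts JSON. A minimal sketch, assuming the vcloud-avi SSL annotation keys used by the CCM (double-check the exact keys against your CCM version's documentation); my-service-cert remains a placeholder for your certificate alias:

```shell
# Write the annotated Service manifest as JSON.
# "my-service-cert" is a placeholder for your certificate alias in VCD.
cat > nginx-svc.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "nginx",
    "labels": { "app": "nginx" },
    "annotations": {
      "service.beta.kubernetes.io/vcloud-avi-ssl-ports": "443",
      "service.beta.kubernetes.io/vcloud-avi-ssl-cert-alias": "my-service-cert"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "nginx" },
    "ports": [ { "port": 443, "protocol": "TCP", "targetPort": 80 } ]
  }
}
EOF
```

Apply it with kubectl apply -f nginx-svc.json.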

Execute kubectl apply -f nginx-svc.yaml and you are done.



Change virtual IPs for CSE K8s services

The VMware Container Service Extension offers a nice integration of Kubernetes with the NSX Advanced Load Balancer (formerly known as Avi Load Balancer).

With the following steps, you can create a demo nginx deployment and expose it to the VMware Cloud Director external network:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --type=LoadBalancer --port=80

You might have noticed that an internal virtual IP address within the 192.168.8.x range is assigned!

Loadbalancer virtual IP address

I have been asked quite often if and how this IP address range can be changed!

Yes, it is possible to change the IP address range but some Kubernetes magic is needed!

Disclaimer: You are executing the steps described below on your own responsibility!

First of all, you have to back up the original config! If you do not back up your config, there is a high risk of destroying your K8s cluster!

We have to figure out which configmap needs to be backed up. Look out for ccm.

$ kubectl get configmaps -n kube-system
NAME                                      DATA   AGE
antrea-ca                                 1      27h
antrea-config-9c7h568bgf                  3      27h
cert-manager-cainjector-leader-election   0      26h
cert-manager-controller                   0      26h
coredns                                   1      27h
extension-apiserver-authentication        6      27h
kube-proxy                                2      27h
kube-root-ca.crt                          1      27h
kubeadm-config                            2      27h
kubelet-config-1.21                       1      27h
vcloud-ccm-configmap                      1      27h
vcloud-csi-configmap                      1      27h

You need to backup the vcloud-ccm-configmap!

$ kubectl get configmap vcloud-ccm-configmap -o yaml -n kube-system > ccm-configmap-backup.yaml

As a next and even more important step, you have to back up the CCM deployment config.

Use kubectl to figure out which pod needs to be backed up. Typically the pod is deployed in the namespace kube-system. Look out for a pod containing vmware-cloud-director-ccm-*.

$ kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
antrea-agent-4t9wb                           2/2     Running   0          27h
antrea-agent-5dhz9                           2/2     Running   0          27h
antrea-agent-tfrqv                           2/2     Running   0          27h
antrea-controller-5456b989f5-fz45d           1/1     Running   0          27h
coredns-76c9c76db4-dz7sl                     1/1     Running   0          27h
coredns-76c9c76db4-tlggh                     1/1     Running   0          27h
csi-vcd-controllerplugin-0                   3/3     Running   0          27h
csi-vcd-nodeplugin-7w9k5                     2/2     Running   0          27h
csi-vcd-nodeplugin-bppmr                     2/2     Running   0          27h
etcd-mstr-7byg                               1/1     Running   0          27h
kube-apiserver-mstr-7byg                     1/1     Running   0          27h
kube-controller-manager-mstr-7byg            1/1     Running   0          27h
kube-proxy-5kk9j                             1/1     Running   0          27h
kube-proxy-psxlr                             1/1     Running   0          27h
kube-proxy-sh68t                             1/1     Running   0          27h
kube-scheduler-mstr-7byg                     1/1     Running   0          27h
vmware-cloud-director-ccm-669599b5b5-z572s   1/1     Running   0          27h
$ kubectl get pod vmware-cloud-director-ccm-669599b5b5-z572s -n kube-system -o yaml > ccm-deployment-backup.yaml

Copy ccm-configmap-backup.yaml to another file, e.g. ccm-configmap-new.yaml. Open the ccm-configmap-new.yaml you just created in a text editor like vim and change startIP and endIP according to your needs!

apiVersion: v1
data:
  vcloud-ccm-config.yaml: |
    vcd:
      host: ""
      org: "next-gen"
      vdc: "next-gen-ovdc"
      vAppName: ClusterAPI-MGMT
      network: "next-gen-int"
      vipSubnet: ""
    loadbalancer:
      oneArm:
        startIP: ""
        endIP: ""
      ports:
        http: 80
        https: 443
      certAlias: urn:vcloud:entity:cse:nativeCluster:58ae31cd-17b6-4702-bf79-7777b401eb32-cert
    clusterid: urn:vcloud:entity:cse:nativeCluster:58ae31cd-17b6-4702-bf79-7777b401eb32
immutable: true
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"vcloud-ccm-config.yaml":"vcd:\n  host: \"\"\n  org: \"next-gen\"\n  vdc: \"next-gen-ovdc\"\n  vAppName: ClusterAPI-MGMT\n  network: \"next-gen-int\"\n  vipSubnet: \"\"\nloadbalancer:\n  oneArm:\n    startIP: \"\"\n    endIP: \"\"\n  ports:\n    http: 80\n    https: 443\n  certAlias: urn:vcloud:entity:cse:nativeCluster:58ae31cd-17b6-4702-bf79-7777b401eb32-cert\nclusterid: urn:vcloud:entity:cse:nativeCluster:58ae31cd-17b6-4702-bf79-7777b401eb32\n"},"immutable":true,"kind":"ConfigMap","metadata":{"annotations":{},"name":"vcloud-ccm-configmap","namespace":"kube-system"}}
  creationTimestamp: "2022-01-27T09:46:18Z"
  name: vcloud-ccm-configmap
  namespace: kube-system
  resourceVersion: "440"
  uid: db3e8894-5060-44e2-b20f-1eda812f84a4
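Editing the file by hand works, but note that startIP and endIP appear twice: once in the config data and once more, JSON-escaped, inside the kubectl.kubernetes.io/last-applied-configuration annotation. A small helper can rewrite both occurrences. This is a sketch assuming GNU sed; the IP addresses in the usage example are placeholders:

```shell
# change_vip_range IN OUT START_IP END_IP
# Copies the backup file and rewrites startIP/endIP, both the plain YAML
# values and the JSON-escaped copies inside the last-applied annotation.
change_vip_range() {
  cp "$1" "$2"
  sed -i \
    -e 's/startIP: "[^"]*"/startIP: "'"$3"'"/' \
    -e 's/endIP: "[^"]*"/endIP: "'"$4"'"/' \
    -e 's/startIP: \\"[^"]*\\"/startIP: \\"'"$3"'\\"/' \
    -e 's/endIP: \\"[^"]*\\"/endIP: \\"'"$4"'\\"/' \
    "$2"
}
# Example usage on the cluster (placeholder range):
#   change_vip_range ccm-configmap-backup.yaml ccm-configmap-new.yaml 192.168.9.2 192.168.9.20
```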

PLEASE DOUBLE-CHECK that you have backed up your original config before continuing!
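Before running the destructive steps, a quick guard that the backup files actually exist cannot hurt. A sketch using the filenames from above:

```shell
# Verify that both backup files exist before touching the cluster.
backups_ok=true
for f in ccm-configmap-backup.yaml ccm-deployment-backup.yaml; do
  if [ ! -f "$f" ]; then
    echo "Backup $f not found - do NOT continue!"
    backups_ok=false
  fi
done
if $backups_ok; then
  echo "Both backups present - safe to continue."
else
  echo "Aborting."
fi
```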

Now delete the old CCM pod and the old configmap, apply the new configmap, and re-create the CCM pod; afterwards, the new virtual IP address range is used:

kubectl delete -f ccm-deployment-backup.yaml
kubectl delete -f ccm-configmap-backup.yaml
kubectl apply -f ccm-configmap-new.yaml
kubectl apply -f ccm-deployment-backup.yaml