I am quite often asked if I could describe how to add a certificate to Cloud Director so that it can be used as described in my blog post on How to use custom SSL certificates.
Here we go:
First of all, you have to give your tenants the permission to manage their own certificates:
Edit the global role Organization Administrator, or any other role you want to grant Certificate Library permissions to, under “Administration->Tenant Access Control->Global Roles->Organization Administrator->Edit”.
Afterwards you can log in as an Org user that has the rights assigned accordingly and add the certificate to your Certificate Library:
Go to Administration->Certificate Management->Certificates Library->Import.
Afterwards, give the certificate a name:
Import the full-chain of the certificate:
and the private key:
Now the certificate is ready to use! You can refer to the cert using the name you have given.
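If you prefer to automate this step instead of clicking through the UI, the certificate can also be uploaded via the Cloud Director CloudAPI. The sketch below is only an outline: the endpoint path and the payload field names (alias, certificate, privateKey) are assumptions based on the CloudAPI naming scheme, so please verify them against the API reference of your Cloud Director version. The hostname, token, and file names are placeholders.

# Upload a certificate into the certificate library via the CloudAPI
# (endpoint and payload fields are assumptions - check your VCD API reference)
$ curl -k -X POST "https://vcd.example.com/cloudapi/1.0.0/ssl/certificateLibrary" \
    -H "Authorization: Bearer $VCD_TOKEN" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --rawfile cert fullchain.pem --rawfile key privkey.pem \
          '{alias: "my-tls-cert", certificate: $cert, privateKey: $key}')"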
Using VMware Cloud Director and the Container Service extension you can use kubectl expose to create a service of type Load Balancer.
The Kubernetes clusters created by the Container Service Extension can leverage the NSX Advanced Load Balancer (formerly known as AVI Load Balancer). The integration, which is done via the Cloud Controller Manager (CCM), supports L4 load balancing. With the latest version of the CCM, you are now able to define a certificate per service.
CCM version 1.1.0.latest is needed for the following steps.
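As a rough illustration of what this looks like on the Kubernetes side, the sketch below annotates a LoadBalancer service with an SSL port and the alias of a certificate from the Cloud Director certificate library. The annotation keys are assumptions based on the cloud-provider-for-cloud-director documentation, so please double-check them against the CCM release you are running; the service name and certificate alias are placeholders.

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  annotations:
    # Port(s) on which the NSX ALB virtual service should terminate SSL
    service.beta.kubernetes.io/vcloud-avi-ssl-ports: "443"
    # Alias of the certificate imported into the Cloud Director certificate library
    service.beta.kubernetes.io/vcloud-avi-ssl-cert-alias: "my-tls-cert"
spec:
  type: LoadBalancer
  selector:
    app: nginx-demo
  ports:
    - name: https
      port: 443
      targetPort: 80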
A little bit of background on SSL load balancers
When creating a load balancer for encrypted traffic, you have to decide where your encryption endpoint should be hosted.
We differentiate between two possible architectures:
SSL Termination on the Load Balancer
SSL Passthrough
Here you can find more details on the different SSL Load Balancer Architectures.
In our use case, exposing SSL workloads running on a TKGm cluster created by CSE, SSL termination is the supported architecture.
We need to create an NSX Advanced Load Balancer with an SSL certificate for the endpoint. The traffic will be forwarded from the load balancer to the containers as HTTP traffic.
How to configure a service using SSL termination and a custom SSL-certificate
In the following, I will show how to expose an NGINX deployment using HTTPS.
The VMware Container Service Extension offers a nice integration of Kubernetes with the NSX Advanced Load Balancer (formerly known as AVI Load Balancer).
With the following steps, you can create a demo NGINX deployment and expose it to the VMware Cloud Director external network:
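As a minimal sketch, the demo workload can be created with kubectl; the deployment name nginx-demo and the plain nginx image are just placeholders:

# Create a demo NGINX deployment
$ kubectl create deployment nginx-demo --image=nginx

# Expose it as a service of type LoadBalancer; the CCM will request a virtual
# service on the NSX Advanced Load Balancer and assign an external IP
$ kubectl expose deployment nginx-demo --port=80 --target-port=80 --type=LoadBalancer

# Watch for the external IP assigned by the load balancer
$ kubectl get service nginx-demo -w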
As a next and more important step, you have to back up the CCM deployment configuration.
Use kubectl to figure out which pod needs to be backed up. Typically, the pod is deployed in the namespace kube-system. Look out for a pod whose name starts with vmware-cloud-director-ccm-*.
$ kubectl get pod vmware-cloud-director-ccm-669599b5b5-z572s -n kube-system -o yaml > ccm-deployment-backup.yaml
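The CCM also reads its settings from a configmap in kube-system, and the next step edits exactly that configmap, so it is worth backing it up as well. The name vcloud-ccm-configmap below is an assumption; list the configmaps in kube-system to confirm what it is called in your cluster:

# List the configmaps used by the CCM (the name may differ per CCM version)
$ kubectl get configmaps -n kube-system

# Back up the CCM configmap referenced in the next step
$ kubectl get configmap vcloud-ccm-configmap -n kube-system -o yaml > ccm-configmap-backup.yaml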
Copy the ccm-configmap-backup.yaml to another file like ccm-configmap-new.yaml. Open the ccm-configmap-new.yaml you created before in a text editor like vim. Change the startIP and endIP according to your needs!
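In shell terms, the copy-and-edit step looks roughly like this (file names as used above; where exactly startIP and endIP live inside the configmap depends on your CCM version):

# Work on a copy so the backup stays untouched
$ cp ccm-configmap-backup.yaml ccm-configmap-new.yaml
$ vim ccm-configmap-new.yaml   # adjust startIP and endIP of the virtual IP range

# Apply the changed configmap back to the cluster
$ kubectl apply -f ccm-configmap-new.yaml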
VMware Cloud Director 10.2 is here! This is a big release and a big step forward.
I have already played with Cloud Director 10.2 for a while, and it comes with lots of improvements:
NSX-T integration: The NSX-T integration was significantly improved so that NSX-V and NSX-T reached feature parity! One of my personal highlights is the support of VRFs and the AVI load balancer integration.
Support of vSphere with Tanzu in Cloud Director: VMware Cloud Director now supports vSphere with Tanzu integration. It is possible to enable self-service creation and management of TKG clusters out of VMware Cloud Director 10.2.
Please stay tuned, I will publish a series of blog posts on the integration of vSphere with Tanzu in VMware Cloud Director very soon!
Disclaimer: All changes that you make are your own responsibility! Please back up your configuration files before changing them. I am not liable for any damages you might cause by following this blog post!
With CSE 2.6, encryption of configuration files was introduced to protect confidential information like the RabbitMQ, vCenter, and VMware Cloud Director passwords.
If you want to run the Container Service Extension as a service, you need to ensure the highest level of security possible, particularly in production, so let's start at the beginning.
The CSE configuration file
You should not leave any configuration information unencrypted; otherwise, your server may be more openly accessible.
During the encryption of the file you are asked to define a password. This password is needed whenever you are using the configuration file.
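For reference, encrypting the configuration file is done with the cse CLI; the exact flag names may differ between CSE releases, so check cse encrypt --help:

# Encrypt the plain-text configuration file; you will be prompted for a password
$ cse encrypt config.yaml --output encrypted-config.yaml

# Optionally remove the unencrypted original afterwards
$ rm config.yaml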
You have to keep in mind that CSE does not have an internal way to store information like connection details, credentials for the needed connections, or even state.
This means that CSE is completely stateless, with all advantages and disadvantages. One of the advantages is that you can redeploy CSE on any server as long as CSE is installed and you have the configuration file. One of the disadvantages is that you need to provide the configuration file and the password during startup of the service, so you have to find a way to provide the password at boot time.
When starting CSE, you will be prompted during the startup of the service to provide the password for the encrypted configuration file:
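Started manually, that looks roughly like this (path to the config file as used later in this post):

# Start CSE with the encrypted configuration file;
# CSE prompts for the decryption password before it starts serving requests
$ cse run -c /home/stefan/encrypted-config.yaml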
In a real-world installation of CSE, you do not want to start CSE manually whenever your server reboots. Therefore, you need a systemd unit file to automatically start CSE. If you followed my post carefully, you might have an idea what the challenge is: you have to provide the password during boot time of the server in the environment variable $CSE_CONFIG_PASSWORD. There are three ways to declare this variable:
in the init script
in an environment script referenced in the startup script
directly in the systemd unit file
The init script consists of several sections. I will just explain the sections relevant for CSE that are the absolute basics:
Unit
Service
The Unit section looks like follows:
[Unit]
Description=Container Service Extension for VMware vCloud Director
Wants=network-online.target
After=network-online.target
In the Unit section you describe the service itself as well as dependencies that need to be fulfilled to start your service successfully. Wants means that the network-online target is a soft requirement: it should be started, but the init script will be executed even if it was not started successfully. After means that the service will be started after the network-online target has been reached.
In the Service section you define the startup of the service: which executable should be started, which user to use for the startup of the service, and which WorkingDirectory. In our example of CSE, you will execute a shell script to start CSE. ExecStart defines the executable to use and how to call it. User defines the user under which the service will be started. WorkingDirectory defines which directory to use to store temporary information like log files. EnvironmentFile defines a file in which you can set the environment that is used when the service is started. Environment defines variables that are available during execution of the script. In our use case you need either Environment, EnvironmentFile, or none of them.
Define the variable in the init script
You can define CSE_CONFIG_PASSWORD directly in the shell script that is called to start CSE. You have to add export CSE_CONFIG_PASSWORD='mysecretpassword' to the shell script, like this:
#!/bin/bash
# The password is stored here in clear text, so restrict permissions on this script!
export CSE_CONFIG_PASSWORD='mysecretpassword'
/home/stefan/.local/bin/cse run -c /home/stefan/encrypted-config.yaml
You have to ensure that only the service user has read/write and execute permission on the file, because you have to store the password in clear text!
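To lock the script down accordingly (user and path taken from the unit file example below):

# Only the service user may read, write, or execute the startup script
$ chown stefan:stefan /home/stefan/cse.sh
$ chmod 700 /home/stefan/cse.sh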
The environment variable is only valid for the time of execution and cannot be read by users other than the service user and root. If somebody has unauthorized root access, you have another problem.
Define CSE_CONFIG_PASSWORD in the environment script
To define the variable via an environment file, you need to add EnvironmentFile=/path/to/file to the cse.service systemd unit file. A systemd unit file could look like follows:
[Unit]
Description=Container Service Extension for VMware vCloud Director
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/home/stefan/cse.sh
User=stefan
WorkingDirectory=/home/stefan
EnvironmentFile=/path/to/file
Type=simple
Restart=always
[Install]
WantedBy=default.target
Please be aware that if the EnvironmentFile is not available during execution of the script, your script will fail and CSE will not start. The environment file itself is quite simple:
CSE_CONFIG_PASSWORD='MySecretPassword'
You have to ensure that only the service user has read/write and execute permission on the file, because you have to store the password in clear text!
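For example (with /path/to/file being the placeholder used in the unit file above):

# Restrict the environment file to the service user; it holds the password in clear text
$ chown stefan:stefan /path/to/file
$ chmod 600 /path/to/file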
Definition in systemd unit file
To define the environment variable in the systemd unit file, you just have to add Environment=CSE_CONFIG_PASSWORD='Mysecretpassword' to the [Service] section of the systemd unit file that defines the CSE startup. You can find several examples of the unit file above.
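The [Service] section of the unit file from above would then look like this (EnvironmentFile is no longer needed in this variant):

[Service]
ExecStart=/home/stefan/cse.sh
User=stefan
WorkingDirectory=/home/stefan
# The password is provided directly to the service environment
Environment=CSE_CONFIG_PASSWORD='Mysecretpassword'
Type=simple
Restart=always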
You have to ensure that only the service user has read/write and execute permission on the file, because you have to store the password in clear text!
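Whichever of the three variants you choose, the unit file has to be installed and enabled so that CSE starts automatically at boot; the path below is the standard location for custom units and is only an example:

# Install the unit file, reload systemd and enable CSE at boot
$ sudo cp cse.service /etc/systemd/system/cse.service
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now cse.service

# Verify that CSE started successfully
$ sudo systemctl status cse.service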
Sources
During the creation of this blog entry I used several sources that might be interesting for further reading:
On a daily basis we are working on so many topics that would be interesting for a broader audience. Therefore, I thought it would be a great idea to start a blog.
So stay tuned, there will be a lot of interesting material posted in the coming days.