The purpose of this post is to explain why it's worth adding encryption to the network communications between containers, and how to achieve it at the application level by creating Transport Layer Security (TLS) certificates with the APIs that Kubernetes already provides.
Is it necessary to add encryption between containers?
When running applications on containerized infrastructures like Docker or Kubernetes, the security of the network communication between the containers inside the cluster usually isn't seen as very important. The reason is that containers run on an additional network layer on top of their hosts' networks, and to reach the containers from outside of the cluster, specific rules have to be defined (e.g. port-forwarding can be used to assign a port on the host which will forward packets to a service running in a container). But even so, the traffic between containers is unencrypted (with the exception of requests to the Kubernetes API server, which are encrypted by default), and if the containers run on different hosts, that traffic can be sniffed.

On top of that, there are features like Kubernetes Federation that allow multiple clusters to be managed together, which can even include clusters running in different cloud providers (e.g. Google Cloud, AWS) and other clusters running on-premises (e.g. OpenStack, which is the case for services provided by the CERN IT department). In that case, the traffic between containers would not stay local between the machines but could travel over the public internet, and it's definitely not a good idea to send it without any encryption.
The Kubernetes clusters we use in CERN IT are created using the OpenStack Magnum component, and the network driver is flannel. The easiest way to add encryption would be to rely on the network driver itself for such a feature. In the case of flannel, there is ongoing work to add IPsec, which would provide an additional level of encapsulation and encryption at the network level, but at the time of writing this post it's still at an experimental stage. So for now, the recommended way to add encryption between containers is at the application level, configuring the TLS protocol. In the next section I'll explain how to create TLS certificates using the Kubernetes built-in tools.
Create TLS certificates using Kubernetes API
This section will guide you through all the necessary steps to create TLS certificates using the Kubernetes API.
Prerequisites
- A running Kubernetes cluster (tested with version 1.9 but should also be compatible with versions 1.5 and above)
- The Kubernetes cluster must have a certificate signer enabled that serves the certificates API and issues the certificates (check the last note of this Kubernetes guide)
- kubectl client (minimum version 1.8.2) configured for the Kubernetes cluster that will be used
- openssl client (it's usually already installed on most Unix machines, but tools like cfssl can also be used, as proposed by the official Kubernetes guide for creating TLS certificates). Tested with version 1.0.2k-fips
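Before starting, you can quickly check that the clients are available and that kubectl can reach the cluster (the versions in your output will of course differ from the ones mentioned above):
$ kubectl version --short
$ openssl version
$ kubectl get nodes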
Create a private key and a Certificate Signing Request
First, we will create a configuration file for the openssl command we'll use. Creating this file is optional, because you could create a Certificate Signing Request (CSR) and its private key by providing all the attributes directly on the command line, but that only makes sense for a very simple certificate. A practical TLS certificate in a Kubernetes cluster will probably include the Subject Alternative Name (SAN) extension, so that it can be used by multiple containers of the cluster. And if it's necessary to define SAN values, it's highly recommended to define the specific attributes in a configuration file following the openssl format (you can check the openssl client's arguments and its configuration file format in the openssl manual pages or online). It's also better suited for scripts, if you want to automate the creation of certificates. For this example, I'll use a configuration file.
Create a new folder for this example and move to it. Then create a file named openssl.conf with the following content:
[ req ]
default_bits = 2048
prompt = no
encrypt_key = no
distinguished_name = req_dn
req_extensions = req_ext
[ req_dn ]
CN = server1.default.svc.cluster.local
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = server1.default.svc.cluster.local
DNS.2 = server2.default.svc.cluster.local
DNS.3 = server3.default.svc.cluster.local
You can customize the contents of the file to your specific needs, replacing values or adding any of the remaining attributes that can be found in the previously mentioned openssl manual pages. Using the example file as it is, a CSR will be created for the servers server1, server2 and server3.
Although the containers running in a cluster can communicate with each other using short DNS names (corresponding to the name of a Kubernetes Service created for their Kubernetes Pods, e.g. 'server1'), we need to specify the fully qualified domain name (FQDN) for the SAN values, so that hostname verification against the TLS certificate works as expected. The FQDN for a service associated with a pod has the form "service.namespace.svc.cluster.local", where "service" is the name of the Kubernetes Service created for the pod, and "namespace" is the namespace where the pod is running.
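For instance, you can check the FQDN resolution from inside the cluster with a temporary pod; this assumes a Kubernetes Service named server1 already exists in the default namespace:
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup server1.default.svc.cluster.local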
Run the following openssl command to create a private key and the CSR file:
$ openssl req -new -config openssl.conf -out server.csr -keyout server.key
Now you should see two new files in the folder: the created private key server.key and the CSR file server.csr.
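Before sending the CSR to Kubernetes, you can optionally inspect it to confirm that the SAN values from the configuration file were included:
$ openssl req -text -noout -verify -in server.csr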
Send the CSR file to the Kubernetes API
Run the following command to create a Kubernetes CSR object:
$ cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: server-csr
spec:
  groups:
  - system:authenticated
  request: $(cat server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
This command encodes the CSR file server.csr in base64 and sends a request to the Kubernetes API to create a CSR object. It specifically requests a server certificate, but the Kubernetes API supports all types of certificates (check the full list of values that can be put in the usages field of the command).
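For reference, a request for a client certificate (e.g. for mutual TLS between containers) only differs in the usages field. A sketch of such a request, where client.csr and the object name client-csr are hypothetical and would be created the same way as above:
$ cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: client-csr
spec:
  groups:
  - system:authenticated
  request: $(cat client.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF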
The Kubernetes CSR object should now be visible in the Kubernetes API in Pending state. You can check it with the following command:
$ kubectl get csr server-csr
NAME         AGE       REQUESTOR     CONDITION
server-csr   11m       Magnum User   Pending
Approve the CSR object
Issue the following command to approve the CSR that is in pending state:
$ kubectl certificate approve server-csr
In order to run this command, you need administrative permissions over the Kubernetes cluster you are using. If you don't have them, you will have to ask the cluster's administrator to approve this CSR object.
Download the server certificate from Kubernetes API
Now that the CSR is approved, run the following command to download the signed certificate:
$ kubectl get csr server-csr -o jsonpath='{.status.certificate}' | base64 -d > server.crt
If the CSR object was successfully approved (you can check again with the command kubectl get csr server-csr) but the downloaded file server.crt is empty, it could mean that the Kubernetes controller responsible for signing the certificates is disabled. It was not enabled by default in Kubernetes clusters created by OpenStack Magnum (you can check the merge request in the Magnum project which enables it).
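One way to check whether the signer is configured, assuming you have SSH access to the node where the kube-controller-manager process runs (its exact location depends on how the cluster was deployed), is to look for the cluster-signing flags in its arguments:
$ ps aux | grep kube-controller-manager | tr ' ' '\n' | grep cluster-signing
If the signer is enabled, you should see the flags --cluster-signing-cert-file and --cluster-signing-key-file pointing to the cluster's CA certificate and key.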
You can validate the certificate by checking its details with the following command:
$ openssl x509 -text -in server.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
7c:5e:70:fe:bd:0c:1b:59:f4:f1:fd:41:b0:68:e1:ac:42:6b:b8:b4
Signature Algorithm: ecdsa-with-SHA256
Issuer: C=US, ST=CA, L=San Francisco, CN=your-kubernetes-cluster-name
Validity
Not Before: Feb 21 09:35:00 2018 GMT
Not After : Feb 21 09:35:00 2019 GMT
Subject: CN=server1.default.svc.cluster.local
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
[...]
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Key Identifier:
80:A5:5A:96:94:C9:B9:3D:97:11:B3:E9:D6:93:56:76:BC:B7:B0:08
X509v3 Authority Key Identifier:
keyid:81:11:48:CA:4F:C9:75:85:24:D8:F0:21:7D:D0:FC:CD:3B:E2:E1:0E
X509v3 Subject Alternative Name:
DNS:server1.default.svc.cluster.local, DNS:server2.default.svc.cluster.local, DNS:server3.default.svc.cluster.local
Signature Algorithm: ecdsa-with-SHA256
[...]
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
As you can see, it includes all the SAN values we provided in the openssl.conf file. Also notice that the issuer of the certificate is specific to your Kubernetes cluster. In order to trust certificates created following this procedure, it's necessary to trust the Certification Authority (CA) of the cluster. Every container automatically gets its cluster's dedicated CA certificate mounted at the path /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, so you only need to add this CA certificate to the trusted certificates of any client or server that has to verify the certificate you just created, and it will start trusting it.
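As a quick illustration, any client container in the cluster can verify a TLS connection against a server using the new certificate by pointing curl at the mounted cluster CA (this assumes a server is already listening on HTTPS behind the server1 Service):
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    https://server1.default.svc.cluster.local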
And that's it. At this point, you have a private key file server.key and a server certificate file server.crt in the folder you created for this example. You can use these files to set up TLS encryption for your containers and increase the networking security in your Kubernetes clusters.
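A common way to make these files available to your containers is to store them in a Kubernetes Secret and mount it into the pods that need them (the secret name server-tls below is just an example):
$ kubectl create secret tls server-tls --cert=server.crt --key=server.key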
Convert the certificates to Java KeyStore format for Java servers
The certificates you obtained are in PEM format and ready to use. But if you are developing in a Java environment, chances are that your environment doesn't support PEM certificates directly (which is the case for Oracle WebLogic Server, for example). That won't prevent you from using these certificates: you just need to convert them to a Java KeyStore in JKS format. The easiest way to perform this conversion, and the one explained below, is to use the standard tools openssl and Java keytool.
There are other ways to do it, like using the ImportPrivateKey tool provided by the WebLogic Server Java Utilities or the orapki utility, which both come with WebLogic Server and other Oracle software, but in this case the standard tools will be used.
Prerequisites
- openssl client (tested with version 1.0.2k-fips)
- Java SE Development Kit (JDK). Tested with version 1.8.0_152 but should work with JDK versions 1.6 and above.
Steps for the conversion
For the following instructions, it is assumed that your $PATH environment variable includes the paths to the openssl and JDK binaries. If that's not the case, you will have to use the full paths in the given commands.
The first step is to convert the server certificate and private key from PEM format to PKCS #12 format (this is necessary because the keytool command doesn't accept PEM certificates directly, but it does accept PKCS #12). Run this command:
$ openssl pkcs12 -export \
-in server.crt \
-inkey server.key \
-out server.pkcs12 \
-name cert_alias \
-password pass:pass123
In the previous command:
- name is the alias under which the certificate and private key will be stored
- password is the password for exporting the file, and will need to be provided to import its contents. It needs to be prefixed with 'pass:' if its value is provided directly in the argument (you can check the different options in the documentation).
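Optionally, you can check that the generated PKCS #12 file is readable with the chosen password and inspect its structure:
$ openssl pkcs12 -info -in server.pkcs12 -noout -password pass:pass123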
The second and last step is to convert the file from PKCS #12 format to a Java KeyStore in JKS format with the following command:
$ keytool -importkeystore \
-srckeystore server.pkcs12 \
-srcstoretype PKCS12 \
-srcstorepass pass123 \
-srcalias cert_alias \
-destalias cert_alias \
-destkeystore server.jks \
-deststoretype JKS \
-destkeypass keypass123 \
-deststorepass storepass123
In the given command:
- srcstorepass is the password used to import the contents of the PKCS #12 file and has to match the value provided for the 'password' argument of the previous command
- srcalias has to match the value provided for the 'name' argument of the previous command
- destalias is the alias under which your certificate and private key will be stored in the created keystore
- destkeypass is a password to protect the private key inside the keystore
- deststorepass is a passphrase to protect the keystore itself
Now the server's private key and certificate have been converted into a JKS keystore stored in the file server.jks. You can check the created keystore with this command:
$ keytool -list -keystore server.jks -alias cert_alias -storepass storepass123
cert_alias, Feb 21, 2018, PrivateKeyEntry,
Certificate fingerprint (SHA1): 49:F0:09:0E:91:1C:F4:68:C9:D4:3C:DD:E4:13:56:9A:D1:B7:06:EE
If you also need a keystore with the CA certificate to trust the server's certificate, you can use the following command (it must be executed inside a container that is part of the cluster, because the CA certificate is automatically mounted in the containers at a fixed path):
$ keytool -import -file /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-keystore trust.jks \
-storepass storepass123 \
-alias kubernetes_ca -noprompt
In the previous command:
- storepass is a passphrase to protect the keystore
- alias is the alias under which the CA certificate will be stored in the created keystore
Now you have the server's certificate and private key in the keystore server.jks, and also the Kubernetes CA certificate in the keystore trust.jks. You are now ready to enable TLS encryption for your Java applications running in Kubernetes clusters as well.
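How you plug these keystores into your server depends on the product (WebLogic Server, for instance, has its own keystore configuration), but for many Java applications the standard JSSE system properties are enough. A minimal sketch, assuming a hypothetical application packaged as your-application.jar and the file names and passwords used above (note that the default JSSE key manager unlocks private keys with the keystore password, so it's simplest to make -destkeypass equal to -deststorepass in the keytool step):
$ java -Djavax.net.ssl.keyStore=server.jks \
       -Djavax.net.ssl.keyStorePassword=storepass123 \
       -Djavax.net.ssl.trustStore=trust.jks \
       -Djavax.net.ssl.trustStorePassword=storepass123 \
       -jar your-application.jar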
Useful links:
- Guide to get Kubernetes - if you don't have it already and you don't use OpenStack Magnum
- Kubernetes guide for creating TLS certificates
- openssl - for documentation and client binaries