»Run Vault on Kubernetes

Vault works with Kubernetes in various modes: dev, standalone, ha, and external.

»Helm Chart

The Vault Helm chart is the recommended way to install and configure Vault on Kubernetes. In addition to running Vault itself, the Helm chart is the primary method for installing and configuring Vault to integrate with other services such as Consul for High Availability (HA) deployments.

While the Helm chart automatically sets up complex resources and exposes the configuration to meet your requirements, it does not automatically operate Vault. You are still responsible for operational tasks such as monitoring, backing up, and upgrading the Vault cluster.

»How-To

»Install Vault

Helm must be installed and configured on your machine. Please refer to the Helm documentation or the Vault Installation to Minikube via Helm guide.

To use the Helm chart, add the HashiCorp Helm repository and check that you have access to the chart:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
"hashicorp" has been added to your repositories

$ helm search repo hashicorp/vault
NAME            CHART VERSION   APP VERSION DESCRIPTION
hashicorp/vault 0.13.0          1.7.3       Official HashiCorp Vault Chart

Use helm install to install the latest release of the Vault Helm chart.

$ helm install vault hashicorp/vault

Or install a specific version of the chart.

# List the available releases
$ helm search repo hashicorp/vault -l
NAME            CHART VERSION   APP VERSION DESCRIPTION
hashicorp/vault 0.13.0          1.7.3       Official HashiCorp Vault Chart
hashicorp/vault 0.12.0          1.7.2       Official HashiCorp Vault Chart
hashicorp/vault 0.11.0          1.7.0       Official HashiCorp Vault Chart
hashicorp/vault 0.10.0          1.7.0       Official HashiCorp Vault Chart
hashicorp/vault 0.9.1           1.6.2       Official HashiCorp Vault Chart
hashicorp/vault 0.9.0           1.6.1       Official HashiCorp Vault Chart
hashicorp/vault 0.8.0           1.5.4       Official HashiCorp Vault Chart
hashicorp/vault 0.7.0           1.5.2       Official HashiCorp Vault Chart
hashicorp/vault 0.6.0           1.4.2       Official HashiCorp Vault Chart

# Install version 0.13.0
$ helm install vault hashicorp/vault --version 0.13.0

The helm install command accepts parameters to override default configuration values, either inline or defined in a file.

Override the server.dev.enabled configuration value:

$ helm install vault hashicorp/vault \
    --set "server.dev.enabled=true"

Override all the configuration found in a file:

$ cat override-values.yml
server:
  ha:
    enabled: true
    replicas: 5
$ helm install vault hashicorp/vault \
    --values override-values.yml

»Dev mode

The Helm chart can run a Vault server in development mode. This installs a single Vault server with an in-memory storage backend.

Install the latest Vault Helm chart in development mode.

$ helm install vault hashicorp/vault \
    --set "server.dev.enabled=true"

»Standalone mode

The Helm chart runs in standalone mode by default. This installs a single Vault server with a file storage backend.

Install the latest Vault Helm chart in standalone mode.

$ helm install vault hashicorp/vault

»HA mode

The Helm chart may be run in high availability (HA) mode. This installs three Vault servers that use an existing Consul storage backend. We suggest installing Consul via the Consul Helm chart.
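As a sketch, Consul could be installed first from the same HashiCorp repository (the release name consul is illustrative; consult the Consul Helm chart documentation for production values):

```shell
# Install a Consul cluster to act as the Vault storage backend
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install consul hashicorp/consul
```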

Install the latest Vault Helm chart in HA mode.

$ helm install vault hashicorp/vault \
    --set "server.ha.enabled=true"

»External mode

The Helm chart may be run in external mode. This installs no Vault server and relies on a network-addressable Vault server to exist.

Install the latest Vault Helm chart in external mode.

$ helm install vault hashicorp/vault \
    --set "injector.externalVaultAddr=http://external-vault:8200"

»View the Vault UI

The Vault UI is enabled but NOT exposed as a service for security reasons. The Vault UI can be exposed via port-forwarding or through a ui configuration value.

Expose the Vault UI with port-forwarding:

$ kubectl port-forward vault-0 8200:8200
Forwarding from 127.0.0.1:8200 -> 8200
Forwarding from [::1]:8200 -> 8200
##...
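Alternatively, the UI can be exposed through chart values. A sketch assuming the ui.serviceType value of the Vault Helm chart (verify the value names against your chart version):

```yaml
# values.yaml -- expose the Vault UI behind a LoadBalancer service
ui:
  enabled: true
  serviceType: LoadBalancer
```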

»Initialize and unseal Vault

After the Vault Helm chart is installed in standalone or ha mode, one of the Vault servers needs to be initialized. The initialization generates the credentials necessary to unseal all the Vault servers.

»CLI initialize and unseal

View all the Vault pods in the current namespace:

$ kubectl get pods -l app.kubernetes.io/name=vault
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 0/1     Running   0          1m49s
vault-1                                 0/1     Running   0          1m49s
vault-2                                 0/1     Running   0          1m49s

Initialize one Vault server with the default number of key shares and default key threshold:

$ kubectl exec -ti vault-0 -- vault operator init
Unseal Key 1: MBFSDepD9E6whREc6Dj+k3pMaKJ6cCnCUWcySJQymObb
Unseal Key 2: zQj4v22k9ixegS+94HJwmIaWLBL3nZHe1i+b/wHz25fr
Unseal Key 3: 7dbPPeeGGW3SmeBFFo04peCKkXFuuyKc8b2DuntA4VU5
Unseal Key 4: tLt+ME7Z7hYUATfWnuQdfCEgnKA2L173dptAwfmenCdf
Unseal Key 5: vYt9bxLr0+OzJ8m7c7cNMFj7nvdLljj0xWRbpLezFAI9

Initial Root Token: s.zJNwZlRrqISjyBHFMiEca6GF
##...
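The number of key shares and the key threshold can also be set explicitly with the -key-shares and -key-threshold flags of vault operator init. A sketch (a single key share is only appropriate for testing):

```shell
# Initialize with 1 key share and a threshold of 1 (testing only)
$ kubectl exec -ti vault-0 -- vault operator init \
    -key-shares=1 \
    -key-threshold=1
```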

The output displays the generated key shares and the initial root token.

Unseal the Vault server with the key shares until the key threshold is met:

## Unseal the first vault server until it reaches the key threshold
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 1
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 2
$ kubectl exec -ti vault-0 -- vault operator unseal # ... Unseal Key 3

Repeat the unseal process for all Vault server pods. When all Vault server pods are unsealed, they report READY 1/1.
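A sketch of the remaining unseal steps as a loop, assuming the other pods are named vault-1 and vault-2 as shown above:

```shell
## Unseal the remaining Vault server pods with the same key shares
$ for pod in vault-1 vault-2; do
    kubectl exec -ti "$pod" -- vault operator unseal # ... Unseal Key 1
    kubectl exec -ti "$pod" -- vault operator unseal # ... Unseal Key 2
    kubectl exec -ti "$pod" -- vault operator unseal # ... Unseal Key 3
  done
```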

$ kubectl get pods -l app.kubernetes.io/name=vault
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 1/1     Running   0          1m49s
vault-1                                 1/1     Running   0          1m49s
vault-2                                 1/1     Running   0          1m49s

»Google KMS Auto Unseal

The Helm chart may be run with Google KMS for Auto Unseal. This enables Vault server pods to auto unseal if they are rescheduled.

Vault Helm requires the Google Cloud KMS credentials to be stored in credentials.json and mounted as a secret in each Vault server pod.

»Create the Secret

First, create the secret in Kubernetes:

$ kubectl create secret generic kms-creds --from-file=credentials.json

Vault Helm mounts this to /vault/userconfig/kms-creds/credentials.json.

»Config Example

This is a Vault Helm configuration that uses Google KMS:

global:
  enabled: true

server:
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: <PROJECT NAME>
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json

  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'

  ha:
    enabled: true
    replicas: 3

    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "gcpckms" {
        project     = "<NAME OF PROJECT>"
        region      = "global"
        key_ring    = "<NAME OF KEYRING>"
        crypto_key  = "<NAME OF KEY>"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

»Amazon EKS Auto Unseal

The Helm chart may be run on Amazon EKS with AWS KMS for Auto Unseal. This enables Vault server pods to auto unseal if they are rescheduled.

Vault Helm requires the AWS credentials to be stored as environment variables defined in each Vault server pod.

»Create the Secret

First, create a secret with your AWS access key and secret key:

$ kubectl create secret generic eks-creds \
    --from-literal=AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID?}" \
    --from-literal=AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY?}"
»Config Example

This is a Vault Helm configuration that uses AWS KMS:

global:
  enabled: true

server:
  extraSecretEnvironmentVars:
    - envName: AWS_ACCESS_KEY_ID
      secretName: eks-creds
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: eks-creds
      secretKey: AWS_SECRET_ACCESS_KEY

  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "awskms" {
        region     = "KMS_REGION_HERE"
        kms_key_id = "KMS_KEY_ID_HERE"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }

»Probes

Probes are essential for detecting failures and rescheduling pods in Kubernetes. The Helm chart offers configurable readiness and liveness probes which can be customized for a variety of use cases.

Vault's /sys/health endpoint can be customized to change the behavior of the health check. For example, we can change the Vault readiness probe to report the Vault pods as ready even if they are still uninitialized and sealed, using the following probe:

server:
  readinessProbe:
    enabled: true
    path: '/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204'

Using this customized probe, a postStart script could automatically run once the pod is ready for additional setup.
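As a sketch, the Vault Helm chart accepts a server.postStart list of commands to run in the container's postStart lifecycle hook (the setup script path below is hypothetical and assumes the script is mounted separately):

```yaml
# values.yaml -- run a hypothetical setup script after the container starts
server:
  postStart:
    - /bin/sh
    - -c
    - /vault/userconfig/setup/setup.sh  # hypothetical mounted script
```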

»Upgrading Vault on Kubernetes

To upgrade Vault on Kubernetes, we follow the same pattern as generally upgrading Vault, except we can use the Helm chart to update the Vault server StatefulSet. It is important to understand how to generally upgrade Vault before reading this section.

The Vault StatefulSet uses the OnDelete update strategy. It is critical to use OnDelete instead of RollingUpdate because standbys must be updated before the active primary. A failover to an older version of Vault must always be avoided.

»Upgrading Vault Servers

To initiate the upgrade, set the server.image values to the desired Vault version, either in a values yaml file or on the command line. For illustrative purposes, the example below uses vault:123.456.

server:
  image:
    repository: 'vault'
    tag: '123.456'

Next, list the Helm versions and choose the desired version to install.

$ helm search repo hashicorp/vault
NAME            CHART VERSION   APP VERSION DESCRIPTION
hashicorp/vault 0.12.0          1.7.2       Official HashiCorp Vault Chart

Next, test the upgrade with --dry-run first to verify the changes sent to the Kubernetes cluster.

$ helm upgrade vault hashicorp/vault --version=0.12.0 \
    --set='server.image.repository=vault' \
    --set='server.image.tag=123.456' \
    --dry-run

This should cause no changes (although the resources are updated). If everything is stable, helm upgrade can be run.

The helm upgrade command should have updated the StatefulSet template for the Vault servers, however, no pods have been deleted. The pods must be manually deleted to upgrade. Deleting the pods does not delete any persisted data.

If Vault is not deployed using ha mode, the single Vault server may be deleted by running:

$ kubectl delete pod <name of Vault pod>

If Vault is deployed using ha mode, the standby pods must be upgraded first. To identify which pod is currently the active primary, run the following command on each Vault pod:

$ kubectl exec -ti <name of pod> -- vault status | grep "HA Mode"
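A sketch that checks every pod in one pass, assuming the app.kubernetes.io/name=vault label used earlier:

```shell
# Print the HA mode of every Vault pod to find the active primary
$ for pod in $(kubectl get pods -l app.kubernetes.io/name=vault -o name); do
    echo -n "$pod: "
    kubectl exec -ti "$pod" -- vault status | grep "HA Mode"
  done
```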

Next, delete every pod that is not the active primary:

$ kubectl delete pod <name of Vault pods>

If auto-unseal is not being used, the newly scheduled Vault standby pods need to be unsealed:

$ kubectl exec -ti <name of pod> -- vault operator unseal

Finally, once the standby nodes have been updated and unsealed, delete the active primary:

$ kubectl delete pod <name of Vault primary>

Similar to the standby nodes, the former primary also needs to be unsealed:

$ kubectl exec -ti <name of pod> -- vault operator unseal

After a few moments the Vault cluster should elect a new active primary. The Vault cluster is now upgraded!

»Protecting Sensitive Vault Configurations

Vault Helm renders a Vault configuration file during installation and stores the file in a Kubernetes configmap. Some configurations require sensitive data to be included in the configuration file and would not be encrypted at rest once created in Kubernetes.

The following example shows how to add extra configuration files to Vault Helm to protect sensitive configurations from being in plaintext at rest using Kubernetes secrets.

First, create a partial Vault configuration with the sensitive settings Vault loads during startup:

$ cat <<EOF >>config.hcl
storage "mysql" {
username = "user1234"
password = "secret123!"
database = "vault"
}
EOF

Next, create a Kubernetes secret containing this partial configuration:

$ kubectl create secret generic vault-storage-config \
    --from-file=config.hcl

Finally, mount this secret as an extra volume and add an additional -config flag to the Vault startup command:

$ helm install vault hashicorp/vault \
  --set='server.extraVolumes[0].type=secret' \
  --set='server.extraVolumes[0].name=vault-storage-config' \
  --set='server.extraArgs=-config=/vault/userconfig/vault-storage-config/config.hcl'

»Architecture

We recommend running Vault on Kubernetes with the same general architecture as running it anywhere else. There are some benefits Kubernetes can provide that eases operating a Vault cluster and we document those below. The standard production deployment guide is still an important read even if running Vault within Kubernetes.

»Production Deployment Checklist

End-to-End TLS. Vault should always be used with TLS in production. If intermediate load balancers or reverse proxies are used to front Vault, they should not terminate TLS. This way traffic is always encrypted in transit to Vault and minimizes risks introduced by intermediate layers. See the official documentation for an example of configuring Vault Helm to use TLS.
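A sketch of the relevant chart values, assuming the TLS certificate and key are stored in a Kubernetes secret named vault-server-tls (the secret name and file names are illustrative):

```yaml
# values.yaml -- end-to-end TLS (secret name and paths are illustrative)
global:
  tlsDisable: false

server:
  extraVolumes:
    - type: secret
      name: vault-server-tls   # mounted at /vault/userconfig/vault-server-tls/
  ha:
    enabled: true
    config: |
      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file  = "/vault/userconfig/vault-server-tls/vault.key"
      }
```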

Single Tenancy. Vault should be the only main process running on a machine. This reduces the risk that another process running on the same machine is compromised and can interact with Vault. This can be accomplished by using Vault Helm's affinity settings. See the official documentation for an example of configuring Vault Helm to use affinity rules.

Enable Auditing. Vault supports several auditing backends. Enabling auditing provides a history of all operations performed by Vault and provides a forensics trail in the case of misuse or compromise. Audit logs securely hash any sensitive data, but access should still be restricted to prevent any unintended disclosures. Vault Helm includes a configurable auditStorage option that provisions a persistent volume to store audit logs. See the official documentation for an example of configuring Vault Helm to use auditing.
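A sketch of the auditStorage values (the size and storage class are illustrative):

```yaml
# values.yaml -- provision a persistent volume for audit logs
server:
  auditStorage:
    enabled: true
    size: 10Gi
    # storageClass: <your storage class>  # optional; cluster default if unset
```

Note that provisioning the volume does not by itself enable an audit device; after unsealing, one still needs to be enabled, for example with vault audit enable file file_path=/vault/audit/vault_audit.log (the path assumes the chart's default audit mount).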

Immutable Upgrades. Vault relies on an external storage backend for persistence, and this decoupling allows the servers running Vault to be managed immutably. When upgrading to new versions, new servers with the upgraded version of Vault are brought online. They are attached to the same shared storage backend and unsealed. Then the old servers are destroyed. This reduces the need for remote access and upgrade orchestration which may introduce security gaps. See the upgrade section for instructions on upgrading Vault on Kubernetes.

Upgrade Frequently. Vault is actively developed, and updating frequently is important to incorporate security fixes and any changes in default settings such as key lengths or cipher suites. Subscribe to the Vault mailing list and GitHub CHANGELOG for updates.

Restrict Storage Access. Vault encrypts all data at rest, regardless of which storage backend is used. Although the data is encrypted, an attacker with arbitrary control can cause data corruption or loss by modifying or deleting keys. Access to the storage backend should be restricted to only Vault to avoid unauthorized access or operations.