Volumes
On-disk files in a container are ephemeral, which presents some problems for
non-trivial applications when running in containers. One problem
is the loss of files when a container crashes. The kubelet restarts the container
but with a clean state. A second problem occurs when sharing files
between containers running together in a Pod.
The Kubernetes volume abstraction
solves both of these problems.
Familiarity with Pods is suggested.
Background
Docker has a concept of volumes, though it is somewhat looser and less managed. A Docker volume is a directory on disk or in another container.
Kubernetes supports many types of volumes. A Pod
can use any number of volume types simultaneously.
Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond
the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes;
however, Kubernetes does not destroy persistent volumes.
For any kind of volume in a given pod, data is preserved across container restarts. At its core, a volume is a directory, possibly with some data in it, which
is accessible to the containers in a pod. How that directory comes to be, the
medium that backs it, and the contents of it are determined by the particular
volume type used. To use a volume, specify the volumes to provide for the Pod in Volumes cannot mount within other volumes (but see Using subPath
for a related mechanism). Also, a volume cannot contain a hard link to anything in
a different volume. Kubernetes supports several types of volumes. An There are some restrictions when using an Before you can use an EBS volume with a pod, you need to create it. Make sure the zone matches the zone you brought up your cluster in. Check that the size and EBS volume
type are suitable for your use. If the EBS volume is partitioned, you can supply the optional field The To disable the The For more details, see the . The To disable the The For more details, see the . The Azure File CSI driver does not support using same volume with different fsgroups, if Azurefile CSI migration is enabled, using same volume with different fsgroups won't be supported at all. To disable the A See the
The The A ConfigMap
provides a way to inject configuration data into pods.
The data stored in a ConfigMap can be referenced in a volume of type
When referencing a ConfigMap, you provide the name of the ConfigMap in the
volume. You can customize the path to use for a specific
entry in the ConfigMap. The following configuration shows how to mount
the The A See the downward API example for more details. An Some uses for an Depending on your environment, An See the
A See the
A There are some restrictions when using a One feature of GCE persistent disk is concurrent read-only access to a persistent disk.
A Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless
the PD is read-only or the replica count is 0 or 1. Before you can use a GCE persistent disk with a Pod, you need to create it. Dynamic provisioning is possible using a
StorageClass for GCE PD.
Before creating a PersistentVolume, you must create the persistent disk: The To disable the A Here is an example of a .spec.volumes
and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts
.
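The declaration described above can be sketched as a minimal Pod manifest (the names test-pd and cache-volume are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd              # illustrative name
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        # Where the volume appears inside this container.
        - mountPath: /cache
          name: cache-volume
  volumes:
    # The volume itself, shared by all containers that mount it.
    - name: cache-volume
      emptyDir: {}
```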
A process in a container sees a filesystem view composed from the initial contents of
the container image, plus volumes
(if defined) mounted inside the container.
The process sees a root filesystem that initially matches the contents of the container
image.
Any writes to within that filesystem hierarchy, if allowed, affect what that process views
when it performs a subsequent filesystem access.
Volumes mount at the specified paths within
the image.
For each container defined within a Pod, you must independently specify where
to mount each volume that the container uses.
Types of Volumes
awsElasticBlockStore
An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS volume into your Pod.
Unlike emptyDir, which is erased when a pod is removed, the contents of an EBS
volume are persisted and the volume is unmounted. This means that an
EBS volume can be pre-populated with data, and that data can be shared between pods.
Before you can use an EBS volume with a pod, you need to create it with aws ec2 create-volume
or the AWS API. Make sure the zone matches the zone you brought up your cluster in,
and check that the size and EBS volume type are suitable for your use.
There are some restrictions when using an awsElasticBlockStore volume:
- the nodes on which pods are running must be AWS EC2 instances
- those instances need to be in the same region and availability zone as the EBS volume
- EBS only supports a single EC2 instance mounting a volume
Creating an AWS EBS volume
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
AWS EBS configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-ebs
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-ebs
name: test-volume
volumes:
- name: test-volume
# This AWS EBS volume must already exist.
awsElasticBlockStore:
volumeID: "<volume id>"
fsType: ext4
partition: "<partition number>"
to specify which partition to mount on.
AWS EBS CSI migration
Kubernetes v1.17 [beta]
CSIMigration
feature for awsElasticBlockStore
, when enabled, redirects
all plugin operations from the existing in-tree plugin to the ebs.csi.aws.com
Container
Storage Interface (CSI) driver. In order to use this feature, the AWS EBS CSI driver
must be installed on the cluster and the CSIMigration
and CSIMigrationAWS
beta features must be enabled.
AWS EBS CSI migration complete
Kubernetes v1.17 [alpha]
awsElasticBlockStore
storage plugin from being loaded by the controller manager
and the kubelet, set the InTreePluginAWSUnregister
flag to true.
azureDisk
The azureDisk volume type mounts a Microsoft Azure Data Disk into a pod.
azureDisk CSI migration
Kubernetes v1.19 [beta]
CSIMigration
feature for azureDisk
, when enabled, redirects all plugin operations
from the existing in-tree plugin to the disk.csi.azure.com
Container
Storage Interface (CSI) Driver. In order to use this feature, the Azure Disk CSI driver
must be installed on the cluster and the CSIMigration
and CSIMigrationAzureDisk
features must be enabled.
azureDisk CSI migration complete
Kubernetes v1.21 [alpha]
azureDisk
storage plugin from being loaded by the controller manager
and the kubelet, set the InTreePluginAzureDiskUnregister
flag to true.
azureFile
The azureFile volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0)
into a pod.
azureFile CSI migration
Kubernetes v1.21 [beta]
CSIMigration
feature for azureFile
, when enabled, redirects all plugin operations
from the existing in-tree plugin to the file.csi.azure.com
Container
Storage Interface (CSI) Driver. In order to use this feature, the Azure File CSI driver
must be installed on the cluster and the CSIMigration
and CSIMigrationAzureFile
feature gates must be enabled.
The Azure File CSI driver does not support using the same volume with different fsgroups. If Azure File CSI migration is enabled, using the same volume with different fsgroups won't be supported at all.
azureFile CSI migration complete
Kubernetes v1.21 [alpha]
azureFile
storage plugin from being loaded by the controller manager
and the kubelet, set the InTreePluginAzureFileUnregister
flag to true.
cephfs
A cephfs volume allows an existing CephFS volume to be
mounted into your Pod. Unlike emptyDir
, which is erased when a pod is
removed, the contents of a cephfs
volume are preserved and the volume is merely
unmounted. This means that a cephfs
volume can be pre-populated with data, and
that data can be shared between pods. The cephfs
volume can be mounted by multiple
writers simultaneously.
cinder
The cinder volume type is used to mount the OpenStack Cinder volume into your pod.
Cinder volume configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-cinder
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-cinder-container
volumeMounts:
- mountPath: /test-cinder
name: test-volume
volumes:
- name: test-volume
# This OpenStack volume must already exist.
cinder:
volumeID: "<volume id>"
fsType: ext4
OpenStack CSI migration
Kubernetes v1.21 [beta]
CSIMigration
feature for Cinder is enabled by default in Kubernetes 1.21.
It redirects all plugin operations from the existing in-tree plugin to the
cinder.csi.openstack.org
Container Storage Interface (CSI) Driver. The OpenStack Cinder CSI driver
must be installed on the cluster.
You can disable Cinder CSI migration for your cluster by setting the CSIMigrationOpenStack
feature gate to false
.
If you disable the CSIMigrationOpenStack
feature, the in-tree Cinder volume plugin takes responsibility
for all aspects of Cinder volume storage management.
configMap
A ConfigMap provides a way to inject configuration data into pods.
The data stored in a ConfigMap can be referenced in a volume of type configMap
and then consumed by containerized applications running in a pod.
When referencing a ConfigMap, you provide the name of the ConfigMap in the volume.
You can customize the path to use for a specific entry in the ConfigMap.
The following configuration shows how to mount the log-config
ConfigMap onto a Pod called configmap-pod:
apiVersion: v1
kind: Pod
metadata:
name: configmap-pod
spec:
containers:
- name: test
image: busybox:1.28
volumeMounts:
- name: config-vol
mountPath: /etc/config
volumes:
- name: config-vol
configMap:
name: log-config
items:
- key: log_level
path: log_level
The log-config ConfigMap is mounted as a volume, and all contents stored in
its log_level entry are mounted into the Pod at path /etc/config/log_level.
Note that this path is derived from the volume's mountPath and the path
keyed with log_level.
downwardAPI
A downwardAPI volume makes downward API data available to applications.
It mounts a directory and writes the requested data in plain text files.
A container using the downward API as a subPath volume mount will not
receive downward API updates.
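As an illustrative sketch, a Pod that exposes its own labels to the application through a downwardAPI volume might look like this (the Pod name and label are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-example    # hypothetical name
  labels:
    zone: us-east-coast
spec:
  containers:
    - name: client-container
      image: busybox:1.28
      command: ["sh", "-c", "cat /etc/podinfo/labels"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          # Write the Pod's labels into /etc/podinfo/labels.
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
```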
emptyDir
An emptyDir volume is first created when a Pod is assigned to a node, and
exists as long as that Pod is running on that node. As the name says, the
emptyDir
volume is initially empty. All containers in the Pod can read and write the same
files in the emptyDir
volume, though that volume can be mounted at the same
or different paths in each container. When a Pod is removed from a node for
any reason, the data in the emptyDir
is deleted permanently.
A container crashing does not remove a Pod from a node, so the data in an emptyDir
volume is safe across container crashes.
Some uses for an emptyDir are:
- scratch space, such as for a disk-based merge sort
- checkpointing a long computation for recovery from crashes
- holding files that a content-manager container fetches while a webserver container serves the data
Depending on your environment, emptyDir volumes are stored on whatever medium backs the
node such as disk or SSD, or network storage. However, if you set the emptyDir.medium
field
to "Memory"
, Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead.
While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on
node reboot and any files you write count against your container's
memory limit.
If the SizeMemoryBackedVolumes feature gate is enabled,
you can specify a size for memory backed volumes. If no size is specified, memory
backed volumes are sized to 50% of the memory on a Linux host.
emptyDir configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /cache
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
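Building on the note above, a memory-backed emptyDir with a size limit could be sketched like this (the Pod name is illustrative; sizing assumes the SizeMemoryBackedVolumes feature gate):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-memory         # illustrative name
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir:
        medium: Memory         # back the volume with tmpfs
        sizeLimit: 500Mi       # counts against the container's memory limit
```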
fc (fibre channel)
An fc volume type allows an existing fibre channel block storage volume
to mount in a Pod. You can specify single or multiple target world wide names (WWNs)
using the parameter targetWWNs
in your Volume configuration. If multiple WWNs are specified,
targetWWNs expects that those WWNs are from multi-path connections.
flocker (deprecated)
A flocker volume allows a Flocker dataset to be mounted into a Pod. If the
dataset does not already exist in Flocker, it needs to be first created with the Flocker
CLI or by using the Flocker API. If the dataset already exists, it will be
reattached by Flocker to the node that the pod is scheduled to. This means data
can be shared between pods as required.
gcePersistentDisk
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) persistent disk (PD) into your Pod.
Unlike emptyDir, which is erased when a pod is removed, the contents of a PD are
preserved and the volume is merely unmounted. This means that a PD can be
pre-populated with data, and that data can be shared between pods.
You must create a PD using gcloud or the GCE API or UI before you can use it.
There are some restrictions when using a gcePersistentDisk:
- the nodes on which Pods are running must be GCE VMs
- those VMs need to be in the same GCE project and zone as the persistent disk
One feature of GCE persistent disk is concurrent read-only access to a persistent disk.
A gcePersistentDisk volume permits multiple consumers to simultaneously
mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset
and then serve it in parallel from as many Pods as you need. Unfortunately,
PDs can only be mounted by a single consumer in read-write mode. Simultaneous
writers are not allowed.
Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless
the PD is read-only or the replica count is 0 or 1.
Creating a GCE persistent disk
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
GCE persistent disk configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
# This GCE PD must already exist.
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
Regional persistent disks
The Regional persistent disks feature allows the creation of persistent disks that are
available in two zones within the same region.
Manually provisioning a Regional PD PersistentVolume
Dynamic provisioning is possible using a StorageClass for GCE PD.
Before creating a PersistentVolume, you must create the persistent disk:
gcloud compute disks create --size=500GB my-data-disk
--region us-central1
--replica-zones us-central1-a,us-central1-b
Regional persistent disk configuration example
apiVersion: v1
kind: PersistentVolume
metadata:
name: test-volume
spec:
capacity:
storage: 400Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: my-data-disk
fsType: ext4
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
# failure-domain.beta.kubernetes.io/zone should be used prior to 1.21
- key: topology.kubernetes.io/zone
operator: In
values:
- us-central1-a
- us-central1-b
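To consume the manually provisioned PersistentVolume above from a Pod, a claim can bind to it explicitly by name; a hedged sketch (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-volume-claim     # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 400Gi
  storageClassName: ""        # skip dynamic provisioning
  # Bind directly to the manually provisioned PV above.
  volumeName: test-volume
```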
GCE CSI migration
Kubernetes v1.17 [beta]
CSIMigration
feature for GCE PD, when enabled, redirects all plugin operations
from the existing in-tree plugin to the pd.csi.storage.gke.io
Container
Storage Interface (CSI) Driver. In order to use this feature, the GCE PD CSI Driver
must be installed on the cluster and the CSIMigration
and CSIMigrationGCE
beta features must be enabled.
GCE CSI migration complete
Kubernetes v1.21 [alpha]
gcePersistentDisk
storage plugin from being loaded by the controller manager
and the kubelet, set the InTreePluginGCEUnregister
flag to true.
gitRepo (deprecated)
gitRepo
volume type is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.
A gitRepo volume is an example of a volume plugin. This plugin
mounts an empty directory and clones a git repository into this directory
for your Pod to use.
Here is an example of a gitRepo volume:
apiVersion: v1
kind: Pod
metadata:
name: server
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /mypath
name: git-volume
volumes:
- name: git-volume
gitRepo:
repository: "git@somewhere:me/my-git-repository.git"
revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
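As the deprecation note above suggests, the same result can be achieved with an init container that clones into an emptyDir; a hedged sketch (the image and repository URL are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  initContainers:
    # Clone the repository into the shared emptyDir before the main container starts.
    - name: clone-repo
      image: alpine/git      # placeholder git-capable image
      args: ["clone", "--single-branch", "https://example.com/my-git-repository.git", "/repo"]
      volumeMounts:
        - name: git-volume
          mountPath: /repo
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - name: git-volume
          mountPath: /mypath
  volumes:
    - name: git-volume
      emptyDir: {}
```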
glusterfs
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be
mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the
contents of a glusterfs volume are preserved and the volume is merely unmounted.
hostPath
A hostPath volume mounts a file or directory from the host node's filesystem
into your Pod. This is not something that most Pods will need, but it offers a
powerful escape hatch for some applications.
In addition to the required path property, you can optionally specify a type for a hostPath volume.
The supported values for field type are:
Value | Behavior
---|---
 | Empty string (default) is for backward compatibility, which means that no checks will be performed before mounting the hostPath volume.
DirectoryOrCreate | If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet.
Directory | A directory must exist at the given path
FileOrCreate | If nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with Kubelet.
File | A file must exist at the given path
Socket | A UNIX socket must exist at the given path
CharDevice | A character device must exist at the given path
BlockDevice | A block device must exist at the given path
Watch out when using this type of volume, because:
- HostPaths can expose privileged system credentials (such as for the Kubelet) or privileged APIs (such as container runtime socket), which can be used for container escape or to attack other parts of the cluster.
- Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
- The files or directories created on the underlying hosts are only writable by root. You
either need to run your process as root in a
privileged Container or modify the file
permissions on the host to be able to write to a
hostPath
volume
hostPath configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
The FileOrCreate mode does not create the parent directory of the file. If the parent
directory of the mounted file does not exist, the pod fails to start. To ensure that this
mode works, you can try to mount directories and files separately, as shown in the
following FileOrCreate configuration.
hostPath FileOrCreate configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-webserver
spec:
containers:
- name: test-webserver
image: k8s.gcr.io/test-webserver:latest
volumeMounts:
- mountPath: /var/local/aaa
name: mydir
- mountPath: /var/local/aaa/1.txt
name: myfile
volumes:
- name: mydir
hostPath:
# Ensure the file directory is created.
path: /var/local/aaa
type: DirectoryOrCreate
- name: myfile
hostPath:
path: /var/local/aaa/1.txt
type: FileOrCreate
iscsi
An iscsi
volume allows an existing iSCSI (SCSI over IP) volume to be mounted
into your Pod. Unlike emptyDir
, which is erased when a Pod is removed, the
contents of an iscsi
volume are preserved and the volume is merely
unmounted. This means that an iscsi volume can be pre-populated with data, and
that data can be shared between pods.
A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.
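An illustrative iscsi volume definition, as a fragment of a Pod spec (the target portal, IQN, and LUN are placeholders):

```yaml
volumes:
  - name: iscsivol
    iscsi:
      targetPortal: 10.0.2.15:3260   # placeholder portal address
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz   # placeholder IQN
      lun: 0
      fsType: ext4
      readOnly: true                 # multiple consumers require read-only mode
```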
See the
Local volumes can only be used as a statically created PersistentVolume. Dynamic
provisioning is not supported.
An external static provisioner can be run separately for improved management of
the local volume lifecycle. Note that this provisioner does not support dynamic
provisioning yet. For an example on how to run an external local provisioner,
see the .
See the information about PersistentVolumes for more details.
A projected volume maps several existing volume sources into the same
directory. For more details, see projected volumes. Quobyte supports the Container Storage Interface.
CSI is the recommended plugin to use Quobyte volumes inside Kubernetes. Quobyte's
GitHub project has
A feature of RBD is that it can be mounted as read-only by multiple consumers
simultaneously. This means that you can pre-populate a volume with your dataset
and then serve it in parallel from as many pods as you need. Unfortunately,
RBD volumes can only be mounted by a single consumer in read-write mode.
Simultaneous writers are not allowed. See the
for more details. As a Kubernetes cluster operator that administers storage, here are the
prerequisites that you must complete before you attempt migration to the
RBD CSI driver: For more details, see Configuring Secrets. StorageOS runs as a container within your Kubernetes environment, making local
or attached storage accessible from any node within the Kubernetes cluster.
Data can be replicated to protect against node failure. Thin provisioning and
compression can improve utilization and reduce cost. At its core, StorageOS provides block storage to containers, accessible from a file system. The StorageOS Container requires 64-bit Linux and has no additional dependencies.
A free developer license is available. The following example is a Pod configuration with StorageOS: For more information about StorageOS, dynamic provisioning, and PersistentVolumeClaims, see the
. Choose one of the following methods to create a VMDK. For more information, see the
This also requires minimum vSphere vCenter/ESXi Version to be 7.0u1 and minimum HW Version to be VM version 15.
Existing volumes created using these parameters will be migrated to the vSphere CSI driver,
but new volumes created by the vSphere CSI driver will not honor these parameters.
The The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP)
using a single, shared volume. This sample The PHP application's code and assets map to the volume's Use the In this example, a The storage media (such as Disk or SSD) of an To learn about requesting space using a resource specification, see
how to manage resources. The out-of-tree volume plugins include
Container Storage Interface (CSI), and also FlexVolume (which is deprecated). These plugins enable storage vendors to create custom storage plugins
without adding their plugin source code to the Kubernetes repository. Previously, all volume plugins were "in-tree". The "in-tree" plugins were built, linked, compiled,
and shipped with the core Kubernetes binaries. This meant that adding a new storage system to
Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository. Both CSI and FlexVolume allow volume plugins to be developed independent of
the Kubernetes code base, and deployed (installed) on Kubernetes clusters as
extensions. For storage vendors looking to create an out-of-tree volume plugin, please refer
to the . Please read the
The following fields are available to storage administrators to configure a CSI
persistent volume:
Vendors with external CSI drivers can implement raw block volume support
in Kubernetes workloads. You can set up your
PersistentVolume/PersistentVolumeClaim with raw block volume support as usual, without any CSI specific changes. You can directly configure CSI volumes within the Pod
specification. Volumes specified in this way are ephemeral and do not
persist across pod restarts. See Ephemeral
Volumes
for more information. For more information on how to develop a CSI driver, refer to the
The operations and features that are supported include:
provisioning/delete, attach/detach, mount/unmount, and resizing of volumes.
FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface
with storage drivers. The FlexVolume driver binaries must be installed in a pre-defined
volume plugin path on each node and in some cases the control plane nodes as well. Pods interact with FlexVolume drivers through the flexVolume in-tree volume plugin.
Users of FlexVolume should move their workloads to use the equivalent CSI Driver.
Mount propagation allows for sharing volumes mounted by a container to
other containers in the same pod, or even to other pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:
- None: This volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. This mode is equal to private mount propagation as described in the Linux kernel documentation. This is the default mode.
- HostToContainer: This volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the container will see it mounted there. Similarly, if any Pod with Bidirectional mount propagation to the same volume mounts anything there, the container with HostToContainer mount propagation will see it. This mode is equal to rslave mount propagation as described in the Linux kernel documentation.
- Bidirectional: This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the container will be propagated back to the host and to all containers of all pods that use the same volume. A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation as described in the Linux kernel documentation.
Before mount propagation can work properly on some deployments (CoreOS,
RedHat/Centos, Ubuntu) mount share must be configured correctly in
Docker as shown below. Edit your Docker's systemd service file and set MountFlags=shared,
or remove MountFlags=slave if present. Then restart the Docker daemon.
Follow an example of deploying WordPress and MySQL with Persistent Volumes.
local
A local volume represents a mounted local storage device such as a disk,
partition or directory.
Compared to hostPath volumes, local volumes are used in a durable and
portable manner without manually scheduling pods to nodes. The system is aware
of the volume's node constraints by looking at the node affinity on the PersistentVolume.
However, local volumes are subject to the availability of the underlying
node and are not suitable for all applications. If a node becomes unhealthy,
then the local
volume becomes inaccessible by the pod. The pod using this volume
is unable to run. Applications using local
volumes must be able to tolerate this
reduced availability, as well as potential data loss, depending on the
durability characteristics of the underlying disk.
The following example shows a PersistentVolume using a local volume and
nodeAffinity:
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 100Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node
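The local-storage StorageClass referenced above can be defined with delayed volume binding, as recommended for local volumes; for example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # static provisioning only
volumeBindingMode: WaitForFirstConsumer     # delay binding until a Pod is scheduled
```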
You must set a PersistentVolume nodeAffinity when using local volumes.
The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule
these Pods to the correct node.
PersistentVolume volumeMode can be set to "Block" (instead of the default
value "Filesystem") to expose the local volume as a raw block device.
When using local volumes, it is recommended to create a StorageClass with
volumeBindingMode set to WaitForFirstConsumer. For more details, see the
local StorageClass example.
Delaying volume binding ensures that the PersistentVolumeClaim binding decision
will also be evaluated with any other node constraints the Pod may have,
such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.
nfs
An nfs volume allows an existing NFS (Network File System) share to be
mounted into a Pod. Unlike emptyDir
, which is erased when a Pod is
removed, the contents of an nfs
volume are preserved and the volume is merely
unmounted. This means that an NFS volume can be pre-populated with data, and
that data can be shared between pods. NFS can be mounted by multiple
writers simultaneously.
persistentVolumeClaim
A persistentVolumeClaim volume is used to mount a
PersistentVolume into a Pod. PersistentVolumeClaims
are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an
iSCSI volume) without knowing the details of the particular cloud environment.
portworxVolume
A portworxVolume is an elastic block storage layer that runs hyperconverged with
Kubernetes.
portworxVolume
can be dynamically created through Kubernetes or it can also
be pre-provisioned and referenced inside a Pod.
Here is an example Pod referencing a pre-provisioned Portworx volume:apiVersion: v1
kind: Pod
metadata:
name: test-portworx-volume-pod
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /mnt
name: pxvol
volumes:
- name: pxvol
# This Portworx volume must already exist.
portworxVolume:
volumeID: "pxvol"
fsType: "<fs-type>"
Make sure you have an existing Portworx volume with name pxvol
before using it in the Pod.
projected
quobyte (deprecated)
A quobyte volume allows an existing Quobyte volume to be mounted into your Pod.
rbd
An rbd volume allows a Rados Block Device (RBD) volume to mount into your Pod.
RBD CSI migration
Kubernetes v1.23 [alpha]
CSIMigration
feature for RBD
, when enabled, redirects all plugin
operations from the existing in-tree plugin to the rbd.csi.ceph.com
CSI driver. In order to use this
feature, the Ceph CSI driver
must be installed on the cluster and the CSIMigration
and csiMigrationRBD
feature gates
must be enabled.
As a prerequisite, install the Ceph CSI driver (rbd.csi.ceph.com), v3.5.0 or above,
into your Kubernetes cluster.
Since the clusterID
field is a required parameter for CSI driver for
its operations, but in-tree StorageClass has monitors
field as a required
parameter, a Kubernetes storage admin has to create a clusterID based on the
monitors hash (for example: echo -n '<monitors_string>' | md5sum)
in the CSI config map and keep the monitors
under this clusterID configuration.
If the adminId in the in-tree StorageClass is different from
admin
, the adminSecretName
mentioned in the in-tree Storageclass has to be
patched with the base64 value of the adminId
parameter value, otherwise this
step can be skipped.
secret
A secret volume is used to pass sensitive information, such as passwords, to
Pods. You can store secrets in the Kubernetes API and mount them as files for
use by pods without coupling to Kubernetes directly. secret
volumes are
backed by tmpfs (a RAM-backed filesystem) so they are never written to
non-volatile storage.
A container using a Secret as a subPath volume mount will not
receive Secret updates.
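A hedged sketch of a Pod mounting a Secret named mysecret (a hypothetical Secret that must already exist) as a volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-example          # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.28
      command: ["sh", "-c", "ls /etc/secret-volume"]
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: mysecret    # the Secret must already exist
```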
storageOS (deprecated)
A storageos volume allows an existing StorageOS
volume to mount into your Pod.
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
storageos:
# The `redis-vol01` volume must already exist within StorageOS in the `default` namespace.
volumeName: redis-vol01
fsType: ext4
vsphereVolume
A vsphereVolume is used to mount a vSphere VMDK volume into your Pod. The contents
of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastores.
Creating a VMDK volume
First ssh into ESX, then use the following command to create a VMDK:
vmkfstools -c 2G /vmfs/volumes/DatastoreName/volumes/myDisk.vmdk
Or use the following command to create a VMDK:
vmware-vdiskmanager -c -t 0 -s 40GB -a lsilogic myDisk.vmdk
vSphere VMDK configuration example
apiVersion: v1
kind: Pod
metadata:
name: test-vmdk
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-vmdk
name: test-volume
volumes:
- name: test-volume
# This VMDK volume must already exist.
vsphereVolume:
volumePath: "[DatastoreName] volumes/myDisk"
fsType: ext4
vSphere CSI migration
Kubernetes v1.19 [beta]
CSIMigration
feature for vsphereVolume
, when enabled, redirects all plugin operations
from the existing in-tree plugin to the csi.vsphere.vmware.com
CSI driver. In order to use this feature, the vSphere CSI driver
must be installed on the cluster and the CSIMigration
and CSIMigrationvSphere
feature gates must be enabled.
The following StorageClass parameters from the built-in vsphereVolume
plugin are not supported by the vSphere CSI driver:
diskformat
hostfailurestotolerate
forceprovisioning
cachereservation
diskstripes
objectspacereservation
iopslimit
vSphere CSI migration complete
Kubernetes v1.19 [beta]
To turn off the vsphereVolume
plugin from being loaded by the controller manager and the kubelet, you need to set the InTreePluginvSphereUnregister
feature flag to true
. You must install a csi.vsphere.vmware.com
CSI driver on all worker nodes.
Portworx CSI migration
Kubernetes v1.23 [alpha]
CSIMigration
feature for Portworx has been added but disabled by default in Kubernetes 1.23 since it's in alpha state.
It redirects all plugin operations from the existing in-tree plugin to the
pxd.portworx.com
Container Storage Interface (CSI) Driver. The Portworx CSI driver
must be installed on the cluster.
To enable the feature, set CSIMigrationPortworx=true
in kube-controller-manager and kubelet.
Using subPath
Sometimes, it is useful to share one volume for multiple uses in a single pod.
The volumeMounts.subPath property specifies a sub-path inside the referenced volume
instead of its root.
The following example shows how to configure a Pod with a LAMP stack (Linux Apache MySQL PHP)
using a single, shared volume. Using this sample subPath configuration is not recommended
for production use.
The PHP application's code and assets map to the volume's html folder and
the MySQL database is stored in the volume's mysql folder. For example:
apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data
subPath: mysql
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
subPath: html
volumes:
- name: site-data
persistentVolumeClaim:
claimName: my-lamp-site-data
Using subPath with expanded environment variables
Kubernetes v1.17 [stable]
Use the subPathExpr field to construct subPath
directory names from
downward API environment variables.
The subPath
and subPathExpr
properties are mutually exclusive.
In this example, a Pod uses subPathExpr to create a directory pod1 within
the hostPath volume /var/log/pods.
The hostPath volume takes the Pod name from the downwardAPI.
The host directory /var/log/pods/pod1 is mounted at /logs in the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox:1.28
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      # The variable expansion uses round brackets (not curly brackets).
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
Resources
The storage medium (such as disk or SSD) of an emptyDir volume is determined by the
medium of the filesystem holding the kubelet root dir (typically
/var/lib/kubelet). There is no limit on how much space an emptyDir or
hostPath volume can consume, and no isolation between containers or between
pods.
Out-of-tree volume plugins
csi
Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users
may use the csi volume type to attach or mount the volumes exposed by the
CSI driver.
A csi volume can be used in a Pod in three different ways: through a reference
to a PersistentVolumeClaim, with a generic ephemeral volume, or with a CSI
ephemeral volume if the driver supports it.
The following fields are available to storage administrators to configure a CSI
persistent volume:
driver: A string value that specifies the name of the volume driver to use.
This value must correspond to the value returned in the GetPluginInfoResponse
by the CSI driver as defined in the CSI spec.
It is used by Kubernetes to identify which CSI driver to call out to, and by
CSI driver components to identify which PV objects belong to the CSI driver.
volumeHandle: A string value that uniquely identifies the volume. This value
must correspond to the value returned in the volume.id field of the
CreateVolumeResponse by the CSI driver as defined in the CSI spec.
The value is passed as volume_id on all calls to the CSI volume driver when
referencing the volume.
readOnly: An optional boolean value indicating whether the volume is to be
"ControllerPublished" (attached) as read only. Default is false. This value is
passed to the CSI driver via the readonly field in the
ControllerPublishVolumeRequest.
fsType: If the PV's VolumeMode is Filesystem then this field may be used
to specify the filesystem that should be used to mount the volume. If the
volume has not been formatted and formatting is supported, this value will be
used to format the volume.
This value is passed to the CSI driver via the VolumeCapability field of
ControllerPublishVolumeRequest, NodeStageVolumeRequest, and
NodePublishVolumeRequest.
volumeAttributes: A map of string to string that specifies static properties
of a volume. This map must correspond to the map returned in the
volume.attributes field of the CreateVolumeResponse by the CSI driver as
defined in the CSI spec.
The map is passed to the CSI driver via the volume_context field in the
ControllerPublishVolumeRequest, NodeStageVolumeRequest, and
NodePublishVolumeRequest.
controllerPublishSecretRef: A reference to the secret object containing
sensitive information to pass to the CSI driver to complete the CSI
ControllerPublishVolume and ControllerUnpublishVolume calls. This field is
optional, and may be empty if no secret is required. If the Secret
contains more than one secret, all secrets are passed.
nodeStageSecretRef: A reference to the secret object containing
sensitive information to pass to the CSI driver to complete the CSI
NodeStageVolume call. This field is optional, and may be empty if no secret
is required. If the Secret contains more than one secret, all secrets
are passed.
nodePublishSecretRef: A reference to the secret object containing
sensitive information to pass to the CSI driver to complete the CSI
NodePublishVolume call. This field is optional, and may be empty if no
secret is required. If the secret object contains more than one secret, all
secrets are passed.
CSI raw block volume support
Kubernetes v1.18 [stable]
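Vendors with external CSI drivers can implement raw block volume support in
Kubernetes workloads. As a minimal sketch only (the driver name
csi.example.com and the volumeHandle value are illustrative placeholders, not
a real driver), a PersistentVolume exposing a raw block device through a CSI
driver might look like this:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv          # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block        # expose the volume as a raw block device, not a filesystem
  csi:
    driver: csi.example.com              # placeholder: must match the driver's GetPluginInfoResponse
    volumeHandle: vol-0123456789abcdef   # placeholder: the volume.id from CreateVolumeResponse
```

Because volumeMode is Block, no fsType is needed; the device is handed to the
Pod unformatted via volumeDevices rather than volumeMounts.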
CSI ephemeral volumes
Kubernetes v1.16 [beta]
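If the CSI driver supports ephemeral inline volumes, you can configure a csi
volume directly within the Pod specification. A hedged sketch, assuming a
hypothetical driver (inline.storage.kubernetes.io and its volumeAttributes are
placeholders; consult your driver's documentation for the real values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app        # illustrative name
spec:
  containers:
  - name: my-frontend
    image: busybox:1.28
    command: [ "sleep", "1000000" ]
    volumeMounts:
    - mountPath: /data
      name: my-csi-inline-vol
  volumes:
  - name: my-csi-inline-vol
    csi:
      driver: inline.storage.kubernetes.io   # placeholder driver name
      volumeAttributes:
        foo: bar                             # placeholder driver-specific attribute
```

The volume's lifetime is tied to the Pod: it is created when the Pod is
scheduled and deleted when the Pod terminates.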
Migrating to CSI drivers from in-tree plugins
Kubernetes v1.17 [beta]
The CSIMigration feature, when enabled, directs operations against existing in-tree
plugins to corresponding CSI plugins (which are expected to be installed and configured).
As a result, operators do not have to make any
configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims
(referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin.
The in-tree plugins that support CSIMigration and have a corresponding CSI driver implemented
are listed in Types of Volumes.
flexVolume
Kubernetes v1.23 [deprecated]
FlexVolume is a deprecated out-of-tree plugin interface that uses an exec-based
model to interface with storage drivers. Pods interact with FlexVolume drivers
through the flexVolume in-tree volume plugin.
For more details, see the FlexVolume documentation.
Mount propagation
Mount propagation allows for sharing volumes mounted by a container to
other containers in the same pod, or even to other pods on the same node.
Mount propagation of a volume is controlled by the mountPropagation field
in Container.volumeMounts. Its values are:
None - This volume mount will not receive any subsequent mounts
that are mounted to this volume or any of its subdirectories by the host.
In similar fashion, no mounts created by the container will be visible on
the host. This is the default mode.
This mode is equal to private mount propagation as described in the
Linux kernel documentation.
HostToContainer - This volume mount will receive all subsequent mounts
that are mounted to this volume or any of its subdirectories.
In other words, if the host mounts anything inside the volume mount, the
container will see it mounted there. Similarly, if any Pod with
Bidirectional mount propagation to the same volume mounts anything there,
the container with HostToContainer mount propagation will see it.
This mode is equal to rslave mount propagation as described in the
Linux kernel documentation.
Bidirectional - This volume mount behaves the same as the HostToContainer
mount. In addition, all volume mounts created by the container will be propagated
back to the host and to all containers of all pods that use the same volume.
A typical use case for this mode is a Pod that needs to mount something on the
host using a hostPath volume.
This mode is equal to rshared mount propagation as described in the
Linux kernel documentation.
Warning: Bidirectional mount propagation can be dangerous. It can damage
the host operating system and therefore it is allowed only in privileged
containers. Familiarity with Linux kernel behavior is strongly recommended.
In addition, any volume mounts created by containers in pods must be destroyed
(unmounted) by the containers on termination.
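The mountPropagation field is set per volume mount in the Pod spec. A minimal
sketch of the HostToContainer mode described above (the Pod name, image, and
host path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: propagation-demo   # illustrative name
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: [ "sleep", "infinity" ]
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt/host
      # Mounts made on the host under /mnt after the Pod starts
      # become visible inside the container under /mnt/host.
      mountPropagation: HostToContainer
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt
```

With the default None mode, such later host mounts would not appear in the
container.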
Configuration
Before mount propagation can work properly on some deployments (CoreOS,
RedHat/CentOS, Ubuntu), mount share must be configured correctly in Docker.
Edit your Docker's systemd service file and set MountFlags as follows:
MountFlags=shared
Or, remove MountFlags=slave if present. Then restart the Docker daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker
What's next