Volume mount failures prevent pods from accessing storage. They occur when persistent volumes are unavailable, mount permissions are wrong, or storage backends fail. Affected pods stay Pending (if the PVC never binds) or ContainerCreating (if attach/mount fails on the node) until the issue is resolved.
When a pod requests a PersistentVolumeClaim (PVC), the kubelet must:
1. Attach the storage volume to the node
2. Format the volume (if new)
3. Mount the volume to the node filesystem
4. Bind-mount the volume into the container
Failure at any step results in "volume mount failed". Common causes:
- PVC not provisioned (PVC stuck in Pending)
- StorageClass misconfigured
- Storage backend unavailable
- Node cannot access storage (NFS server down, iSCSI unavailable)
- Container filesystem permissions prevent mounting
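To see which step failed, check the affected pod's events; the event reason usually names the stage (for example FailedAttachVolume or FailedMount). A quick check, with pod and namespace names as placeholders:
kubectl describe pod <pod-name> -n <namespace> | grep -A10 Events: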
List all PVCs:
kubectl get pvc -A
For the problematic pod's namespace:
kubectl get pvc -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>
Look for:
- Status should be "Bound", not "Pending"
- Volume field should list a PV name
- Events section for provisioning errors
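As a quick sketch (names are placeholders), the phase and bound PV name can also be pulled directly with jsonpath:
kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.status.phase}{" "}{.spec.volumeName}{"\n"}'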
If status is Pending:
kubectl describe pvc <pvc-name> -n <namespace> | grep -A5 Events:
Common events:
- "no persistent volumes available"
- "provision failed"
- "timeout"
Check if StorageClass exists:
kubectl get storageclass
kubectl describe storageclass <class-name>
Inspect the StorageClass:
kubectl get sc <storage-class> -o yaml
Should include:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs  # or other provisioner
parameters:
  type: io1
  iopsPerGB: "10"  # the in-tree aws-ebs provisioner takes iopsPerGB (io1 only), not iops
reclaimPolicy: Delete
allowVolumeExpansion: true
Common issues:
- Provisioner does not exist or is misspelled
- Parameters invalid for the provisioner
- No default StorageClass (if pod doesn't specify one)
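If the cluster has no default StorageClass, PVCs that omit storageClassName typically stay Pending. One class can be marked as default; a sketch, assuming the class is named standard:
kubectl get storageclass  # the default class is flagged "(default)"
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'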
Create a StorageClass if missing:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
Apply:
kubectl apply -f storageclass.yaml
Storage provisioners (local-path, NFS, AWS EBS, etc.) run as DaemonSets or Deployments:
kubectl get pods -n kube-system | grep -i storage
kubectl get pods -n kube-system | grep -i csi  # CSI drivers
For each provisioner pod:
kubectl describe pod <provisioner-pod> -n kube-system
kubectl logs <provisioner-pod> -n kube-system | tail -50
Common provisioners:
- csi-*-plugin (cloud or third-party CSI drivers)
- local-path-provisioner (local storage)
- nfs-provisioner (NFS)
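For CSI-backed storage, it can also help to confirm the driver is registered cluster-wide and on each node; a quick sketch:
kubectl get csidrivers
kubectl get csinodes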
If provisioner pod is CrashLoopBackOff:
kubectl logs <provisioner-pod> -n kube-system --previous
Look for:
- Image pull failures (see kubelet-temporary-failure article)
- Configuration errors
- RBAC permission denied
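For RBAC errors, check whether the provisioner's service account can manage PVs and PVCs; a sketch in which the service account name is a placeholder:
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:kube-system:<provisioner-sa>
kubectl auth can-i update persistentvolumeclaims -n <namespace> --as=system:serviceaccount:kube-system:<provisioner-sa>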
Test backend connectivity based on storage type:
For NFS:
# From a pod
kubectl run test-nfs --image=ubuntu -it --rm -- bash
# Inside pod:
apt-get update && apt-get install -y nfs-common
mkdir -p /mnt/test
mountpoint /mnt/test || mount -t nfs <nfs-server>:/export /mnt/test  # mounting requires a privileged pod (CAP_SYS_ADMIN)
ls -la /mnt/test
For cloud storage (EBS, GCE Persistent Disk, Azure Disk):
# AWS EBS
aws ec2 describe-volumes --volume-ids <volume-id>
aws ec2 describe-volumes --volume-ids <volume-id> --query 'Volumes[].Attachments'  # attachment state
# GCP
gcloud compute disks list
gcloud compute disks describe <disk-name>
# Azure
az disk list
az disk show -g <group> -n <disk-name>
For iSCSI:
# From node
sudo iscsiadm -m session
sudo iscsiadm -m discovery -t st -p <target-ip>
SSH into the node where the pod is scheduled:
kubectl describe pod <pod-name> -n <namespace> | grep Node:
kubectl get pod <pod-name> -n <namespace> -o wide  # See assigned node
On the node:
ls -la /var/lib/kubelet/pods/*/volumes/ # List mounted volumes
mount | grep -i kubernetes # Show mounted volumes
df -h | grep /var/lib/kubelet  # Disk usage
For specific volume:
kubectl get volumeattachment # Check if volume is attached
kubectl describe volumeattachment <attachment>
If volume not attached:
kubectl get pvc <pvc-name> -o yaml | grep volumeName
# Get the PV name, then check:
kubectl describe pv <pv-name>
For mount permission issues:
sudo mount | grep <pvc-name>
# Check mount options and uid/gid
ls -la /var/lib/kubelet/pods/<pod-id>/volumes/<volume-type>/
Inspect the pod spec for volume mounts:
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A20 volumes:
The spec should include both a volumes section (declaring the PVC) and a volumeMounts entry in the container:
spec:
  containers:
  - name: app
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
Common issues:
- claimName references non-existent PVC
- mountPath conflicts with application paths
- Volume mode mismatch (Filesystem vs Block)
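A quick way to spot a volume mode mismatch is to compare volumeMode on the PVC and its PV (names are placeholders; both default to Filesystem when unset):
kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.spec.volumeMode}{"\n"}'
kubectl get pv <pv-name> -o jsonpath='{.spec.volumeMode}{"\n"}'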
Verify the PVC exists:
kubectl get pvc <claimName> -n <namespace>
If volume mount is read-only:
volumeMounts:
- name: data
  mountPath: /data
  readOnly: true  # May cause permission errors
If volume is mounted but inaccessible (permission denied):
# Check mount options
mount | grep <volume-path>
# For NFS, check ownership
ls -la /var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~nfs/<volume>/
# Change ownership if needed
sudo chown <uid>:<gid> <mount-path>
sudo chmod 755 <mount-path>
For container mounts, use securityContext:
spec:
  securityContext:
    fsGroup: 1000  # Set group on the mounted volume
  containers:
  - name: app
    securityContext:
      runAsUser: 1000
      runAsGroup: 1000
    volumeMounts:
    - name: data
      mountPath: /data
NFS mount options (e.g., vers) are set on the PersistentVolume; anonuid/anongid are export options configured on the NFS server in /etc/exports, not client mount options:
# On the PersistentVolume (inline pod NFS volumes do not support mountOptions)
spec:
  mountOptions:
  - vers=4.1
  - noatime
  nfs:
    server: nfs.example.com
    path: /export
If PVC remains Pending despite correct configuration:
# Check if volume provisioning is truly stuck
kubectl describe pvc <pvc-name> -n <namespace>
If provisioner has given up or errors are permanent:
1. Delete the PVC (and its bound PV if needed):
kubectl delete pvc <pvc-name> -n <namespace>
kubectl delete pv <pv-name>  # If not auto-deleted
2. Optionally, pre-create the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  nfs:
    server: nfs.example.com
    path: /export
3. Recreate the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
4. Redeploy the pod:
kubectl delete pod <pod-name>
kubectl get pods -w  # Watch for new pod to start
Volume mount failures are among the most common Kubernetes issues in production. The root cause often lies outside Kubernetes (storage backend unavailable, NFS server down, cloud API rate-limited). Implement monitoring on storage provisioners and backends. For stateful workloads, use StatefulSets with volumeClaimTemplates to auto-create PVCs per replica. Consider storage quotas (ResourceQuota) to prevent runaway provisioning.
Backup strategies should account for mounted volumes; use snapshots or persistent backup containers. For disaster recovery, implement volume replication at the storage layer. CSI drivers provide better error handling and logging than in-tree provisioners. Test storage failover scenarios: what happens if an NFS server is down for 5 minutes? Do pods recover automatically? Multi-zone clusters need storage replication or highly available backends. For local storage (local-path provisioner), ensure node failures don't lose data. Monitoring volume mount latency helps identify slow storage backends. Implement PVC garbage collection to clean up orphaned volumes.
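As a sketch of the ResourceQuota suggestion above (name and limits are illustrative), a quota can cap both the number of PVCs and the total storage requested in a namespace:
kubectl create quota storage-quota --hard=persistentvolumeclaims=10,requests.storage=500Gi -n <namespace>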