The FailedMount error occurs when Kubernetes cannot mount a volume to a pod, often due to missing ConfigMaps/Secrets, NFS permission issues, or volume attachment problems.
The FailedMount error indicates that Kubernetes was unable to mount a volume at the specified path inside a container. This happens during pod startup after the volume has been attached to the node but before the container can start. Mount failures can occur for various volume types: ConfigMaps, Secrets, PersistentVolumes, NFS shares, and cloud provider disks. The error prevents the pod from transitioning to Running state, leaving it stuck in ContainerCreating. FailedMount often appears after FailedAttachVolume—the attach step must succeed before mounting can begin. If you see both errors, focus on fixing the attachment issue first.
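Affected pods can be spotted by scanning `kubectl get pods` output for the stuck status. A small helper sketch (the function name and the default `kubectl` column layout are assumptions):

```shell
# stuck_pods: read `kubectl get pods` output on stdin and print the names
# of pods stuck in ContainerCreating.
# Assumed column layout: NAME  READY  STATUS  RESTARTS  AGE
stuck_pods() {
  awk 'NR > 1 && $3 == "ContainerCreating" { print $1 }'
}

# Usage against a live cluster:
#   kubectl get pods -n <namespace> | stuck_pods
```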
View the detailed mount failure message:
kubectl describe pod <pod-name>

Look for events like:
Warning  FailedMount  MountVolume.SetUp failed for volume "config" : configmap "app-config" not found

If mounting ConfigMaps or Secrets, ensure they exist in the same namespace:
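The event message names the missing object. A helper sketch to pull out the kind and name (the function name and `sed` pattern are my own):

```shell
# missing_object: extract the kind and name of the missing object from a
# FailedMount event message like:
#   ... : configmap "app-config" not found
missing_object() {
  sed -n 's/.*: \([a-z]*\) "\([^"]*\)" not found.*/\1\/\2/p'
}

# Usage: kubectl describe pod <pod-name> | missing_object
```

List what actually exists in the namespace: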
# List ConfigMaps
kubectl get configmap -n <namespace>
# List Secrets
kubectl get secret -n <namespace>
# Create missing ConfigMap
kubectl create configmap app-config --from-file=config.yaml -n <namespace>

Verify the PV is bound and available:
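When there are many claims, scripting the check helps; a sketch that flags anything not Bound (the function name and default `kubectl get pvc` column layout are assumptions):

```shell
# report_unbound: read `kubectl get pvc` output on stdin and print any
# claim whose STATUS column is not Bound.
# Assumed columns: NAME  STATUS  VOLUME  CAPACITY  ACCESS MODES  ...
report_unbound() {
  awk 'NR > 1 && $2 != "Bound" { print $1 ": " $2 }'
}

# Usage: kubectl get pvc -n <namespace> | report_unbound
```

The checks one at a time: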
# Check PVC status (should be Bound)
kubectl get pvc -n <namespace>
# Check PV status
kubectl get pv
# Check VolumeAttachments
kubectl get volumeattachments

For NFS mounts, ensure the server is configured correctly:
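For context, pods usually reach NFS through a PersistentVolume along these lines (a sketch; the server address and path are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com  # placeholder; must be resolvable from every node
    path: /exports
```

On the server side: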
# On NFS server, check exports
cat /etc/exports
# Should include: /exports *(rw,sync,no_root_squash,no_subtree_check)
# Apply changes
exportfs -ra

In your pod spec, you may also need a securityContext so file ownership on the mount matches the container user:
securityContext:
  fsGroup: 1000
  runAsUser: 1000

Mount operations can fail if the node is out of disk space:
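A sketch for flagging nearly-full filesystems from `df -h` output (the helper name and the 90% default threshold are my own):

```shell
# flag_full: read `df -h` output on stdin and print mount points at or
# above a usage threshold (default 90%).
# Assumed columns: Filesystem  Size  Used  Avail  Use%  Mounted on
flag_full() {
  awk -v limit="${1:-90}" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= limit) print $6 ": " $5 "%" }'
}

# Usage on a node: df -h | flag_full 90
```

To inspect a node interactively: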
# SSH to node or use debug container
kubectl debug node/<node-name> -it --image=busybox
# Check disk space
df -h

Clean up if needed:
# Prune unused container images
crictl rmi --prune

If a volume is stuck attached to a dead node:
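When hunting for the stuck attachment, it can help to filter `kubectl get volumeattachments` output by node (the function name and column layout are assumptions):

```shell
# attachments_on: read `kubectl get volumeattachments` output on stdin
# and print attachments bound to the given node.
# Assumed columns: NAME  ATTACHER  PV  NODE  ATTACHED  AGE
attachments_on() {
  awk -v node="$1" 'NR > 1 && $4 == node { print $1 }'
}

# Usage: kubectl get volumeattachments | attachments_on <node-name>
```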
# Find the stuck attachment
kubectl get volumeattachments
# Delete it (after confirming the node is truly down)
kubectl delete volumeattachment <attachment-name>

FailedMount warnings often self-resolve after retries. Kubernetes retries mounts with exponential backoff, so a brief warning during pod startup may not indicate a persistent problem.
For high-concurrency scenarios (more than about 400 pods mounting PVCs simultaneously), you may hit kubelet throughput limits. Two kubelet settings sometimes reviewed here are the FlexVolume plugin directory and image-pull parallelism; note that the latter speeds up pod startup overall rather than mounts specifically:

--volume-plugin-dir=/var/lib/kubelet/volumeplugins
--max-parallel-image-pulls=10

Mounts commonly time out after about two minutes, so if your storage backend is slow, pods may fail before the mount completes. Check CSI driver logs for backend-specific issues:
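CSI driver logs are verbose; a small filter sketch for the likely failure lines (the pattern is a heuristic of mine, not an official log format):

```shell
# csi_errors: keep only log lines that look like failures.
csi_errors() {
  grep -iE 'error|timeout|failed'
}

# Usage: kubectl logs -n kube-system -l app=csi-driver --tail=500 | csi_errors
```

Or tail recent lines directly: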
kubectl logs -n kube-system -l app=csi-driver --tail=100

Static pods (defined in /etc/kubernetes/manifests) cannot use ConfigMaps or Secrets; this is a kubelet limitation. Use hostPath volumes instead for static pods.
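As a sketch, a static pod that takes its configuration from a hostPath directory instead of a ConfigMap (names, image, and paths are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-app
spec:
  containers:
    - name: app
      image: nginx  # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app
          readOnly: true
  volumes:
    - name: config
      hostPath:
        path: /etc/app-config  # must exist on the node running the static pod
        type: Directory
```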