The FailedAttachVolume error occurs when Kubernetes cannot attach a persistent volume to a node, typically because the volume is already attached elsewhere or the cloud provider operation failed.
The FailedAttachVolume error indicates that Kubernetes cannot attach a PersistentVolume to the node where your pod is scheduled. This is a cloud provider or storage backend operation that must complete before the volume can be mounted.

The most common cause (roughly 90% of cases) is that the volume is still attached to another node. Cloud provider volumes (AWS EBS, Azure Disk, GCP Persistent Disk) can only attach to one node at a time, so if a previous pod crashed or its node failed, the volume may still be attached to the old node. Kubernetes waits up to 6 minutes for attachment before giving up, and it does not automatically force-detach volumes, to prevent data corruption.
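To spot every affected pod and volume at once, the cluster's event stream can be filtered by reason. This uses only standard kubectl flags; no cluster-specific names are assumed:

```shell
# List recent FailedAttachVolume events across all namespaces,
# sorted so the newest failures appear last.
kubectl get events --all-namespaces \
  --field-selector reason=FailedAttachVolume \
  --sort-by=.lastTimestamp
```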
Get details about the attachment failure:
kubectl describe pod <pod-name>

Look for:
Warning FailedAttachVolume AttachVolume.Attach failed for volume "pvc-xxx": rpc error: volume is already attached to another node

See which node the volume thinks it's attached to:
kubectl get volumeattachments
# Get details
kubectl describe volumeattachment <attachment-name>

The output shows the node name in the spec.
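A quicker way to map each attachment to its node is a custom-columns query. The sketch below parses a saved sample of `kubectl get volumeattachments -o json`, so it can be tried without a cluster; the attachment, PV, and node names are invented for illustration:

```shell
# With a live cluster you would run:
#   kubectl get volumeattachments -o custom-columns=\
# NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName
# Here we parse a saved sample of the JSON output instead.
cat > /tmp/va.json <<'EOF'
{"items":[{"metadata":{"name":"csi-abc123"},
"spec":{"nodeName":"node-2","source":{"persistentVolumeName":"pvc-1234"}}}]}
EOF
# Pull out the node the volume is attached to.
sed -n 's/.*"nodeName":"\([^"]*\)".*/attached to node: \1/p' /tmp/va.json
```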
Check if the node the volume is attached to still exists:
kubectl get nodes
kubectl describe node <old-node-name>

If the node is NotReady or doesn't exist, the volume is orphaned.
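To flag suspect nodes mechanically, filter the `kubectl get nodes` output for anything that is not Ready. The sketch runs against a saved sample of that output so it works offline; the node names are made up:

```shell
# Sample `kubectl get nodes` output, saved for illustration.
# With a live cluster: kubectl get nodes | awk '...'
cat > /tmp/nodes.txt <<'EOF'
NAME     STATUS     ROLES    AGE   VERSION
node-1   Ready      <none>   40d   v1.29.0
node-2   NotReady   <none>   40d   v1.29.0
EOF
# Any node that is not Ready may still be holding volume attachments.
awk 'NR>1 && $2!="Ready" {print $1" is "$2" - its volumes may be orphaned"}' /tmp/nodes.txt
```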
If the old node is confirmed down, force detach using your cloud provider:
AWS:
aws ec2 detach-volume --volume-id vol-xxx --force

Azure (detach the data disk from the VM that still holds it):
az vm disk detach --resource-group <rg> --vm-name <vm-name> --name <disk>

GCP:
gcloud compute instances detach-disk <instance> --disk=<disk-name>

If the VolumeAttachment is stuck, remove its finalizers:
# Edit and remove finalizers
kubectl edit volumeattachment <attachment-name>

Remove the finalizers section:

finalizers:
- external-attacher/xxx-csi-driver  # Remove this

Or delete the attachment directly:
kubectl delete volumeattachment <attachment-name>

If the node is truly gone and nothing else works:

# This releases all volumes from the dead node
kubectl delete node <dead-node-name>

Warning: Only do this after confirming the node will not come back.
Kubernetes has a 6-minute timeout for volume attachment. It won't force-detach because doing so risks data corruption if the volume is being written to.
Managed Kubernetes services (GKE, EKS, AKS) handle this better through out-of-band health checks that can confirm a node is truly down and safely detach volumes.
For self-managed clusters, consider enabling the VolumeAttachment garbage collection controller. You can also adjust the attach-detach controller's reconciliation:
--attach-detach-reconcile-sync-period=1m

ReadWriteMany (RWX) volumes don't have this problem, since they support attachment from multiple nodes. Consider using RWX storage classes (EFS, NFS, CephFS) for workloads that may need to reschedule quickly.
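As a sketch of the RWX route, here is a StorageClass for the AWS EFS CSI driver; this is a config fragment, and the file-system ID is a placeholder you would replace with your own:

```shell
# Hypothetical RWX StorageClass backed by the AWS EFS CSI driver.
# PVCs created with accessModes: [ReadWriteMany] against this class
# can mount on many nodes at once, sidestepping FailedAttachVolume.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-rwx
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder
  directoryPerms: "700"
EOF
```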
If you frequently see this issue after node failures, implement Pod Disruption Budgets and use StatefulSets with proper terminationGracePeriodSeconds to ensure clean detachment before pod termination.