A PersistentVolume with the ReadWriteOnce access mode is already exclusively attached to one node, so a new pod scheduled onto a different node cannot mount it. This commonly occurs after node failures, pods stuck in termination, or Deployment rolling updates. Fix it by cleaning up stale VolumeAttachments, force-deleting stuck pods, scaling the Deployment down and back up, or switching to a StatefulSet.
ReadWriteOnce (RWO) volumes in Kubernetes can be mounted read-write by only one node at a time; pods can share the volume only if they all run on the node where it is attached. When Kubernetes tries to schedule a new pod on a different node while the volume is still attached to the old one (because the old pod hasn't cleanly terminated or the node failed), the attach/detach controller refuses to detach and reattach the volume. This multi-attach protection exists to prevent data corruption from simultaneous writes.
First, understand what's happening:
kubectl get pod <pod-name> -n <namespace> -o wide
kubectl get pvc -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
Look for which node the pod is trying to use, and which node the volume is attached to (visible in the pod events).
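The describe output typically contains a FailedAttachVolume warning; the message (volume name shortened here as an illustration) looks roughly like this:
Warning  FailedAttachVolume  ...  attachdetach-controller  Multi-Attach error for volume "pvc-3a7c..." Volume is already exclusively attached to one node and can't be attached to another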
VolumeAttachment resources track which nodes have volumes attached. Stale ones block reattachment:
kubectl get volumeattachments
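The output lists one attachment per volume/node pair; the names below are illustrative:
NAME          ATTACHER          PV            NODE     ATTACHED   AGE
csi-9f2d...   ebs.csi.aws.com   pvc-3a7c...   node-2   true       12d
The NODE column shows where each volume is currently attached.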
For each attachment pointing to a dead or non-existent node, delete it:
kubectl delete volumeattachment <attachment-name>
If the deletion hangs because of finalizers, force-remove them:
kubectl patch volumeattachment <attachment-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl delete volumeattachment <attachment-name>
If the old pod is stuck in the Terminating state for more than about 5 minutes, force-delete it:
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force
This immediately removes the pod object from the cluster, allowing the volume controller to detach the volume from the old node and attach it for the new pod. Note that force deletion does not wait for the kubelet to confirm the container has actually stopped, so use it only when the node is unreachable or the workload is safe to kill.
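Before expecting reattachment, it can help to confirm that nothing still references the claim; one way (substituting your PVC name) is:
kubectl describe pvc <pvc-name> -n <namespace>
The Used By field in the output lists pods still using the claim; it should no longer show the old pod.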
If a Deployment is stuck in the middle of a rolling update, reset it:
kubectl scale deployment <deployment-name> -n <namespace> --replicas=0
Wait about 30 seconds for all pods to terminate and volumes to detach:
kubectl get pod -n <namespace> | grep <deployment-name>Then scale back up:
kubectl scale deployment <deployment-name> -n <namespace> --replicas=<desired-count>
This forces fresh pod scheduling and volume attachment.
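One way to confirm the scale-up finished and the new pods came up with their volumes attached is to watch the rollout:
kubectl rollout status deployment/<deployment-name> -n <namespace>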
If you are running a Deployment with RWO volumes, this is often the root cause: a rolling update starts the replacement pod before the old one terminates, and the new pod can land on a different node while the volume is still attached. StatefulSets guarantee ordered termination and startup, preventing the deadlock.
Convert the Deployment to a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app   # Requires a headless Service
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-image
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: standard
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
StatefulSets handle termination and attachment sequentially, eliminating multi-attach errors.
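The serviceName above must point to an existing headless Service. A minimal sketch matching the labels in this example (the port is a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  clusterIP: None   # headless: gives each StatefulSet pod a stable DNS entry
  selector:
    app: my-app
  ports:
    - name: http
      port: 80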
If using cloud storage (AWS EBS, GCP PD, Azure Disk), configure the StorageClass with zone-aware binding:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp2
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
          - us-east-1b
parameters:
  type: gp2   # the iops parameter applies only to io1/io2/gp3 volumes, so it is omitted for gp2
This ensures volumes are provisioned in the same zone as the pod, preventing cross-zone attachment failures.
Check that the pod is now Running and the PVC is Bound:
kubectl get pod,pvc -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
If the pod is still Pending, check its events and the VolumeAttachment status again. If the old node is dead, expect a delay: the attach/detach controller waits roughly 6 minutes before force-detaching a volume from a node that never confirmed the unmount.
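To watch the relevant events across the namespace while the reattachment happens, one option is:
kubectl get events -n <namespace> --sort-by=.lastTimestamp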
For cloud providers: AWS EBS volumes cannot attach to multiple instances; use EBS Multi-Attach (io1/io2) only if your application truly supports concurrent writes. GCP Persistent Disks are zonal; use allowedTopologies in the StorageClass to enforce zone affinity. Azure Disk supports only ReadWriteOnce by default; for multi-writer scenarios, use Azure Shared Disks or NFS instead. For on-premises storage (Longhorn, Ceph), the attach/detach controller may auto-recover after about 6 minutes; manual VolumeAttachment cleanup is faster. Newer Kubernetes releases (the non-graceful node shutdown feature, beta in 1.26 and stable in 1.28) detach volumes from failed nodes more reliably once the node is tainted as out-of-service. For CI/CD pipelines, set an appropriate terminationGracePeriodSeconds so pods can shut down cleanly before being force-killed; both mechanisms are sketched below.
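Two of those mechanisms, sketched with placeholder values: marking a node you know is permanently down as out-of-service (which lets the controller force-detach its volumes), and setting a pod-level grace period.
kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
And in the pod template:
spec:
  terminationGracePeriodSeconds: 60   # placeholder: long enough for a clean shutdown before force-kill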