A PersistentVolume is already attached to the node running another pod and cannot be mounted by a new pod on a different node due to access mode restrictions. ReadWriteOnce (RWO) volumes can be mounted read-write by only a single node at a time. Fix by using a ReadWriteMany-capable storage backend, limiting the workload to a single replica, or switching to a StatefulSet.
Kubernetes PersistentVolumes support three core access modes: ReadWriteOnce (RWO) allows read-write access from a single node only; ReadWriteMany (RWX) allows read-write access from multiple nodes; ReadOnlyMany (ROX) allows read-only access from multiple nodes. When you attempt to attach an RWO volume to a pod on a second node, Kubernetes blocks the attach to prevent data corruption. Additionally, ReadWriteOncePod (RWOP, Kubernetes 1.22+) restricts access to a single pod across the entire cluster, not just a single node. The error reflects a fundamental constraint: the volume's access mode doesn't support the requested pod topology.
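For illustration, a PVC requesting the strictest mode might look like this (a sketch; the name and storage class are placeholders, and RWOP requires a CSI driver that supports it):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-pod-pvc        # placeholder name
spec:
  accessModes:
    - ReadWriteOncePod        # only one pod in the whole cluster may use this volume
  storageClassName: standard  # placeholder storage class
  resources:
    requests:
      storage: 5Gi
```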
First, understand what access mode is configured:
kubectl get pvc -n <namespace> -o wide
kubectl describe pvc <pvc-name> -n <namespace>
Look for the Access Modes field. If it shows RWO (ReadWriteOnce), that's the constraint. Then check which pod(s) are actually using it:
kubectl get pods -n <namespace> -o wide
kubectl describe pod <pod-name> -n <namespace>
If your storage backend supports RWX (ReadWriteMany), move the claim to that mode. Note that access modes are immutable on an existing PVC, so a patch like this will be rejected by the API server:
kubectl patch pvc <pvc-name> -n <namespace> -p '{"spec":{"accessModes":["ReadWriteMany"]}}' --type=merge
Instead, create a new PVC with the desired access mode (migrating data if necessary):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany  # Changed from ReadWriteOnce
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 10Gi
Note: Not all storage backends support RWX. NFS and shared filesystems do; cloud block storage (AWS EBS, GCP Persistent Disk, Azure Disk) typically supports only RWO. For cloud storage, consider:
- AWS EFS (Elastic File System) for RWX
- GCP Filestore for RWX
- Azure Files (NFS protocol) for RWX
- Ceph or Longhorn for on-premises RWX
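As one concrete illustration for AWS, an RWX-capable StorageClass backed by EFS might look like this (a sketch assuming the AWS EFS CSI driver is installed; the file system ID is a placeholder you must replace with your own):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"
```

PVCs that reference this StorageClass can request ReadWriteMany and be mounted by pods on multiple nodes.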
Deployments with the default RollingUpdate strategy start a replacement pod before the old one terminates, so the old pod still holds the RWO volume when the new pod tries to attach it. StatefulSets terminate a pod before recreating it, avoiding that deadlock. Convert to a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 1  # Only one pod can use an RWO volume
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-image:latest
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
StatefulSets ensure orderly pod termination before replacement pods start, eliminating the multi-attach error. They're designed for stateful workloads with RWO volumes.
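If converting to a StatefulSet isn't practical, a lighter alternative (a sketch using the standard Deployment API; names reuse the examples above) is to set the Deployment's update strategy to Recreate, so the old pod releases the RWO volume before the replacement starts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1          # RWO still limits the workload to one pod
  strategy:
    type: Recreate     # old pod is terminated before the new one starts
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-image:latest
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc
```

The trade-off is brief downtime during each rollout, since no pod runs between the old one stopping and the new one starting.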
If using RWO volumes with a StatefulSet, you can only have 1 replica per volume. Each pod needs its own PVC:
# StatefulSet with volumeClaimTemplates creates one PVC per pod,
# named <template-name>-<statefulset-name>-<ordinal>:
# Pod my-app-0 gets PVC data-my-app-0
# Pod my-app-1 gets PVC data-my-app-1
# etc.
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 10Gi
Each pod ordinal (0, 1, 2...) automatically gets its own PVC, allowing multiple pods without the multi-attach conflict.
For cloud storage, use topology-aware binding to prevent cross-zone scheduling issues:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a
          - us-east-1b
parameters:
  type: gp3
  iops: "3000"  # gp3 supports an explicit iops parameter; gp2 does not
This ensures volumes are provisioned in the same zone as the pod that first uses them, eliminating topology mismatches.
After applying changes, verify the pod is running and the PVC is bound:
kubectl get pvc,pod -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>
kubectl describe pod <pod-name> -n <namespace>
The pod should be in Running state and the PVC should show Bound. If the pod is still Pending, check its logs and events:
kubectl logs <pod-name> -n <namespace>
kubectl events --for pod/<pod-name> -n <namespace>
Access mode selection depends on both storage capability and application design. Cloud block storage (AWS EBS, GCP Persistent Disk, Azure Disk) supports only RWO; use RWX alternatives (EFS, Filestore, Azure Files) for multi-pod scenarios. NFS and shared filesystems natively support RWX. For databases and stateful applications, a StatefulSet with one RWO volume per pod is the standard pattern. Kubernetes 1.22+ introduced ReadWriteOncePod (RWOP) for strict single-pod access (even if multiple pods are on the same node). Performance tip: when using RWX storage with many concurrent writers, ensure network bandwidth is sufficient for all pods. For Helm deployments, pass the storage class and access mode as values to support different cluster environments.
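To sketch that last Helm point (the value names and template below are illustrative assumptions, not from the original): a values.yaml entry plus a PVC template let each environment choose its own storage class and access mode.

```yaml
# values.yaml (illustrative) -- per-environment storage settings
storage:
  className: nfs-storage
  accessMode: ReadWriteMany
  size: 10Gi

# templates/pvc.yaml -- renders a PVC from those values
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data
spec:
  accessModes:
    - {{ .Values.storage.accessMode }}
  storageClassName: {{ .Values.storage.className }}
  resources:
    requests:
      storage: {{ .Values.storage.size }}
```

A cluster with only RWO-capable storage can then override at install time, e.g. helm install my-app ./chart --set storage.accessMode=ReadWriteOnce.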