This error occurs when a PersistentVolumeClaim cannot find any matching PersistentVolume in the cluster. Common causes include missing StorageClass configuration, no static PVs available, or mismatched capacity/access modes between PVC and PV.
When Kubernetes reports "no persistent volumes available for this claim," your PVC is stuck in the Pending state because the cluster cannot find or create a suitable PersistentVolume. This is a blocking error: pods that depend on the PVC will not start. It commonly happens when moving from managed Kubernetes (GKE, EKS, AKS) to self-hosted clusters (kubeadm, microk8s, k3s), where dynamic provisioning isn't configured automatically. Managed services provide default StorageClasses, but bare-metal and local clusters require manual setup. The Kubernetes storage controller needs either a dynamic provisioner to create PVs on demand, or pre-created static PVs that match your PVC's requirements (storage class, capacity, access modes).
Inspect why the PVC is pending:
kubectl get pvc
kubectl describe pvc <pvc-name>

Look at the Events section for the specific error message. Also check if any PVs exist:
kubectl get pv
kubectl get storageclass

Check if your cluster has a default StorageClass:
kubectl get storageclass

The default class should have (default) next to its name. If no default exists, create one or mark an existing class as default:
kubectl patch storageclass <name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

For local/self-hosted clusters, install a storage provisioner:
Minikube:
minikube addons enable storage-provisioner

microk8s:
microk8s enable storage

K3s (ships with local-path by default; reinstall if it was removed):
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Kind:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

If using static provisioning, create a PV that matches your PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi          # Must be >= PVC request
  accessModes:
    - ReadWriteOnce        # Must match PVC
  storageClassName: manual # Must match PVC
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 5Gi

Ensure your PVC requirements can be satisfied:
# Check what PVC is requesting
kubectl get pvc <name> -o yaml | grep -A5 "spec:"
# Check what PVs offer
kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,ACCESS:.spec.accessModes,CLASS:.spec.storageClassName,STATUS:.status.phase

Common issues:
- PVC requests 10Gi but PV only has 5Gi
- PVC needs ReadWriteMany but PV only supports ReadWriteOnce
- PVC specifies storageClassName but PV has a different class
Some StorageClasses delay binding until a pod is scheduled:
kubectl get storageclass <name> -o yaml | grep volumeBindingMode

If it shows WaitForFirstConsumer, the PVC will stay Pending until you create a pod that uses it. This is normal behavior for topology-aware storage.
Create a test pod to trigger binding:
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc
spec:
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: <your-pvc-name>

Check that the storage provisioner pods are healthy:
# Check for common provisioner pods
kubectl get pods -n kube-system | grep -E 'provisioner|csi|storage'
# For cloud providers
kubectl get pods -n kube-system | grep -E 'ebs-csi|gce-pd|azure-disk'

If the provisioner pod is crashing or not present, the storage controller cannot dynamically create volumes.
Migrating from Managed to Self-Hosted Clusters:
When moving workloads from GKE/EKS/AKS to kubeadm or microk8s, you lose automatic storage provisioning. You must either:
1. Install a CSI driver and StorageClass for your storage backend
2. Use local-path-provisioner for development
3. Set up NFS provisioner for shared storage
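For option 2, the local-path-provisioner manifest shown earlier already creates a StorageClass named local-path; what it looks like with the default-class annotation added is sketched below (the provisioner name rancher.io/local-path is what the upstream manifest registers). In practice you would add the annotation with the kubectl patch command from earlier rather than re-creating the class.

```yaml
# Sketch: the local-path StorageClass marked as the cluster default.
# Assumes the upstream local-path-provisioner manifest is installed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```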
StorageClass Empty String vs Unset:
Kubernetes treats storageClassName: "" (empty string) differently from omitting the field entirely:
- Empty string: Explicitly requests no StorageClass (static binding only)
- Omitted: Uses the cluster's default StorageClass
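The difference can be made concrete with two claims (the names here are illustrative):

```yaml
# storageClassName: "" -> bind only to a pre-created static PV;
# no dynamic provisioning is attempted.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-claim   # illustrative name
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
# storageClassName omitted -> the cluster's default StorageClass
# is filled in by the admission controller, if one exists.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim  # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```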
Pre-binding with volumeName:
To force a specific PV-to-PVC binding, use the volumeName field:
spec:
  volumeName: my-specific-pv
  storageClassName: ""  # Required for pre-binding

NFS for ReadWriteMany:
If you need ReadWriteMany access mode, most local provisioners won't work. Consider NFS-based provisioners like nfs-subdir-external-provisioner.