A PersistentVolumeClaim references a StorageClass that doesn't exist in the cluster. This prevents dynamic volume provisioning and leaves PVCs in a Pending state indefinitely. Fix by creating the missing StorageClass or correcting the PVC's storageClassName reference.
When Kubernetes encounters a PersistentVolumeClaim (PVC) that references a StorageClass by name, it attempts to use that StorageClass to dynamically provision a PersistentVolume. If the named StorageClass does not exist in the cluster, the provisioning fails. This error is fundamental—without the StorageClass, Kubernetes has no instruction set for how to create the underlying storage (which cloud provider API to call, what parameters to use, etc.). Bare metal and on-premises clusters don't have default StorageClasses, while managed cloud Kubernetes services (EKS, AKS, GKE) typically provide cloud-native ones.
Run this command to see what StorageClasses actually exist:
kubectl get storageclass
If this returns nothing or doesn't include the class your PVC expects, that's the problem.
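On a working cluster the output looks roughly like this (names, provisioners, and the (default) marker will differ; this sample assumes an EKS-style setup):
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  45d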
Check the PVC specification and recent events:
kubectl describe pvc <pvc-name> -n <namespace>
Look at the Storage Class field in the output and the Events section at the bottom.
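When the referenced class is missing, the Events section usually contains a warning along these lines (exact wording varies by Kubernetes version; "standard" is just the example class name):
Warning  ProvisioningFailed  persistentvolume-controller  storageclass.storage.k8s.io "standard" not found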
If the StorageClass name from the PVC doesn't appear in the list from step 1, it doesn't exist.
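To pull out exactly which class the PVC is requesting, a jsonpath one-liner works (substitute your PVC name and namespace):
kubectl get pvc <pvc-name> -n <namespace> -o jsonpath='{.spec.storageClassName}'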
For cloud providers, verify the CSI driver pods are running:
kubectl get pods -A | grep -E "(ebs-csi|azure-disk|gce-pd|csi-driver)"
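If your storage uses CSI, you can also list the CSIDriver objects registered in the cluster to confirm which provisioners are actually available:
kubectl get csidriver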
You have two options:
Option A: Create the missing StorageClass (when the provisioner is available in the cluster but the class itself was never defined):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs # or your cloud provider
parameters:
  type: gp2
volumeBindingMode: Immediate
Then apply it:
kubectl apply -f storageclass.yaml
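If workloads in the cluster omit storageClassName and expect a default, you can optionally mark the new class as the cluster default; the annotation below is the standard marker and "standard" matches the example class above:
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'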
Option B: Fix the PVC manifest to reference an existing StorageClass:
Edit the PVC YAML and set spec.storageClassName to a name from step 1, or remove the field to use the default StorageClass (if one exists):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: standard # Match an existing class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Bare metal clusters don't come with storage provisioners. Install one of these options:
Local volumes (simplest; data is tied to a single node, no HA):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
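Keep in mind that kubernetes.io/no-provisioner means nothing is provisioned dynamically: you must create a PersistentVolume by hand for each local disk before a PVC using this class can bind. A minimal sketch, assuming a hypothetical /mnt/disks/ssd1 directory on a node named node-1:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1 # placeholder: an existing disk or directory on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1 # placeholder: the node that owns the disk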
NFS (network storage, multi-node access):
Deploy the NFS provisioner via Helm:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.1.100 \
--set nfs.path=/exports
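With default values the chart normally creates its own StorageClass (commonly named nfs-client, controlled by the chart's storageClass.name value); confirm the actual name with kubectl get storageclass and reference it from a PVC, for example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: nfs-client # replace with the class the chart actually created
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi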
Ceph/Longhorn (distributed storage, HA):
Deploy them via their Helm charts (Rook for Ceph, or the Longhorn chart) for production use; both provide dynamic provisioning with replication across nodes.
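As an example, a basic Longhorn install might look like this (repo URL and chart name per the upstream project; by default Longhorn also creates a StorageClass named longhorn):
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace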
After creating/fixing the StorageClass, the PVC should bind automatically:
kubectl get pvc -n <namespace>
Look for STATUS = Bound instead of Pending. This may take a few seconds. Keep in mind that a class using volumeBindingMode: WaitForFirstConsumer intentionally leaves the PVC Pending until a pod that uses it is scheduled. If the PVC stays Pending beyond that, describe it again to check for new errors:
kubectl describe pvc <pvc-name> -n <namespace>
On managed Kubernetes services (EKS, AKS, GKE), StorageClasses are usually created automatically by cloud provider controllers. If they are missing, install the relevant CSI driver: AWS EBS CSI (aws-ebs-csi-driver Helm chart), Azure Disk CSI (azuredisk-csi-driver Helm chart), or GCP Persistent Disk CSI. For multi-zone clusters, set volumeBindingMode: WaitForFirstConsumer on the StorageClass so the PV is provisioned in the same availability zone as the pod that uses it. For applications installed with Helm, charts that expose a persistence.storageClass value let you override the storage class at install time with --set persistence.storageClass=<name>. Note that the legacy in-tree provisioners (kubernetes.io/aws-ebs, kubernetes.io/gce-pd) are deprecated; use CSI drivers instead.
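For example, on AWS a CSI-based replacement for the deprecated in-tree provisioner could look like the sketch below. The repo URL and chart name follow the upstream aws-ebs-csi-driver project, and the driver also needs IAM permissions that are omitted here, so treat this as a starting point rather than a complete install:
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver --namespace kube-system
Then define a StorageClass backed by the CSI provisioner:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer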