This error occurs when a pod references a volume that doesn't exist, typically due to a deleted PersistentVolume, unbound PVC, or failed dynamic provisioning.
The "volume not found" error indicates that the kubelet is trying to mount a volume for your pod, but the referenced storage resource doesn't exist or cannot be accessed. This happens after pod scheduling but before the container can start. Unlike attachment failures (which involve cloud provider operations), this error means the volume object itself is missing or in an invalid state. The PersistentVolume may have been deleted, the PVC may not be bound, or dynamic provisioning may have failed silently. This is a fundamental problem—the storage your pod needs simply isn't there.
Verify the PVC exists and is bound:
kubectl get pvc -n <namespace>

Expected output shows Bound status:

NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS
my-pvc   Bound    pv-xxx   10Gi       RWO            standard

If the status is Pending, the PVC hasn't found a matching PV.
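If the PVC is stuck in Pending, its events usually say why. A quick way to check, with the placeholders replaced by your PVC name and namespace:

kubectl describe pvc <pvc-name> -n <namespace>

Events at the bottom of the output typically mention a missing storage class or the lack of a matching PV, which map to the checks that follow.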
View PersistentVolumes and their states:
kubectl get pv

Look for:
- Available: Can be bound to a PVC
- Bound: Attached to a PVC
- Released: PVC was deleted but PV retains data
- Failed: Reclamation failed
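To see which claim a PV is, or was, bound to, you can read its claimRef directly; a small sketch with <pv-name> as a placeholder:

kubectl get pv <pv-name> -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}'

kubectl describe pv <pv-name> shows the same information under the Claim field.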
Check that the storage class referenced by the PVC exists:
# List storage classes
kubectl get storageclass

# Check PVC's requested storage class
kubectl get pvc <pvc-name> -o jsonpath='{.spec.storageClassName}'

If the storage class doesn't exist, create it or update the PVC.
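If the PVC omits storageClassName, it falls back to the cluster's default storage class. One way to mark an existing class as the default, shown here as an illustration with <class-name> as a placeholder:

kubectl patch storageclass <class-name> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'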
If a PV is in Released state and you want to reuse it:
# Clear the claimRef to make PV available again
kubectl patch pv <pv-name> -p '{"spec":{"claimRef": null}}'

The PV will transition to Available and can be bound to a new PVC.
Warning: This may expose data from the previous claim.
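To confirm the PV actually became reusable, check its phase after the patch (same <pv-name> placeholder); it should print Available once the old binding is cleared:

kubectl get pv <pv-name> -o jsonpath='{.status.phase}'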
For dynamic provisioning failures, check the CSI driver or provisioner logs:
# Find the provisioner pods
kubectl get pods -n kube-system | grep -E 'csi|provisioner'
# Check logs
kubectl logs -n kube-system <provisioner-pod> --tail=100

Look for quota errors, permission issues, or backend connectivity problems.
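Provisioning failures are normally also recorded as events on the PVC itself, which can save digging through driver logs; a sketch using standard event field selectors:

kubectl get events -n <namespace> --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=<pvc-name>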
If the PVC is fundamentally broken, recreate it:
# Backup the PVC spec
kubectl get pvc <pvc-name> -o yaml > pvc-backup.yaml
# Delete the broken PVC
kubectl delete pvc <pvc-name>
# Edit pvc-backup.yaml first: remove the status section, metadata.uid and
# metadata.resourceVersion, and the pv.kubernetes.io/* binding annotations
# (also spec.volumeName if the old PV no longer exists), then recreate
kubectl apply -f pvc-backup.yaml

Note: With a Delete reclaim policy, deleting the PVC also deletes the PV and its data.
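As a rough illustration, the cleaned-up backup should be reduced to roughly this shape (the names, size, and class below are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard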
PV reclaim policies determine what happens when a PVC is deleted:
- Delete: PV and underlying storage are deleted (default for dynamic provisioning)
- Retain: PV is kept but marked Released, requiring manual cleanup
For production workloads, consider using Retain to prevent accidental data loss:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-storage
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain

Known issue: Kubernetes 1.22.0–1.22.8 and 1.23.0–1.23.5 have a race condition where pods can fail with "kube-root-ca.crt not registered" during termination. This manifests as a volume not found error for projected service account volumes. Upgrade to patched versions if you see this pattern.
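The reclaim policy of an already-provisioned PV can also be changed in place, which helps for volumes created before the storage class was switched to Retain (replace <pv-name>):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'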
Static provisioning requires manual PV creation and claimRef management. For simpler operations, use dynamic provisioning with appropriate storage classes.
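For reference, a statically provisioned PV pre-bound to a specific claim looks roughly like the sketch below; the hostPath backend, names, and size are placeholders for illustration only:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  # Pre-bind this PV to one specific PVC so no other claim can grab it
  claimRef:
    namespace: default
    name: manual-pvc
  # hostPath is only suitable for single-node test clusters
  hostPath:
    path: /mnt/data

The matching PVC should set spec.volumeName: manual-pv and the same storageClassName so the binding is deterministic from both sides.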