A PVC has entered the Lost phase, meaning it has lost its binding to the underlying PersistentVolume. This is a critical error indicating data may be inaccessible or lost. It occurs when the PV is deleted while a PVC still references it, when the storage backend fails, or when finalizers block cleanup. Recovery is difficult; prevention through backups is essential.
When a PersistentVolume is deleted or becomes inaccessible while a PersistentVolumeClaim (PVC) still references it, Kubernetes marks the PVC as Lost. This is a final-state error indicating that Kubernetes can no longer guarantee data availability. The PVC object exists but points to a volume that no longer exists or is unreachable. Pods depending on the Lost PVC cannot start. In most scenarios, data on the lost volume is permanently unavailable without manual storage backend recovery (which is often impossible).
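Before diving into recovery, it helps to know which workloads consume the affected claim. The jsonpath query below is a minimal sketch (substitute your own namespace and claim name); it prints each pod alongside the PVCs it mounts so you can grep for the Lost claim:
kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep <pvc-name>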
First, confirm the Lost state and check for the backing volume:
kubectl get pvc -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>
kubectl get pv
kubectl describe pv <pv-name>
If the PV doesn't appear in the list or its status shows "Released" but no volume is accessible, the underlying storage is likely gone.
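In larger clusters, a quick filter across all namespaces (a simple grep on the STATUS column; adjust as needed) surfaces every claim currently stuck in the Lost phase:
kubectl get pvc --all-namespaces | grep -w Lost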
Depending on your storage type, check if the physical storage exists:
For cloud volumes (AWS EBS, GCP PD, Azure Disk):
# AWS EBS
aws ec2 describe-volumes --volume-ids vol-xxxxx
# GCP Persistent Disk
gcloud compute disks describe <disk-name> --zone <zone>
# Azure Managed Disk
az disk show --resource-group <rg> --name <disk-name>
For NFS/network storage:
kubectl exec -it <pod> -- mount | grep <volume-path>
kubectl exec -it <pod> -- df -h | grep <volume-path>
For local volumes:
SSH to the node and check if the path exists and is accessible:
ssh <node>
ls -la <local-volume-path>
If the underlying storage device still exists (visible in the cloud console or mounted on the node) but the PV is Lost:
1. Create a recovery pod that directly accesses the storage:
apiVersion: v1
kind: Pod
metadata:
  name: recovery-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: <node-with-storage>
  containers:
  - name: recovery
    image: ubuntu:latest
    command: ["/bin/bash"]
    args: ["-c", "sleep 3600"]
    volumeMounts:
    - name: recovery-vol
      mountPath: /data
  volumes:
  - name: recovery-vol
    hostPath:
      path: <path-to-storage> # Local path to volume
      type: Directory
2. Copy data to persistent storage (e.g., cloud object storage, external NFS):
kubectl cp recovery-pod:/data /tmp/recovered-data
# Then copy to external storage:
gsutil -m cp -r /tmp/recovered-data/* gs://backup-bucket/
3. Restore from backup if available (see Prevention section).
Once you've attempted recovery, remove the Lost resources to avoid confusion:
kubectl delete pvc <pvc-name> -n <namespace>
kubectl delete pv <pv-name>
If deletion hangs due to finalizers, force-remove them:
kubectl patch pvc <pvc-name> -n <namespace> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}' --type=merge
kubectl delete pvc <pvc-name> -n <namespace> --force --grace-period=0
kubectl delete pv <pv-name> --force --grace-period=0
Deploy fresh persistent storage:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com   # gp3 with a custom IOPS value requires the EBS CSI driver
parameters:
  type: gp3
  iops: "3000"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
Apply and verify binding:
kubectl apply -f storage.yaml
kubectl get pvc,pv
Use your backup solution to restore the latest good copy:
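If you prefer a scripted check over reading the output by hand, kubectl wait (v1.23 or newer for jsonpath conditions) can block until the claim from the example above binds; the claim name and timeout are placeholders:
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/data-claim --timeout=120s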
Using Velero (Kubernetes backup framework):
velero restore create --from-backup <backup-name>
velero restore logs <restore-name>
Using Kasten K10:
Use the K10 dashboard to restore PVCs from snapshots.
Using manual snapshots:
If you took AWS EBS snapshots, GCP disk snapshots, or other backups before loss:
# AWS EBS
aws ec2 create-volume --snapshot-id snap-xxxxx --availability-zone <zone>
# GCP
gcloud compute disks create <new-disk> --source-snapshot=<snapshot-name>
Once the volume is restored, mount it in a new PVC and restore your application.
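One way to attach the restored volume is a statically provisioned PersistentVolume that references the new volume ID, pre-bound to a fresh claim. The manifest below is a sketch assuming the AWS EBS CSI driver; the driver name, volume handle, size, and resource names are placeholders to adapt to your provider:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  csi:
    driver: ebs.csi.aws.com        # swap for your provider's CSI driver
    volumeHandle: vol-xxxxx        # ID of the volume created from the snapshot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-claim
spec:
  storageClassName: standard
  volumeName: restored-pv          # bind explicitly to the pre-created PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi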
Prevention is critical since Lost PVCs usually result in permanent data loss. Implement automated backup solutions (Velero, Kasten K10) that snapshot PVC data independently of the cluster lifecycle. Enable the Kubernetes VolumeSnapshot API and regularly test snapshot recovery. Use storage replication at the infrastructure layer (AWS multi-AZ EBS, GCP regional persistent disks, Azure zone-redundant storage). Configure RBAC strictly to prevent accidental PV/PVC deletion. Monitor PVC health with alerts on phase changes, binding failures, or capacity issues. For critical applications, use ReadWriteMany (RWX) access modes with replicated storage (NFS with HA, Ceph) so pod rescheduling doesn't lose data. Document PVC dependencies and backup schedules for all stateful services. Regularly test disaster recovery procedures and cluster migration scenarios. Note: Once a PVC reaches the Lost state, data recovery is extremely difficult; focus resources on prevention.
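As a concrete starting point for the snapshot advice above, a CSI VolumeSnapshot captures a claim in a single manifest. This is a sketch: the snapshot class name (csi-snapclass) must match a VolumeSnapshotClass provided by your CSI driver, and the claim name is taken from the earlier example:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-claim-snapshot
  namespace: <namespace>
spec:
  volumeSnapshotClassName: csi-snapclass   # must match a VolumeSnapshotClass from your CSI driver
  source:
    persistentVolumeClaimName: data-claim  # claim to snapshot
If you already run Velero, a scheduled backup with volume snapshots enabled covers the same ground automatically (the schedule and namespace here are examples):
velero schedule create daily-pvc-backup --schedule="0 2 * * *" --include-namespaces <namespace> --snapshot-volumes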