The MinimumReplicasUnavailable condition indicates your Kubernetes Deployment cannot maintain the minimum required number of available replicas. This blocks rolling updates and signals underlying issues with pod scheduling, resource availability, or container health.
MinimumReplicasUnavailable is a Deployment condition that appears when fewer replicas are running and ready than the configured minimum needed for availability. A pod is considered "available" only when it has been ready for at least `minReadySeconds` (default 0). This condition prevents rolling updates and indicates your deployment is unhealthy. Unlike pod errors (CrashLoopBackOff), this is a deployment-level health indicator. It means the Deployment controller cannot schedule or run the required number of pods, blocking new deployments until the issue is resolved.
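For context, the fields that drive this availability calculation live in the Deployment spec: replicas, minReadySeconds, and the rolling-update maxUnavailable setting. The sketch below is illustrative only; the names and values are not from any specific deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  minReadySeconds: 10      # A pod counts as "available" only after it has been ready this long
  strategy:
    rollingUpdate:
      maxUnavailable: 1    # At most 1 of the 3 replicas may be unavailable during a rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:1.0
With these values, the Deployment needs at least 2 available replicas; dropping below that triggers MinimumReplicasUnavailable.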
Run kubectl describe deployment <deployment-name> to see conditions and recent events. Look for the Conditions section which shows MinimumReplicasUnavailable details. Also check pod status:
kubectl get pods -o wide
kubectl get events --sort-by='.lastTimestamp'
These show which pods are failing and why.
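To read the condition directly without the full describe output, you can query the deployment's status conditions; this is a minimal sketch assuming a deployment named myapp:
kubectl get deployment myapp -o jsonpath='{.status.conditions[?(@.type=="Available")]}' # Shows the Available condition and its MinimumReplicasUnavailable reason
kubectl rollout status deployment/myapp --timeout=60s # Waits until the rollout completes or times out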
Verify cluster has available resources:
kubectl top nodes # Shows CPU/memory usage
kubectl describe nodes # Shows resource allocations and pressures
kubectl get resourcequota -A # Check namespace quotas
If nodes show CPU/memory pressure or pods are Pending, your cluster is resource-constrained. Either increase node capacity or reduce resource requests in the deployment spec.
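As a sketch of the second option, lower the container's resource requests in the pod template; the container name and values here are placeholders, not recommendations:
spec:
  template:
    spec:
      containers:
        - name: app
          resources:
            requests:
              cpu: 100m      # Lower request so the scheduler can place the pod on busier nodes
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi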
For pods in ImagePullBackOff, CrashLoopBackOff, or other failure states:
kubectl describe pod <pod-name>
kubectl logs <pod-name> # Application logs
kubectl logs <pod-name> --previous # Logs from the previous (crashed) container
Common issues:
- ImagePullBackOff: Verify image name, tag, registry credentials
- CrashLoopBackOff: Check application logs for startup errors
- Pending: Check events—usually resource constraints or node affinity conflicts
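To quickly surface the pods behind the condition, you can filter by phase and status; a small sketch, assuming the deployment's pods carry the label app=myapp:
kubectl get pods -l app=myapp --field-selector=status.phase!=Running # Pods that never started (Pending, Failed)
kubectl get pods -l app=myapp | grep -E 'CrashLoopBackOff|ImagePullBackOff|Error' # Pods stuck in a failure state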
If pods are crashing due to probe failures, make probes more lenient:
spec:
  containers:
    - name: app
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 30 # Wait 30s before first check
        timeoutSeconds: 5 # Allow 5s for response
        periodSeconds: 10 # Check every 10s
        failureThreshold: 3 # Fail after 3 failures
      livenessProbe:
        httpGet:
          path: /live
          port: 8080
        initialDelaySeconds: 60
        timeoutSeconds: 5
        periodSeconds: 15
        failureThreshold: 3
Increase initialDelaySeconds if your app takes time to start.
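To confirm that probe failures are what is restarting or un-readying the pods, the events stream is the quickest check; a minimal sketch:
kubectl get events --field-selector reason=Unhealthy --sort-by='.lastTimestamp' # Readiness/liveness probe failures across the namespace
kubectl describe pod <pod-name> | grep -A3 'Unhealthy' # Probe failure details for one pod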
If using a private registry:
spec:
  imagePullSecrets:
    - name: regcred # Must exist: kubectl create secret docker-registry regcred ...
  containers:
    - name: app
      image: private-registry.com/myapp:latest
For public images, verify the tag exists:
docker pull myapp:tag
Check imagePullSecrets exist: kubectl get secret regcred (output should show the secret).
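If the secret is missing, it can be created from your registry credentials; this is a sketch with placeholder values for the server, username, password, email, and namespace:
kubectl create secret docker-registry regcred \
  --docker-server=private-registry.com \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email> \
  --namespace=<deployment-namespace>
The secret must live in the same namespace as the Deployment that references it.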
Check for overly restrictive PDBs:
kubectl get pdb -A
kubectl describe pdb <pdb-name>
If minAvailable is too high, pods cannot be evicted during cluster operations. Consider using maxUnavailable instead:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  maxUnavailable: 1 # Allow 1 pod unavailable during disruptions
  selector:
    matchLabels:
      app: myapp
Check for hardcoded node constraints:
kubectl get pod <pod-name> -o yaml | grep -A5 nodeSelector
If nodeName is set to a non-existent node, remove it. For affinity rules, verify they're feasible:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype
              operator: In
              values:
                - ssd # Verify at least one node has this label
Check available node labels: kubectl get nodes --show-labels.
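If no node carries the required label, either relax the affinity rule or label a suitable node; a minimal sketch using the disktype=ssd example above:
kubectl label nodes <node-name> disktype=ssd # Add the label the affinity rule expects
kubectl get nodes -l disktype=ssd # Confirm at least one node now matches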
Other situations worth checking:
- PVC binding issues in stateful deployments: verify the storage class exists and the provisioner is healthy with kubectl get storageclass and kubectl describe pvc <pvc-name>.
- Multi-tenant clusters: namespace resource quotas may prevent pod creation; use kubectl describe quota to check.
- Kubernetes on Docker Desktop or Minikube: resources are extremely limited; reduce replica counts or allocate more memory to the Docker daemon.
- Managed clouds (GKE/EKS/AKS): check cloud-specific quota limits (project quotas, service quotas).
- CI/CD deployments: ensure image pull secrets are mounted and registry timeouts are configured.
- WSL2: file system sync delays may block PVC binding; ensure mounted volumes have sufficient permissions.
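For local clusters, increasing the VM's resources is often the quickest fix. A sketch for Minikube, assuming more host memory is available (Docker Desktop's memory is set in its Settings > Resources UI instead); note that Minikube's resource flags only apply to a newly created cluster:
minikube delete # Resource flags are ignored on an existing profile
minikube start --memory=4096 --cpus=4 # Recreate the cluster with more memory and CPUs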