A StatefulSet shows "not ready" status when its pods fail to reach the Running and Ready state. This blocks orderly pod deployment since StatefulSets wait for each pod to be fully ready before starting the next one. The issue typically stems from storage failures, resource constraints, health check misconfigurations, or application startup problems.
A StatefulSet maintains a sticky identity for each pod and requires strict ordering: each pod (e.g., mysql-0, mysql-1, mysql-2) must become Running and Ready before the next pod is deployed. When you see "not ready" status, it means one or more pods are stuck in Pending, CrashLoopBackOff, ImagePullBackOff, or another non-Ready state. StatefulSets won't automatically proceed to deploy subsequent pods until earlier ones succeed. This is intentional: it prevents data corruption or initialization conflicts in stateful applications. Unlike Deployments, StatefulSets enforce strict ordering guarantees, so if pod-0 fails, pod-1 won't start; a single readiness failure blocks the rollout of the entire application.
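For reference, the ordering-related pieces all live in the StatefulSet spec itself. The sketch below is a minimal, illustrative manifest; the names (mysql, the mysql:8.0 image, the mysql-data claim) and sizes are placeholders, and serviceName must reference the headless service discussed later in this guide.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql                  # must match the headless service name
  replicas: 3
  podManagementPolicy: OrderedReady   # the default: pods start one at a time, in order
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          ports:
            - containerPort: 3306
  volumeClaimTemplates:
    - metadata:
        name: mysql-data              # PVCs are named <claim>-<statefulset>-<ordinal>, e.g., mysql-data-mysql-0
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi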
Get the overall status:
kubectl get statefulset <name>
kubectl get statefulset <name> -o yaml
kubectl get pods -l app=<name>
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
Look at the READY column (e.g., "0/3" means no pods ready). The Events section shows recent warnings or errors that blocked pod creation.
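For a single-line summary of how far the rollout has progressed, kubectl rollout status also works for StatefulSets:
kubectl rollout status statefulset/<name>   # blocks until all replicas are updated and Ready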
Since StatefulSets deploy sequentially, always start with pod-0:
kubectl describe pod <statefulset>-0
kubectl logs <statefulset>-0
kubectl logs <statefulset>-0 --previous # If pod crashed
The pod's status and the Events section of the describe output reveal what's blocking it:
- "Pending": Pod waiting for resources or PVC binding
- "CrashLoopBackOff": Container exits immediately (check logs)
- "ImagePullBackOff": Image pull failed
- "Not Ready": Container running but readiness probe failing
StatefulSets require persistent storage. Check PVC status:
kubectl get pvc
kubectl describe pvc <volumeClaimTemplate>-<statefulset>-<ordinal> # e.g., mysql-data-mysql-0
Look for "Status: Bound". If the Status is "Pending":
- No PersistentVolume available (check kubectl get pv)
- Storage provisioner (e.g., EBS, NFS) not configured or failing
- Storage class doesn't exist: kubectl get storageclass
If PVC is stuck, describe it:
kubectl describe pvc <name>
The Events section shows provisioning failures.
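Events can also be filtered to the claim itself, which helps in a noisy namespace; <pvc-name> below is the claim name from the previous step:
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=<pvc-name>
kubectl get pvc <pvc-name> -o jsonpath='{.spec.storageClassName}'   # which storage class the claim requests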
Insufficient resources often prevent pod scheduling:
kubectl top nodes
kubectl top pods -n <namespace>
kubectl describe nodes
Look for:
- High CPU/memory utilization
- "Insufficient cpu" or "Insufficient memory" in pod events
- Disk pressure on nodes
If constrained, either:
- Scale down other workloads: kubectl scale deployment <name> --replicas=1
- Add nodes to the cluster
- Reduce StatefulSet replica count for testing: kubectl scale statefulset <name> --replicas=1
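To check whether the StatefulSet's own requests are what the scheduler cannot satisfy, compare the container requests against what each node has already allocated; the jsonpath below assumes a single-container pod template:
kubectl get statefulset <name> -o jsonpath='{.spec.template.spec.containers[0].resources}'
kubectl describe nodes | grep -A 8 "Allocated resources"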
StatefulSets require a headless service (clusterIP: None) for DNS:
kubectl get service <statefulset-name>
kubectl describe service <statefulset-name>
The service selector should match the StatefulSet's pod labels. Expected output:
- ClusterIP: None (headless)
- Selector: app=<statefulset-name> (or your label)
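A quick way to confirm the selector actually matches is to compare the service's endpoints with the pod labels; the app label below is an assumption, so use whatever labels your pod template sets:
kubectl get endpoints <statefulset-name>   # should list an address for each pod once it is Ready
kubectl get pods -l app=<statefulset-name> --show-labels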
If missing, create it:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
    - port: 3306
      name: mysql
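With the headless service in place, each pod gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A quick sanity check, assuming nslookup is available inside the container image:
kubectl exec -it <statefulset>-0 -- nslookup <statefulset>-0.<service-name>
# e.g., kubectl exec -it mysql-0 -- nslookup mysql-0.mysql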
Misconfigured probes often fail at startup. View current probe settings:
kubectl get statefulset <name> -o yaml | grep -A15 readinessProbe:
Common issues:
- initialDelaySeconds too low (should give app time to start)
- timeoutSeconds too low (app is slow)
- Probe command/endpoint doesn't exist yet
Test the probe manually:
kubectl exec -it <statefulset>-0 -- <probe-command>
# Example: curl http://localhost:8080/health
If it fails, the app isn't ready. Increase delays:
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 60   # Increase from default
  timeoutSeconds: 10
  periodSeconds: 10
  failureThreshold: 3
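If you'd rather not edit the full manifest, a JSON patch can raise the delay in place. The container index 0 is an assumption for a single-container pod template, the "replace" op assumes initialDelaySeconds is already set (use "add" otherwise), and patching the template triggers a rolling restart:
kubectl patch statefulset <name> --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/initialDelaySeconds", "value": 60}]'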
Image pull failures block StatefulSet startup:
kubectl get statefulset <name> -o yaml | grep image:
kubectl describe pod <statefulset>-0 | grep -i "image"
Test locally:
docker pull <image:tag> # Fails if the image is unavailable or credentials are wrong
For private registries, ensure imagePullSecrets are configured:
kubectl get secret
kubectl create secret docker-registry regcred \
--docker-server=myregistry.com \
--docker-username=user \
--docker-password=pass
Add it to the StatefulSet's pod template (spec.template.spec):
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
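Alternatively, the secret can be attached to the namespace's service account so every pod using it inherits the credentials; default is assumed here, so adjust if your pods run under a different service account:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'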
If debugging shows the issue is app-side (e.g., bad config), fix the underlying problem, then delete the pod:
kubectl delete pod <statefulset>-0
The StatefulSet controller will recreate it. Since you fixed the issue (config, image, storage, probe settings), the new pod should become Ready.
Monitor recreation:
kubectl get pods -w # Watch mode
Once pod-0 is Ready, the StatefulSet will deploy pod-1, pod-2, and so on.
Additional considerations:
- For database-backed StatefulSets (PostgreSQL, MySQL, MongoDB, Cassandra), initialization scripts may hang on first pod startup; allow extra time with initialDelaySeconds: 120 or more.
- Persistent storage setup differs by cloud provider: AWS uses EBS (storage class required), GCP uses persistent disks, Azure uses managed disks.
- Helm charts for StatefulSets often have readiness probe overrides; check values.yaml for timing parameters.
- In minikube or Docker Desktop with limited resources, reduce replicas for testing: kubectl scale statefulset <name> --replicas=1
- Multi-zone clusters may see slower PVC binding; use local storage classes for faster provisioning.
- ArgoCD/Flux should use syncPolicy.syncOptions: [AllowEmpty] to prevent sync failures during sequential pod deployment.
- StatefulSet ordinal ordering is strict: failures in early pods (mysql-0) block later pods (mysql-1, mysql-2).
- For rolling updates, use partition: N to limit which pods update at once (see the sketch below).
- Roll back a stuck update with kubectl rollout undo statefulset <name>.
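As a concrete example of the partition mechanism mentioned above, this is roughly what a partitioned rolling update looks like in the StatefulSet spec; the value 2 is illustrative:
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2   # only pods with an ordinal >= 2 (e.g., mysql-2) receive the new revision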