A container fails startup probe checks repeatedly, causing it to restart continuously (CrashLoopBackOff). Startup probes verify the application has started before liveness and readiness probes begin. Fix by increasing failureThreshold, adjusting periodSeconds, or ensuring the health endpoint is accessible during startup.
A startup probe verifies that an application inside a container has started. It runs only at pod startup and blocks liveness and readiness probes from running until it succeeds. If the startup probe fails repeatedly (based on failureThreshold), kubelet restarts the container. Startup probes are designed for slow-starting applications (database migrations, cache warming, large initialization). Without startup probes, liveness probes may kill containers before they finish starting. The total startup window is failureThreshold × periodSeconds.
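The startup window math above can be checked with a quick sketch (the threshold and period values below are illustrative, not defaults):

```shell
# Total window before kubelet gives up and restarts the container:
# failureThreshold * periodSeconds. initialDelaySeconds only delays the
# first check, so it adds to the worst-case wall-clock time.
failureThreshold=30
periodSeconds=10
initialDelaySeconds=30

window=$((failureThreshold * periodSeconds))
echo "probe window: ${window}s"                         # 300s
echo "worst case: $((initialDelaySeconds + window))s"   # 330s
```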
View repeated restarts:
kubectl get pod <pod-name> -n <namespace>
kubectl describe pod <pod-name> -n <namespace>

Expect a high restart count and the reason "Unhealthy".
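To watch just the restart count, you can extract it from the `kubectl get pod` output with awk. The pod name and output below are hypothetical; in practice pipe the real command (`kubectl get pod <pod-name> -n <namespace> | awk 'NR==2 {print $4}'`):

```shell
# Sample `kubectl get pod` output (hypothetical); RESTARTS is column 4,
# and NR==2 skips the header row.
out="NAME    READY   STATUS             RESTARTS   AGE
api-0   0/1     CrashLoopBackOff   7          5m"
echo "$out" | awk 'NR==2 {print $4}'   # → 7
```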
Check the probe settings:
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A10 startupProbe

Note the failureThreshold and periodSeconds values.
Allow more time for startup:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 30  # ~300 seconds total = 30 * 10s

For slow apps (for example, Java with database migrations), use failureThreshold: 60-90.
Check more frequently if needed:
startupProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5   # Check every 5 seconds instead of 10
  failureThreshold: 60  # 300 seconds = 60 * 5s

Test manually if possible:
kubectl logs <pod-name> -n <namespace>
grep -i health /var/log/app.log  # Check app logs for endpoint startup

The endpoint may not be listening immediately after the process starts.
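To find out how long the endpoint actually takes to come up, you can poll it from inside the container and time it. This is a sketch: the health URL and intervals are assumptions you should replace with your own:

```shell
# Poll a probe command until it succeeds, reporting roughly how long it took.
# Pass your real check, e.g.:
#   wait_until_ready "curl -sf http://localhost:8080/health" 5 60
wait_until_ready() {
  local cmd=$1 period=${2:-5} max=${3:-60} i=0
  while ! eval "$cmd" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$max" ]; then
      echo "timed out after $((i * period))s"
      return 1
    fi
    sleep "$period"
  done
  echo "ready after ~$((i * period))s"
}
```

The measured time tells you directly what failureThreshold × periodSeconds needs to cover.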
TCP probes are simpler for startup:
startupProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
  failureThreshold: 30

This only checks that the port is listening; it doesn't require an HTTP endpoint.
Identify why startup is slow:
kubectl logs <pod-name> -n <namespace> | grep -Ei "starting|initializing|migration|timeout"

Database migrations and dependency initialization often cause delays.
Resource starvation slows startup:
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

Check actual usage:
kubectl top pod <pod-name> -n <namespace>

Total startup window = failureThreshold × periodSeconds. For a 5-minute startup, use failureThreshold: 30 with periodSeconds: 10. Startup probes block liveness and readiness probes; once the startup probe succeeds, those take over. For slow-starting apps (Java, or Python with heavy initialization), startup probes are essential to prevent premature container kills. Use TCP socket probes for simple connectivity checks (faster, no HTTP overhead); HTTP probes require the application to expose a health endpoint during startup. Startup, liveness, and readiness probes together form three independent checks, each with its own failure threshold. Monitor container startup time in logs to tune failureThreshold appropriately. For cloud deployments, check your platform's documentation for default timeouts that may override your configuration.
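Working backwards from a measured startup time, you can derive a failureThreshold with headroom. The numbers below are illustrative, and the 2x headroom factor is a rule-of-thumb assumption, not a Kubernetes default:

```shell
# Derive failureThreshold from a measured startup time, with 2x headroom
# for slow nodes and cold caches. Uses ceiling division.
measured_startup=240   # seconds, taken from application logs
periodSeconds=10
headroom=2

failureThreshold=$(( (measured_startup * headroom + periodSeconds - 1) / periodSeconds ))
echo "failureThreshold: $failureThreshold"   # ceil(480 / 10) = 48
```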