Exit code 143 means the container received SIGTERM for graceful shutdown. This is normal during scaling, updates, or pod deletion. Ensure your application handles SIGTERM properly.
Exit code 143 indicates that a container was terminated by a SIGTERM signal (signal 15). The exit code is calculated as 128 + 15 = 143. SIGTERM is the standard signal for requesting graceful termination. Unlike exit code 137 (SIGKILL), code 143 is not an error; it's the expected result of proper graceful shutdown. Kubernetes sends SIGTERM when scaling down, during rolling updates, or when explicitly deleting pods. The container has a grace period (default 30 seconds) to clean up before receiving SIGKILL.
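You can reproduce the 128 + 15 arithmetic in any local Bash shell; this quick demonstration is not Kubernetes-specific:

```bash
# Start a long-lived process, send it SIGTERM, and inspect the exit status
sleep 100 &
kill -TERM $!
wait $!
echo $?   # prints 143 (128 + 15)
```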
Exit code 143 is normal during scale-downs, rolling updates, and pod deletions. To confirm what triggered the termination:

```bash
# Check recent events
kubectl get events --sort-by='.lastTimestamp' | grep <pod-name>

# See what triggered termination
kubectl describe pod <pod-name>
```

If the pod was terminated during a deployment, scale operation, or node maintenance, exit code 143 is expected and correct.
Ensure your application handles SIGTERM for graceful shutdown:
Python:

```python
import signal
import sys

def shutdown_handler(signum, frame):
    print("SIGTERM received, shutting down gracefully...")
    # Close database connections
    # Finish processing current requests
    # Flush logs and metrics
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown_handler)
```

Node.js:
```javascript
process.on('SIGTERM', () => {
  console.log('SIGTERM received, starting graceful shutdown');
  // server.close() takes a callback; it does not return a promise
  server.close(async () => {
    await db.disconnect();
    process.exit(0);
  });
});
```

If your application needs more time to shut down gracefully, increase the grace period:
```yaml
spec:
  terminationGracePeriodSeconds: 60  # Default is 30
  containers:
  - name: app
    image: myapp:latest
```

During this period:
1. SIGTERM is sent to the container
2. Application has time to complete cleanup
3. If the container has not exited when the grace period expires, SIGKILL is sent
Add a preStop lifecycle hook for cleanup without code changes:
```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10 && /app/cleanup.sh"]
```

The preStop hook runs before SIGTERM is sent, giving additional time for:
- Draining connections from load balancers
- Completing in-flight requests
- Deregistering from service discovery
Exit code 143 is healthy; watch for these patterns instead:
Concerning:
- Exit code 137 (SIGKILL) - forced kill, possibly OOM
- Frequent terminations outside of deployments
- Short-lived pods constantly receiving SIGTERM
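To confirm which exit code ended a container's most recent run, you can read it straight from the pod's status; a small sketch using kubectl's JSONPath output (containers that have never restarted may show an empty or missing lastState):

```bash
# Print each container's name and the exit code of its last termination
kubectl get pod <pod-name> -o jsonpath='{range .status.containerStatuses[*]}{.name}: {.lastState.terminated.exitCode}{"\n"}{end}'
```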
Check if eviction is the cause:
```bash
kubectl describe pod <pod-name> | grep -A 5 "Status:"
kubectl top pod <pod-name>  # Check resource usage
```

Eviction due to resource pressure may indicate undersized limits.
Verify your application actually completes graceful shutdown:
```bash
# Watch logs during termination
kubectl logs -f <pod-name>

# Delete and observe shutdown
kubectl delete pod <pod-name>
```

Look for your shutdown messages in the logs. If the pod terminates with 143 but the logs show incomplete cleanup, your grace period may be too short or your shutdown logic has issues.
Exit code 143 vs 137: Both indicate signal-based termination, but they're very different:
- 143 (SIGTERM): Graceful request, application can handle it
- 137 (SIGKILL): Forced termination, application cannot catch or handle it
If you see 137 instead of 143, either your application didn't exit within the grace period, or it was force-killed outright (for example, by the kernel's OOM killer).
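The same arithmetic holds for SIGKILL; for comparison, in a local Bash shell:

```bash
# SIGKILL (signal 9) cannot be caught; the exit status is 128 + 9 = 137
sleep 100 &
kill -KILL $!
wait $!
echo $?   # prints 137
```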
For zero-downtime deployments, combine proper SIGTERM handling with:
- Readiness probes that fail during shutdown
- preStop hooks that wait for connection draining
- PodDisruptionBudgets to limit simultaneous terminations (sketched below)
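As a concrete example of the last point, a minimal PodDisruptionBudget; the name and labels here are placeholders for your own app:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb              # placeholder name
spec:
  maxUnavailable: 1          # at most one replica disrupted at a time
  selector:
    matchLabels:
      app: myapp             # must match your Deployment's pod labels
```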
In service meshes like Istio, the sidecar also needs to handle termination. Istio 1.12+ supports native sidecar termination ordering to ensure the app shuts down before the proxy.
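If the proxy needs to keep draining connections while the app exits, one relevant knob is the sidecar's termination drain duration; a hedged sketch via the proxy.istio.io/config pod annotation (verify the field name and behavior against your Istio version):

```yaml
metadata:
  annotations:
    # Assumption: per-pod ProxyConfig overrides are accepted through this
    # annotation; terminationDrainDuration sets how long Envoy keeps
    # draining before it exits.
    proxy.istio.io/config: |
      terminationDrainDuration: 30s
```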