An ArgoCD application shows "Degraded" status when one or more of its managed resources are failing their health checks. This is distinct from sync failures and indicates the deployed application is not running healthily. Fix by identifying which resources are unhealthy and addressing the underlying issues (failed deployments, unavailable pods, misconfigured services).
A "Degraded" status in ArgoCD means that one or more of the resources managed by your Application (Deployments, StatefulSets, Services, Ingresses, etc.) is failing its health check. Unlike "Sync failed" (which indicates problems during deployment), "Degraded" is about post-deployment application health. The status is continuously updated and reflects the worst health of all child resources. ArgoCD's health hierarchy is: Healthy > Suspended > Progressing > Missing > Degraded > Unknown. If any resource is degraded, the entire application is marked degraded.
View detailed health status in the ArgoCD UI or CLI:
argocd app get <app-name>
Or in the web UI, navigate to Applications and click the degraded app. Look for the "Resources" tab showing each resource's health status. Identify which specific resource(s) are degraded (shown in red).
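If you prefer to stay in the terminal, one way to list only the unhealthy resources is to filter the JSON output of argocd app get (this sketch assumes jq is installed; field names reflect the Application status schema and may vary slightly by ArgoCD version):
argocd app get <app-name> -o json \
  | jq '.status.resources[] | select(.health.status != null and .health.status != "Healthy") | {kind, name, health: .health.status}'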
Check the degraded resource directly:
kubectl describe pod <pod-name> -n <namespace>
kubectl describe deployment <deployment-name> -n <namespace>
kubectl get <resource-type> -n <namespace> -o wide
Look for:
- "Ready" column (replicas ready vs desired)
- "Status" field showing Pending, CrashLoopBackOff, or Error
- Events section with error messages or warnings
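To quickly surface the problem pods in a busy namespace, you can filter by phase; note this is a rough filter (a CrashLoopBackOff pod still reports phase Running, so also check the READY and STATUS columns):
kubectl get pods -n <namespace> --field-selector=status.phase!=Running
kubectl get pods -n <namespace> | grep -v "Running\|Completed"   # also catches CrashLoopBackOff and not-ready pods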
Check logs from the failing pod:
kubectl logs <pod-name> -n <namespace>
kubectl logs <pod-name> -n <namespace> --previous # For crashed containers
kubectl logs <pod-name> -c <container-name> -n <namespace> # Specific container
Look for:
- Stack traces or error messages
- Connection failures to dependencies
- Configuration errors
- Probe failures
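If the Deployment runs several replicas, it can be faster to tail logs across all pods that share a label. The label key/value below is a placeholder; use whatever selector your Deployment actually applies (and drop --prefix on older kubectl versions):
kubectl logs -l app=<app-label> -n <namespace> --tail=100 --prefix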
Get detailed events and probe configuration:
kubectl describe pod <pod-name> -n <namespace>
# Look at "Events" section and "Readiness" / "Liveness" rows
# View probe configuration
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A 10 "readinessProbe\|livenessProbe"
If probes are failing, they may be misconfigured (wrong port, path, or threshold), or the app may simply need more time to initialize. Consider adding a startupProbe.
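A startupProbe holds off the liveness and readiness checks until the app has come up, which prevents slow-starting containers from being killed and marked degraded. A minimal sketch for the container spec in your Deployment manifest (the path, port, and thresholds are placeholders; tune them to your app):
startupProbe:
  httpGet:
    path: /healthz        # assumed health endpoint
    port: 8080            # assumed container port
  failureThreshold: 30    # allow up to 30 * 10s = 300s to start
  periodSeconds: 10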
Check for missing ConfigMaps, Secrets, PersistentVolumes:
kubectl describe pod <pod-name> -n <namespace> | grep -A 20 "Mounts\|Volumes"
kubectl get configmap,secret -n <namespace> -o wide
kubectl get pv,pvc -n <namespace>
If volumes are not mounting or configs are missing, create them:
kubectl create configmap app-config --from-file=config.yaml -n <namespace>
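In a GitOps workflow it is usually better to commit the ConfigMap to the repository ArgoCD syncs from, rather than creating it imperatively, so ArgoCD manages it alongside the rest of the app. A minimal manifest sketch (the name and data key are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: <namespace>
data:
  config.yaml: |
    # application configuration goes here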
HorizontalPodAutoscaler can cause temporary degradation:
kubectl get hpa -n <namespace>
kubectl describe hpa <hpa-name> -n <namespace>
When HPA scales up, new replicas take time to become ready. ArgoCD may mark the app degraded until all new replicas pass health checks. This usually resolves automatically. To reduce false positives, add a startup probe or adjust HPA scaling policies.
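One way to smooth out scale-up churn is the behavior section of an autoscaling/v2 HorizontalPodAutoscaler, which adds a stabilization window and caps how fast replicas are added. A sketch with placeholder values; tune them to your traffic patterns:
# Excerpt from an autoscaling/v2 HorizontalPodAutoscaler spec
spec:
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60   # wait before acting on new metric readings
      policies:
        - type: Pods
          value: 2                     # add at most 2 pods per period
          periodSeconds: 60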
Once you've fixed the underlying resource issue (restarted pod, increased resource limits, fixed config), re-sync the application:
argocd app sync <app-name>
argocd app wait <app-name> # Wait for healthy status
Or use the web UI: click the app > "SYNC" button. Monitor the Resources tab until all resources turn green.
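If you want the wait to block specifically on health and give up after a bounded time (useful in CI), something like the following should work; the timeout value is an arbitrary example in seconds:
argocd app sync <app-name>
argocd app wait <app-name> --health --timeout 300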
Degraded status is always tied to a specific resource failing its health check; it's never ArgoCD itself that's degraded. Custom resources (CRDs) may need custom health checks defined in argocd-cm if they're stuck in Progressing (a sketch follows below). For controller-managed resources (e.g., Anthos Config Connector), expect transient degradation until the controller completes provisioning. Use kubectl top to check actual resource usage vs requests/limits. Consider setting up ArgoCD notifications/alerts (via Slack, email) to alert on degradation events. The difference between "Degraded" and "Sync Failed": Degraded = runtime issue (app not healthy), Sync Failed = deployment issue (kubectl apply failed).
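For a CRD that ArgoCD never moves past Progressing, you can teach it a health check via a resource.customizations.health.<group>_<kind> key in the argocd-cm ConfigMap. A minimal sketch for a hypothetical example.com/v1 Guestbook resource; the group, kind, and status fields are assumptions to adapt to your CRD:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.example.com_Guestbook: |
    hs = {}
    if obj.status ~= nil and obj.status.phase == "Ready" then
      hs.status = "Healthy"
      hs.message = "Guestbook is ready"
    else
      hs.status = "Progressing"
      hs.message = "Waiting for Guestbook to become ready"
    end
    return hs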