ArgoCD fails to connect to a managed cluster due to incorrect credentials, network blocking, or an invalid kubeconfig. The cluster status shows "Unknown" or connection attempts time out. Fix by verifying the cluster URL, updating kubeconfig credentials, checking firewall rules between ArgoCD and the cluster, and ensuring RBAC permissions.
When ArgoCD cannot establish a connection to a managed cluster, the cause is usually one of three things: invalid credentials stored in the cluster Secret, broken network connectivity between the ArgoCD server and the target cluster's API, or RBAC permissions preventing access. ArgoCD stores each cluster's configuration as a Kubernetes Secret in the argocd namespace containing the bearer token, CA certificate, and cluster URL. If any of these are invalid, or if the network path is blocked, the cluster appears as "Unknown" and deployments fail.
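For reference, the declarative form of such a cluster Secret looks roughly like the sketch below (names, URL, and token are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: my-cluster
  server: https://<cluster-api-url>:6443
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded-ca-certificate>"
      }
    }
If any of these fields drifts out of date (a rotated token, a changed API endpoint), the connection breaks exactly as described above.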
Check that the cluster URL is correct and reachable from the ArgoCD server pod:
# Inside ArgoCD server pod
kubectl exec -it <argocd-server-pod> -n argocd -- /bin/sh
# Test DNS resolution and connectivity to cluster API
curl -k https://<cluster-api-url>:6443
nc -zv <cluster-api-host> 6443
If curl gets any response (a 401/403 JSON body, or a certificate error when run without -k), basic network access works; the problem is likely certificate validation or credentials, not connectivity.
View the current cluster configuration:
argocd cluster list
argocd cluster get <cluster-name>
Look for:
- Correct API server URL
- Server address format (should be https://...)
- Cluster name matching the one you intended to add
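To see every cluster Secret ArgoCD is tracking, filter by the label ArgoCD uses to discover them:
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster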
Use the CLI to inspect the Secret directly:
kubectl get secret <cluster-secret-name> -n argocd -o yaml
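The connection details live in the base64-encoded config field; assuming the standard cluster Secret layout shown earlier, you can decode it with:
# Decode the stored connection config (bearer token and TLS settings)
kubectl get secret <cluster-secret-name> -n argocd -o jsonpath='{.data.config}' | base64 -d
Use the ArgoCD admin tool to export kubeconfig and test it: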
# Extract kubeconfig from the cluster Secret
argocd admin cluster kubeconfig https://<cluster-api-url> cluster.kubeconfig
# Test connection with extracted kubeconfig
kubectl --kubeconfig=cluster.kubeconfig cluster-info
kubectl --kubeconfig=cluster.kubeconfig auth can-i get pods --all-namespaces
If the exported kubeconfig works locally but ArgoCD still fails, the issue is in how ArgoCD is configured to use it.
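One common mismatch is the server URL: compare what ArgoCD has stored against the endpoint that just worked (standard kubectl and jsonpath; assumes the cluster Secret layout shown earlier):
# URL ArgoCD dials vs. URL in the working kubeconfig
kubectl get secret <cluster-secret-name> -n argocd -o jsonpath='{.data.server}' | base64 -d
kubectl --kubeconfig=cluster.kubeconfig config view --minify -o jsonpath='{.clusters[0].cluster.server}'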
Verify the argocd-manager ServiceAccount has cluster-admin or the necessary RBAC on the target cluster:
# On the TARGET cluster (not the ArgoCD cluster); argocd cluster add creates the ServiceAccount in kube-system by default
kubectl get serviceaccount argocd-manager -n kube-system
kubectl get clusterrolebinding -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep argocd
kubectl describe clusterrolebinding argocd-manager-role-binding
If argocd-manager is missing or has insufficient permissions, re-add the cluster with correct permissions:
argocd cluster add <cluster-context> --name <cluster-name>
Remove and re-add the cluster to reset credentials:
# From ArgoCD cluster
argocd cluster rm <cluster-name>
argocd cluster add <cluster-context>
Make sure:
- You are logged into the target cluster context (check with: kubectl config current-context)
- The ServiceAccount argocd-manager is created on the target cluster (a minimal manual version is sketched after this list)
- Your user has admin permissions on the target cluster
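If the ServiceAccount never got created, the commands below approximate what argocd cluster add provisions; note that the real command creates a dedicated argocd-manager-role ClusterRole, so binding to cluster-admin here is a simplification:
# On the TARGET cluster: recreate the ServiceAccount and grant it cluster-wide access
kubectl -n kube-system create serviceaccount argocd-manager
kubectl create clusterrolebinding argocd-manager-role-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:argocd-manager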
When re-adding an external cluster, explicit flags avoid stale state: --upsert overwrites the existing cluster Secret and --yes skips the interactive confirmation.
argocd cluster add <context> --name <cluster-name> --upsert --yes
Ensure network connectivity between the ArgoCD pod and the cluster API:
# From ArgoCD server pod, test connectivity
kubectl exec -it <argocd-server-pod> -n argocd -- bash
nc -zv <api-server-host> 6443
# Check if there's a Network Policy blocking egress
kubectl get networkpolicies -n argocd
kubectl describe networkpolicy <policy-name> -n argocd
For cloud providers (AWS/GKE/AKS), check the provider-level firewall rules:
- AWS: Security group of ArgoCD node must allow outbound on port 6443
- GKE: Firewall rule must allow traffic from ArgoCD node to target cluster API
- AKS: Network Security Group must permit argocd-server outbound traffic
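If a NetworkPolicy in the argocd namespace is blocking egress, a rule along these lines opens the API port (a minimal sketch; the empty podSelector matches every pod in the namespace, so scope it down to the ArgoCD components if you prefer):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-argocd-egress-to-apiserver
  namespace: argocd
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 6443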
Get detailed error messages from ArgoCD components:
# Check application-controller logs (a StatefulSet rather than a Deployment in recent ArgoCD versions)
kubectl logs -f statefulset/argocd-application-controller -n argocd | grep -i "connection\|error\|cluster"
# Check server logs
kubectl logs -f deployment/argocd-server -n argocd | grep -i "cluster"
# Check repo-server (handles git access)
kubectl logs -f deployment/argocd-repo-server -n argocd
Look for:
- "x509: certificate signed by unknown authority" (TLS issue)
- "connection refused" (network issue)
- "Unauthorized" (RBAC issue)
- "invalid token" (credential issue)
A few broader points:
- Certificate validation problems are common with self-signed or internal CA certificates. As a temporary diagnostic step you can disable TLS verification (for example by setting "insecure": true in the cluster Secret's tlsClientConfig), but for production always fix the certificate chain.
- For multi-cluster setups, use declarative cluster management (cluster Secrets plus ApplicationSet) instead of the CLI.
- Some organizations use Rancher or other cluster managers that add a layer of abstraction; ensure ArgoCD can reach the actual Kubernetes API endpoint, not just the management interface.
- If the argocd-manager ServiceAccount is missing on the target cluster, the argocd cluster add command did not complete successfully; re-run it or create the account manually as shown above.
- Network policies inside Kubernetes can also block communication; add explicit egress rules (or account for a service mesh) if needed.