ArgoCD cannot locate an Application resource because the YAML is in the wrong namespace, the manifest syntax is invalid, or the Application CRD hasn't been installed. Fix it by creating the Application in the argocd namespace (or a namespace allowed by multi-namespace mode), validating the manifest structure, and verifying that ArgoCD is fully deployed.
When you create an ArgoCD Application resource via kubectl or declaratively in Git, ArgoCD should automatically detect and reconcile it. The "Application not found" error means ArgoCD's controller is not watching that Application resource. This typically happens when: (1) the Application manifest is in a namespace other than the ArgoCD control plane namespace (default: argocd), (2) the Application CRD definition is missing or incomplete, (3) the manifest syntax is invalid and the resource never gets created, or (4) multi-namespace mode is not enabled but you're creating Applications outside the argocd namespace. ArgoCD requires all Application and AppProject resources to be in the control plane namespace by default.
Check if the Application resource was actually created and is in the correct namespace:
# List all Application resources across all namespaces
kubectl get applications -A
# List only in argocd namespace (default)
kubectl get applications -n argocd
# Describe the specific application
kubectl describe application my-app -n argocd
If the application doesn't appear in any namespace, or appears only outside argocd, it either wasn't created or is in the wrong place. All Application resources must live in the namespace where ArgoCD is deployed (default: argocd).
Ensure ArgoCD components are active and healthy:
# Check if argocd namespace exists
kubectl get namespace argocd
# Check if ArgoCD pods are running
kubectl get pods -n argocd
# Expected pods should include:
# - argocd-application-controller-0
# - argocd-server-...
# - argocd-repo-server-...
# Check logs for errors in the application controller
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-application-controller --tail=50
# Verify the application-controller is watching the argocd namespace
kubectl logs -n argocd statefulset/argocd-application-controller | grep -i namespace
If any pods are not running (Pending, CrashLoopBackOff, etc.), investigate their logs. The application-controller must be healthy to watch Application resources.
Ensure your Application manifest explicitly specifies the argocd namespace:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd  # CRITICAL: must be the argocd namespace
spec:
  project: default  # Must match an AppProject in the argocd namespace
  source:
    repoURL: https://github.com/my-org/my-repo
    targetRevision: main
    path: helm/my-chart  # Path within the Git repo
  destination:
    server: https://kubernetes.default.svc
    namespace: production  # Target namespace for the deployed app (not metadata.namespace)
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The key point: metadata.namespace must be argocd, but spec.destination.namespace can be any target namespace where you want the application deployed.
Apply it:
kubectl apply -f application.yaml
# Verify creation
kubectl get application my-app -n argocd
If the application still doesn't appear, validate the manifest for syntax errors:
# Dry-run validation
kubectl apply -f application.yaml --dry-run=client
# Check for schema violations
kubectl apply -f application.yaml --dry-run=server
# Validate against CRD schema
kubectl explain application.spec
# Check if there are events/warnings
kubectl describe application my-app -n argocd
Common YAML mistakes:
- Missing metadata.namespace: argocd
- Incorrect field names (e.g., spec.destination.name when you meant spec.destination.server)
- A malformed repoURL (e.g., missing the https:// or git@ prefix)
- Missing required fields such as spec.source.repoURL
If kubectl apply shows "error validating data: unknown field" or similar, fix the YAML and try again.
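To illustrate the destination mistakes above, this fragment shows the two valid ways to identify the target cluster; use one field, not both (the commented name form assumes a cluster registered in ArgoCD under that name, which is hypothetical here):

```yaml
destination:
  # Use `server` with the cluster API URL (the in-cluster address shown here)...
  server: https://kubernetes.default.svc
  # ...or `name` with a cluster registered in ArgoCD, but never both:
  # name: in-cluster
  namespace: production
```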
If you need to create Applications in namespaces other than argocd, enable ArgoCD's "apps in any namespace" mode (ArgoCD v2.5+):
# Edit the argocd-cmd-params-cm ConfigMap
kubectl edit configmap argocd-cmd-params-cm -n argocd
# Under data, list the namespaces allowed to contain Application resources:
#   application.namespaces: "team-a,team-b"
Then authorize those namespaces in the matching AppProject in the argocd namespace:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  sourceRepos:
  - '*'
  sourceNamespaces:  # namespaces permitted to hold Applications for this project
  - team-a
  - team-b
  destinations:
  - namespace: '*'
    server: '*'
After updating the config, restart the ArgoCD server and application controller:
kubectl rollout restart deployment/argocd-server -n argocd
kubectl rollout restart statefulset/argocd-application-controller -n argocd
Now you can create Application resources in the permitted namespaces. Each such Application must still reference an AppProject in the argocd namespace.
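With multi-namespace mode enabled, an Application living outside argocd might look like the sketch below. The team-a namespace is a hypothetical example; it must appear both in application.namespaces and in the AppProject's sourceNamespaces for ArgoCD to pick the app up:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: team-a-app
  namespace: team-a  # allowed only when multi-namespace mode permits this namespace
spec:
  project: default  # the AppProject still lives in the argocd namespace
  source:
    repoURL: https://github.com/my-org/my-repo
    targetRevision: main
    path: helm/my-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a
```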
If the Application exists but ArgoCD still doesn't recognize it, check the controller logs:
# Stream logs in real-time while applying the manifest
kubectl logs -f -n argocd statefulset/argocd-application-controller
# In another terminal, apply the application
kubectl apply -f application.yaml
# Look for lines like:
# "error processing application"
# "failed to sync"
# "cannot access repository"
Common log issues:
- "no such file or directory in repository path" → path in spec.source.path doesn't exist in Git
- "permission denied" → ServiceAccount lacks RBAC for Application operations
- "unknown error" → Check destination cluster connectivity
Increase log verbosity if needed:
kubectl -n argocd patch configmap argocd-cmd-params-cm --type merge -p '{"data":{"controller.log.level":"debug"}}'
kubectl rollout restart statefulset/argocd-application-controller -n argocd
The Application's repository must be accessible and configured in ArgoCD:
# Check configured repositories
kubectl get secret -n argocd | grep repository
# For public repos, ensure the repoURL is correct:
argocd repo list
# For private repos, verify credentials exist:
kubectl describe secret <repo-secret-name> -n argocd
# Test connectivity from repo-server pod
kubectl exec -it -n argocd deploy/argocd-repo-server -- sh
# Inside the pod:
cd /tmp && git clone https://github.com/your-org/your-repo
If the repository secret is missing or the credentials are wrong, the Application can't fetch manifests and won't sync properly. Add the repository to ArgoCD first:
argocd repo add https://github.com/your-org/your-repo --username <user> --password <token>
In multi-namespace mode (v2.5+), each namespace holding Applications needs a corresponding AppProject in the argocd namespace that authorizes it. AppProject.destinations controls which target clusters and namespaces an app can deploy to; if an Application targets an unauthorized destination, sync fails even though the app exists. The application controller performs separate list/watch operations per configured namespace (by default just argocd); if you enable many namespaces, monitor API server connection limits and raise ARGOCD_K8S_CLIENT_MAX_IDLE_CONNECTIONS if you hit them.
ApplicationSet generates Applications dynamically; if a set produces no apps, check AppProject permissions and template validation errors. ArgoCD does not delete Applications automatically when they are removed from Git unless the resources-finalizer.argocd.argoproj.io finalizer is set, and a stuck finalizer can prevent cleanup. For GitOps workflows, keep Application manifests in Git (e.g., under argocd/applications/) and use one bootstrap Application to deploy them instead of manual kubectl apply.
If applications disappear after an ArgoCD upgrade, check whether the Application CRD version changed; some versions require schema migrations. Always back up Application manifests in Git before deleting them via the UI or CLI.
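The argocd repo add command above can also be expressed declaratively as a Secret labeled argocd.argoproj.io/secret-type: repository, which fits the GitOps workflow described earlier. The secret name and placeholder credentials here are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-creds  # illustrative name
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/your-org/your-repo
  username: <user>
  password: <token>
```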