ArgoCD fails to generate application manifests due to missing dependencies, timeout issues, invalid configurations, or plugin failures. The error is cached to prevent repeated failures. Fix by checking Helm chart dependencies, validating file paths, clearing the cache, and reviewing repo-server logs. Manifest generation must complete within the 90-second default timeout.
A manifest generation error occurs when ArgoCD's repo-server cannot render your Kubernetes manifests from source (Helm, Kustomize, plain YAML, or plugins). The error message is cached to prevent runaway retries that would overload the cluster. This error blocks synchronization because ArgoCD cannot determine what state to apply. Common causes include missing Helm dependencies, incorrect file paths, timeout constraints, invalid resource schemas, and misconfigured plugins or config management tools.
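Internally, the cached error surfaces as a condition on the Application resource. A sketch of what `kubectl get application <app-name> -n argocd -o yaml` typically shows (the condition type and message below are illustrative; ArgoCD commonly reports manifest generation failures as a `ComparisonError` condition):

```yaml
status:
  conditions:
    - type: ComparisonError
      message: "Failed to load target state: failed to generate manifest ..."  # illustrative
      lastTransitionTime: "2024-01-01T00:00:00Z"
```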
The cached error hides the root cause. Start with the detailed repo-server logs:

```shell
kubectl logs -n argocd deployment/argocd-repo-server --tail=200 | grep <app-name>
```

Look for the actual failure message (Helm error, timeout, path not found, etc.). This is the real issue to fix.
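Because several distinct failures funnel into the same cached error, a small triage helper can map log lines to likely causes. This is a heuristic sketch; the match patterns below are illustrative examples of messages you may see, not an official or exhaustive list:

```shell
# classify_error: rough triage of a repo-server log line into a likely cause.
# Patterns are illustrative; adjust them to the messages you actually see.
classify_error() {
  line="$1"
  case "$line" in
    *"context deadline exceeded"*)
      echo "timeout: increase the manifest generation timeout" ;;
    *"app path does not exist"*|*"no such file or directory"*)
      echo "bad source path: check spec.source.path" ;;
    *"missing in charts/"*)
      echo "missing Helm dependency: run helm dependency update" ;;
    *)
      echo "unclassified: read the full repo-server log" ;;
  esac
}
```

For example, `classify_error "rpc error: context deadline exceeded"` prints the timeout hint.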
Check that the path specified in your ArgoCD Application matches the Git structure:

```yaml
# In your Application manifest
spec:
  source:
    repoURL: https://github.com/your-org/repo
    path: apps/my-app  # Must exist in the repo
```

Clone the repo locally and verify:

```shell
git clone https://github.com/your-org/repo
ls apps/my-app
# Should see values.yaml, Chart.yaml, or kustomization.yaml
```

If using Helm, verify all dependencies are available:
```shell
# Check Chart.yaml for dependencies
cat Chart.yaml | grep -A10 "dependencies:"
# Verify Helm repositories are accessible
helm repo list
helm dependency update
```

If your charts pull from internal Helm repositories, declare them in the ArgoCD ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  helm.repositories: |
    - name: stable
      url: https://internal-helm-repo.example.com
```

Once you've fixed the underlying issue, clear the cache:
```shell
# Via CLI
argocd app get --hard-refresh <app-name>
# Or via kubectl patch
kubectl patch application <app-name> -n argocd --type merge \
  -p '{"status":{"conditions":[]}}'
```

ArgoCD will attempt manifest generation again. Check the logs to confirm success.
If manifest generation times out (especially for large Helm charts), increase the timeout:

```shell
kubectl edit configmap argocd-cmd-params-cm -n argocd
```

Add or update:

```yaml
argocd.exec.timeout: 3m  # Default is 1m30s (90s)
```

Restart the repo-server to apply the change:

```shell
kubectl rollout restart deployment/argocd-repo-server -n argocd
```

Ensure manifests have the required Kubernetes fields:
```shell
# All manifests must have apiVersion, kind, metadata
grep -E "^apiVersion:|^kind:|^metadata:" your-manifest.yaml
# Validate with a client-side dry run
kubectl apply -f manifests/ --dry-run=client --validate=strict
```

If ArgoCD complains about unknown fields, your Kubernetes version may not support them. Check which Kubernetes schema version your ArgoCD release validates against.
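The field check above can be wrapped into a small pre-flight function and run over a whole directory. A minimal sketch (the function name is our own; it only checks top-level keys, not full schema validity):

```shell
# check_required_fields: verify a manifest declares apiVersion, kind, and
# metadata at the top level. A shallow check only; it does not validate
# the schema the way kubectl --dry-run does.
check_required_fields() {
  file="$1"
  status=0
  for field in apiVersion kind metadata; do
    if ! grep -q "^${field}:" "$file"; then
      echo "missing ${field} in ${file}" >&2
      status=1
    fi
  done
  return $status
}
```

Usage: `for f in manifests/*.yaml; do check_required_fields "$f"; done`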
If using config management plugins (CMPs) such as argocd-vault-plugin:

```shell
# View CMP configuration
kubectl get configmap argocd-cmp-plugins -n argocd -o yaml
# Verify the plugin script has execute permissions and valid syntax
# Test the plugin locally if possible
```

Ensure the plugin spec in the Application is correct:
```yaml
spec:
  source:
    plugin:
      name: my-plugin  # Must match a plugin defined in argocd-cm
```

Manifest generation errors are often environmental (missing repos, network timeouts, disk space). Check the argocd-repo-server Pod's resource limits and /tmp disk usage: `kubectl exec -it pod/argocd-repo-server-xxx -n argocd -- df -h /tmp`. In monorepos with 50+ applications, a single repo-server instance may serialize manifest generation, causing slowdowns; add replicas or split the repos. For CI/CD integration, always test manifest generation locally before pushing: `kustomize build path/` or `helm template release mychart -f values.yaml`. URL schemes like "secrets://" are now restricted; use proper plugin configuration instead. Always check kubectl version compatibility with your manifests.
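That local pre-push test can be scripted so the same check runs in CI. A sketch, assuming the standard tool layout (Chart.yaml for Helm, kustomization.yaml for Kustomize); the function name and release name are illustrative:

```shell
# render_locally: reproduce ArgoCD's manifest generation for one source path.
# The Helm and Kustomize branches require those CLIs to be installed; the
# plain-YAML branch just confirms at least one manifest file is present.
render_locally() {
  dir="$1"
  (
    cd "$dir" || return 1
    if [ -f Chart.yaml ]; then
      helm dependency update >/dev/null && \
        helm template my-release . >/dev/null   # release name is illustrative
    elif [ -f kustomization.yaml ] || [ -f kustomization.yml ]; then
      kustomize build . >/dev/null
    else
      ls ./*.yaml >/dev/null 2>&1   # plain YAML: anything to apply?
    fi
  )
}
```

Usage: `render_locally apps/my-app && echo "manifest generation OK"`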