A Flux reconciliation failure occurs when the GitOps operator cannot apply manifests from Git to the cluster. Reconciliation failures prevent deployments from updating, leaving the cluster out of sync with Git. Common causes include invalid YAML, missing CRDs, authentication issues, and resource conflicts.
Flux is a GitOps tool that syncs Kubernetes manifests from a Git repository to the cluster. The reconciliation process:
1. Watch the Git repository for changes
2. Pull the latest manifests
3. Render Kustomize/Helm templates
4. Apply the manifests to the cluster
5. Record status in custom resources (GitRepository, Kustomization, HelmRelease)
When reconciliation fails:
- Manifests are not applied
- Cluster state diverges from Git
- Service deployments stall
- No error appears in kubectl until you check the resource status
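The custom resources from step 5 are what tie the loop together. A minimal GitRepository/Kustomization pair might look like this (the repo URL, branch, and path are placeholders, not values from this article):

```yaml
# Source: where Flux pulls manifests from (URL and branch are assumptions)
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/example/my-app
  ref:
    branch: main
---
# Reconciler: renders and applies ./deploy from that source to the cluster
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy
  prune: true
```

Every troubleshooting step below boils down to reading the status of one of these objects or the logs of the controller that reconciles it.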
View the reconciliation status:
kubectl get kustomization -A
kubectl get helmrelease -A
# Get detailed status:
kubectl describe kustomization <name> -n <namespace>
kubectl describe helmrelease <name> -n <namespace>
# Check events:
kubectl get events -n flux-system --sort-by=.metadata.creationTimestamp
Look for a "Failed" status and error messages in Status.Conditions.
Check what went wrong:
kubectl logs -n flux-system deployment/kustomize-controller -f
kubectl logs -n flux-system deployment/helm-controller -f
kubectl logs -n flux-system deployment/source-controller -f
# Search for specific error:
kubectl logs -n flux-system -l app=kustomize-controller | grep -i error
kubectl logs -n flux-system -l app=helm-controller | grep -i error
# Previous logs if pod restarted:
kubectl logs -n flux-system deployment/kustomize-controller --previous
The logs show the exact failure reason (YAML parse error, missing resource, etc.).
Test manifests locally:
# Clone repository:
git clone <git-repo-url>
cd <repo-path>
# Validate YAML:
kubectl apply -f . --dry-run=client
# Validate a Kustomize overlay:
kubectl apply -k . --dry-run=client
# Filter the dry-run output for errors:
kubectl apply -f . --dry-run=client 2>&1 | grep -i error
# For Kustomize:
kustomize build . | kubectl apply --dry-run=client -f -
# For Helm:
helm template <release> <chart> | kubectl apply --dry-run=client -f -
Errors found here need to be fixed in Git.
Verify Git repository is accessible:
kubectl get gitrepository -A
kubectl describe gitrepository <name> -n <namespace>
# Check for authentication errors:
kubectl logs -n flux-system deployment/source-controller | grep -i auth
# Verify SSH key or token (for SSH, Flux expects secret keys named identity and known_hosts):
kubectl get secret <git-secret> -n flux-system -o yaml | grep -E 'identity|known_hosts'
# Test SSH connectivity (if using SSH):
kubectl run -it --rm ssh-test --image=alpine/git -- sh
# Inside pod:
git clone <git-repo-url>
If Git is not accessible, authentication is likely the issue.
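When SSH authentication is the problem, the secret's key names are a common culprit. A sketch of the Secret layout Flux's source-controller expects for SSH (names and host key here are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: flux-system        # must match the GitRepository's spec.secretRef.name
  namespace: flux-system
stringData:
  identity: |              # the private SSH key
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
  known_hosts: |           # host key of the Git server
    github.com ecdsa-sha2-nistp256 AAAA...
```

A secret with the right credentials under the wrong key names fails exactly like a missing secret, so check the key names before rotating credentials.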
Ensure CRDs are installed before resources that use them:
# Check which CRDs are installed:
kubectl get crd
# If CRD is missing, install first:
kubectl apply -f crds/ # Apply CRD manifests first
# In Flux, split CRDs and apps into two Kustomizations and order them with dependsOn:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata: {name: crds, namespace: flux-system}
spec:
  interval: 10m
  sourceRef: {kind: GitRepository, name: flux-system}
  path: ./crds
  prune: true
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata: {name: apps, namespace: flux-system}
spec:
  interval: 10m
  sourceRef: {kind: GitRepository, name: flux-system}
  path: ./apps
  prune: true
  dependsOn:
    - name: crds   # wait for CRDs first
Declare dependencies explicitly in Flux.
Verify RBAC allows Flux to apply manifests:
# Check Flux service accounts:
kubectl get sa -n flux-system
# View ClusterRoleBinding for Flux:
kubectl get clusterrolebinding | grep flux
kubectl describe clusterrolebinding flux
# Test permissions:
kubectl auth can-i create deployments --as=system:serviceaccount:flux-system:kustomize-controller -n default
# If permissions missing, add ClusterRole:
kubectl create clusterrolebinding flux-admin --clusterrole=cluster-admin --serviceaccount=flux-system:kustomize-controller
Full cluster-admin is common in dev; restrict it in production.
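For production, one way to restrict Flux is to have it impersonate a namespaced service account via the Kustomization's spec.serviceAccountName, so each tenant's manifests are applied with that account's RBAC only (a sketch; the team/namespace names are assumptions):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team-a-apps
  namespace: team-a
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: team-a
  path: ./apps
  prune: true
  serviceAccountName: team-a-reconciler  # Flux applies with this SA's RBAC, not cluster-admin
```

With this setup, a Role/RoleBinding granting team-a-reconciler access only to the team-a namespace caps the blast radius of a bad manifest.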
Ensure referenced resources exist:
# Check which ConfigMaps/Secrets are referenced:
grep -r "configMapRef\|secretRef" .
# Verify they exist:
kubectl get configmap <name> -n <namespace>
kubectl get secret <name> -n <namespace>
# If missing, create them:
kubectl create configmap app-config --from-file=config.yaml -n <namespace>
kubectl create secret generic db-secret --from-literal=password=xxx -n <namespace>
# Or add to Git:
kubectl create configmap app-config --from-file=config.yaml --dry-run=client -o yaml > configmap.yaml
# Commit configmap.yaml to Git
Flux will fail if referenced resources don't exist.
Force Flux to sync after fixing issues:
# Trigger reconciliation:
flux reconcile kustomization <name> -n <namespace>
flux reconcile source git <name> -n <namespace>
# Watch status:
flux get kustomization --watch
flux get source git --watch
# Or kubectl:
kubectl patch kustomization <name> -p '{"spec":{"force":true}}' -n <namespace>
# Verify deployment:
kubectl get deployment <name> -n <namespace>
kubectl rollout status deployment <name> -n <namespace>
Reconciliation should complete without errors after the fixes.
Flux provides strong GitOps guarantees: the cluster state always converges to what is in Git, so a failed reconciliation means the cluster has diverged from the source of truth. Common production issues are CRD ordering (install CRDs before the resources that use them), missing secrets (authentication credentials), and RBAC restrictions (Flux cannot apply manifests). Use flux suspend/resume to control deployments, and enable notifications to alert on failed reconciliations. For large clusters, split Kustomizations by team or namespace to isolate failures. Helm releases integrate tightly, so chart rendering failures cascade. Prefer Kustomize patches in the Flux Kustomization for environment-specific overrides instead of maintaining multiple repos. Monitor reconciliation lag via controller metrics (for example, gotk_reconcile_duration_seconds), and implement promotion pipelines (dev → staging → prod) with Flux.
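Enabling the notifications mentioned above takes a Provider plus an Alert from Flux's notification API. A sketch assuming a Slack webhook URL stored in a Secret named slack-url (all names here are placeholders):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: gitops-alerts
  secretRef:
    name: slack-url   # Secret whose `address` key holds the webhook URL
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: on-reconcile-failure
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error          # only alert on failures, not info events
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'
```

With eventSeverity set to error, failed reconciliations surface in chat instead of waiting for someone to run kubectl describe.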