Alertmanager config errors occur when the alertmanager.yaml configuration file contains YAML syntax errors or fails validation, preventing Alertmanager from parsing it. Common causes include YAML indentation mistakes, missing required fields, and invalid parameter values.
These errors prevent Alertmanager from starting or reloading its configuration. Unlike application errors, they are configuration parsing failures at the service layer: Alertmanager cannot load your alert routing rules, notification receivers, or global settings. In Kubernetes environments using kube-prometheus-stack or prometheus-operator, config errors often surface as pod startup failures, CrashLoopBackOff states, or status messages on AlertmanagerConfig custom resources. The configuration is typically loaded from a ConfigMap or Secret mounted into the Alertmanager pod.
Get detailed error messages from the pod:
kubectl logs -n prometheus alertmanager-0
kubectl logs -n monitoring alertmanager-main-0   # For kube-prometheus-stack
Look for lines containing "error", "unmarshal", or "parsing". Common messages:
- "cannot unmarshal into struct" → YAML format issue
- "field receivers not found" → Missing receivers section
- "key already set in map" → Duplicate YAML keys
Note the exact error message for diagnosing the root cause.
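The keyword scan above can be sketched as a small filter. The log lines below are hypothetical stand-ins for `kubectl logs` output, not real Alertmanager messages:

```python
import re

# Hypothetical log lines standing in for `kubectl logs` output
log_lines = [
    'level=info msg="Starting Alertmanager"',
    'level=error msg="yaml: unmarshal errors: cannot unmarshal !!str into struct"',
    'level=error msg="failed parsing configuration file"',
]

# Keep only lines that hint at configuration parsing failures
pattern = re.compile(r"error|unmarshal|parsing", re.IGNORECASE)
suspects = [line for line in log_lines if pattern.search(line)]

for line in suspects:
    print(line)
```

The same filter works piped through grep: `kubectl logs ... | grep -iE 'error|unmarshal|parsing'`.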
Use yamllint or online YAML validators to check syntax:
# Install yamllint
pip install yamllint
# Or via npm
npm install -g yamllint
# Validate your config file
yamllint alertmanager.yaml
Alternatively, use an online validator: https://www.yamllint.com/
Common issues:
- Tabs instead of spaces (use spaces only)
- Inconsistent indentation (usually 2 spaces per level)
- Trailing spaces after lines
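The first and last of these checks can be expressed as a short linter sketch (pure Python, no external dependencies; the function name is my own invention):

```python
def yaml_style_problems(text):
    """Report tabs in indentation and trailing whitespace, line by line."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Leading whitespace of this line
        indent = line[: len(line) - len(line.lstrip(" \t"))]
        if "\t" in indent:
            problems.append((lineno, "tab used for indentation"))
        if line != line.rstrip():
            problems.append((lineno, "trailing whitespace"))
    return problems

bad = "route:\n\treceiver: 'null'\n  group_wait: 10s \n"
print(yaml_style_problems(bad))  # [(2, 'tab used for indentation'), (3, 'trailing whitespace')]
```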
Ensure your config has the minimum required sections:
global:
  resolve_timeout: 5m
  # SMTP configuration for notifications
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: '[email protected]'
route:
  receiver: 'null'  # Default receiver
  # Optionally add sub-routes here
receivers:
  - name: 'null'  # Null receiver discards alerts (good for testing)
  - name: 'email'
    email_configs:
      - to: '[email protected]'
A missing route or receivers section will cause validation errors; global is optional but holds defaults such as SMTP settings.
Alertmanager YAML is strict about indentation. Each level must be consistently indented:
route:
  receiver: 'default'               # 2-space indent
  group_wait: 10s                   # 2-space indent
  routes:                           # 2-space indent
    - receiver: 'email'             # 4-space indent
      matchers:                     # 6-space indent
        - severity =~ "warning"     # 8-space indent
receivers:
  - name: 'default'                 # 2-space indent
    webhook_configs:                # 4-space indent
      - url: 'http://localhost:5001'  # 6-space indent
Verify that each nested block increases the indent by exactly 2 spaces.
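The two-space rule can be checked mechanically. This sketch flags any line whose indent isn't a multiple of two; it is illustrative only, since real YAML indentation rules are more nuanced (for example, list-item dashes may align with their parent key):

```python
def odd_indents(text):
    """Return (line_number, indent_width) for lines not indented in 2-space steps."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            continue  # skip blank lines
        width = len(line) - len(line.lstrip(" "))
        if width % 2 != 0:
            bad.append((lineno, width))
    return bad

snippet = "route:\n  receiver: 'default'\n   group_wait: 10s\n"
print(odd_indents(snippet))  # [(3, 3)]
```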
YAML doesn't allow duplicate keys. If you see "key already set in map":
receivers:
  - name: 'email'
    email_configs:          # ← Only one email_configs key allowed
      - to: '[email protected]'
    email_configs:          # ← ERROR: duplicate key!
      - to: '[email protected]'
Instead, combine into one:
receivers:
  - name: 'email'
    email_configs:
      - to: '[email protected]'
      - to: '[email protected]'
For regex matchers, escape special characters:
matchers:
  - alertname =~ "^(Disk|CPU).*"
  - instance =~ "prod-.*"
If using email notifications, verify SMTP settings:
global:
  smtp_smarthost: 'smtp.gmail.com:587'  # host:port
  smtp_auth_username: '[email protected]'
  smtp_auth_password: 'app-password'  # Use an app-specific password, not your account password
  smtp_require_tls: true
receivers:
  - name: 'email'
    email_configs:
      - to: '[email protected]'
        from: '[email protected]'
        smarthost: 'smtp.gmail.com:587'  # Can override the global setting
        auth_username: '[email protected]'
        auth_password: 'app-password'
Common issues:
- Missing auth credentials
- Port number doesn't match TLS requirement (25, 465, 587)
- Invalid email addresses
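Port/TLS mismatches from the list above can be sanity-checked with a small helper. The port-to-mode mapping reflects common SMTP conventions, not anything Alertmanager-specific, and the function name is my own:

```python
# Conventional SMTP port usage (general convention, not an Alertmanager API)
PORT_MODES = {25: "plaintext/STARTTLS (often blocked)", 465: "implicit TLS", 587: "STARTTLS"}

def describe_smarthost(smarthost):
    """Split 'host:port' and describe the TLS mode usually associated with the port."""
    host, sep, port = smarthost.rpartition(":")
    if not sep or not port.isdigit():
        return f"{smarthost}: missing or invalid port (expected host:port)"
    mode = PORT_MODES.get(int(port), "non-standard port")
    return f"{host} port {port}: {mode}"

print(describe_smarthost("smtp.gmail.com:587"))  # smtp.gmail.com port 587: STARTTLS
print(describe_smarthost("smtp.gmail.com"))      # reports a missing port
```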
If using prometheus-operator's AlertmanagerConfig custom resource:
kubectl describe alertmanagerconfig -n monitoring
Check the Status field for errors. If using amtool to validate:
# Port-forward to Alertmanager
kubectl port-forward -n monitoring alertmanager-main-0 9093:9093
# Show the routing tree of the running instance
amtool config routes --alertmanager.url=http://localhost:9093
# Validate your config file
amtool check-config alertmanager.yaml
amtool will show validation errors in the new config before you apply it.
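The routing tree that `amtool config routes` prints can be reasoned about with a toy model. This is a simplified sketch of first-match-wins routing under my own assumptions, not Alertmanager's actual implementation (real matching also handles equality matchers, `continue`, grouping, and more):

```python
import re

# Toy routing tree mirroring the shape of alertmanager.yaml
tree = {
    "receiver": "null",
    "routes": [
        {"receiver": "email", "match_re": {"severity": "warning|critical"}},
    ],
}

def pick_receiver(route, labels):
    """Descend into the first child whose regex matchers all match (anchored)."""
    for child in route.get("routes", []):
        checks = child.get("match_re", {})
        if all(re.fullmatch(pat, labels.get(k, "")) for k, pat in checks.items()):
            return pick_receiver(child, labels)
    return route["receiver"]

print(pick_receiver(tree, {"severity": "critical"}))  # email
print(pick_receiver(tree, {"severity": "info"}))      # null
```

Note the use of `fullmatch`: Alertmanager anchors regex matchers, so `prod-.*` will not match `my-prod-1` unless you widen the pattern.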
After fixing the configuration:
# Update ConfigMap directly
kubectl create configmap alertmanager-config --from-file=alertmanager.yaml -n monitoring --dry-run=client -o yaml | kubectl apply -f -
# Or update via Helm (for kube-prometheus-stack)
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--set alertmanager.config.global.slack_api_url=... -n monitoring
# Force pod restart to reload config
kubectl delete pod -l app.kubernetes.io/name=alertmanager -n monitoring
The pod will recreate and load the updated configuration. Monitor the logs to confirm a successful startup.
For complex routing rules across multiple environments, consider using external tools like yamllint in CI/CD pipelines to catch errors before deployment. If using Helm charts, validate with helm template before installing: helm template prometheus prometheus-community/kube-prometheus-stack | kubectl apply --dry-run=client -f -. For multi-receiver setups (PagerDuty, Slack, email), test each receiver independently first, then layer in complex routing. In airgapped environments, verify all external SMTP/webhook endpoints are reachable from the cluster. For debugging, temporarily add a null receiver to test configuration parsing without sending alerts. Keep alertmanager.yaml in version control and use GitOps (ArgoCD/Flux) to manage configuration as code—this prevents manual drift and simplifies rollbacks.