A pod references a specific key within a ConfigMap that doesn't exist. ConfigMap keys are case-sensitive and must match exactly. Fix it by correcting the key names so the pod's references match the ConfigMap exactly, or by marking the reference as optional.
When a pod references a ConfigMap, it can either mount the entire ConfigMap as a volume (all keys become files) or reference specific keys for environment variables. If the pod references a key that doesn't exist in the ConfigMap's data section, Kubernetes cannot satisfy the reference. This is distinct from a missing ConfigMap itself—the ConfigMap exists, but it doesn't contain the specific key the pod is requesting. ConfigMap key names are case-sensitive and must match exactly, including underscores, hyphens, and dots.
Examine the ConfigMap's actual data:
kubectl get configmap app-config -n <namespace> -o yaml

The output shows all keys in the data section. Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: localhost
  DATABASE_PORT: "5432"
  APP_ENV: production

This ConfigMap has exactly three keys: DATABASE_HOST, DATABASE_PORT, and APP_ENV. Any other key reference will fail.
Alternatively, use:
kubectl get configmap app-config -n <namespace> -o jsonpath='{.data}'

This displays the keys without extra formatting, making exact names easier to verify.
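For the example ConfigMap above, the output would look roughly like this (kubectl prints the data map as JSON):

{"APP_ENV":"production","DATABASE_HOST":"localhost","DATABASE_PORT":"5432"}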
Examine how the pod references ConfigMap keys:
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A10 configMap

Look for:
Environment variable references:
env:
  - name: DB_HOST
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: DATABASE_HOST  # This key must exist in the ConfigMap

Bulk import (envFrom):
envFrom:
  - configMapRef:
      name: app-config  # All keys imported as env vars (invalid names skipped)

Volume projections:
volumes:
  - name: config
    configMap:
      name: app-config
      items:
        - key: DATABASE_HOST
          path: db-host.conf

Compare all key references with the keys shown in step 1, then confirm the failure in the pod's events, as shown below.
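The wording below is typical of what the kubelet reports for a missing key, though it may vary by Kubernetes version:

kubectl describe pod <pod-name> -n <namespace>

# Typical event for a missing key:
#   Warning  Failed  ...  Error: couldn't find key database_host in ConfigMap <namespace>/app-config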
Key names are case-sensitive. These are three different keys:
- database_host
- DATABASE_HOST
- Database_Host
If the ConfigMap has DATABASE_HOST but the pod references database_host, it will fail.
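A quick way to catch case-only mismatches is a case-insensitive, whole-line match against the ConfigMap's keys (a sketch assuming jq is installed):

# Prints the real key, if one exists under any capitalization
kubectl get configmap app-config -n <namespace> -o json \
  | jq -r '.data | keys[]' \
  | grep -ix 'database_host'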
Compare side-by-side:
# Show ConfigMap keys
echo "ConfigMap keys:"
kubectl get configmap app-config -n <namespace> -o json | jq '.data | keys'

# Show pod references
echo "Pod references:"
kubectl get pod <pod-name> -n <namespace> -o yaml | grep 'key:' | sed 's/.*key: //'

The keys must match exactly, including capitalization and punctuation.
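For an automated comparison, you can diff the two sorted key lists directly (a sketch assuming bash and jq; it only covers configMapKeyRef environment references, not envFrom or volume items):

diff \
  <(kubectl get configmap app-config -n <namespace> -o json \
      | jq -r '.data | keys[]' | sort) \
  <(kubectl get pod <pod-name> -n <namespace> -o json \
      | jq -r '.spec.containers[].env[]?.valueFrom.configMapKeyRef.key // empty' | sort -u)

Lines prefixed with > are keys the pod references that the ConfigMap does not define.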
Fix the pod manifest to reference correct key names:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-image
      env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_HOST  # Match ConfigMap key exactly (case-sensitive)
        - name: DB_PORT
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: DATABASE_PORT  # Not "database_port"
      volumeMounts:
        - name: config
          mountPath: /etc/config
  volumes:
    - name: config
      configMap:
        name: app-config
        items:
          - key: DATABASE_HOST  # Must exist in ConfigMap
            path: db-host.txt

Apply:
kubectl apply -f pod.yaml
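Once the pod is running, you can confirm the variables resolved inside the container (assuming the my-app pod from the manifest above):

kubectl exec my-app -n <namespace> -- env | grep -E '^DB_(HOST|PORT)='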
If the ConfigMap doesn't have the required keys, update the ConfigMap instead.

Using kubectl patch:
kubectl patch configmap app-config -n <namespace> -p '{"data":{"NEW_KEY":"value"}}'

Using kubectl edit:
kubectl edit configmap app-config -n <namespace>

Add keys to the data section (no base64 encoding needed for ConfigMaps).
Using a YAML update:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  DATABASE_HOST: localhost
  DATABASE_PORT: "5432"
  APP_ENV: production
  LOG_LEVEL: debug  # Add missing key

Apply:

kubectl apply -f configmap.yaml
If the ConfigMap was created incorrectly (e.g., from a file with the wrong structure), delete and recreate it:

kubectl delete configmap app-config -n <namespace>

Recreate it with the correct keys:
kubectl create configmap app-config \
  --from-literal=DATABASE_HOST=localhost \
  --from-literal=DATABASE_PORT=5432 \
  --from-literal=APP_ENV=production \
  -n <namespace>

Verify:
kubectl get configmap app-config -n <namespace> -o yaml
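If you keep these settings in a dotenv-style file, the same ConfigMap can be recreated in one step (a sketch assuming a local app.env file containing KEY=value lines):

# app.env contains lines like DATABASE_HOST=localhost
kubectl create configmap app-config --from-env-file=app.env -n <namespace>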
When using envFrom to import all ConfigMap keys as environment variables:

envFrom:
  - configMapRef:
      name: app-config

Kubernetes automatically imports every key as an environment variable. However, if a key name is not valid as an environment variable name (it contains hyphens, starts with a digit, etc.), that key is skipped; the only trace is an event on the pod. Example:
ConfigMap with keys:
- DATABASE_HOST → imported as env var ✓
- database-port → skipped (contains hyphen) ✗
- 2FAST → skipped (starts with digit) ✗
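You can surface the skipped keys from the pod's events (the reason string below matches recent kubelet versions; treat the exact wording as an assumption for your cluster):

kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name> \
  | grep -i 'InvalidEnvironmentVariableNames'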
If you need these keys as environment variables, reference them explicitly:
env:
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: database-port

If the key is optional, mark it as optional to allow pod startup:
env:
  - name: OPTIONAL_SETTING
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: optional-key
        optional: true  # Pod starts even if key is missing

With optional: true, the pod starts without that environment variable. The application must have defaults or handle the missing value.
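The optional field also exists at the volume level; a sketch marking the whole ConfigMap volume optional so the pod can start even if the ConfigMap itself is missing:

volumes:
  - name: config
    configMap:
      name: app-config
      optional: true  # Pod starts even if this ConfigMap doesn't exist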
- Key names are case-sensitive and whitespace-aware. Avoid spaces, leading hyphens, or special characters in key names unless necessary.
- When creating ConfigMaps from files with --from-file, the filename becomes the key and the file contents become the value; useful for complex config (JSON, YAML, INI).
- ConfigMap data is limited to 1 MiB total; split large configs into multiple ConfigMaps.
- When using volume mounts with items[], only listed keys are projected as files; unlisted keys are ignored.
- ConfigMap updates propagate to volume-mounted files in running pods within about a minute (the kubelet sync period); environment variables are never refreshed. The pod does not automatically reload its config: the application must handle reloading, or the pod must be recreated.
- For immutable configs, mark the ConfigMap as immutable (immutable: true) to prevent accidental changes; see the sketch below.
- When using envFrom, check pod events for keys skipped due to invalid variable names.
- For validation, run kubectl apply --dry-run=client before deploying to catch key mismatches early.
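A minimal sketch of an immutable ConfigMap (once applied, its data can no longer be changed; delete and recreate it to modify values):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: localhost
immutable: true  # The API server rejects further updates to data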