A pod references a Secret that doesn't exist or is in a different namespace. Secrets are namespace-scoped resources; pods can only reference Secrets in their own namespace. Fix by creating the missing Secret in the correct namespace or using External Secrets Operator for externalized secret management.
Kubernetes Secrets are namespace-scoped configuration objects used to store sensitive data like database credentials, API keys, and TLS certificates. When a pod spec references a Secret by name, Kubernetes expects to find that Secret in the pod's namespace. If the Secret doesn't exist or is in a different namespace, the kubelet cannot mount the secret volume or inject it as an environment variable. There is no built-in mechanism to reference a Secret across namespaces; each namespace is isolated by design. For externalized secret management, you can use the External Secrets Operator or the Secrets Store CSI driver to pull secrets from external vaults.
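For reference, here is a minimal sketch of the two ways a pod typically references a Secret; the names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-credentials  # must exist in this pod's namespace
              key: password
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: database-credentials  # same-namespace Secret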
List all Secrets in the namespace where the pod is running:
kubectl get secrets -n <pod-namespace>
kubectl describe secret <secret-name> -n <pod-namespace>

The Secret name and namespace must match exactly what's referenced in the pod spec. If the Secret doesn't appear, move to the next step.
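The failure is also visible in the pod's events. A missing secret volume typically surfaces as a FailedMount event such as MountVolume.SetUp failed for volume "..." : secret "..." not found, while a missing env var reference shows CreateContainerConfigError. To pull the events for the pod directly:

kubectl get events -n <pod-namespace> --field-selector involvedObject.name=<pod-name>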
Examine the pod spec to see what Secret it's referencing:
kubectl describe pod <pod-name> -n <pod-namespace>
kubectl get pod <pod-name> -n <pod-namespace> -o yaml | grep -A5 -i secret

Look for:
- Secret name in spec.volumes[].secret.secretName (for secret volumes)
- Secret name in spec.containers[].env[].valueFrom.secretKeyRef.name (for env vars)
Verify the pod is in the correct namespace.
kubectl config get-contexts  # Check current context and its default namespace
kubectl get pods -A | grep <pod-name>  # Find all pods with that name

If the Secret doesn't exist, create it. Choose the appropriate Secret type:
Generic Secret (key-value pairs):
kubectl create secret generic <secret-name> \
  --from-literal=username=admin \
  --from-literal=password=mypassword \
  -n <pod-namespace>

From a file:
kubectl create secret generic <secret-name> \
  --from-file=config.json \
  -n <pod-namespace>

Docker registry credentials:
kubectl create secret docker-registry <secret-name> \
  --docker-server=docker.io \
  --docker-username=<username> \
  --docker-password=<password> \
  -n <pod-namespace>
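Note that a docker-registry Secret is consumed through imagePullSecrets rather than a volume or environment variable:

spec:
  imagePullSecrets:
    - name: <secret-name>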
TLS certificate:

kubectl create secret tls <secret-name> \
  --cert=path/to/cert.crt \
  --key=path/to/private.key \
  -n <pod-namespace>

Or use a YAML manifest:
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
  namespace: production
type: Opaque
stringData:
  username: admin
  password: secret-password

Then apply: kubectl apply -f secret.yaml
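To confirm the Secret was created with the expected keys (using the example names above; stringData values are stored base64-encoded under data):

kubectl get secret database-credentials -n production
kubectl get secret database-credentials -n production -o jsonpath='{.data.username}' | base64 -d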
If the Secret exists but lives in a different namespace, remember that Secrets are namespace-isolated. You have two options:
Option A: Copy the Secret to the pod's namespace
kubectl get secret <secret-name> -n <source-namespace> -o yaml | \
  sed 's/namespace: <source-namespace>/namespace: <pod-namespace>/' | \
  kubectl apply -f -

Note that the exported manifest includes server-generated fields (resourceVersion, uid, creationTimestamp) that can make the apply fail on creation; strip them if you hit errors, as in the sketch below. Then verify in the new namespace:

kubectl get secrets -n <pod-namespace>
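A minimal variant that strips the server-generated fields first, assuming jq is installed:

kubectl get secret <secret-name> -n <source-namespace> -o json | \
  jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.managedFields) | .metadata.namespace = "<pod-namespace>"' | \
  kubectl apply -f -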
Option B: Move the pod to the Secret's namespace (if the pod doesn't need to run where it currently is)
kubectl get pod <pod-name> -n <pod-namespace> -o yaml | \
  sed 's/namespace: <pod-namespace>/namespace: <source-namespace>/' | \
  kubectl apply -f -
kubectl delete pod <pod-name> -n <pod-namespace>

If the pod is managed by a controller such as a Deployment, change the namespace in the controller's manifest instead of moving the pod directly.

After creating the Secret, the pod should start automatically:
kubectl get pod <pod-name> -n <pod-namespace>
kubectl describe pod <pod-name> -n <pod-namespace>

The pod status should change from ContainerCreating (or CreateContainerConfigError, for env var references) to Running. Check logs to confirm:
kubectl logs <pod-name> -n <pod-namespace>

If it's still failing, check for other issues, such as a missing key within the Secret.
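To see which keys the Secret actually contains (a key referenced by secretKeyRef but absent from the Secret also blocks container startup):

kubectl get secret <secret-name> -n <pod-namespace> -o jsonpath='{.data}'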
If secrets are stored in an external vault (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault), use External Secrets Operator:
Install via Helm:
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets-system --create-namespace

Create a SecretStore (points to the external vault):
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secrets
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa

Create an ExternalSecret (pulls the secret from the vault):
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets
    kind: SecretStore
  target:
    name: database-credentials  # Name of the Kubernetes Secret to create
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: prod/db/username
    - secretKey: password
      remoteRef:
        key: prod/db/password

External Secrets Operator automatically creates the Kubernetes Secret from the vault data.
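To confirm the sync (using the example names above), check the ExternalSecret's status and the resulting Secret:

kubectl get externalsecret database-credentials -n production
kubectl get secret database-credentials -n production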
Secrets are stored unencrypted in etcd by default; enable encryption at rest to protect sensitive data (see the sketch below). A few additional practices:
- Use RBAC to restrict Secret access; not all service accounts should read all Secrets.
- For multi-namespace secret sharing without External Secrets, use a namespace-per-tenant model and handle replication in your deployment pipeline (Helm, Kustomize).
- Sealed Secrets provides GitOps-friendly encrypted secrets that are safe to commit to version control.
- External Secrets Operator supports 40+ external vaults and can aggregate secrets from multiple sources.
- Rotation: ensure external secret backends support TTL/rotation; External Secrets Operator can be configured to refresh periodically (refreshInterval).
- For CI/CD, inject secrets at pod creation time rather than committing them to version control.
- Binary payloads (certificates, key material) go in an Opaque Secret's data field, base64-encoded; note that Secrets are capped at 1 MiB.
- Monitor Secret access via audit logs; most breaches involve leaked secrets left in logs or config files.
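For the encryption-at-rest point, a minimal sketch of an EncryptionConfiguration, passed to the kube-apiserver via --encryption-provider-config; the key value is a placeholder:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}  # fallback so data written before encryption stays readable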