This error occurs when Kubernetes cannot authenticate with a container registry to pull a private image. Fix it by creating imagePullSecrets with valid registry credentials and referencing them in your pod spec or service account.
The "Failed to pull image: unauthorized" error indicates that the kubelet on your Kubernetes node attempted to pull an image from a container registry but failed authentication. This manifests as ErrImagePull initially, then ImagePullBackOff as Kubernetes retries with exponential backoff. Container registries require authentication for private images. When a pod references a private image, Kubernetes needs valid credentials stored in an imagePullSecret. Without proper credentials—or with expired/incorrect ones—the registry rejects the pull request with an "unauthorized" response. This error commonly occurs with Docker Hub private repositories, AWS ECR, Google GCR, Azure ACR, and self-hosted registries. Each has specific authentication requirements and token expiration policies that must be managed.
Create a secret with your registry credentials:
kubectl create secret docker-registry regcred \
--docker-server=<REGISTRY_URL> \
--docker-username=<USERNAME> \
--docker-password=<PASSWORD> \
--docker-email=<EMAIL> \
--namespace=<YOUR_NAMESPACE>
Common registry servers:
- Docker Hub: https://index.docker.io/v1/
- AWS ECR: <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
- Google GCR: gcr.io
- Azure ACR: <REGISTRY_NAME>.azurecr.io
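For example, a secret for a Docker Hub private repository might look like the following sketch (the username, token, and email values are placeholders; per the Docker Hub tip further down, use a personal access token rather than your account password):
kubectl create secret docker-registry regcred \
--docker-server=https://index.docker.io/v1/ \
--docker-username=<DOCKERHUB_USERNAME> \
--docker-password=<DOCKERHUB_ACCESS_TOKEN> \
--docker-email=<EMAIL> \
--namespace=<YOUR_NAMESPACE>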
Verify the secret was created:
kubectl get secret regcred -n <YOUR_NAMESPACE>
Reference the secret in your pod or deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: app
        image: myregistry.azurecr.io/myapp:latest
Apply and verify:
kubectl apply -f deployment.yaml
kubectl get pod <POD_NAME> -o jsonpath='{.spec.imagePullSecrets}'
Instead of adding imagePullSecrets to every pod, patch the service account:
kubectl patch serviceaccount default \
-n <YOUR_NAMESPACE> \
-p '{"imagePullSecrets": [{"name": "regcred"}]}'Verify the patch:
kubectl get serviceaccount default -n <YOUR_NAMESPACE> -o yaml
All new pods using the default service account will automatically inherit the imagePullSecrets.
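Pods created before the patch keep their original spec, so restart existing workloads to pick up the secret. For a Deployment, one way to do that (the deployment name is a placeholder):
kubectl rollout restart deployment <DEPLOYMENT_NAME> -n <YOUR_NAMESPACE>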
Decode and inspect the secret to ensure correct structure:
kubectl get secret regcred \
--output="jsonpath={.data.\.dockerconfigjson}" \
| base64 --decode
Expected output structure:
{
  "auths": {
    "myregistry.azurecr.io": {
      "username": "myuser",
      "password": "mypassword",
      "auth": "base64-encoded-credentials"
    }
  }
}
Check secret type (must be kubernetes.io/dockerconfigjson):
kubectl get secret regcred -o jsonpath='{.type}'
Get detailed error messages from the pod:
kubectl describe pod <POD_NAME>
Look for events showing:
- "Failed to pull image": Check if secret exists and credentials are correct
- "Repository does not exist": Verify image name/tag is correct
- "No pull access": Missing or invalid imagePullSecret
Save full details for analysis:
kubectl describe pod <POD_NAME> > pod-debug.txt
kubectl get pod <POD_NAME> -o yaml > pod-spec.yaml
If you've already run docker login locally:
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
Generate YAML for version control:
kubectl create secret docker-registry regcred \
--docker-server=<REGISTRY> \
--docker-username=<USER> \
--docker-password=<PASS> \
--dry-run=client \
-o yaml > secret.yaml
AWS ECR Token Expiration: ECR tokens expire after 12 hours. For long-running clusters, set up a CronJob to refresh credentials every 6 hours, or use tools like k8s-ecr-login-renew.
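As a sketch of what such a refresh step can run (assuming the AWS CLI is installed and configured; the account ID, region, and namespace are placeholders), delete and recreate the secret with a fresh ECR token:
kubectl delete secret regcred -n <YOUR_NAMESPACE> --ignore-not-found
kubectl create secret docker-registry regcred \
--docker-server=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com \
--docker-username=AWS \
--docker-password="$(aws ecr get-login-password --region <REGION>)" \
--namespace=<YOUR_NAMESPACE>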
GCR on GKE: GKE nodes pull from GCR and Artifact Registry using the node's service account, so manual secrets and credential rotation are usually unnecessary; just make sure that service account has read access to the registry.
Azure ACR with Managed Identity: On AKS, enable Managed Identity and assign it the AcrPull role on the registry to avoid manual secret management entirely.
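One way to wire this up, assuming the Azure CLI and an existing AKS cluster (cluster, resource group, and registry names are placeholders), is to attach the registry to the cluster, which grants the kubelet identity AcrPull on it:
az aks update --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --attach-acr <REGISTRY_NAME>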
Docker Hub Rate Limits: Authenticated pulls get 200 pulls/6 hours vs 100 for anonymous. Use a personal access token, not your password.
Namespace Isolation: Secrets cannot be shared across namespaces. Either create the secret in each namespace or use a controller like kubernetes-reflector to auto-sync secrets.
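If you only need the same secret in a couple of namespaces, one quick option (a sketch that assumes jq is installed; source and target namespaces are placeholders) is to copy it while stripping the namespace-specific metadata:
kubectl get secret regcred -n <SOURCE_NAMESPACE> -o json \
| jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.managedFields)' \
| kubectl apply -n <TARGET_NAMESPACE> -f -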