The ErrImagePull error occurs when kubelet fails to pull a container image from a registry. Common causes include incorrect image names, missing authentication credentials, network connectivity issues, or image availability problems.
ErrImagePull is a Kubernetes error event that occurs when the kubelet (node agent) attempts to pull a container image from a container registry and fails on the first attempt. This error indicates that the image specified in your Pod definition could not be retrieved from the registry for one of several reasons.

When a Pod first fails to pull an image, Kubernetes records an ErrImagePull event. If the kubelet continues to retry pulling the image (which it does automatically with exponential backoff), subsequent retry failures are recorded as ImagePullBackOff events. The key difference is that ErrImagePull happens on the initial pull attempt, while ImagePullBackOff indicates ongoing retry attempts with increasingly longer delays (5s, 10s, 20s, and so on, up to 5 minutes between retries).

The image pull process requires: a correct image name/tag, valid authentication if the registry is private, network connectivity to the registry endpoint, and sufficient disk space on the node to store the pulled image.
Use kubectl describe to get detailed information about the pull failure:
kubectl describe pod <pod-name> -n <namespace>

Look at the Events section for messages like:
- "Repository does not exist" - image not found in registry
- "unauthorized: authentication required" - credentials missing/invalid
- "connection refused" or "operation timed out" - network connectivity issue
- "no space left on device" - node disk is full
The exact error message will point to the root cause.
Check if the image exists and is spelled correctly:
# List the image in your pod spec
kubectl get pod <pod-name> -n <namespace> -o yaml | grep image:
# For public images, search Docker Hub
docker search <image-name>
# For private registries, use registry-specific commands
# Azure Container Registry
az acr repository list --name <registry-name>
# AWS ECR
aws ecr describe-repositories --region <region>
# Google Container Registry
gcloud container images list --repository=gcr.io/<project-id>

Verify the exact image name, registry server, and tag match what exists in the registry.
If pulling from a private registry (not Docker Hub), create a Kubernetes secret with registry credentials:
kubectl create secret docker-registry my-registry-secret \
--docker-server=<registry-server> \
--docker-username=<username> \
--docker-password=<password> \
--docker-email=<email> \
-n <namespace>

For a private Azure Container Registry:
kubectl create secret docker-registry acr-secret \
--docker-server=<registry-name>.azurecr.io \
--docker-username=<username> \
--docker-password=<password> \
-n <namespace>

For AWS ECR:
# Create secret from ECR credentials
kubectl create secret docker-registry ecr-secret \
--docker-server=<account-id>.dkr.ecr.<region>.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password --region <region>) \
-n <namespace>

Add the imagePullSecrets field to your Pod or Deployment spec:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: default
spec:
  imagePullSecrets:
    - name: my-registry-secret  # Must match the secret name created above
  containers:
    - name: my-container
      image: <registry-server>/<image-name>:<tag>

For a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: my-registry-secret
      containers:
        - name: my-app
          image: <registry-server>/<image-name>:<tag>

Apply the updated manifest:
kubectl apply -f deployment.yaml

Verify that the Kubernetes nodes can reach the registry server:
# SSH into a node or use kubectl debug to create a debug container
kubectl debug node/<node-name> -it --image=ubuntu
# Inside the debug container
apt-get update && apt-get install -y curl
curl -v https://<registry-server>
# Test DNS resolution
nslookup <registry-server>
# Test connectivity on the registry port
nc -zv <registry-server> 443

If the registry is unreachable:
- Check firewall rules and security groups
- Verify the registry server URL is correct
- Check that egress from the node to the registry is allowed (pod-level NetworkPolicies do not affect kubelet image pulls, which use the node's network)
- Verify DNS resolution works for the registry domain
Check the available disk space on the node:
kubectl top nodes
kubectl describe node <node-name>

Look at the Conditions section for DiskPressure and the Allocatable section for ephemeral-storage. If the disk is full:
# SSH into the node
ssh <node-ip>
# Check disk usage
df -h
# Clean up old images and containers
docker image prune -a
docker container prune
# For containerd (Kubernetes-managed images live in the k8s.io namespace)
ctr -n k8s.io images ls
ctr -n k8s.io images rm <image-ref>

After fixing the root cause, force Kubernetes to pull the image again by recreating the pod:
# Delete the pod to trigger a new pull attempt
kubectl delete pod <pod-name> -n <namespace>
# Or, for Deployments, trigger a rollout restart
kubectl rollout restart deployment/<deployment-name> -n <namespace>
# Watch the pod creation
kubectl get pods <pod-name> -n <namespace> --watch

Once the pod reaches "Running" status, the image pull was successful:
kubectl describe pod <pod-name> -n <namespace>

Verify no "ErrImagePull" or "ImagePullBackOff" events appear in the Events section.
When Kubernetes encounters an ErrImagePull, it automatically enters an exponential backoff retry loop: the delay starts at 5 seconds and doubles after each failure (10s, 20s, 40s, ...), capped at 5 minutes between retries. This is why pods with transient network issues may eventually recover even without manual intervention.
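The schedule above can be sketched as a small loop. This is illustrative only; the actual timing is internal kubelet behavior, not something you run yourself:

```shell
# Illustrative sketch of the kubelet's image pull backoff:
# the delay doubles after each failed attempt and is capped at 300s (5 minutes).
delay=5
for attempt in 1 2 3 4 5 6 7 8; do
  echo "retry ${attempt} after ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt 300 ]; then delay=300; fi
done
```

Running this prints delays of 5s, 10s, 20s, 40s, 80s, 160s, and then 300s for every subsequent retry.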
Image registry authentication is critical for private registries. Kubernetes stores registry credentials in Secrets of type kubernetes.io/dockerconfigjson, whose .dockerconfigjson key holds a base64-encoded Docker config file. For CI/CD systems, attach imagePullSecrets to a ServiceAccount instead of hardcoding credentials in every pod spec.
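Under the hood, `kubectl create secret docker-registry` generates a Docker config JSON like the one reconstructed below (the server and credentials here are placeholder values, not real ones):

```shell
# Reconstruct the .dockerconfigjson payload that kubectl create secret
# docker-registry builds from --docker-server/--docker-username/--docker-password.
user="myuser"; pass="mypass"; server="registry.example.com"

# The "auth" field is base64("username:password")
auth=$(printf '%s:%s' "$user" "$pass" | base64)

dockerconfigjson=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}' \
  "$server" "$user" "$pass" "$auth")
echo "$dockerconfigjson"

# To inspect an existing secret in a cluster:
#   kubectl get secret my-registry-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```

To attach the secret to a ServiceAccount so every pod using that account inherits it, you can use `kubectl patch serviceaccount default -n <namespace> -p '{"imagePullSecrets":[{"name":"my-registry-secret"}]}'`.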
Different registries have different authentication methods. Docker Hub uses basic auth (username:password). Azure Container Registry supports Azure CLI authentication. AWS ECR requires temporary tokens refreshed every 12 hours. Google Container Registry uses service account keys. Ensure your authentication method matches your registry type.
The image reference format matters: <registry-server>/<namespace>/<repository>:<tag>. For Docker Hub public images, the registry can be omitted (defaults to docker.io). For private registries, always include the full server URL.
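As a sketch of these defaulting rules, the hypothetical helper below expands a short reference into a fully qualified one. The function name and heuristics are illustrative, not part of any Kubernetes tooling, and they skip edge cases such as digest-only references:

```shell
# Hypothetical helper: expand a short image reference to its fully qualified
# form, mirroring how runtimes default the registry and namespace for Docker Hub.
normalize_image() {
  ref="$1"
  # A first path component containing "." or ":" (or "localhost") is a registry host.
  case "${ref%%/*}" in
    *.*|*:*|localhost) ;;                       # registry host already present
    *) case "$ref" in
         */*) ref="docker.io/$ref" ;;           # user/repo -> docker.io/user/repo
         *)   ref="docker.io/library/$ref" ;;   # repo -> docker.io/library/repo
       esac ;;
  esac
  # Default the tag to :latest when neither a tag nor a digest is present.
  case "${ref##*/}" in
    *:*|*@*) ;;
    *) ref="$ref:latest" ;;
  esac
  echo "$ref"
}

normalize_image nginx          # docker.io/library/nginx:latest
normalize_image myorg/app:v1   # docker.io/myorg/app:v1
```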
Image digests (SHA256) provide stronger guarantees than tags. Use image digests in production to ensure the exact image is pulled: image: <registry>/<image>@sha256:abc123... This prevents issues where tags are reassigned or changed.
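In a Pod spec, a digest-pinned reference sits in the same image field as a tagged one (all values below are placeholders):

```yaml
spec:
  containers:
    - name: my-app
      # Pinned by digest: the exact image content is pulled even if tags move
      image: <registry-server>/<image-name>@sha256:<digest>
```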
The kubelet's image pull retry schedule is not configurable per pod: the exponential backoff and its 5-minute cap are fixed kubelet behavior. The imagePullPolicy field (Always, IfNotPresent, Never) controls whether a pull is attempted at all, not how it is retried. Deleting the pod resets the backoff and triggers an immediate new pull attempt.