Deployment image pull fails due to invalid image references, missing credentials, network issues, or registry problems. Fix by verifying image name/tag, checking imagePullSecrets, testing network connectivity, and validating registry access.
When Kubernetes tries to create a pod, the kubelet must pull the container image from a registry. Image pull errors occur when:

1. **Wrong image reference**: the image doesn't exist or the tag is incorrect
2. **No credentials**: a private registry requires authentication
3. **Network blocked**: a firewall or DNS failure prevents reaching the registry
4. **Registry unavailable**: the registry is down or rate-limiting
5. **Wrong platform**: the image is built for a different CPU architecture

The pod enters the ErrImagePull (error) or ImagePullBackOff (retrying with back-off) state, preventing the deployment from progressing.
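The back-off behavior can be sketched numerically; a minimal illustration assuming the kubelet's documented defaults (10s initial delay, doubling on each failure, capped at 300s):

```shell
# Simulate the kubelet's image-pull back-off schedule (assumed defaults:
# 10s initial delay, doubling on each failed pull, capped at 300s).
delay=10
schedule=""
for attempt in 1 2 3 4 5 6; do
  schedule="$schedule $delay"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done
echo "retry delays (s):$schedule"   # → retry delays (s): 10 20 40 80 160 300
```

This is why a pod can sit in ImagePullBackOff for minutes even after the underlying cause is fixed; deleting the pod resets the back-off.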
Get detailed error information:
# Check pod status:
kubectl describe pod -n <namespace> <pod-name>
# Look for:
# - Failed to pull image ...
# - Error response from daemon
# - Reason: ImagePullBackOff
# Check image reference in deployment:
kubectl get deployment -n <namespace> <name> -o yaml | grep -A 3 "image:"
# Check which registry is being used:
kubectl get pod -n <namespace> <pod-name> -o jsonpath='{.spec.containers[0].image}'

Check if the image actually exists:
# For Docker Hub:
curl -s https://hub.docker.com/v2/repositories/<user>/<image>/tags | jq '.results[].name'
# For Docker Hub with specific tag:
docker pull <username>/<image>:<tag>
# For private registry:
podman login <registry>
podman pull <registry>/<image>:<tag>
# Or test from pod:
kubectl run -it --rm debug --image=alpine -- sh
apk add curl
curl -u <user>:<token> https://<registry>/v2/<image>/manifests/<tag>

Correct the image name if it's wrong:
# Find the correct image reference:
# Format: [registry/]image:tag
# Common examples:
# - nginx:latest (Docker Hub)
# - gcr.io/project/image:v1.0
# - registry.example.com:5000/myapp:latest
# Update deployment:
kubectl set image deployment/<name> <container>=<correct-image> -n <namespace>
# Or edit directly:
kubectl edit deployment -n <namespace> <name>
# Change container.image field
# Verify the change:
kubectl get deployment -n <namespace> <name> -o yaml | grep image:

New pods will pull the corrected image.
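The `[registry/]image:tag` format above can be parsed mechanically, which helps spot a malformed reference. A sketch: `parse_image` is a hypothetical helper whose defaulting rules mirror container-runtime behavior (registry defaults to docker.io, tag to latest):

```shell
# Split an image reference into registry, repository, and tag.
# parse_image is an illustrative helper, not part of any CLI.
parse_image() {
  ref=$1
  # The tag is after the last ':' only when that ':' follows the last '/'
  # (a ':' before the '/' is a registry port, as in registry:5000/app).
  case "${ref##*/}" in
    *:*) tag=${ref##*:}; name=${ref%:*} ;;
    *)   tag=latest;     name=$ref ;;
  esac
  # The first path segment is a registry only if it looks like a host
  # (contains '.' or ':', or is "localhost"); otherwise assume Docker Hub.
  case "${name%%/*}" in
    *.*|*:*|localhost) registry=${name%%/*}; repo=${name#*/} ;;
    *)                 registry=docker.io;   repo=$name ;;
  esac
  echo "$registry $repo $tag"
}

parse_image nginx:latest                            # → docker.io nginx latest
parse_image registry.example.com:5000/myapp:latest  # → registry.example.com:5000 myapp latest
```

If the registry, repository, or tag that falls out is not what you expected, the reference in the deployment is the problem.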
If using a private registry, set up authentication:
# Create Docker config credentials:
kubectl create secret docker-registry myregistry \
--docker-server=<registry> \
--docker-username=<user> \
--docker-password=<token> \
--docker-email=<email> \
-n <namespace>
# Or for GCR with a service-account JSON key:
kubectl create secret docker-registry myregistry \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/key.json)" \
-n <namespace>

Then add to deployment:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        - name: app
          image: registry.example.com/app:v1

Check if credentials are working:
# Get secret contents:
kubectl get secret <secret-name> -n <namespace> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d | jq .
# Test credentials manually:
kubectl run -it --rm debug --image=alpine -- sh
# Inside pod, test registry:
apk add curl
curl -u <user>:<password> https://<registry>/v2/_catalog

If credentials are wrong, delete and recreate the secret.
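What the pull secret actually stores can be reproduced locally; a sketch that rebuilds the .dockerconfigjson payload `kubectl create secret docker-registry` generates (registry and credential values are placeholders):

```shell
# Rebuild the .dockerconfigjson payload stored in a docker-registry secret.
# REGISTRY, REG_USER, and REG_PASS are placeholder values.
REGISTRY=registry.example.com
REG_USER=user
REG_PASS=password
# The "auth" field is base64("user:password")
AUTH=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64)
printf '{"auths":{"%s":{"username":"%s","password":"%s","auth":"%s"}}}\n' \
  "$REGISTRY" "$REG_USER" "$REG_PASS" "$AUTH"
```

Comparing this against the decoded secret from `kubectl get secret` quickly shows whether a wrong password or registry hostname was baked into the secret.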
Verify network can reach the registry:
# From a node, test:
ping <registry-host>
nslookup <registry-host>
curl -v https://<registry>/v2/
# From a pod:
kubectl run -it --rm debug --image=alpine -- sh
apk add curl
curl -v https://registry-1.docker.io/v2/
curl -v https://<private-registry>/v2/
# Check firewall rules:
# - Port 443 (HTTPS) must be open
# - No corporate proxies blocking
# - DNS must resolve the registry hostname

If connectivity fails, contact the network team to unblock the registry.
Ensure image matches node architecture:
# Check node architecture:
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'
# Check image architecture:
docker image inspect <image> | jq '.[0].Architecture'
# Or for multi-arch images:
docker manifest inspect <image>:tag | jq '.manifests[].platform'
# If mismatch, pull the matching platform variant or use an arch-specific repo:
# - docker pull --platform linux/arm64 ubuntu:latest
# - arm64v8/ubuntu, arm32v7/ubuntu (Docker Hub arch-specific repositories)
# Update the deployment with the correct image:
kubectl set image deployment/<name> <container>=<arch-specific-image> -n <namespace>

Confirm the image pulls successfully:
# Force pod recreation:
kubectl rollout restart deployment -n <namespace> <name>
# Watch for image pull:
kubectl get pods -n <namespace> -w
# Check pod events:
kubectl describe pod -n <namespace> <new-pod-name> | tail -20
# Verify pod is Running:
kubectl get pods -n <namespace> | grep <deployment>
# Check actual image used:
kubectl get pods -n <namespace> <pod-name> -o jsonpath='{.spec.containers[0].image}'

Pod should transition from ContainerCreating → Running.
### Image Pull Policies
Control when images are pulled:
imagePullPolicy: Always        # Always pull (default for :latest)
imagePullPolicy: IfNotPresent  # Pull only if not cached on the node
imagePullPolicy: Never         # Never pull (use local image only)

For reproducibility, explicit tags with IfNotPresent work well.
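For fully reproducible pulls, a deployment can pin an immutable reference; a sketch with placeholder names (pinning by digest is the strictest option):

```yaml
# Sketch: avoid :latest with Always; pin an explicit tag (or a digest) so
# restarts never silently pull a different image. Names are placeholders.
spec:
  template:
    spec:
      containers:
        - name: app
          image: registry.example.com/app:v1.2.3   # or app@sha256:<digest>
          imagePullPolicy: IfNotPresent
```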
### Private Registry Setup
For self-hosted registries (Harbor, Nexus):
# Create secret with base64 auth:
echo -n user:password | base64
# Creates: dXNlcjpwYXNzd29yZA==
# Use in deployment:
imagePullSecrets:
  - name: private-registry

### Multi-arch Images
Modern best practice is multi-arch images:
FROM --platform=${BUILDPLATFORM} golang:latest AS builder
# ... build code ...
FROM ubuntu:latest
COPY --from=builder /app /app

Build with: docker buildx build --platform linux/amd64,linux/arm64 -t <image> --push .
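Until a multi-arch image is available, pods can also be kept off mismatched nodes; a sketch using the well-known kubernetes.io/arch node label:

```yaml
# Sketch: schedule only onto nodes whose CPU architecture matches the image,
# via the standard kubernetes.io/arch label.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # or arm64
```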
### Rate Limiting
Docker Hub limits pulls (100/6h for anonymous):
# Solution: Authenticate even for public images
kubectl create secret docker-registry dockerhub \
--docker-server=docker.io \
--docker-username=<user> \
--docker-password=<token>

Or use a private registry mirror.
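Instead of adding imagePullSecrets to every deployment, the secret can be attached to the namespace's default ServiceAccount; a sketch assuming the dockerhub secret created above:

```yaml
# Sketch: pods using this ServiceAccount inherit the pull secret
# automatically, with no per-deployment imagePullSecrets needed.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
imagePullSecrets:
  - name: dockerhub
```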
### Image Pull Timeout
Slow registries may timeout:
spec:
  template:
    spec:
      containers:
        - name: app
          image: slow.registry.com/image:v1
          imagePullPolicy: IfNotPresent  # Reduces pull frequency

Also check network bandwidth and latency to the registry.