Pods stuck in ContainerCreating cannot start containers due to image pull issues, volume mount failures, or CNI network problems. Check events for the specific cause.
A pod stuck in ContainerCreating status means Kubernetes is trying to set up the container environment but cannot complete the process. This phase includes pulling images, mounting volumes, and configuring the container network. Unlike CrashLoopBackOff (where containers start then fail), ContainerCreating indicates the container hasn't started at all. The issue is in the infrastructure or configuration required to create the container sandbox.
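To find every affected pod at once, you can filter on phase (a convenience one-liner; pods in ContainerCreating report phase Pending, and the STATUS column distinguishes ContainerCreating from other pending states):

```shell
# Pods whose containers have not started yet report phase "Pending"
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```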
Get detailed information about what's blocking container creation:
```shell
kubectl describe pod <pod-name>
```

Look at the Events section for messages like:
- "pulling image" - Image download in progress or stuck
- "FailedMount" - Volume mount issue
- "FailedCreatePodSandBox" - Network/CNI problem
- "configmap not found" - Missing ConfigMap
Events appear in chronological order; recent events show current blockers.
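To cut straight to the blockers, you can also query events directly, filtered to warnings for the affected pod (standard kubectl field selectors; add `-n <namespace>` if the pod is not in the current namespace):

```shell
# Show only Warning events for the pod, oldest first
kubectl get events \
  --field-selector type=Warning,involvedObject.name=<pod-name> \
  --sort-by=.lastTimestamp
```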
If events show image pull problems:
```shell
# Check if the image exists and is accessible
docker pull <image>:<tag>

# Verify image pull secrets exist
kubectl get secrets | grep -i registry
kubectl get pod <pod-name> -o yaml | grep imagePullSecrets
```

For large images, pulling can take several minutes. Check the node's container runtime:
```shell
# On the node
crictl images
crictl pull <image>
```

Volume issues are a common cause of pods stuck in ContainerCreating:
```shell
# Check PVC status
kubectl get pvc

# If the PVC is Pending, check why
kubectl describe pvc <pvc-name>

# Check PV availability
kubectl get pv
```

Common fixes:
- Create a PersistentVolume matching the PVC
- Install the required CSI driver
- Fix storage class configuration
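As an illustration, a minimal hostPath PersistentVolume that would satisfy a pending PVC might look like the sketch below. The name, size, path, and storage class are placeholders you must match to your PVC, and hostPath is only suitable for single-node test clusters:

```shell
# Minimal hostPath PV sketch; match storageClassName and size to the PVC
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data
EOF
```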
If events show sandbox or network errors:
```shell
# Check CNI plugin status
kubectl get pods -n kube-system | grep -E "calico|flannel|weave|cilium"

# Check node network conditions
kubectl describe node <node-name> | grep -A 10 Conditions
```

On the node:
```shell
# Check CNI configuration
ls /etc/cni/net.d/
cat /etc/cni/net.d/*.conf

# Restart CNI pods if needed
kubectl rollout restart daemonset <cni-daemonset> -n kube-system
```

Missing configuration resources block container creation:
```shell
# List ConfigMaps and Secrets in the namespace
kubectl get configmaps
kubectl get secrets

# Check what the pod expects
kubectl get pod <pod-name> -o yaml | grep -A 5 configMapRef
kubectl get pod <pod-name> -o yaml | grep -A 5 secretRef
```

Create missing resources before the pod can start.
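To confirm each referenced ConfigMap actually exists, you can cross-check the names the pod references. This sketch covers `envFrom` references; volume-mounted ConfigMaps can be checked the same way via `.spec.volumes[*].configMap.name`:

```shell
# List ConfigMap names the pod references via envFrom, then check each one
for cm in $(kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].envFrom[*].configMapRef.name}'); do
  kubectl get configmap "$cm" >/dev/null || echo "missing ConfigMap: $cm"
done
```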
Node-level issues can block all pods:
```shell
# Check node conditions
kubectl describe node <node-name>

# Check disk space on the node
ssh <node-ip> df -h

# Check the container runtime
ssh <node-ip> systemctl status containerd
ssh <node-ip> journalctl -u containerd -n 50
```

If the container runtime is unhealthy, restart it:
```shell
sudo systemctl restart containerd
```

On AWS EKS, ContainerCreating often indicates IP address exhaustion: each pod requires an IP from the VPC subnet. Check available IPs:
```shell
aws ec2 describe-subnets --subnet-ids <subnet-id> --query 'Subnets[].AvailableIpAddressCount'
```

Solutions include using smaller instance types (fewer IPs per node), enabling prefix delegation, or adding subnets.
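A related check is whether nodes have hit their per-node pod limit; on EKS the allocatable pod count is derived from the instance type's ENI/IP capacity:

```shell
# Compare allocatable pods per node against pods currently scheduled there
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods
kubectl describe node <node-name> | grep -A 5 "Non-terminated Pods"
```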
For GKE and AKS, similar IP exhaustion can occur with VPC-native clusters. Consider expanding the pod CIDR range.
Image pull timeouts on large images (>1GB) may require adjusting kubelet's image-pull-progress-deadline. For air-gapped environments, pre-pull images to nodes or use a local registry mirror.
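One simple way to pre-pull a large image onto every node is a throwaway DaemonSet (a sketch; the image reference is a placeholder, and the DaemonSet should be deleted once the image is cached on all nodes):

```shell
# DaemonSet that pulls the image on every node, then sleeps
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prepuller
spec:
  selector:
    matchLabels:
      app: image-prepuller
  template:
    metadata:
      labels:
        app: image-prepuller
    spec:
      containers:
        - name: prepull
          image: <large-image>:<tag>
          command: ["sleep", "infinity"]
EOF
```

Delete it afterwards with `kubectl delete daemonset image-prepuller`; the pulled image stays in the node's cache.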