CreateContainerError occurs when the container runtime fails to create the container. Unlike config errors, this indicates runtime failures like duplicate containers, volume issues, or permission problems.
CreateContainerError occurs when Kubernetes validates the pod manifest successfully but the container runtime (containerd, Docker) fails to actually create the container. This is different from CreateContainerConfigError, which happens earlier when the configuration itself is invalid. The distinction is important: CreateContainerConfigError means the pod spec references missing resources (ConfigMaps, Secrets). CreateContainerError means the spec is valid but something prevents the runtime from creating the container—like volume access issues, duplicate container names, or missing entrypoints.
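The two states look similar at a glance in `kubectl get pods` output. A quick way to pick out pods stuck in either one, sketched against illustrative output (the pod names are made up):

```shell
# Illustrative `kubectl get pods` output showing both states side by side
pods='NAME    READY   STATUS                       RESTARTS
app-1   0/1     CreateContainerConfigError   0
app-2   0/1     CreateContainerError         0'

# Print only pods stuck in either create-container state
echo "$pods" | awk '$3 ~ /^CreateContainer/ {print $1, $3}'
```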
Get details about the failure:
kubectl describe pod <pod-name>

Look for Events showing:
- "failed to create container"
- Specific error messages from the runtime
- Volume mount failures
- Permission denied errors
The error message usually indicates the exact problem.
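Event lines can be long, so it helps to extract just the runtime's error detail. A sketch against an illustrative event line (the exact message text varies by runtime and failure):

```shell
# Sample event line as `kubectl describe pod` might show it; message is illustrative
event='Warning  Failed  2m (x4 over 3m)  kubelet  Error: failed to create containerd task: OCI runtime create failed: exec: "/app/start.sh": no such file or directory'

# Pull out just the runtime error detail
echo "$event" | grep -o 'Error: .*'
```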
Old containers may not have been cleaned up:
# On the node, list all containers
crictl ps -a | grep <pod-name>
# Or with docker
docker ps -a | grep <pod-name>

Remove orphaned containers:
crictl rm <container-id>
# or
docker rm <container-id>

This often happens after node issues or runtime crashes.
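Exited containers are the usual orphans, so it helps to filter the listing down to them before removing anything. A minimal sketch against sample `crictl ps -a` output (the IDs and names are made up); the printed IDs are candidates for `crictl rm`:

```shell
# Sample `crictl ps -a` output, abbreviated to the relevant columns
ps_out='CONTAINER      STATE    NAME      POD
3f2a1b9c8d7e   Exited   myapp     myapp-7d4b9
a1b2c3d4e5f6   Running  sidecar   myapp-7d4b9'

# IDs of exited containers only -- pass these to `crictl rm`
echo "$ps_out" | awk 'NR > 1 && $2 == "Exited" {print $1}'
```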
If the image has no ENTRYPOINT/CMD and the pod spec has no command:
# Check image configuration
docker inspect <image> | grep -A 10 "Cmd\|Entrypoint"

Add a command to your pod spec:
spec:
  containers:
  - name: app
    image: myimage
    command: ["/app/start.sh"]
    # or
    args: ["--config", "/etc/config"]

Volume issues can prevent container creation:
# Check PV/PVC status
kubectl get pv,pvc
# Verify mount points on node
ssh <node-ip>
ls -la /var/lib/kubelet/pods/*/volumes/

Common issues:
- PV not bound to PVC
- NFS share inaccessible
- Host path doesn't exist
- SELinux blocking mount
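Of these, an unbound claim is the quickest to spot from kubectl output alone. A sketch against illustrative `kubectl get pvc` output (the claim names are hypothetical):

```shell
# Sample `kubectl get pvc` output, abbreviated
pvc_out='NAME       STATUS    VOLUME    CAPACITY
data-pvc   Pending
logs-pvc   Bound     pv-logs   10Gi'

# Claims not yet Bound will block creation of containers that mount them
echo "$pvc_out" | awk 'NR > 1 && $2 != "Bound" {print $1, $2}'
```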
If the runtime is malfunctioning:
# On the affected node
ssh <node-ip>
# For containerd
sudo systemctl restart containerd
# For docker (older clusters)
sudo systemctl restart docker
# Check status
sudo systemctl status containerd

This cleans up orphaned resources and resets runtime state.
Kubelet logs contain detailed runtime errors:
# On the node
journalctl -u kubelet -n 100 | grep -i error
# Or search for the specific pod
journalctl -u kubelet | grep <pod-name>

Look for:
- Permission denied errors
- Missing directory/file errors
- Runtime communication failures
These logs often reveal the root cause that pod events only summarize.
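Kubelet log lines are long, so extracting the portion after the error marker makes the root cause easier to spot. A sketch on an illustrative log line (the message text is made up but follows the usual shape):

```shell
# Sample kubelet journal line; the failure detail is illustrative
log='Oct 01 12:00:00 node1 kubelet[1234]: E1001 12:00:00.000000 1234 kuberuntime_manager.go:905] container start failed: CreateContainerError: failed to create container: mkdir /var/lib/containers: permission denied'

# Keep only the runtime failure detail
echo "$log" | grep -o 'CreateContainerError: .*'
```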
CreateContainerError can cascade across a node if the container runtime is unhealthy. If multiple pods on the same node show this error, focus on node-level debugging rather than individual pods.
For SELinux-enabled systems, container creation can fail with "permission denied" even when file permissions look correct. Check SELinux denials:
ausearch -m avc -ts recent

Working directory issues occur when the pod spec sets workingDir to a path that doesn't exist in the container image. Either remove the workingDir setting or ensure the directory exists in the image.
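For the workingDir case, the relevant field is a single line in the pod spec; a sketch (the image name and path are placeholders):

```yaml
spec:
  containers:
  - name: app
    image: myimage        # placeholder image
    workingDir: /app      # must already exist inside the image
```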
In rare cases, CreateContainerError results from kernel-level issues like cgroup configuration problems or resource controller failures. Check dmesg on the node for system-level errors.