The port 6443 in use error during kubeadm init means another process or container is already binding to the Kubernetes API server port. This typically happens from a previous cluster installation that wasn't fully cleaned up.
During its preflight checks, kubeadm init fails if port 6443 (the default API server port) is already bound by another process, since kubeadm cannot start its own API server there. The port is usually held by stale containers or leftover processes from a previous Kubernetes installation.
Check which process has the port:
sudo netstat -tulpn | grep 6443
# or
sudo lsof -Pi :6443
Note the process name and PID. Common blockers:
- Docker container (stale kube-apiserver)
- HAproxy/nginx (load balancer in HA setup)
- Previous kubelet process
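As a quick sketch, that check can be wrapped in a small helper. This version uses bash's /dev/tcp to attempt a local connection (an illustrative helper, not part of kubeadm; a successful connect means something is listening):

```shell
# Hypothetical helper: exit 0 if something accepts TCP connections on
# localhost at the given port. Bash-only: /dev/tcp is a bash feature.
port_in_use() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 6443; then
    echo "port 6443: in use"
else
    echo "port 6443: free"
fi
```

Connecting to 127.0.0.1 only approximates a bind check, but for a local API server port it usually matches what netstat/lsof report.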
If from previous cluster:
sudo kubeadm reset -f
sudo rm -rf /etc/kubernetes /var/lib/kubernetes /var/lib/kubelet /etc/cni/net.d ~/.kube
sudo docker rm -f $(sudo docker ps -aq) # Remove all containers
sudo iptables -F && sudo iptables -X # Flush firewall rules
This completely removes all Kubernetes state.
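The same cleanup can be collected into one script. Since every step is destructive, this sketch prints each command unless RUN=1 is set (the run wrapper and RUN variable are illustrative, not a kubeadm convention):

```shell
# Dry-run by default: echo each command; execute only when RUN=1.
run() {
    if [ "${RUN:-0}" = "1" ]; then
        "$@"
    else
        echo "+ $*"
    fi
}

run sudo kubeadm reset -f
run sudo rm -rf /etc/kubernetes /var/lib/kubernetes /var/lib/kubelet /etc/cni/net.d "$HOME/.kube"
# Quote the subshell so `docker ps` is not executed during a dry run:
run sh -c 'sudo docker rm -f $(sudo docker ps -aq)'
run sudo iptables -F
run sudo iptables -X
```

Review the printed commands first, then rerun with RUN=1 to actually execute them.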
If kubeadm reset didn't free the port:
# From netstat/lsof output, get the PID and kill it:
sudo kill -9 <PID>
# Or force kill all docker containers:
sudo docker rm -f $(sudo docker ps -aq)
Then verify the port is free:
sudo netstat -tulpn | grep 6443
# Should show nothing
Check if Docker/containerd is working:
sudo systemctl status docker
# or
sudo systemctl status containerd
If not running, start it:
sudo systemctl start docker
Test connectivity:
sudo docker ps
sudo ctr -n k8s.io containers list # For containerd; Kubernetes containers live in the k8s.io namespace
If using HAProxy/nginx for load balancing:
# HAProxy should forward to the API servers, not bind to 6443 itself:
frontend api-server
    bind <load-balancer-ip>:443 # NOT 6443
    default_backend api-servers

backend api-servers
    server master1 <master1-ip>:6443 check
    server master2 <master2-ip>:6443 check
    server master3 <master3-ip>:6443 check
Then run:
kubeadm init --control-plane-endpoint=<load-balancer-ip>:443
Note: the endpoint port (443) differs from the API server port (6443).
Start fresh after cleanup:
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--control-plane-endpoint=<control-plane-ip>:6443
If it still fails with port in use, verify the cleanup was complete:
sudo docker ps -a # Should be empty
sudo systemctl status etcd # Should not be running
The API server takes 20-30 seconds to become ready. Don't panic if it's initially unreachable:
# Monitor startup:
kubectl get nodes -w
# Or check logs:
sudo journalctl -u kubelet -f
sudo docker logs -f <api-server-container>
Give it 30-60 seconds after kubeadm init completes before testing kubectl commands.
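Instead of watching logs, you can poll the API server's /healthz endpoint until it answers. A sketch assuming curl is available, with a shortened retry window (something like 12 attempts x 5 seconds is more realistic after kubeadm init):

```shell
# Poll https://127.0.0.1:6443/healthz; -k skips TLS verification since the
# cluster CA is not in the system trust store.
up=0
for attempt in 1 2 3; do
    if curl -sk --max-time 2 https://127.0.0.1:6443/healthz >/dev/null; then
        up=1
        break
    fi
    sleep 1
done
if [ "$up" = "1" ]; then
    echo "API server is answering"
else
    echo "API server not reachable yet"
fi
```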
Kubernetes 1.24+ removed built-in Docker support (the dockershim). If you use Docker as the runtime, you need the cri-dockerd adapter:
# Check if installed:
which cri-dockerd
# Install if missing:
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
tar -xzf cri-dockerd-0.2.6.amd64.tgz
sudo mv cri-dockerd/cri-dockerd /usr/local/bin/
# Install the cri-docker.service and cri-docker.socket systemd units shipped in the same repo, enable them, then run kubeadm:
kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock
Without cri-dockerd, kubelet cannot start containers.
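Before running kubeadm, it's worth confirming the socket actually exists; a quick sketch (the path below is cri-dockerd's default; adjust it if you configured a different one):

```shell
# Check for the Unix socket created by cri-docker.socket/cri-docker.service.
SOCK=/var/run/cri-dockerd.sock
if [ -S "$SOCK" ]; then
    echo "cri-dockerd socket present: $SOCK"
else
    echo "cri-dockerd socket missing: $SOCK (is cri-docker.service running?)"
fi
```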
Run preflight checks separately:
sudo kubeadm init phase preflight
This shows all issues before trying full init. Fix each one:
- Ports: 6443, 10250, 10251, 10252, 10255, 2379, 2380
- Kernel modules: overlay, br_netfilter
- Swap: must be disabled
- System requirements: RAM, CPU
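The port portion of that checklist can be scanned in one loop, reusing a bash /dev/tcp connect test (an illustrative sketch; a successful local connection means the port is taken):

```shell
# Check each port kubeadm's preflight expects to be free on a control plane
# node. Bash-only: /dev/tcp is a bash feature.
for port in 6443 10250 10251 10252 10255 2379 2380; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
        echo "port $port: IN USE"
    else
        echo "port $port: free"
    fi
done
```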
Port conflicts are usually from incomplete cleanup of previous installations. kubeadm reset removes most artifacts but sometimes leaves Docker containers or stale processes behind. In CI/CD environments where kubeadm init runs inside containers, ensure privileged mode and proper volume mounts. In HA setups, the load balancer endpoint and the API server port are different: the LB forwards external connections on one port to API servers on port 6443. Docker Desktop's built-in Kubernetes also uses port 6443, so run a custom kubeadm cluster in a separate VM or container. WSL2 may have port conflicts with the Windows host; use network namespace isolation if needed.