Port conflicts occur when multiple services or pods attempt to bind to the same port on a node. This prevents pod startup and service exposure. Common with NodePort services, DaemonSets, and multi-host deployments.
In Kubernetes, port conflicts arise when:
1. Two services expose the same NodePort
2. A service's NodePort collides with a port already in use by a system service on the node
3. A DaemonSet pod using hostPort lands on a node where that port is already taken
4. Multiple pods requesting the same hostPort are scheduled onto the same node
Unlike ClusterIP-based service discovery, NodePort and hostPort require exclusive port bindings on the node itself, making conflicts possible.
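As a concrete illustration of the hostPort case (a minimal sketch; the names and the nginx image are placeholders), the two pods below both request hostPort 8080, so they can never run on the same node; on a single-node cluster the second stays Pending:
apiVersion: v1
kind: Pod
metadata:
  name: web-a
spec:
  containers:
  - name: web
    image: nginx            # placeholder image
    ports:
    - containerPort: 80
      hostPort: 8080        # binds port 8080 on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: web-b
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080        # same hostPort: cannot share a node with web-a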
Check the error in pod logs:
kubectl logs <pod-name> -n <namespace>
Look for the port number in the error message (e.g., ":8080").
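If the pod never started at all, there may be no logs yet; in that case the pod's events usually show the scheduling or binding failure (for example, no node with the requested host port free):
kubectl describe pod <pod-name> -n <namespace>
# Check the Events section at the bottom of the output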
For services, inspect the configuration:
kubectl get svc <service-name> -o yaml | grep -E "(port|targetPort|nodePort)"
kubectl describe svc <service-name>
Note the nodePort (usually 30000+) and the targetPort (the container port the service forwards to).
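To confirm which pods actually back the service and that the targetPort lines up with a real containerPort, checking the endpoints can help:
kubectl get endpoints <service-name> -n <namespace> -o wide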
SSH into the node and check port usage:
sudo netstat -tlnp | grep <port>
sudo ss -tlnp | grep <port>
Example for port 30080:
sudo netstat -tlnp | grep 30080
# Output: tcp 0 0 0.0.0.0:30080 0.0.0.0:* LISTEN 1234/docker
The process ID (1234) shows what's holding the port. Check what it is:
ps aux | grep 1234
If it's a previous pod's container, it may need to be force-killed:
sudo kill -9 <pid>
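On nodes running containerd, the stale container can also be stopped through the container runtime instead of killing the process directly (a sketch; the IDs below are placeholders):
sudo crictl ps -a | grep <pod-name>   # find the stale container ID
sudo crictl stop <container-id>
sudo crictl rm <container-id>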
Check for duplicate NodePort assignments across all services:
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.type}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}' | grep NodePort
Look for duplicate nodePort values. Each NodePort must be unique across the cluster.
If you find duplicates:
kubectl edit svc <service1> # Change one service to a different NodePort
Or delete the newer service and recreate:
kubectl delete svc <service2>
kubectl create -f service.yaml # Recreate with different NodePort
Note: Kubernetes auto-assigns NodePorts if not specified (range 30000-32767).
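As a non-interactive alternative to kubectl edit, a strategic merge patch can move one of the duplicates to a free NodePort (a sketch; it assumes the service's port is 8080 and that 31080 is unallocated):
kubectl patch svc <service1> -p '{"spec":{"ports":[{"port":8080,"nodePort":31080}]}}'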
If pods use hostPort, ensure no conflicts:
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | awk -F'\t' '$2 != ""'
If multiple pods have hostPort on the same node:
spec:
  containers:
  - ports:
    - containerPort: 8080
      hostPort: 8080 # Binds to the node's network interface
Only one pod can use hostPort 8080 per node. Solutions:
1. Use different hostPorts for different instances:
hostPort: 8080 # Pod 1
hostPort: 8081 # Pod 2
2. Use ClusterIP (preferred) instead of hostPort
3. Use podAntiAffinity to ensure only one such pod per node:
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: myapp # must match the pod's own labels (placeholder)
      topologyKey: kubernetes.io/hostname
Ensure service ports don't conflict with system services:
sudo netstat -tlnp | grep LISTEN
Common reserved ports:
- 22: SSH
- 53: DNS (CoreDNS on Kubernetes)
- 80: HTTP
- 443: HTTPS
- 6443: Kubernetes API server
- 10250-10259: kubelet, kube-proxy, and control-plane component ports
- 30000-32767: Default NodePort range
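To check whether a candidate NodePort is free before assigning it, a quick sketch (the port number is a placeholder):
PORT=31000   # candidate NodePort (placeholder)
# Is anything already listening on this port on the node?
sudo ss -tlnp | grep -q ":${PORT} " && echo "in use on this node" || echo "free on this node"
# Is it already allocated to a service in the cluster?
kubectl get svc -A -o jsonpath='{.items[*].spec.ports[*].nodePort}' | tr ' ' '\n' | grep -qx "${PORT}" \
  && echo "already allocated as a NodePort" || echo "not allocated as a NodePort"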
If your workload binds a low port (< 1024) on the node (via hostPort or a customized NodePort range), it may conflict with system services. Either:
1. Change the service port: kubectl patch svc <name> --type=json -p='[{"op":"replace","path":"/spec/ports/0/port","value":8080}]'
2. Add a firewall rule so the port is only exposed on specific interfaces
3. Use port > 1024 if possible
If a process is holding a port after container termination:
sudo systemctl restart containerd # If using containerd
sudo systemctl restart docker # If using Docker
sudo systemctl restart kubelet # Restart kubelet
Monitor the restart:
sudo journalctl -u kubelet -f
After restart, verify the port is released:
sudo netstat -tlnp | grep <port>
Then retry pod scheduling:
kubectl delete pod <pod-name>
kubectl get pods -w # Watch new pod start
NodePort is often unnecessary. Use ClusterIP (default) for internal communication:
Current (NodePort):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
Better (ClusterIP):
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP # Default, no port binding on the node
  ports:
  - port: 8080
  selector:
    app: myapp
For external access, use:
- LoadBalancer: Cloud provider manages external IP
- Ingress: HTTP/HTTPS routing with a shared controller (see the example after this list)
- Port-forwarding: Dev/debugging only
ClusterIP avoids port conflicts entirely.
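For the Ingress option, a minimal sketch routing to the ClusterIP service above (the hostname and the nginx ingress class are assumptions about your environment):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx          # assumes an nginx ingress controller is installed
  rules:
  - host: myapp.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp            # the ClusterIP service defined above
            port:
              number: 8080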
As a last resort, force-release a stuck port:
# Identify the process holding the port
sudo lsof -i :<port-number>
# Kill the process
sudo kill -9 <pid>
# Verify the port is free
sudo netstat -tlnp | grep <port>
If the port still shows as in use, check the socket state:
ss -tan | grep <port> # Look for CLOSE_WAIT or TIME_WAIT sockets
Sockets in CLOSE_WAIT are held open by a process that has not closed them; kill that process (identified with lsof above) to release the port. TIME_WAIT sockets clear on their own after the kernel timeout.
For Linux kernel-level tuning of TIME_WAIT sockets:
sudo sysctl -w net.ipv4.tcp_fin_timeout=30 # Default: 60 seconds
sudo sysctl -w net.ipv4.tcp_tw_reuse=1 # Reuse TIME_WAIT sockets for outbound connections
Make permanent:
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.d/99-network.conf
sysctl -pPort conflicts are a design consequence of NodePort—it requires exclusive binding on the physical node interface, unlike service mesh approaches (Istio, Linkerd) that use sidecar proxies. For production ingress, use Ingress resources with an Ingress Controller (nginx-ingress, AWS ALB) rather than direct NodePort exposure. The default NodePort range (30000-32767) can be customized in kube-apiserver: --service-node-port-range=25000-35000. DaemonSets should rarely bind NodePort; use CNI-level networking instead. For multi-cluster deployments, ensure each cluster uses different NodePort ranges to avoid conflicts. Stateful services (databases) using hostPort should pin pods to specific nodes via nodeAffinity. Container-level port binding (containerPort) never conflicts because containers have isolated network namespaces. WSL2 and Docker Desktop may expose ports differently—test locally before production.
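For example, on a kubeadm-managed control plane the flag is normally added to the kube-apiserver static pod manifest (the file path and the chosen range below are assumptions; adjust for your environment):
# /etc/kubernetes/manifests/kube-apiserver.yaml (typical kubeadm location)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=25000-35000   # custom NodePort range (example values)
    # ...keep all existing flags unchanged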