Port conflicts between multiple services or pods prevent proper service routing and pod scheduling. This occurs when services attempt to use the same port or when hostPort bindings collide, disrupting traffic flow and application availability.
Port conflicts in Kubernetes manifest as:
1. Multiple services claiming the same NodePort for the same protocol
2. hostPort bindings overlapping across pods on the same node
3. Multiple containers in the same pod declaring the same containerPort
4. Pods running with hostNetwork: true, whose ports are not isolated by a pod network namespace and bind directly on the node
The root cause differs depending on context: many ClusterIP services can expose the same port because each service gets its own virtual IP, but NodePort values must be unique across the cluster, and hostPort bindings are strictly exclusive per node.
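As a quick triage, the two port types that actually require uniqueness can be listed in one pass; a sketch, assuming kubectl access with cluster-wide read permissions:
# NodePorts (must be unique across the cluster)
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.ports[*].nodePort}{"\n"}{end}' | grep -E ': [0-9]'
# hostPorts (must be unique per node)
kubectl get pods -A -o jsonpath='{range .items[*]}{.spec.nodeName}{" "}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep -E ': [0-9]'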
List all services with their port configurations:
kubectl get svc -A -o wide
For detailed port mapping:
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.type}{"\t"}{.spec.ports[*].port}{"\t"}{.spec.ports[*].nodePort}{"\n"}{end}'
Filter by specific service:
kubectl describe svc <service-name> -n <namespace>
Check:
- Port (ClusterIP port)
- TargetPort (pod port)
- NodePort (external port if type=NodePort)
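To pull just these three values for a single service in one line, a jsonpath sketch (nodePort prints empty for plain ClusterIP services):
kubectl get svc <service-name> -n <namespace> -o jsonpath='{range .spec.ports[*]}{.port}{" -> "}{.targetPort}{" (nodePort: "}{.nodePort}{")"}{"\n"}{end}'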
NodePort values must be unique:
kubectl get svc -A -o jsonpath='{range .items[*]}{.spec.ports[*].nodePort}{" "}{end}' | tr ' ' '\n' | grep -v '^$' | sort -n | uniq -d
If duplicates appear:
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.type}{"\t"}{.spec.ports[0].nodePort}{"\n"}{end}' | sort
For each duplicate NodePort:
1. Identify which service should change:
kubectl delete svc <old-service>
2. Update service definition to use different port or let Kubernetes auto-assign:
spec:
  type: NodePort
  ports:
    - port: 8080
      # nodePort omitted: Kubernetes auto-assigns from the 30000-32767 range
3. Reapply:
kubectl apply -f service.yaml
Find all pods using hostPort:
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | awk -F'\t' '$3 != ""'
List by node to identify conflicts:
kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,NODE:.spec.nodeName' --no-headers | while read ns pod node; do
  if kubectl get pod "$pod" -n "$ns" -o jsonpath='{.spec.containers[*].ports[*].hostPort}' 2>/dev/null | grep -q '[0-9]'; then
    echo "$node: $ns/$pod"
  fi
done | sort
If the same hostPort is used on the same node:
# Pod 1
spec:
  containers:
    - ports:
        - containerPort: 8080
          hostPort: 8080   # Conflicts on this node
# Pod 2 (same node)
spec:
  containers:
    - ports:
        - containerPort: 8080
          hostPort: 8080   # Conflict!
Solution: Remove hostPort, or use node affinity so the conflicting pods schedule onto different nodes.
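Before rescheduling, it can help to see which hostPorts are already claimed on a particular node; a sketch where <node-name> is a placeholder:
kubectl get pods -A --field-selector spec.nodeName=<node-name> -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{": "}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep -E ': [0-9]'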
ClusterIP services can share targetPort (pod port) via label selectors:
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080   # Pod port
  selector:
    app: api
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080   # Same pod port is OK
  selector:
    app: web             # Different pod selector
Conflict occurs only if same selectors + same port.
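One way to spot that risky overlap, two services in the same namespace pointing at the same selector, is to look for duplicate namespace/selector pairs; a sketch that assumes the selector maps serialize identically:
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.spec.selector}{"\n"}{end}' | sort | uniq -d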
Check service selectors:
kubectl get svc -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.selector}{"\n"}{end}'
Inspect pod specs for port assignments:
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'
For specific pod:
kubectl get pod <pod-name> -n <namespace> -o yaml | grep -A5 ports:
If multiple containers in the same pod use the same containerPort:
spec:
  containers:
    - name: app
      ports:
        - containerPort: 8080   # Container 1
    - name: sidecar
      ports:
        - containerPort: 8080   # Conflict! Same container port
Fix by using different ports:
        - containerPort: 8081   # Sidecar on different port
For each conflicting service, reassign ports:
Option 1: Edit service directly
kubectl edit svc <service-name> -n <namespace>
Change spec.ports[0].port or .nodePort to an unused value.
Option 2: Patch service
kubectl patch svc <service-name> -n <namespace> -p '{"spec":{"ports":[{"port":8081,"targetPort":8080}]}}'
Option 3: Delete and recreate
kubectl delete svc <service-name>
Update YAML manifest with new port, then:
kubectl apply -f service.yaml
After changes, verify endpoints are active:
kubectl get endpoints <service-name>
Instead of exposing each service with a unique NodePort, use Ingress:
Old (multiple NodePorts):
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: NodePort
  ports:
    - nodePort: 30080
      port: 80
  selector:
    app: api
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  ports:
    - nodePort: 30081
      port: 80
  selector:
    app: web
Better (single Ingress):
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: api
# (define a matching ClusterIP Service named "web" the same way)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
Deploy an Ingress Controller (nginx):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/cloud/deploy.yaml
After resolving conflicts, verify configuration:
# Check all services
kubectl get svc -A
# Verify service endpoints are ready
kubectl get endpoints -A
# Check pod logs for port binding errors
kubectl logs <pod-name> -n <namespace> | grep -i port
# Test connectivity from another pod
kubectl run test --rm -it --restart=Never --image=busybox -- sh -c "nc -vz <service-name> <port>"
# Verify from node (if using NodePort)
curl -v http://<node-ip>:<nodePort>
Expected output: endpoints list healthy pods, services show the correct ports, and the connectivity tests succeed.
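If services were moved behind the Ingress shown above, a hedged smoke test through the controller (the Host headers and <ingress-ip> are placeholders for your hostnames and the controller's external address):
curl -v -H "Host: api.example.com" http://<ingress-ip>/
curl -v -H "Host: web.example.com" http://<ingress-ip>/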
Port conflicts are most common with NodePort services; moving traffic behind an Ingress eliminates this class of issue entirely. ClusterIP services rarely conflict because they do not require exclusive node-level bindings. For stateful services (databases, message brokers), use a StatefulSet with a headless Service (clusterIP: None), which gives each replica a stable DNS identity without claiming node ports. LoadBalancer services are provisioned by the cloud provider and each gets its own external IP, so port conflicts are rare. A service mesh (Istio, Linkerd) manages port allocation automatically, hiding port complexity from operators. For high-frequency service changes (CI/CD), let Kubernetes auto-assign NodePorts or template them with Helm rather than hard-coding values. Monitor ports with Prometheus: a query such as up{job="kubernetes-services"} tracks service health across port changes.
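For pipelines that should never hard-code a NodePort, one approach is to omit the value and read back whatever the API server assigned; a sketch where the deployment name api is a placeholder:
kubectl expose deployment api --type=NodePort --port=80
kubectl get svc api -o jsonpath='{.spec.ports[0].nodePort}'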