This error occurs when NetworkPolicy rules block traffic between pods. Fix it by verifying that your CNI plugin supports network policies, checking that pod label selectors match, and ensuring both ingress and egress rules allow the required traffic paths.
The "network policy denied" error indicates that Kubernetes NetworkPolicy rules are blocking traffic between pods. Unlike traditional firewalls, network policies in Kubernetes are implicit deny when any policy selects a pod—traffic not explicitly allowed is dropped. NetworkPolicies use label selectors to identify target pods and define allowed ingress (incoming) and egress (outgoing) traffic. When connections hang or timeout without explicit error messages, network policies are often the cause. Kubernetes doesn't expose which policy blocked traffic, making debugging challenging. Important: NetworkPolicies require a CNI plugin that supports them. Flannel does NOT support network policies—policies will be created but silently ignored.
Check if NetworkPolicy API is available and CNI supports it:
# Check API availability
kubectl api-resources | grep networkpolicies
# Check which CNI is installed
kubectl get pods -n kube-system | grep -E 'calico|cilium|flannel|weave'
# Verify CNI plugin is running
kubectl logs -n kube-system -l k8s-app=calico-node --tail=20

If Flannel is detected (no NetworkPolicy support):
Install Calico for network policy support:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

Check if pod labels match the policy's podSelector:
# List pods with labels
kubectl get pods --show-labels -n production
# Check if pods match policy selector
kubectl get pods -l app=backend --show-labels -n production
# For cross-namespace policies, verify namespace labels
kubectl get namespace production --show-labels
# Add missing namespace label if needed
kubectl label namespace production tier=trusted
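Once the relevant namespaces carry the tier=trusted label added above, a policy can admit traffic from them with a namespaceSelector. A sketch, assuming backend pods listen on TCP 8080 (the policy name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-trusted-namespaces   # illustrative name
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Any pod in a namespace labeled tier=trusted may connect
        - namespaceSelector:
            matchLabels:
              tier: trusted
      ports:
        - protocol: TCP
          port: 8080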
Start with a whitelist approach (block all, allow specific):

# Block all ingress traffic in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080

Apply and verify:
kubectl apply -f network-policies.yaml
kubectl get networkpolicy -n production

When blocking all egress, pods can't resolve hostnames. Allow DNS:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # Allow DNS to kube-system
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow traffic within namespace
    - to:
        - podSelector: {}

Test DNS resolution:
kubectl exec -it <pod-name> -n production -- nslookup kubernetes.default

Test whether network policies are blocking traffic:
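Note that DNS can fall back to TCP (for example, for responses too large for UDP), so if lookups still fail intermittently with only UDP allowed, you may also need to permit TCP on port 53. A variant of the allow-dns-egress policy above covering both protocols (a sketch; adjust to your environment):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        # DNS falls back to TCP for truncated or large responses
        - protocol: TCP
          port: 53
    - to:
        - podSelector: {}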
# Use debug container with network tools
kubectl debug frontend-pod -n production -it \
--image=nicolaka/netshoot -- \
curl --connect-timeout 5 -v http://backend:8080/health
# Test egress to external service
kubectl debug frontend-pod -n production -it \
--image=curlimages/curl -- \
curl --connect-timeout 5 https://api.example.com
# Temporary test pod
kubectl run --rm -i --tty test-curl --image=curlimages/curl --restart=Never \
-n production -- curl http://backend:8080

Requests that hang and time out (rather than being refused immediately) typically indicate that a policy is dropping the traffic.
Check CNI plugin logs for policy enforcement:
# Calico
kubectl logs -n kube-system -l k8s-app=calico-node | grep -i "denied\|policy"
# Cilium
kubectl logs -n kube-system -l k8s-app=cilium | grep -i "denied\|policy"
# Describe policy to see what it matches
kubectl describe networkpolicy <policy-name> -n production

Example Calico denial log entry:
deny=all from=10.0.1.5 to=10.0.2.10 sport=45321 dport=8080 proto=TCP

Network Policy Testing Tools:
- editor.networkpolicy.io: Visual YAML builder with validation
- nicolaka/netshoot: Debug image with curl, nslookup, tcpdump
- Cilium Hubble: Native observability showing allowed/denied flows
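If you run Cilium, Hubble can surface denied flows directly. A minimal sketch, assuming Hubble is enabled, the hubble CLI is installed, and the relay is reachable (flags may vary by version):

# Show dropped flows for pods in the production namespace
hubble observe --namespace production --verdict DROPPED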
Protocol Limitations:
- The NetworkPolicy API only defines behavior for TCP, UDP, and SCTP
- ICMP and ARP behavior is undefined and varies across CNI plugins
- Layer 7 rules (HTTP method/path) are only possible with Cilium's CiliumNetworkPolicy (see the sketch below)
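For comparison, an L7 rule is expressed with Cilium's CRD rather than the core NetworkPolicy API. A hedged sketch (names, labels, port, and path are illustrative):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-get-health   # illustrative name
  namespace: production
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              # Only GET /health is allowed; other HTTP requests to 8080 are denied at L7
              - method: "GET"
                path: "/health"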
Additive Policy Behavior:
- Multiple policies selecting the same pod are combined as a union (OR), not an intersection
- If Policy A allows traffic from Pod1 and Policy B allows traffic from Pod2, both can reach the target (see the example below)
- There are no explicit deny rules; a connection is denied when the pod is selected by at least one policy but no policy allows that traffic
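To illustrate the union behavior, a sketch with two policies that both select app: backend (the metrics-scraper label and policy names are illustrative). Pods labeled frontend and pods labeled metrics-scraper can both reach the backend, because the allowed sources from both policies are combined:

# Policy 1: allow frontend -> backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
---
# Policy 2: allow metrics-scraper -> backend; the union of both policies applies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-metrics
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: metrics-scraper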
Best Practices:
1. Start with a default-deny ingress policy, then add explicit allows
2. Don't restrict egress first; doing so usually breaks dependencies such as DNS
3. Test in a non-production environment before applying to production
4. Document which policies apply to which services