GCP load balancer health checks fail when there is no network route between the load balancer and the backend services. Pods become unreachable because of firewall rules, misconfigured routes, or Service endpoint configuration issues.
This error occurs when GCP's health check probes cannot reach your Kubernetes Service endpoints. The load balancer is unable to establish a network path to the backend pods, typically due to Google Cloud firewall rules, route table misconfigurations, or incorrect Service endpoint exposure. Without valid routes, health checks fail and the load balancer removes all backends, causing complete service unavailability.
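Before touching GCP networking, confirm the Service actually has ready endpoints; if its selector matches no ready pods, health checks will fail no matter how routes and firewalls are configured. A quick check, using the placeholder names from this guide:
kubectl get endpoints <service-name>
# <none> in the ENDPOINTS column means the selector matches no ready pods
kubectl get pods -l app=my-app --show-labels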
GCP load balancers use specific IP ranges for health probes. Create a firewall rule allowing these:
gcloud compute firewall-rules create allow-gcp-health-checks \
  --allow tcp \
  --source-ranges 35.191.0.0/16,130.211.0.0/22
Health check probes are TCP-only, so allowing UDP is unnecessary. Check existing rules:
gcloud compute firewall-rules list --filter="name~'gcp-health' OR name~'load-balancer'"
Check if the Service port matches pod container ports:
kubectl get svc <service-name> -o yaml | grep -A5 ports:
kubectl get pods -o wide
kubectl logs <pod-name> | grep -i "listening on port"
If the ports don't match, update the Service:
spec:
  ports:
    - port: 80          # external port
      targetPort: 8080  # container port
      protocol: TCP
Next, configure a health check probe on the backing pods and expose it through the Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:latest  # placeholder image
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
Create a test pod to verify networking:
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- bash
# Inside the pod (this image ships both curl and nc):
curl -v http://<service-cluster-ip>:<port>/health
nc -zv <pod-ip> <port>
If these succeed, the load balancer configuration is the issue.
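Note that when the Service sets externalTrafficPolicy: Local, GCP health checks probe a per-node port served by kube-proxy rather than the pods directly. A quick way to check, assuming the placeholder names above:
kubectl get svc <service-name> -o jsonpath='{.spec.externalTrafficPolicy} {.spec.healthCheckNodePort}{"\n"}'
# kube-proxy answers on this node port; 200 means the node has local ready endpoints
curl -v http://<node-ip>:<health-check-node-port>/healthz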
Verify GCP backend service configuration:
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global
# Check instance groups:
gcloud compute instance-groups list
gcloud compute instance-groups get-named-ports <ig-name> --zone=<zone>
Ensure the instance groups match your cluster nodes.
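To compare membership directly (group name and zone are placeholders):
# Instances GCP has registered as load balancer backends
gcloud compute instance-groups list-instances <ig-name> --zone=<zone>
# Nodes Kubernetes knows about; the two lists should line up
kubectl get nodes -o wide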
Check if Kubernetes Network Policies are blocking traffic:
kubectl get networkpolicies -A
kubectl describe networkpolicy <policy-name>
If a policy is blocking ingress, modify it to allow load balancer traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-lb
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
    - from:
        - namespaceSelector: {}
        # Admit GCP health check probes explicitly (source IPs are
        # preserved by passthrough load balancers)
        - ipBlock:
            cidr: 35.191.0.0/16
        - ipBlock:
            cidr: 130.211.0.0/22
      ports:
        - protocol: TCP
          port: 8080
Verify the Service annotations for the GCP load balancer:
kubectl get svc <service-name> -o yaml | grep -A10 annotations
For GCP-specific configuration:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"  # if internal LB
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP  # optional
If the configuration is correct, force GCP to re-sync:
# Delete and recreate the Service (note: this releases an ephemeral external IP)
kubectl delete svc <service-name>
kubectl apply -f service.yaml
# Or touch an annotation to force a reconcile; spec.clusterIP is immutable
# and cannot be patched back and forth (any annotation key works here)
kubectl annotate svc <service-name> reconcile-timestamp="$(date +%s)" --overwrite
Either change triggers the cloud controller to rebuild the backend groups.
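You can follow the rebuild in the Service's events; an EnsuringLoadBalancer event followed by EnsuredLoadBalancer indicates the cloud controller finished syncing:
kubectl describe svc <service-name> | grep -A10 Events
# Or stream events for just this Service
kubectl get events --field-selector involvedObject.name=<service-name> -w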
GCP health check IPs (35.191.0.0/16, 130.211.0.0/22) must be allowed by ALL firewall rules and Network Policies in the path. Use GCP Console → Load Balancing → Backend Services → Health Checks to test connectivity directly. Internal passthrough load balancers probe from the same two ranges; legacy target-pool-based network load balancers additionally probe from 209.85.152.0/22 and 209.85.204.0/22. In Shared VPC environments, verify that the firewall rules exist in the host project.
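For Shared VPC, list the rules from the host project and check their source ranges (the project ID is a placeholder; sourceRanges.list() is standard gcloud output formatting):
gcloud compute firewall-rules list --project=<host-project-id> \
  --format="table(name,network,sourceRanges.list())"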