The Horizontal Pod Autoscaler cannot find the target Deployment, ReplicaSet, or StatefulSet specified in scaleTargetRef. This is typically caused by typos, namespace mismatches, or incorrect apiVersion.
The Horizontal Pod Autoscaler (HPA) requires a valid reference to a scalable workload (Deployment, ReplicaSet, or StatefulSet) in its scaleTargetRef field. When this error occurs, the HPA controller cannot locate the referenced resource, so autoscaling stops functioning entirely. The scaleTargetRef specifies three critical fields: apiVersion, kind, and name. If any of these is incorrect, or if the target resource exists in a different namespace, the HPA will fail to find it. Kubernetes requires exact matches: there is no fuzzy matching, and cross-namespace references are not allowed. This error often appears after renaming deployments, migrating between namespaces, or copying HPA configurations between environments without updating the target reference.
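As a quick sanity check, you can verify that all three required scaleTargetRef fields are present. A minimal sketch using a sample inline manifest (on a real cluster you would read it with `kubectl get hpa <hpa-name> -o yaml`):

```shell
# Verify the three required scaleTargetRef fields exist in an HPA manifest.
# The manifest below is a sample; names are illustrative.
manifest='spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment'
for field in apiVersion kind name; do
  echo "$manifest" | grep -q " $field:" && echo "$field: present" || echo "$field: MISSING"
done
```

A missing or misspelled field here is enough to break the reference entirely.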
Check if the deployment or workload referenced by the HPA actually exists:
kubectl get deployment <deployment-name> -n <namespace>
kubectl get replicaset <rs-name> -n <namespace>
kubectl get statefulset <sts-name> -n <namespace>
List all deployments in the namespace to find the correct name:
kubectl get deployments -n <namespace>
Note that names are case-sensitive.
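Case sensitivity trips people up because some tools match loosely; Kubernetes does not. A small local demonstration of exact-match semantics (no cluster needed):

```shell
# grep -x matches the whole line, case-sensitively -- the same exact-match
# semantics Kubernetes applies to resource names.
printf 'my-deployment\nMy-Deployment\n' | grep -cx 'my-deployment'
# prints 1 -- only the exact lowercase name matches
```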
View the current HPA configuration to identify mismatches:
kubectl get hpa <hpa-name> -n <namespace> -o yaml
Look at the scaleTargetRef section:
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment  # Must match exactly
Compare the name field with the actual deployment name.
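The comparison can be scripted. A sketch with sample values (on a live cluster you would read both names with `kubectl -o jsonpath` instead of the inline manifest used here):

```shell
# Extract the target name from an HPA manifest and compare it with the
# actual workload name. Both values below are illustrative samples.
hpa_target=$(sed -n 's/^ *name: *//p' <<'EOF'
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
EOF
)
actual="my-deploymnet"   # note the typo -- a common cause of this error
if [ "$hpa_target" != "$actual" ]; then
  echo "mismatch: HPA targets '$hpa_target' but workload is '$actual'"
fi
```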
HPA cannot target resources in other namespaces. Verify both are in the same namespace:
kubectl get hpa <hpa-name> -n <namespace>
kubectl get deployment <deployment-name> -n <namespace>
If they're in different namespaces, either:
1. Move the HPA to the same namespace as the target
2. Move the target to the same namespace as the HPA
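A sketch of the namespace comparison with sample values (on a live cluster, read each namespace with `kubectl get <resource> <name> -o jsonpath='{.metadata.namespace}'`):

```shell
# Compare the namespaces of the HPA and its target. The values below are
# illustrative samples standing in for the jsonpath output.
hpa_ns="staging"
target_ns="production"
if [ "$hpa_ns" != "$target_ns" ]; then
  echo "namespace mismatch: HPA in '$hpa_ns', target in '$target_ns'"
else
  echo "namespaces match"
fi
```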
Create the HPA in the correct namespace:
kubectl apply -f hpa.yaml -n <target-namespace>
For Kubernetes 1.16+, use apps/v1 for Deployments:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1  # NOT extensions/v1beta1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
The deprecated extensions/v1beta1 API was removed in Kubernetes 1.16.
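You can scan a manifest for the removed apiVersions before applying it. A sketch with a sample inline manifest (for real files, run the same grep against hpa.yaml):

```shell
# Flag apiVersions that were removed in Kubernetes 1.16
# (extensions/v1beta1, apps/v1beta1, apps/v1beta2).
manifest='spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment'
echo "$manifest" | grep -n 'extensions/v1beta1\|apps/v1beta[12]' \
  || echo "no deprecated apiVersions found"
```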
Ensure you're using the correct kind (Deployment, ReplicaSet, or StatefulSet):
# Check what type of resource you actually have
kubectl get all -n <namespace> | grep <name>
Common valid kinds for HPA:
- Deployment (most common)
- ReplicaSet
- StatefulSet
Note: You cannot target a Pod directly—only controllers that manage pods.
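The allowed-kind check can be expressed as a simple case statement, a sketch of the validation the HPA controller effectively performs:

```shell
# A bare Pod has no scale subresource, so it is not a valid HPA target.
kind="Pod"
case "$kind" in
  Deployment|ReplicaSet|StatefulSet) echo "$kind: valid HPA target" ;;
  *) echo "$kind: not a valid HPA target" ;;
esac
```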
Update the HPA with the correct scaleTargetRef:
kubectl apply -f hpa.yaml -n <namespace>
Verify the HPA can now find and scale the target:
kubectl describe hpa <hpa-name> -n <namespace>
Look for:
- "AbleToScale: True" in conditions
- No "FailedGetScale" events
- TARGETS showing actual utilization percentages
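For scripted checks, the AbleToScale condition can be pulled out of the describe output with awk. The sample text below imitates the shape of `kubectl describe hpa` output; pipe the real command into the same awk:

```shell
# Extract the AbleToScale status from describe-style output (sample shown).
describe_output='Conditions:
  Type            Status  Reason
  AbleToScale     True    SucceededGetScale
  ScalingActive   True    ValidMetricFound'
echo "$describe_output" | awk '$1 == "AbleToScale" { print $2 }'
# prints True
```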
kubectl get hpa <hpa-name> -n <namespace>
Namespace isolation for HPA is intentional—it's a security feature. Cross-namespace scaling would allow an HPA in one namespace to affect workloads in another, potentially violating tenant isolation.
API version deprecation history: extensions/v1beta1 was deprecated in 1.14 and removed in 1.16. apps/v1beta1 was deprecated in 1.9. Always use apps/v1 for Deployments, ReplicaSets, and StatefulSets in modern Kubernetes.
The scale subresource: HPA doesn't modify the Deployment directly. Instead, it uses the scale subresource (GET/PUT on /scale). Not all resources support the scale subresource—custom resources need to explicitly enable it in their CRD definition.
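You can confirm a resource type exposes the scale subresource by inspecting its APIResourceList. The JSON below is a trimmed sample; on a live cluster the real list comes from `kubectl get --raw /apis/apps/v1`:

```shell
# Check a (sample) APIResourceList for the scale subresource.
api_list='{"resources":[{"name":"deployments"},{"name":"deployments/scale"}]}'
echo "$api_list" | grep -q '"name":"deployments/scale"' \
  && echo "Deployment supports the scale subresource"
```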
StatefulSet scaling behavior differs from Deployments: pods are scaled sequentially (one at a time) rather than in parallel, and each pod must be Ready before the next is created or deleted.
If you have multiple HPAs accidentally targeting the same resource, they may conflict. Each scalable resource should have at most one HPA.
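Duplicate targets are easy to detect by counting. A sketch over sample "hpa target" pairs (on a cluster, build the same list with `kubectl get hpa -A -o jsonpath` over each HPA's scaleTargetRef):

```shell
# Find targets claimed by more than one HPA. The pairs below are samples.
printf '%s\n' \
  'hpa-a Deployment/my-deployment' \
  'hpa-b Deployment/my-deployment' \
  'hpa-c Deployment/other-app' |
awk '{ n[$2]++ } END { for (t in n) if (n[t] > 1) print t, "targeted by", n[t], "HPAs" }'
# prints Deployment/my-deployment targeted by 2 HPAs
```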