This error occurs when Terraform cannot establish an HTTP connection, typically because the target server is unreachable or not listening. Common causes include uninitialized infrastructure, incorrect endpoints, or network connectivity issues.
This error happens when Terraform tries to make an HTTP request (through data sources or providers) but cannot establish a connection to the specified endpoint. The "connection refused" message means the server is not listening on that port, the host is unreachable, or firewall/network rules are blocking the connection. This frequently occurs with Kubernetes providers that try to reach a cluster endpoint before the cluster exists, or when the provider configuration uses data sources that reference resources that have not been created yet. Terraform cannot resolve those values at plan time, so the provider falls back to its default endpoint, which is usually localhost.
Check that the server you're trying to connect to is actually running and listening on the specified port. For Kubernetes clusters, verify the cluster exists and the endpoint is correct.
For local services or clusters:
# Test connection to the endpoint
curl -v http://your-endpoint:port
# For Kubernetes, verify cluster is running
kubectl cluster-info

If the endpoint is unreachable or the service isn't running, start the service first before running Terraform.
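If you're not sure which endpoint Terraform is actually targeting, it can help to print the API server address from your kubeconfig and, for EKS, confirm the cluster's status through the cloud API. A minimal sketch; the cluster name my-cluster is a placeholder:
# Show the API server endpoint from the active kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# For EKS, confirm the cluster exists and check its endpoint and status
aws eks describe-cluster --name my-cluster --query 'cluster.{endpoint: endpoint, status: status}'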
Ensure network connectivity exists between your Terraform execution environment and the target server. Verify that firewall rules, security groups, and network ACLs allow traffic on the required port.
# Test if port is open
nc -zv endpoint-host port
# For AWS security groups, verify egress rules
# For Azure NSGs, check outbound rules

Configure firewall and network rules to allow the connection. If running behind a proxy, make sure Terraform can use it (see the sketch below).
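One approach is to set the standard proxy environment variables, which Terraform and most providers honor. A sketch; the proxy address and bypass list are placeholders:
# Route Terraform's HTTP(S) traffic through the proxy
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
# Hosts that should bypass the proxy (e.g. internal endpoints)
export NO_PROXY=localhost,127.0.0.1,.internal.example.com
terraform apply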
The most reliable fix for Kubernetes/cluster errors: deploy cluster creation separately from resource creation.
Instead of a single apply:
# ❌ Do NOT do this - cluster and kubernetes resources in one apply
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint # Data source depends on aws_eks_cluster!
token = data.aws_eks_cluster_auth.cluster.token
}
resource "aws_eks_cluster" "cluster" { ... }
resource "kubernetes_namespace" "example" { ... }Use separate state files:
# ✅ First state: Create cluster
# terraform/cluster/main.tf
resource "aws_eks_cluster" "cluster" { ... }
# ✅ Second state: Create Kubernetes resources
# terraform/kubernetes/main.tf
provider "kubernetes" {
host = "https://actual-cluster-endpoint.eks.amazonaws.com" # Direct reference
token = data.aws_eks_cluster_auth.cluster.token
}Deploy in two separate runs:
cd terraform/cluster && terraform apply
cd ../kubernetes && terraform apply

This ensures the cluster exists before the Kubernetes provider attempts a connection.
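Rather than hard-coding the cluster endpoint in the second state, it can be read from the first state's outputs with terraform_remote_state. A sketch assuming the cluster state is stored in an S3 backend, exposes an endpoint output, and that an aws_eks_cluster_auth data source is declared in the second configuration (bucket, key, and region are placeholders):
# terraform/cluster/outputs.tf
output "endpoint" {
  value = aws_eks_cluster.cluster.endpoint
}
# terraform/kubernetes/main.tf
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"        # placeholder bucket
    key    = "cluster/terraform.tfstate" # placeholder key
    region = "us-east-1"
  }
}
provider "kubernetes" {
  host  = data.terraform_remote_state.cluster.outputs.endpoint
  token = data.aws_eks_cluster_auth.cluster.token
}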
When creating both infrastructure and resources in the same apply, reference the resource directly instead of using data sources:
# ❌ Problematic (data source may read before resource exists)
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
token = data.aws_eks_cluster_auth.default.token
}
# ✅ Better (direct resource reference)
provider "kubernetes" {
host = aws_eks_cluster.cluster.endpoint
token = data.aws_eks_cluster_auth.cluster.token
}
resource "aws_eks_cluster" "cluster" {
name = "my-cluster"
...
}This gives Terraform explicit dependency information.
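If tokens fetched at plan time expire before the apply finishes, another option is exec-based credentials, which the Kubernetes provider runs when it actually connects. A sketch assuming an EKS cluster and the AWS CLI available on the PATH:
provider "kubernetes" {
  host                   = aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.cluster.certificate_authority[0].data)
  # Fetch a fresh token when the provider connects, not at plan time
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.cluster.name]
  }
}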
Some connection refused errors are intermittent timing issues. Try reducing parallelism:
# Reduce parallel operations
terraform apply -parallelism=1

Also verify no environment variables override your configuration:
# Check for conflicting KUBE* or provider-specific env vars
env | grep -i kube
env | grep -i aws
env | grep -i azure

Remove any conflicting environment variables and retry.
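For the Kubernetes provider in particular, kubeconfig-related variables can silently point Terraform at the wrong cluster or back at localhost. A sketch of clearing them for the current shell only; which variables matter depends on your setup:
# Clear kubeconfig-related overrides for this shell session
unset KUBECONFIG KUBE_CONFIG_PATH KUBE_CONFIG_PATHS KUBE_HOST
terraform apply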
For the HTTP data source specifically, configure timeouts and retries:
data "http" "example" {
  url = "http://example.com/api"
  # Fail fast instead of hanging on an unreachable endpoint
  request_timeout_ms = 10000 # 10 seconds
  # Retry transient failures (supported in recent hashicorp/http provider versions)
  retry {
    attempts     = 3
    min_delay_ms = 1000
    max_delay_ms = 5000
  }
}

This helps with temporary connectivity issues or slow-starting services.
This error is especially common in Kubernetes deployments because of how Terraform provider initialization works. Providers are initialized during the plan phase, before resources are created. If your provider configuration depends on values from resources being created (like cluster endpoints), Terraform cannot satisfy this dependency and falls back to defaults (usually localhost).
The HashiCorp team strongly recommends separating infrastructure into multiple state files for production Kubernetes deployments. For simpler cases with good timing, using resource references instead of data sources can work, but it's less reliable.
For EKS specifically, use managed node groups and ensure security group egress rules allow access to the cluster API (see the sketch below). For GKE, verify that client_certificate_config is properly configured. For AKS, ensure the service principal has cluster access.
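As an illustration of the EKS point, the node security group needs egress to the cluster API on port 443. A sketch; the security group references are placeholders:
# Allow worker nodes to reach the EKS cluster API on 443 (placeholder SG references)
resource "aws_security_group_rule" "nodes_to_cluster_api" {
  type                     = "egress"
  from_port                = 443
  to_port                  = 443
  protocol                 = "tcp"
  security_group_id        = aws_security_group.nodes.id
  source_security_group_id = aws_security_group.cluster.id
}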