Terraform reports 'Error: Unauthorized' when the credentials it uses to communicate with a cloud API or Kubernetes cluster are missing, expired, or invalid. This commonly happens with token-based authentication, where tokens expire between operations.
This error indicates that Terraform attempted to authenticate with a provider (such as Kubernetes, AWS, GCP, or Azure) but the credentials provided were either missing, malformed, expired, or didn't have the required permissions. The provider rejected the authentication attempt at the API level. This is different from a connection error—the server is responding, but it's refusing to grant access with the provided credentials.
Check that your authentication credentials are properly configured in your Terraform provider block or environment variables.
For Kubernetes:
provider "kubernetes" {
host = var.cluster_host
token = var.cluster_token
cluster_ca_certificate = base64decode(var.cluster_ca)
}
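The var.* references above assume matching variable declarations; a minimal sketch (marking the token sensitive keeps it out of CLI output):

variable "cluster_host" {
  type        = string
  description = "Kubernetes API server endpoint"
}

variable "cluster_token" {
  type      = string
  sensitive = true # keep the bearer token out of plan output
}

variable "cluster_ca" {
  type        = string
  description = "Base64-encoded cluster CA certificate"
}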
For AWS/Azure/GCP:
Ensure your AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, or equivalent credentials are set:
export AWS_ACCESS_KEY_ID="your-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_DEFAULT_REGION="us-east-1"
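Alternatively, keep static keys out of the shell entirely and point the provider at a named profile; a minimal sketch, assuming a profile called terraform exists in ~/.aws/credentials:

provider "aws" {
  region  = "us-east-1"
  profile = "terraform" # hypothetical profile name in ~/.aws/credentials
}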
Token expiration is common when tokens have short TTLs (e.g., 1 hour). Refresh before destroying so Terraform re-authenticates with current credentials:

terraform refresh
terraform destroy

This forces Terraform to re-authenticate before attempting the destroy operation.
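Note that on Terraform v0.15.4 and later, the standalone refresh command is deprecated; the equivalent, which shows what changed and asks for confirmation, is:

terraform apply -refresh-only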
If using Kubernetes, ensure the service account or user has the required permissions:
kubectl auth can-i create deployments --as=system:serviceaccount:default:my-sa
kubectl auth can-i delete deployments --as=system:serviceaccount:default:my-sa

If permissions are missing, create a ClusterRole and ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: terraform-admin
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: terraform-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: terraform-admin
subjects:
  - kind: ServiceAccount
    name: terraform
    namespace: default
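Apply the manifest and re-run the permission check; the filename here is hypothetical:

kubectl apply -f terraform-rbac.yaml
kubectl auth can-i delete deployments --as=system:serviceaccount:default:terraform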
provider "kubernetes" {
  load_config_file       = false
  host                   = var.cluster_host
  token                  = var.cluster_token
  cluster_ca_certificate = base64decode(var.cluster_ca)
}

This forces Terraform to use only the explicitly provided credentials instead of trying to load ~/.kube/config. Note that load_config_file only exists in provider versions before 2.0; in 2.x the argument was removed, and the provider only loads a kubeconfig when config_path (or KUBE_CONFIG_PATH) is set.
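Conversely, if kubeconfig-based auth is what you want, pin the file and context explicitly rather than relying on defaults; a minimal sketch using the 2.x provider arguments:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-context" # hypothetical context name
}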
If using temporary tokens (AWS STS, Kubernetes service account tokens), verify they haven't expired:
# For AWS STS tokens
aws sts get-caller-identity
# For Kubernetes tokens
kubectl get secret my-secret -o jsonpath='{.data.token}' | base64 -d

Regenerate tokens if expired:
- AWS: Use aws sts get-session-token to get fresh credentials (see the sketch after this list)
- Kubernetes: Create a new service account token or use long-lived tokens
- Cloud providers: Regenerate API keys from console
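For the AWS case, a minimal sketch of fetching and exporting a fresh session token (assumes the AWS CLI and jq are installed):

# Fetch temporary credentials (valid for 1 hour here) as JSON
creds=$(aws sts get-session-token --duration-seconds 3600 --output json)

# Export them so Terraform's AWS provider picks them up
export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r .Credentials.SessionToken)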
Avoid creating the cluster and Kubernetes resources in the same Terraform apply:
# First, apply only the cluster
terraform apply -target=aws_eks_cluster.main
# Then apply Kubernetes resources
terraform apply

This ensures the cluster exists and is fully initialized before Terraform tries to authenticate to it.
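Once the cluster exists, the Kubernetes provider can also pull fresh credentials from it at plan time instead of relying on stored ones; a sketch for EKS, reusing the aws_eks_cluster.main resource name from the targeting example above:

data "aws_eks_cluster" "main" {
  name = aws_eks_cluster.main.name
}

# The token is fetched fresh on every plan/apply, so it cannot go stale
data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}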
Token Lifecycle in Terraform: Terraform does not automatically refresh authentication state during long-running operations. If your token expires during a multi-resource apply or destroy, Terraform will fail partway through. Solutions:
1. Use long-lived tokens when possible (at least as long as your Terraform operations take)
2. Use exec-based auth for dynamic credential refresh (recommended for Kubernetes on EKS/GKE/AKS):
provider "kubernetes" {
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", var.cluster_name]
}
}

3. If you run in HCP Terraform (Terraform Cloud), use dynamic provider credentials. These are configured on the workspace through environment variables rather than in the provider block (for Kubernetes, TFC_KUBERNETES_PROVIDER_AUTH and TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE); HCP Terraform then injects a short-lived workload identity token for each run, so there is no static token to expire.
Kubernetes-Specific: The system:anonymous user often appears in destroy errors. This happens because the provider's authentication context is lost. Explicitly setting the kubeconfig context avoids this:
provider "kubernetes" {
  config_context = var.kube_context # e.g., "docker-desktop"
}
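To list the contexts available in your kubeconfig:

kubectl config get-contexts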
provider "kubernetes" {
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    # 6dae42f8-... is the well-known Entra ID server application ID for AKS
    args = ["get-token", "--login", "azurecli", "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630"]
  }
}
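kubelogin is distributed separately from the az CLI; if it is not already on your PATH, Azure's helper installs it alongside kubectl:

az aks install-cli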
How to fix "release name in use" error in Terraform with Helm
Error: Error creating GKE Cluster: BadRequest
BadRequest error creating GKE cluster in Terraform
Error: External program failed to produce valid JSON
External program failed to produce valid JSON
Error: Unsupported argument in child module call
How to fix "Unsupported argument in child module call" in Terraform
Error: network is unreachable
How to fix "network is unreachable" in Terraform