This error occurs when Terraform attempts to create a Kubernetes ConfigMap that already exists in the cluster. It is common with the aws-auth ConfigMap in EKS and when re-applying configurations. Resolving it requires either importing the existing resource into Terraform state or removing the existing ConfigMap first.
When Terraform creates a new kubernetes_config_map resource, it sends a create request to the target Kubernetes cluster. If a ConfigMap with the same name already exists in that namespace, the Kubernetes API rejects the request with an "already exists" error because the resource cannot be created twice. This often happens with system-managed ConfigMaps like the aws-auth ConfigMap in EKS clusters: AWS creates it automatically when you provision the cluster with managed node groups, but Terraform doesn't know about it and tries to create its own copy, causing a conflict. Another common scenario is re-running terraform apply after a previous failed attempt, or ConfigMaps created outside of Terraform (manually via kubectl) that Terraform then tries to manage.
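For context, a resource as minimal as the following is enough to hit the error if a ConfigMap with the same name already exists in the target namespace (the names and data here are placeholders):

resource "kubernetes_config_map" "example" {
  metadata {
    name      = "app-config"   # any name that already exists in the cluster
    namespace = "default"
  }

  data = {
    "app.properties" = "key=value"
  }
}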
First, confirm that the ConfigMap actually exists in Kubernetes:
kubectl get configmap <configmap-name> -n <namespace>
kubectl describe configmap <configmap-name> -n <namespace>

Replace <configmap-name> and <namespace> with your actual values. If the ConfigMap is found, this confirms the conflict is real and you need to sync Terraform state or remove the resource.
If the ConfigMap should be managed by Terraform, import it into your Terraform state file:
terraform import kubernetes_config_map.example <namespace>/<configmap-name>

Or, for kubernetes_config_map_v1:

terraform import kubernetes_config_map_v1.example <namespace>/<configmap-name>

For the aws-auth ConfigMap specifically:

terraform import kubernetes_config_map.aws_auth kube-system/aws-auth

This tells Terraform "this ConfigMap already exists and I'm taking over management of it". The import command updates your Terraform state file without recreating the resource.
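Note that terraform import only attaches the existing object to a resource block that is already in your configuration, so make sure a matching block exists before running the command. A minimal sketch for aws-auth (the lifecycle block is an assumption for the case where you want AWS or other tooling to keep owning the data):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  # After importing, run terraform plan and either copy the live data into
  # this block or ignore it so Terraform doesn't try to rewrite it.
  data = {}

  lifecycle {
    ignore_changes = [data]  # assumption: something else keeps managing the data
  }
}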
If the ConfigMap is not critical or contains only generated data, delete it from the cluster and let Terraform recreate it:
kubectl delete configmap <configmap-name> -n <namespace>
terraform apply

WARNING: Only use this approach if:
- The ConfigMap data is not critical or can be regenerated
- You're certain no other systems depend on it
- You have backups if needed
For the aws-auth ConfigMap in EKS, do NOT delete it unless you know what you're doing - losing it can make nodes unable to join the cluster.
Instead of creating the ConfigMap with a kubernetes_config_map resource, use a local-exec provisioner that runs kubectl apply, which is idempotent:
resource "null_resource" "configmap" {
provisioner "local-exec" {
command = "kubectl apply -f - <<EOF\n${file(${path.module}/configmap.yaml)}\nEOF"
}
triggers = {
manifest = file("${path.module}/configmap.yaml")
}
}The kubectl apply command is idempotent - it updates if the resource exists or creates it if it doesn't, avoiding the "already exists" error.
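A related option, if your version of the hashicorp/kubernetes provider includes it, is the kubernetes_config_map_v1_data resource, which patches data into a ConfigMap that already exists rather than creating one, sidestepping the "already exists" conflict entirely. A minimal sketch:

resource "kubernetes_config_map_v1_data" "example" {
  metadata {
    name      = "app-config"   # must already exist in the cluster
    namespace = "default"
  }

  data = {
    "feature.enabled" = "true"
  }

  # Take ownership of these keys even if another tool set them
  force = true
}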
If dealing with the aws-auth ConfigMap in EKS, consider managing it in a separate Terraform module or module phase:
1. Create the EKS cluster and node groups first (without the kubernetes provider managing aws-auth)
2. In a second apply, use a separate module with the kubernetes provider to manage aws-auth
3. Ensure the kubernetes provider credentials depend on the cluster being fully created first (see the provider sketch after the module example below)
This separation prevents the timing/order issues that cause "already exists" errors:
# First module: Create cluster
module "eks_cluster" {
  source = "./modules/eks-cluster"
}

# Second module: Manage aws-auth (depends on cluster being ready)
module "eks_auth" {
  source     = "./modules/eks-auth"
  depends_on = [module.eks_cluster]
  cluster_id = module.eks_cluster.id
}
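For step 3, the kubernetes provider used in the second phase can read its credentials from the cluster itself, so it only becomes usable once the cluster exists. A sketch using the aws_eks_cluster data sources (the module output name is an assumption about what ./modules/eks-cluster exposes):

data "aws_eks_cluster" "this" {
  name = module.eks_cluster.id
}

data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}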
After resolving the error, verify that Terraform state matches your cluster:

terraform plan

The plan should show no changes. If it still shows the ConfigMap as needing to be created, the import or deletion didn't fully resolve the state mismatch.
You can also inspect the state file directly:
terraform state show kubernetes_config_map.example

This shows what Terraform thinks the state of this resource is.
For EKS-specific scenarios: The aws-auth ConfigMap is a special case because it's created by AWS automatically when you provision managed node groups. This creates a fundamental conflict with Terraform trying to manage it as a resource.
The recommended approaches are:
1. Never let Terraform create aws-auth - import it and only manage updates (see the sketch below)
2. Create the ConfigMap with Terraform before the node groups exist, and don't recreate it afterwards
3. Use separate Terraform modules/phases: one for infrastructure, another for Kubernetes resource management
This prevents the provider dependencies and timing issues that cause "already exists" errors.
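As a sketch of option 1, after importing aws-auth you keep a resource block for it and only change its data from then on. The IAM role reference below is hypothetical, but the mapRoles structure is the one EKS expects for worker nodes:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = aws_iam_role.node.arn   # hypothetical node IAM role
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}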
For system-managed ConfigMaps in general (like those created by cluster operators or other tools), always consider whether Terraform should manage them or if they should be left alone. Shared responsibility for the same resource is a common source of conflicts.